NASA Astrophysics Data System (ADS)
Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi
2013-02-01
A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.
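The two-step idea above (PCA to prune the degrees of freedom, then RMA on the retained components) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: a synthetic AR(1) "trajectory" replaces the MD data, and the evolution times t0 and tau are arbitrary choices.

```python
# Sketch of PC-RMA on a toy trajectory. With two evolution times, C(t0) and
# C(t0 + tau) enter the generalized eigenproblem
#   C(t0 + tau) f = exp(-tau/tau_rel) C(t0) f.
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, n_dofs = 20000, 30
decay = rng.uniform(0.80, 0.999, n_dofs)          # per-coordinate AR(1) memory
traj = np.zeros((n_frames, n_dofs))
for t in range(1, n_frames):
    traj[t] = decay * traj[t - 1] + rng.standard_normal(n_dofs)

# Step 1: PCA -- keep a few principal components with large fluctuations.
pcs = PCA(n_components=5).fit_transform(traj)

def lagged_cov(x, lag):
    """Symmetrized time-lagged covariance C(lag) = <x(t+lag) x(t)^T>."""
    n = len(x) - lag
    c = x[lag:lag + n].T @ x[:n] / n
    return 0.5 * (c + c.T)

# Step 2: RMA on the retained principal components.
t0, tau = 10, 50                                   # lags in frames (hypothetical)
lam, modes = eigh(lagged_cov(pcs, t0 + tau), lagged_cov(pcs, t0))
lam = np.clip(lam, 1e-12, 1 - 1e-12)
relax_times = tau / -np.log(lam)                   # slow modes have long times
print("estimated relaxation times (frames):", np.sort(relax_times)[::-1])
```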
Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.
2003-04-01
A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. Non-Linear Principal Component Analysis allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. This method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. Non-Linear Principal Component Analysis is first applied to a data set sampled from the Lorenz (1963) attractor. It is found that the NLPCA approximations are more representative of the data than are the corresponding PCA approximations. The same methodology was applied to the less well-known Lorenz (1984) attractor; however, the results obtained were not as good as those attained with the famous 'Butterfly' attractor. Further work with this model is underway to assess whether NLPCA techniques can be more representative of the data characteristics than the corresponding PCA approximations. The application of NLPCA to relatively 'simple' dynamical systems, such as those proposed by Lorenz, is well understood. However, the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the explained variance. Finally, directions for future work are presented.
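A hedged sketch of Kramer-style NLPCA: a five-layer autoencoder (input, mapping, bottleneck, demapping, output) with a one-unit bottleneck, fit to points sampled from the Lorenz (1963) attractor and compared against a one-component PCA reconstruction. The use of PyTorch and all network sizes and training settings are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import torch
from scipy.integrate import solve_ivp

def lorenz63(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z), x * y - beta * z]

sol = solve_ivp(lorenz63, (0, 50), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0, 50, 5000))
data = torch.tensor(sol.y.T, dtype=torch.float32)
data = (data - data.mean(0)) / data.std(0)         # autoscale each coordinate

model = torch.nn.Sequential(
    torch.nn.Linear(3, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 1),                         # nonlinear principal component
    torch.nn.Linear(1, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):                               # full-batch training
    opt.zero_grad()
    loss = torch.mean((model(data) - data) ** 2)
    loss.backward()
    opt.step()
print("NLPCA 1-component reconstruction MSE:", float(loss))

# Rank-1 PCA reconstruction of the same data, for comparison.
u, s, vt = np.linalg.svd(data.numpy(), full_matrices=False)
pca_mse = float(np.mean((data.numpy() - (u[:, :1] * s[:1]) @ vt[:1]) ** 2))
print("PCA 1-component reconstruction MSE:", pca_mse)
```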
ERIC Educational Resources Information Center
Adachi, Kohei
2009-01-01
In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy under nonsingular transformations, we propose Joint Procrustes Analysis (JPA), in which component score and loading matrices are simultaneously…
An Introductory Application of Principal Components to Cricket Data
ERIC Educational Resources Information Center
Manage, Ananda B. W.; Scariano, Stephen M.
2013-01-01
Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…
Combination of PCA and LORETA for sources analysis of ERP data: an emotional processing study
NASA Astrophysics Data System (ADS)
Hu, Jin; Tian, Jie; Yang, Lei; Pan, Xiaohong; Liu, Jiangang
2006-03-01
The purpose of this paper is to study spatiotemporal patterns of neuronal activity in emotional processing by analysis of ERP data. 108 pictures (categorized as positive, negative, and neutral) were presented to 24 healthy, right-handed subjects while 128-channel EEG data were recorded. A two-step analysis was applied to the ERP data. First, principal component analysis was performed to obtain significant ERP components. Then LORETA was applied to each component to localize its brain sources. The first six principal components were extracted, each of which showed a different spatiotemporal pattern of neuronal activity. The results agree with other emotion studies using fMRI or PET. The combination of PCA and LORETA can be used to analyze spatiotemporal patterns of ERP data in emotional processing.
Hybrid Neural Network and Support Vector Machine Method for Optimization
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2005-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
Hybrid Neural Network and Support Vector Machine Method for Optimization
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2007-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
2016-09-01
NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA. JOINT APPLIED PROJECT: An Analysis of the Organizational Structure of Redstone Test Center's Environmental and Components Test Directorate. ...provides an analysis of the organizational structure of Redstone Test Center's Environment and Components Test Directorate, with specific regard to
14 CFR Appendix E to Part 417 - Flight Termination System Testing and Analysis
Code of Federal Regulations, 2012 CFR
2012-01-01
... contains requirements for tests and analyses that apply to all flight termination systems and the... termination system components that satisfy the requirements of this appendix. (b) Component tests and analyses. A component must satisfy each test or analysis required by any table of this appendix to demonstrate...
14 CFR Appendix E to Part 417 - Flight Termination System Testing and Analysis
Code of Federal Regulations, 2010 CFR
2010-01-01
... contains requirements for tests and analyses that apply to all flight termination systems and the... termination system components that satisfy the requirements of this appendix. (b) Component tests and analyses. A component must satisfy each test or analysis required by any table of this appendix to demonstrate...
14 CFR Appendix E to Part 417 - Flight Termination System Testing and Analysis
Code of Federal Regulations, 2013 CFR
2013-01-01
... contains requirements for tests and analyses that apply to all flight termination systems and the... termination system components that satisfy the requirements of this appendix. (b) Component tests and analyses. A component must satisfy each test or analysis required by any table of this appendix to demonstrate...
14 CFR Appendix E to Part 417 - Flight Termination System Testing and Analysis
Code of Federal Regulations, 2014 CFR
2014-01-01
... contains requirements for tests and analyses that apply to all flight termination systems and the... termination system components that satisfy the requirements of this appendix. (b) Component tests and analyses. A component must satisfy each test or analysis required by any table of this appendix to demonstrate...
14 CFR Appendix E to Part 417 - Flight Termination System Testing and Analysis
Code of Federal Regulations, 2011 CFR
2011-01-01
... contains requirements for tests and analyses that apply to all flight termination systems and the... termination system components that satisfy the requirements of this appendix. (b) Component tests and analyses. A component must satisfy each test or analysis required by any table of this appendix to demonstrate...
Feng, Xiao-Liang; He, Yun-biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei
2013-01-01
Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by water distillation, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. Of the 87 constituents separated in the total ion chromatogram of the volatile components, 68 were identified and quantified, accounting for about 87.03% of the total content. The common peaks in the leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This is the first application of the orthogonal projection method to the comparison of different plant parts of Agrimonia eupatoria, and it reduced both the burden and the subjectivity of the qualitative analysis. The results proved the combined approach powerful for the analysis of complex Agrimonia eupatoria samples, and the developed method can be used for further study and quality control of Agrimonia eupatoria. PMID:24286016
Computing Lives And Reliabilities Of Turboprop Transmissions
NASA Technical Reports Server (NTRS)
Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.
1991-01-01
Computer program PSHFT calculates lifetimes of a variety of aircraft transmissions. Consists of main program, series of subroutines applying to specific configurations, generic subroutines for analysis of properties of components, subroutines for analysis of system, and common block. Main program selects routines used in analysis and causes them to operate in desired sequence. Series of configuration-specific subroutines read in configuration data, perform force and life analyses for components (with help of generic component-property-analysis subroutines), fill property array, call up system-analysis routines, and finally print out results of analysis for system and components. Written in FORTRAN 77(IV).
Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution
NASA Astrophysics Data System (ADS)
Zhao, X.; Suganuma, Y.; Fujii, M.
2017-12-01
Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of certain end-member model distributions to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian, and skewed Gaussian distributions have been used as the end-member model distribution, with fitting performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, since differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Although smoothing or filtering can be applied to reduce the noise before differentiation, their potential to bias the component analysis has been only vaguely addressed. In this study, we investigated a new model function that can be fitted directly to the remanent magnetization curve, thereby avoiding the differentiation. The new model function provides a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability, and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. For the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, so that users are freed from the labor of providing initial guesses for the parameters, which also helps to reduce the subjectivity of the component analysis.
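A minimal unmixing sketch under stated assumptions: a two-component mixture of lognormal-shaped coercivity distributions is fitted to a synthetic, noisy gradient curve with scipy. The lognormal end-member stands in for whichever model distribution is chosen; the paper's new model distribution is not reproduced here, and all data and starting guesses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(b, mu, sigma):
    return np.exp(-(np.log(b) - mu) ** 2 / (2 * sigma**2)) \
        / (b * sigma * np.sqrt(2 * np.pi))

def mixture(b, w1, mu1, s1, w2, mu2, s2):
    return w1 * lognormal_pdf(b, mu1, s1) + w2 * lognormal_pdf(b, mu2, s2)

field = np.logspace(0, 3, 200)                    # applied field, mT
truth = mixture(field, 0.6, np.log(30), 0.4, 0.4, np.log(300), 0.3)
grad = truth + np.random.default_rng(1).normal(0, 2e-4, field.size)

p0 = [0.5, np.log(20), 0.5, 0.5, np.log(200), 0.5]  # rough initial guesses
popt, _ = curve_fit(mixture, field, grad, p0=p0, bounds=(1e-6, np.inf))
print("weights:", popt[0], popt[3])
print("median coercivities (mT):", np.exp(popt[1]), np.exp(popt[4]))
# Re-running the fit after adding synthetic noise is one way to check the
# reliability of the resolved components, as recommended above.
```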
NASA Astrophysics Data System (ADS)
Chen, Zhe; Parker, B. J.; Feng, D. D.; Fulton, R.
2004-10-01
In this paper, we compare various temporal analysis schemes applied to dynamic PET for improved quantification, image quality and temporal compression purposes. We compare an optimal sampling schedule (OSS) design, principal component analysis (PCA) applied in the image domain, and principal component analysis applied in the sinogram domain; for region-of-interest quantification, sinogram-domain PCA is combined with the Huesman algorithm to quantify from the sinograms directly without requiring reconstruction of all PCA channels. Using a simulated phantom FDG brain study and three clinical studies, we evaluate the fidelity of the compressed data for estimation of local cerebral metabolic rate of glucose by a four-compartment model. Our results show that using a noise-normalized PCA in the sinogram domain gives similar compression ratio and quantitative accuracy to OSS, but with substantially better precision. These results indicate that sinogram-domain PCA for dynamic PET can be a useful preprocessing stage for PET compression and quantification applications.
Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N
2018-02-01
The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so CCA was applied only to orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA, where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving the separations. The PCA Loadings did not highlight the most influential variables for each separation, whereas the ICA Loadings highlighted the same variables as did CCA. This study shows the potential of CCA for the extraction of pertinent information from a data matrix, using a procedure based on an original optimisation criterion, to produce results that are complementary, and in some cases may be superior, to those of PCA and ICA. Copyright © 2017 Elsevier B.V. All rights reserved.
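A rough sketch of the PCA/ICA part of the comparison, on a simulated stand-in for the SPME-GC-MS matrix. CCA (ComDim applied to single-variable blocks) is not available in scikit-learn and is omitted; the group structure, marker variables, and crude separation index are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_vocs = 60, 40
groups = np.repeat([0, 1, 2], n_samples // 3)      # three exposure groups
X = rng.normal(0, 1, (n_samples, n_vocs))
X[:, :5] += groups[:, None] * 2.0                  # 5 marker VOCs shift with group

Xs = StandardScaler().fit_transform(X)             # autoscaling
pca_scores = PCA(n_components=3).fit_transform(Xs)
ica_scores = FastICA(n_components=3, random_state=0).fit_transform(Xs)

for name, scores in [("PCA", pca_scores), ("ICA", ica_scores)]:
    # between-group variance over total variance, per extracted component
    means = np.array([scores[groups == g].mean(axis=0) for g in range(3)])
    sep = means.var(axis=0) / scores.var(axis=0)
    print(name, "separation index per component:", np.round(sep, 2))
```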
Karasawa, N; Mitsutake, A; Takano, H
2017-12-01
Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.
Stress analysis under component relative interference fit
NASA Technical Reports Server (NTRS)
Taylor, C. M.
1978-01-01
Finite-element computer program enables analysis of distortions and stresses occurring in components having relative interference. Program restricts itself to simple elements and axisymmetric loading situations. External inertial and thermal loads may be applied in addition to forces arising from interference conditions.
Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Bożek, Piotr
2018-03-01
The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a double differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb + Pb collisions at √(s_NN) = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v_2^2 and v_4, of v_2 v_3 and v_5, or of v_2^3, v_3^3, and v_6. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.
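A toy sketch of one such generalized correlation matrix and its principal component decomposition, mixing q_2^2 and q_4 flow vectors across pT bins. The event generator below stands in for hydrodynamic output; the coupling strength and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_bins = 5000, 6
eps = rng.normal(0.1, 0.03, n_events)                       # fluctuating v2 scale
phase = rng.uniform(0, 2 * np.pi, n_events)
q2 = eps[:, None] * np.linspace(0.5, 1.5, n_bins) * np.exp(2j * phase)[:, None]
q4 = 0.8 * q2**2 + 0.01 * (rng.standard_normal((n_events, n_bins))
                           + 1j * rng.standard_normal((n_events, n_bins)))

obs = np.hstack([q2**2, q4])                  # bins 0-5: q2^2, bins 6-11: q4
obs -= obs.mean(axis=0)
V = (obs.conj().T @ obs).real / n_events      # generalized correlation matrix

vals, vecs = np.linalg.eigh(V)
print("leading eigenvalues:", vals[::-1][:3])
print("leading mode (q2^2 block, q4 block):", np.round(vecs[:, -1], 2))
# A single dominant mode spanning both blocks signals the nonlinear coupling
# of v_2^2 with v_4, analogous to the cases listed in the abstract.
```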
Brackney, Larry; Parker, Andrew; Long, Nicholas; Metzger, Ian; Dean, Jesse; Lisell, Lars
2016-04-12
A building energy analysis system includes a building component library configured to store a plurality of building components, a modeling tool configured to access the building component library and create a building model of a building under analysis using building spatial data and using selected building components of the plurality of building components stored in the building component library, a building analysis engine configured to operate the building model and generate a baseline energy model of the building under analysis and further configured to apply one or more energy conservation measures to the baseline energy model in order to generate one or more corresponding optimized energy models, and a recommendation tool configured to assess the one or more optimized energy models against the baseline energy model and generate recommendations for substitute building components or modifications.
NASA Technical Reports Server (NTRS)
Goldstein, Arthur W; Alpert, Sumner; Beede, William; Kovach, Karl
1949-01-01
In order to understand the operation and the interaction of jet-engine components during engine operation and to determine how component characteristics may be used to compute engine performance, a method to analyze and to estimate performance of such engines was devised and applied to the study of the characteristics of a research turbojet engine built for this investigation. An attempt was made to correlate turbine performance obtained from engine experiments with that obtained by the simpler procedure of separately calibrating the turbine with cold air as a driving fluid in order to investigate the applicability of component calibration. The system of analysis was also applied to prediction of the engine and component performance with assumed modifications of the burner and bearing characteristics, to prediction of component and engine operation during engine acceleration, and to estimates of the performance of the engine and the components when the exhaust gas was used to drive a power turbine.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
NASA Technical Reports Server (NTRS)
Williams, D. L.; Borden, F. Y.
1977-01-01
Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.
ERIC Educational Resources Information Center
Mellard, Daryl F.; Anthony, Jason L.; Woods, Kari L.
2012-01-01
This study extends the literature on the component skills involved in oral reading fluency. Dominance analysis was applied to assess the relative importance of seven reading-related component skills in the prediction of the oral reading fluency of 272 adult literacy learners. The best predictors of oral reading fluency when text difficulty was…
Model reduction by weighted Component Cost Analysis
NASA Technical Reports Server (NTRS)
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
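A minimal sketch of the component cost computation for a white-noise-driven linear system, assuming a toy two-mode mechanical model: the steady-state covariance comes from a Lyapunov equation, and each state's contribution to the quadratic cost is read off the diagonal. States with the smallest costs are the candidates for deletion in model reduction.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# toy two-mode mechanical system dx = A x dt + B dw (values are illustrative)
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-1.0, -0.02, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, -25.0, -0.5]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])
Q = np.eye(4)                                  # quadratic cost E[x^T Q x]

# steady-state covariance X solves A X + X A^T + B B^T = 0
X = solve_continuous_lyapunov(A, -B @ B.T)
component_costs = np.diag(Q @ X)               # per-state cost contribution
print("component costs:", np.round(component_costs, 3))
print("delete first:", np.argsort(component_costs)[0])
```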
Token Economy: A Systematic Review of Procedural Descriptions.
Ivy, Jonathan W; Meindl, James N; Overley, Eric; Robson, Kristen M
2017-09-01
The token economy is a well-established and widely used behavioral intervention. A token economy comprises six procedural components: the target response(s), a token that functions as a conditioned reinforcer, backup reinforcers, and three interconnected schedules of reinforcement. Despite decades of applied research, the extent to which the procedures of a token economy are described in complete and replicable detail has not been evaluated. Given the inherent complexity of a token economy, an analysis of the procedural descriptions may benefit future token economy research and practice. Articles published between 2000 and 2015 that included implementation of a token economy within an applied setting were identified and reviewed with a focus on evaluating the thoroughness of procedural descriptions. The results show that token economy components are regularly omitted or described in vague terms. Of the articles included in this analysis, only 19% (18 of 96 articles reviewed) included replicable and complete descriptions of all primary components. Missing or vague component descriptions could negatively affect future research or applied practice. Recommendations are provided to improve component descriptions.
Concept analysis of culture applied to nursing.
Marzilli, Colleen
2014-01-01
Culture is an important concept, especially when applied to nursing. A concept analysis of culture is essential to understanding the meaning of the word. This article applies Rodgers' (2000) concept analysis template and provides a definition of the word culture as it applies to nursing practice. This article supplies examples of the concept of culture to aid the reader in understanding its application to nursing and includes a case study demonstrating components of culture that must be respected and included when providing health care.
Classification of fMRI resting-state maps using machine learning techniques: A comparative study
NASA Astrophysics Data System (ADS)
Gallos, Ioannis; Siettos, Constantinos
2017-11-01
We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial Independent Component Analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machine (SVM) and k-NN algorithms were used to evaluate the performance of each embedding in classifying between the two groups.
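A sketch of this pipeline on synthetic data: simulated component time courses stand in for the spatial ICA step, cross-correlation features are embedded by PCA or ISOMAP, and an SVM is scored by cross-validation. All sizes, labels, and the planted group difference are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_ics = 40, 10
labels = np.repeat([0, 1], n_subjects // 2)
features = []
for y in labels:
    ts = rng.standard_normal((200, n_ics))         # component time courses
    if y == 1:                                     # patients: two ICs co-fluctuate
        ts[:, 1] = 0.7 * ts[:, 0] + 0.3 * ts[:, 1]
    c = np.corrcoef(ts.T)
    features.append(c[np.triu_indices(n_ics, k=1)])  # upper-triangle features
X = np.array(features)

for name, reducer in [("PCA", PCA(n_components=5)),
                      ("ISOMAP", Isomap(n_components=5, n_neighbors=8))]:
    Z = reducer.fit_transform(X)
    acc = cross_val_score(SVC(kernel="rbf"), Z, labels, cv=5).mean()
    print(f"{name} + SVM accuracy: {acc:.2f}")
```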
NASA Astrophysics Data System (ADS)
Lim, Hoong-Ta; Murukeshan, Vadakke Matham
2017-06-01
Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. A confidence ellipse was then applied to the principal components of each sample and used as the classification criterion. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
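A minimal sketch of confidence-ellipse classification in a two-dimensional principal component space, assuming synthetic "spectra" in place of probe data: each class region is the 95% ellipse of its score covariance, and membership is tested with the squared Mahalanobis distance against a chi-squared cutoff.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# toy "spectra" for two color classes (illustrative stand-ins)
classA = rng.normal(0, 1, (50, 100)) + np.linspace(0, 2, 100)
classB = rng.normal(0, 1, (50, 100)) + np.linspace(2, 0, 100)
X = np.vstack([classA, classB])

pca = PCA(n_components=2).fit(X)
scores = {"A": pca.transform(classA), "B": pca.transform(classB)}

def in_ellipse(point, class_scores, level=0.95):
    """True if point lies inside the class's confidence ellipse."""
    mu = class_scores.mean(axis=0)
    cov = np.cov(class_scores.T)
    d2 = (point - mu) @ np.linalg.inv(cov) @ (point - mu)  # squared Mahalanobis
    return d2 <= chi2.ppf(level, df=2)

test = pca.transform(rng.normal(0, 1, (1, 100)) + np.linspace(0, 2, 100))[0]
for label, s in scores.items():
    print(label, in_ellipse(test, s))
```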
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. T. Clark; M. J. Russell; R. E. Spears
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
Multivariate Statistical Analysis of MSL APXS Bulk Geochemical Data
NASA Astrophysics Data System (ADS)
Hamilton, V. E.; Edwards, C. S.; Thompson, L. M.; Schmidt, M. E.
2014-12-01
We apply cluster and factor analyses to bulk chemical data of 130 soil and rock samples measured by the Alpha Particle X-ray Spectrometer (APXS) on the Mars Science Laboratory (MSL) rover Curiosity through sol 650. Multivariate approaches such as principal components analysis (PCA), cluster analysis, and factor analysis complement more traditional approaches (e.g., Harker diagrams), with the advantage of simultaneously examining the relationships between multiple variables for large numbers of samples. Principal components analysis has been applied with success to APXS, Pancam, and Mössbauer data from the Mars Exploration Rovers. Factor analysis and cluster analysis have been applied with success to thermal infrared (TIR) spectral data of Mars. Cluster analyses group the input data by similarity, where there are a number of different methods for defining similarity (hierarchical, density, distribution, etc.). For example, without any assumptions about the chemical contributions of surface dust, preliminary hierarchical and K-means cluster analyses clearly distinguish the physically adjacent rock targets Windjana and Stephen as being distinctly different from the lithologies observed prior to Curiosity's arrival at The Kimberley. In addition, they are separated from each other, consistent with chemical trends observed in variation diagrams but without requiring assumptions about chemical relationships. We will discuss the variation in cluster analysis results as a function of clustering method and pre-processing (e.g., log transformation, correction for dust cover) and implications for interpreting chemical data. Factor analysis shares some similarities with PCA, and examines the variability among observed components of a dataset so as to reveal variations attributable to unobserved components. Factor analysis has been used to extract the TIR spectra of components that are typically observed in mixtures and only rarely in isolation; there is the potential for similar results with data from APXS. These techniques offer new ways to understand the chemical relationships between the materials interrogated by Curiosity, and potentially their relation to materials observed by APXS instruments on other landed missions.
Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task
NASA Technical Reports Server (NTRS)
Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.
1978-01-01
Color enhancement techniques were applied to LACIE LANDSAT segments to determine whether such enhancement can assist analysis in crop identification. The procedure involved increasing the color range by removing correlation between components. First, a principal component transformation was performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower order components to reduce color speckle in the enhanced products. The use of single-acquisition and multiple-acquisition statistics to control the enhancement was compared, and the effects of normalization were investigated. Evaluation is left to LACIE personnel.
Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun
2015-11-04
There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error, and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
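A sketch of the DPCA selection rule on synthetic data. In the eigenbasis of the pooled covariance, each eigenvector's contribution to the squared Mahalanobis distance between the two group means is delta_i^2 / lambda_i, so components can be ranked by discrimination instead of variance. The data sizes and the planted group difference below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
X0 = rng.standard_normal((n, p))
X1 = rng.standard_normal((n, p))
X0[:, 0] *= 3.0; X1[:, 0] *= 3.0      # high-variance axis with no group effect
X1[:, 7] += 0.8                       # group difference on a low-variance axis
X = np.vstack([X0, X1])
X -= X.mean(axis=0)

evals, evecs = np.linalg.eigh(np.cov(X.T))
order = np.argsort(evals)[::-1]       # sort by variance, largest first
evals, evecs = evals[order], evecs[:, order]

delta = (X1.mean(axis=0) - X0.mean(axis=0)) @ evecs
contrib = delta**2 / evals            # Mahalanobis contribution per eigenvector
print("distance carried by top-variance PCs:", np.round(contrib[:5], 2))
print("PCs ranked by discrimination:", np.argsort(contrib)[::-1][:5])
# DPCA keeps the top of the second ranking, which need not coincide with the
# largest-eigenvalue components that ordinary PCA would keep.
```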
NASA Technical Reports Server (NTRS)
Sopher, R.; Hallock, D. W.
1985-01-01
A time history analysis for rotorcraft dynamics based on dynamical substructures, and nonstructural mathematical and aerodynamic components is described. The analysis is applied to predict helicopter ground resonance and response to rotor damage. Other applications illustrate the stability and steady vibratory response of stopped and gimballed rotors, representative of new technology. Desirable attributes expected from modern codes are realized, although the analysis does not employ a complete set of techniques identified for advanced software. The analysis is able to handle a comprehensive set of steady state and stability problems with a small library of components.
Opportunities for Applied Behavior Analysis in the Total Quality Movement.
ERIC Educational Resources Information Center
Redmon, William K.
1992-01-01
This paper identifies critical components of recent organizational quality improvement programs and specifies how applied behavior analysis can contribute to quality technology. Statistical Process Control and Total Quality Management approaches are compared, and behavior analysts are urged to build their research base and market behavior change…
Principal Component Analysis: Resources for an Essential Application of Linear Algebra
ERIC Educational Resources Information Center
Pankavich, Stephen; Swanson, Rebecca
2015-01-01
Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on the support vector machine (SVM) was proposed for gas mixtures. The kernel function in SVM was used to map the seriously overlapping absorption spectra into a high-dimensional space in which, after transformation, the data could be processed as in the original space; a regression calibration model was thus established and then applied to analyze the concentration of each component gas. It was also shown that the regression calibration model with SVM can be used for component recognition of the gas mixture. The method was applied to the analysis of different data samples. Factors that affect the model, such as the scan interval, the wavelength range, the kernel function, and the penalty coefficient C, are discussed. Experimental results show that the maximal mean absolute error of the component concentration is 0.132%, and the component recognition accuracy is higher than 94%. The problems of overlapping absorption spectra, of using the same method for both qualitative and quantitative analysis, and of a limited number of training samples were solved. The method could be used in other infrared spectrum analyses of gas mixtures, and has promising theoretical and practical value.
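A hedged sketch of SVM regression calibration on overlapping bands, with two synthetic Gaussian absorption features standing in for the component gases. The kernel choice and the penalty coefficient C echo the factors discussed above, but their values here are arbitrary.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
wavenumber = np.linspace(0, 1, 120)
band1 = np.exp(-((wavenumber - 0.45) / 0.08) ** 2)   # component 1 absorption
band2 = np.exp(-((wavenumber - 0.55) / 0.08) ** 2)   # overlapping component 2

conc = rng.uniform(0, 1, (300, 2))                   # true concentrations
spectra = conc @ np.vstack([band1, band2]) + rng.normal(0, 0.01, (300, 120))

Xtr, Xte, ytr, yte = train_test_split(spectra, conc[:, 0], random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(Xtr, ytr)
err = np.abs(model.predict(Xte) - yte)
print(f"mean absolute error for component 1: {err.mean():.4f}")
```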
González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio
2015-03-01
A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
Exergo-Economic Analysis of an Experimental Aircraft Turboprop Engine Under Low Torque Condition
NASA Astrophysics Data System (ADS)
Atilgan, Ramazan; Turan, Onder; Aydin, Hakan
Exergo-economic analysis is a unique combination of exergy analysis and cost analysis conducted at the component level. In exergo-economic analysis, the cost of each exergy stream is determined, and the inlet and outlet exergy streams of each component are associated with a monetary cost. This is essential to detect cost-ineffective processes and identify technical options that could improve the cost effectiveness of the overall energy system. In this study, an exergo-economic analysis is applied to an aircraft turboprop engine. The analysis is based on experimental values at a low torque condition (240 N m). The main components of the investigated turboprop engine are the compressor, the combustor, the gas generator turbine, the free power turbine, and the exhaust. Cost balance equations have been formed for each component individually, and exergo-economic parameters, including cost rates and unit exergy costs, have been calculated for each component.
The Natural Helmholtz-Hodge Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatia, H.
nHHD is a C++ library to decompose a flow field into three components exhibiting specific types of behaviors. These components allow more targeted analysis of flow behavior and can be applied to a variety of application areas.
NASA Astrophysics Data System (ADS)
Xie, Hong-Bo; Dokos, Socrates
2013-06-01
We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
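The CTM itself is simple to compute: on a second-order difference plot, it is the fraction of successive-difference pairs falling within a radius r. The sketch below, contrasting a smooth deterministic series with white noise of the same scale, is an illustration only; the SPCA de-noising step is not reproduced, and the series and radius are invented.

```python
import numpy as np

def ctm(x, r):
    d = np.diff(x)                     # first differences x(n+1) - x(n)
    dx, dy = d[:-1], d[1:]             # consecutive difference pairs
    return np.mean(np.sqrt(dx**2 + dy**2) < r)

rng = np.random.default_rng(0)
x = np.sin(0.05 * np.arange(5000))      # smooth deterministic series
noise = rng.normal(0.0, x.std(), 5000)  # stochastic series of the same scale

r = 0.3 * x.std()
print("CTM, deterministic series:", round(ctm(x, r), 3))
print("CTM, white noise:         ", round(ctm(noise, r), 3))
```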
Stress Analysis of B-52B and B-52H Air-Launching Systems Failure-Critical Structural Components
NASA Technical Reports Server (NTRS)
Ko, William L.
2005-01-01
The operational life analysis of any airborne failure-critical structural component requires the stress-load equation, which relates the applied load to the maximum tangential tensile stress at the critical stress point. The failure-critical structural components identified are the B-52B Pegasus pylon adapter shackles, B-52B Pegasus pylon hooks, B-52H airplane pylon hooks, B-52H airplane front fittings, B-52H airplane rear pylon fitting, and the B-52H airplane pylon lower sway brace. Finite-element stress analysis was performed on the said structural components, and the critical stress point was located and the stress-load equation was established for each failure-critical structural component. The ultimate load, yield load, and proof load needed for operational life analysis were established for each failure-critical structural component.
NASA Technical Reports Server (NTRS)
Klein, L. R.
1974-01-01
The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.
Independent component analysis decomposition of hospital emergency department throughput measures
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Henry
2016-05-01
We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as median times patients spent before they were admitted as an inpatient, before they were sent home, before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of the conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
Multivariate analysis for scanning tunneling spectroscopy data
NASA Astrophysics Data System (ADS)
Yamanishi, Junsuke; Iwase, Shigeru; Ishida, Nobuyuki; Fujita, Daisuke
2018-01-01
We applied principal component analysis (PCA) to two-dimensional tunneling spectroscopy (2DTS) data obtained on a Si(111)-(7 × 7) surface to explore the effectiveness of multivariate analysis for interpreting 2DTS data. We demonstrated that several components that originated mainly from specific atoms at the Si(111)-(7 × 7) surface can be extracted by PCA. Furthermore, we showed that hidden components in the tunneling spectra can be decomposed (peak separation), which is difficult to achieve with normal 2DTS analysis without the support of theoretical calculations. Our analysis showed that multivariate analysis can be an additional powerful way to analyze 2DTS data and extract hidden information from a large amount of spectroscopic data.
Relaxation mode analysis of a peptide system: comparison with principal component analysis.
Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi
2011-10-28
This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, principal component analysis is a well-known method for analyzing the static properties of the structural fluctuations obtained by a simulation and for classifying the structures into groups. On the other hand, relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in the gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.
Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam
2011-05-01
The predictive accuracy of different chemometric methods was compared when applied to ordinary UV spectra and to first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formulation containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of derivative spectra to resolve the overlapping spectra of chlorpheniramine maleate was evaluated when multivariate methods are adopted for the analysis of two-component mixtures without any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of the real samples when calibration models built from derivative spectra were used. It should also be mentioned that the proposed method is simple and rapid, requires no preliminary separation steps, and can easily be used for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.
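A sketch of PCR, PLS1, and PLS2 calibration from first-derivative spectra, assuming simulated overlapping bands in place of the real PDP and GFN spectra; band positions, concentration ranges, and the number of latent variables are illustrative guesses.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wl = np.linspace(220, 320, 101)
s_pdp = np.exp(-((wl - 257) / 12) ** 2)            # stand-in PDP band
s_gfn = np.exp(-((wl - 274) / 14) ** 2)            # overlapping stand-in GFN band
C = rng.uniform(5, 50, (80, 2))                    # concentrations, mg/L
A = C @ np.vstack([s_pdp, s_gfn]) / 50 + rng.normal(0, 0.002, (80, 101))
dA = np.gradient(A, wl, axis=1)                    # first-derivative spectra

pcr = make_pipeline(PCA(n_components=4), LinearRegression()).fit(dA, C[:, 0])
pls1 = PLSRegression(n_components=4).fit(dA, C[:, 0])
pls2 = PLSRegression(n_components=4).fit(dA, C)    # both analytes at once

print("PCR  R^2 (PDP):", round(pcr.score(dA, C[:, 0]), 4))
print("PLS1 R^2 (PDP):", round(pls1.score(dA, C[:, 0]), 4))
print("PLS2 R^2 (both):", round(pls2.score(dA, C), 4))
```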
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
Independent component analysis for automatic note extraction from musical trills
NASA Astrophysics Data System (ADS)
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
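A toy version of this source-separation idea fits in a few lines: two synthetic "notes" are mixed into two channels and unmixed with FastICA. The waveforms, frequencies, and mixing weights are arbitrary choices, and fun="cube" (a kurtosis-style contrast) is used since both sources here are sub-Gaussian.

    import numpy as np
    from sklearn.decomposition import FastICA

    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    note_a = np.sin(2 * np.pi * 440 * t)            # A4 sine
    note_b = np.sign(np.sin(2 * np.pi * 523 * t))   # C5 square wave

    # Two "microphone" mixtures with invented weights
    mix = np.c_[0.6 * note_a + 0.4 * note_b,
                0.3 * note_a + 0.7 * note_b]

    ica = FastICA(n_components=2, fun="cube", random_state=0, max_iter=1000)
    recovered = ica.fit_transform(mix)

    # Each recovered column should correlate strongly with one original note
    for src in (note_a, note_b):
        print(np.abs(np.corrcoef(recovered.T, src)[-1, :2]).max())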
ERIC Educational Resources Information Center
Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad
2017-01-01
This study aims to apply the principal component analysis model to reduce multicollinearity among the currency exchange rates of eight Asian countries against the US Dollar, including the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia) and Dollar (Singapore). It looks at yield…
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in the setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
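The ANOVA computation sketched in the note can be reproduced for the balanced one-factor case in a few lines of NumPy; the patient/fraction counts and the true sigmas below are invented for the simulation, not taken from the note.

    import numpy as np

    rng = np.random.default_rng(2)
    n_patients, n_fractions = 20, 5        # invented balanced design
    sigma_sys, sigma_rand = 2.0, 1.0       # "true" values for the simulation

    patient_shift = rng.normal(0, sigma_sys, size=(n_patients, 1))
    errors = patient_shift + rng.normal(0, sigma_rand, (n_patients, n_fractions))

    patient_means = errors.mean(axis=1)
    ms_between = n_fractions * patient_means.var(ddof=1)
    ms_within = ((errors - patient_means[:, None]) ** 2).sum() / (
        n_patients * (n_fractions - 1))

    # E[MS_between] = sigma_rand^2 + n_fractions * sigma_sys^2
    # E[MS_within]  = sigma_rand^2
    var_random = ms_within
    var_systematic = max((ms_between - ms_within) / n_fractions, 0.0)
    print(np.sqrt(var_systematic), np.sqrt(var_random))  # estimated sigmas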
Machine learning of frustrated classical spin models. I. Principal component analysis
NASA Astrophysics Data System (ADS)
Wang, Ce; Zhai, Hui
2017-10-01
This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
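The benchmark idea can be sketched crudely: generate planar-spin configurations at a low and a high "temperature" and check that the leading principal components separate ordered from disordered samples. The sampling below is a toy stand-in, not a real Monte Carlo simulation of the frustrated XY model.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(8)
    n_sites = 100

    def configs(temp, n=200):
        # Angles cluster near 0 when "cold" and are nearly uniform when "hot"
        theta = rng.normal(0, temp, size=(n, n_sites)) % (2 * np.pi)
        return np.hstack([np.cos(theta), np.sin(theta)])  # both spin components

    X = np.vstack([configs(0.2), configs(50.0)])
    scores = PCA(n_components=2).fit_transform(X)
    # Ordered and disordered samples form distinct clusters in PC space
    print(scores[:200].mean(axis=0), scores[200:].mean(axis=0))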
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on the analysis of eigenvalues and eigenvectors, the principal component analysis has the purpose of identifying the subspace of the main components from a set of parameters, which is enough to characterize the whole set of parameters. Interpreting the data for analysis as a cloud of points, we find through geometrical transformations the directions in which the cloud's dispersion is maximal--the lines that pass through the cloud's center of weight and have a maximal density of points around them (defined by an appropriate criterion function and its minimization). This method can be successfully used to simplify the statistical analysis of questionnaires--because it helps us select from a set of items only the most relevant ones, those that cover the variations of the whole set of data. For instance, in the presented sample we started from a questionnaire with 28 items and, applying the principal component analysis, we identified 7 principal components--or main items--which simplifies the further statistical analysis of the data significantly.
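A minimal sketch of the questionnaire use case, assuming standardized items and the common eigenvalue-greater-than-one retention rule (the authors' own criterion function may differ); the 300 respondents and their answers are simulated.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    answers = rng.integers(1, 6, size=(300, 28)).astype(float)  # 28 items

    z = StandardScaler().fit_transform(answers)
    pca = PCA().fit(z)
    eigenvalues = pca.explained_variance_
    n_keep = int((eigenvalues > 1.0).sum())   # Kaiser rule, an assumption here
    print(n_keep, pca.explained_variance_ratio_[:n_keep].sum())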
Investigation of domain walls in PPLN by confocal raman microscopy and PCA analysis
NASA Astrophysics Data System (ADS)
Shur, Vladimir Ya.; Zelenovskiy, Pavel; Bourson, Patrice
2017-07-01
Confocal Raman microscopy (CRM) is a powerful tool for the investigation of ferroelectric domains. Mechanical stresses and electric fields existing in the vicinity of neutral and charged domain walls modify the frequency, intensity and width of spectral lines [1], thus allowing visualization of micro- and nanodomain structures both at the surface and in the bulk of the crystal [2,3]. Stresses and fields are naturally coupled in ferroelectrics due to the inverse piezoelectric effect and can hardly be separated in Raman spectra. PCA is a powerful statistical method for the analysis of large data matrices, providing a set of orthogonal variables called principal components (PCs). PCA is widely used for the classification of experimental data, for example, in crystallization experiments, for the detection of small amounts of components in solid mixtures, etc. [4,5]. In Raman spectroscopy, PCA was applied to the analysis of phase transitions and provided the critical pressure with good accuracy [6]. In the present work we applied, for the first time, the Principal Component Analysis (PCA) method to the analysis of Raman spectra measured in periodically poled lithium niobate (PPLN). We found that the principal components demonstrate different sensitivity to mechanical stresses and electric fields in the vicinity of the domain walls. This allowed us to separately visualize the spatial distributions of mechanical stresses and electric fields at the surface and in the bulk of PPLN.
Mjørud, Marit; Kirkevold, Marit; Røsvik, Janne; Engedal, Knut
2014-01-01
To investigate which factors the Quality of Life in Late-Stage Dementia (QUALID) scale holds when used among people with dementia (pwd) in nursing homes and to find out how the symptom load varies across the different severity levels of dementia. We included 661 pwd [mean age ± SD, 85.3 ± 8.6 years; 71.4% women]. The QUALID and the Clinical Dementia Rating (CDR) scale were applied. A principal component analysis (PCA) with varimax rotation and Kaiser normalization was applied to test the factor structure. Nonparametric analyses were applied to examine differences in symptom load across the three CDR groups. The mean QUALID score was 21.5 (±7.1), and the CDR scores of the three groups were 1 in 22.5%, 2 in 33.6% and 3 in 43.9%. The results of the statistical measures employed were the following: Cronbach's α of QUALID, 0.74; Bartlett's test of sphericity, p < 0.001; the Kaiser-Meyer-Olkin measure, 0.77. The PCA resulted in three components accounting for 53% of the variance. The first component was 'tension' ('facial expression of discomfort', 'appears physically uncomfortable', 'verbalization suggests discomfort', 'being irritable and aggressive', 'appears calm', Cronbach's α = 0.69), the second was 'well-being' ('smiles', 'enjoys eating', 'enjoys touching/being touched', 'enjoys social interaction', Cronbach's α = 0.62) and the third was 'sadness' ('appears sad', 'cries', 'facial expression of discomfort', Cronbach's α = 0.65). The mean scores on the components 'tension' and 'well-being' increased significantly with increasing severity levels of dementia. Three components of quality of life (qol) were identified. Qol decreased with increasing severity of dementia. © 2013 S. Karger AG, Basel.
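Varimax rotation, used above to obtain interpretable components, is easy to implement directly; the function below is a generic textbook implementation in NumPy, not the statistical package used in the study.

    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Rotate a p x k loading matrix to maximize the varimax criterion."""
        p, k = loadings.shape
        rotation = np.eye(k)
        var_old = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            target = rotated ** 3 - (gamma / p) * rotated @ np.diag(
                (rotated ** 2).sum(axis=0))
            u, s, vt = np.linalg.svd(loadings.T @ target)
            rotation = u @ vt
            if s.sum() < var_old * (1 + tol):
                break
            var_old = s.sum()
        return loadings @ rotation

    # Usage on a fitted PCA (loadings = eigenvectors scaled by sqrt eigenvalues):
    # rotated = varimax(pca.components_.T * np.sqrt(pca.explained_variance_))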
Ranking and averaging independent component analysis by reproducibility (RAICAR).
Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping
2008-06-01
Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
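The core RAICAR loop can be sketched under simplifying assumptions: components are aligned by absolute correlation against the first run only, and reproducibility is taken as the mean best correlation. The published algorithm aligns all run pairs and also averages the aligned components, which this sketch omits.

    import numpy as np
    from sklearn.decomposition import FastICA

    def raicar_rank(data, n_components=5, n_runs=10):
        """Rank ICA components of `data` by cross-run reproducibility."""
        runs = [FastICA(n_components=n_components, random_state=seed,
                        max_iter=500).fit_transform(data)
                for seed in range(n_runs)]
        ref = runs[0]
        scores = []
        for comp in ref.T:
            # Best absolute correlation of this component in every other run
            best = [max(abs(np.corrcoef(comp, other)[0, 1]) for other in run.T)
                    for run in runs[1:]]
            scores.append(np.mean(best))
        order = np.argsort(scores)[::-1]          # most reproducible first
        return ref[:, order], np.asarray(scores)[order]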
NASA Technical Reports Server (NTRS)
Feldman, Sandra C.
1987-01-01
Methods of applying principal component (PC) analysis to high resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique for PC analysis used as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computer intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.
Revealing the underlying drivers of disaster risk: a global analysis
NASA Astrophysics Data System (ADS)
Peduzzi, Pascal
2017-04-01
Disaster events are perfect examples of compound events. Disaster risk lies at the intersection of several independent components such as hazard, exposure and vulnerability. Understanding the weight of each component requires extensive standardisation. Here, I show how footprints of past disastrous events were generated using GIS modelling techniques and used for extracting population and economic exposures based on distribution models. Using past event losses, it was possible to identify and quantify a wide range of socio-politico-economic drivers associated with human vulnerability. The analysis was applied to about nine thousand individual past disastrous events covering earthquakes, floods and tropical cyclones. Using a multiple regression analysis on these individual events, it was possible to quantify each risk component and assess how vulnerability is influenced by various hazard intensities. The results show that hazard intensity, exposure, poverty, governance as well as other underlying factors (e.g. remoteness) can explain the magnitude of past disasters. Analysis was also performed to highlight the role of future trends in population and climate change and how these may impact exposure to tropical cyclones in the future. GIS models combined with statistical multiple regression analysis provided a powerful methodology to identify, quantify and model disaster risk taking into account its various components. The same methodology can be applied to various types of risk at local to global scales. This method was applied and developed for the Global Risk Analysis of the Global Assessment Report on Disaster Risk Reduction (GAR). It was first applied to mortality risk in GAR 2009 and GAR 2011. New models, ranging from global asset exposure to global flood hazard models, were also recently developed to improve the resolution of the risk analysis and applied through the CAPRA software to provide probabilistic economic risk assessments such as Average Annual Losses (AAL) and Probable Maximum Losses (PML) in GAR 2013 and GAR 2015. In parallel, similar methodologies were developed to highlight the role of ecosystems for Climate Change Adaptation (CCA) and Disaster Risk Reduction (DRR). New developments may include slow hazards (e.g. soil degradation and droughts) and natech hazards (by intersecting with georeferenced critical infrastructures). The various global hazard, exposure and risk models can be visualized and downloaded through the PREVIEW Global Risk Data Platform.
NASA Astrophysics Data System (ADS)
Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.
2017-06-01
The work examined the distribution of combed wool fabrics destined for the manufacturing of external articles of clothing in terms of the values of durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method for the multivariate analysis of multi-dimensional data, and aims to reduce, in a controlled way, the number of variables (columns) of the data matrix to as few as two or three. Therefore, based on the information about each group/assortment of fabrics, it is desired to have, instead of nine inter-correlated variables, only two or three new variables called components. The goal of PCA is to extract the smallest number of components which recover most of the total information contained in the initial data.
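The nine-to-two/three reduction described here corresponds to fitting a PCA and reading off the cumulative explained variance. In the sketch below, the nine "fabric indices" are simulated from two latent qualities, so two or three components recover most of the information; all numbers are invented.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    latent = rng.normal(size=(60, 2))                 # two hidden "qualities"
    indices = latent @ rng.normal(size=(2, 9)) + 0.2 * rng.normal(size=(60, 9))

    pca = PCA(n_components=3).fit(StandardScaler().fit_transform(indices))
    print(np.cumsum(pca.explained_variance_ratio_))   # information recovered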
Artifacts and noise removal in electrocardiograms using independent component analysis.
Chawla, M P S; Verma, H K; Kumar, Vinod
2008-09-26
Independent component analysis (ICA) is a novel technique capable of separating independent components from complex electrocardiogram (ECG) signals. The purpose of this analysis is to evaluate the effectiveness of ICA in removing artifacts and noise from ECG recordings. ICA is applied to remove artifacts and noise in ECG segments of either an individual ECG CSE database file or all files. The reconstructed ECGs are compared with the original ECG signal. For the four special cases discussed, the R-peak magnitudes of the CSE database ECG waveforms before and after applying ICA are also found. In the results, it is shown that in most of the cases the percentage error in reconstruction is very small. The results show that there is a significant improvement in signal quality, i.e. SNR. All the ECG recording cases dealt with showed an improved ECG appearance after the use of ICA. This establishes the efficacy of ICA in the elimination of noise and artifacts in electrocardiograms.
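A common way to realize such ICA-based cleaning is to zero out the artifact sources and invert the transform. The sketch below is a generic recipe under that assumption, with the artifact chosen by a highest-kurtosis heuristic rather than the paper's actual selection criterion.

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def remove_artifact(ecg_channels):
        """ecg_channels: (n_samples, n_leads) array; returns cleaned signals."""
        ica = FastICA(n_components=ecg_channels.shape[1], random_state=0)
        sources = ica.fit_transform(ecg_channels)
        # Heuristic: the spikiest (highest-kurtosis) source is the artifact
        worst = np.argmax(kurtosis(sources, axis=0))
        sources[:, worst] = 0.0
        return ica.inverse_transform(sources)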
NASA Astrophysics Data System (ADS)
Luce, R.; Hildebrandt, P.; Kuhlmann, U.; Liesen, J.
2016-09-01
The key challenge of time-resolved Raman spectroscopy is the identification of the constituent species and the analysis of the kinetics of the underlying reaction network. In this work we present an integral approach that allows for determining both the component spectra and the rate constants simultaneously from a series of vibrational spectra. It is based on an algorithm for non-negative matrix factorization which is applied to the experimental data set following a few pre-processing steps. As a prerequisite for physically unambiguous solutions, each component spectrum must include one vibrational band that does not significantly interfere with vibrational bands of other species. The approach is applied to synthetic "experimental" spectra derived from model systems comprising a set of species with component spectra differing with respect to their degree of spectral interferences and signal-to-noise ratios. In each case, the species involved are connected via monomolecular reaction pathways. The potential and limitations of the approach for recovering the respective rate constants and component spectra are discussed.
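The factorization step (without the kinetic fitting) can be sketched with scikit-learn's NMF on synthetic spectra: a two-species A to B system whose component spectra each contain one non-interfering band, as the method requires. Band positions, kinetics, and noise are invented.

    import numpy as np
    from sklearn.decomposition import NMF

    wn = np.linspace(0, 1, 300)                        # "wavenumber" axis
    band = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)
    spec_a, spec_b = band(0.3, 0.02), band(0.7, 0.02)  # isolated marker bands

    t = np.linspace(0, 5, 50)[:, None]
    conc = np.hstack([np.exp(-t), 1 - np.exp(-t)])     # A -> B kinetics
    V = conc @ np.vstack([spec_a, spec_b]) + 1e-3      # non-negative data

    model = NMF(n_components=2, init="nndsvd", max_iter=1000)
    W = model.fit_transform(V)                          # recovered kinetics
    H = model.components_                               # recovered spectra
    print(model.reconstruction_err_)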
Qin, Kunming; Wang, Bin; Li, Weidong; Cai, Hao; Chen, Danni; Liu, Xiao; Yin, Fangzhou; Cai, Baochang
2015-05-01
In traditional Chinese medicine, raw and processed herbs are used to treat different diseases. Suitable quality assessment methods are crucial for the discrimination between raw and processed herbs. The dried fruits of Arctium lappa L. and their processed products are widely used in traditional Chinese medicine, yet their therapeutic effects differ. In this study, a novel strategy using high-performance liquid chromatography and diode array detection coupled with multivariate statistical analysis to rapidly explore raw and processed Arctium lappa L. was proposed and validated. Four main components in a total of 30 batches of raw and processed Fructus Arctii samples were analyzed, and ten characteristic peaks were identified in the fingerprint common pattern. Furthermore, similarity evaluation, principal component analysis, and hierarchical cluster analysis were applied to demonstrate the distinction. The results suggested that the relative amounts of the chemical components of raw and processed Fructus Arctii samples are different. This new method has been successfully applied to detect the raw and processed Fructus Arctii in marketed herbal medicinal products. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
CO Component Estimation Based on the Independent Component Analysis
NASA Astrophysics Data System (ADS)
Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo
2014-01-01
Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables by principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
Mueller, Daniela; Ferrão, Marco Flôres; Marder, Luciano; da Costa, Adilson Ben; de Cássia de Souza Schneider, Rosana
2013-01-01
The main objective of this study was to use infrared spectroscopy to identify vegetable oils used as raw material for biodiesel production and to apply multivariate analysis to the data. Six different vegetable oil sources (canola, cotton, corn, palm, sunflower and soybeans) were used to produce biodiesel batches. The spectra were acquired by Fourier transform infrared spectroscopy using a universal attenuated total reflectance sensor (FTIR-UATR). For the multivariate analysis, principal component analysis (PCA), hierarchical cluster analysis (HCA), interval principal component analysis (iPCA) and soft independent modeling of class analogy (SIMCA) were used. The results indicate that it is possible to develop a methodology to identify vegetable oils used as raw material in the production of biodiesel by FTIR-UATR applying multivariate analysis. It was also observed that the iPCA found the best spectral range for the separation of biodiesel batches using FTIR-UATR data, and with this result, the SIMCA method classified 100% of the soybean biodiesel samples. PMID:23539030
Histogram contrast analysis and the visual segregation of IID textures.
Chubb, C; Econopouly, J; Landy, M S
1994-09-01
A new psychophysical methodology is introduced, histogram contrast analysis, that allows one to measure stimulus transformations, f, used by the visual system to draw distinctions between different image regions. The method involves the discrimination of images constructed by selecting texture micropatterns randomly and independently (across locations) on the basis of a given micropattern histogram. Different components of f are measured by use of different component functions to modulate the micropattern histogram until the resulting textures are discriminable. When no discrimination threshold can be obtained for a given modulating component function, a second titration technique may be used to measure the contribution of that component to f. The method includes several strong tests of its own assumptions. An example is given of the method applied to visual textures composed of small, uniform squares with randomly chosen gray levels. In particular, for a fixed mean gray level mu and a fixed gray-level variance sigma 2, histogram contrast analysis is used to establish that the class S of all textures composed of small squares with jointly independent, identically distributed gray levels with mean mu and variance sigma 2 is perceptually elementary in the following sense: there exists a single, real-valued function f S of gray level, such that two textures I and J in S are discriminable only if the average value of f S applied to the gray levels in I is significantly different from the average value of f S applied to the gray levels in J. Finally, histogram contrast analysis is used to obtain a seventh-order polynomial approximation of f S.
Characterization of Strombolian events by using independent component analysis
NASA Astrophysics Data System (ADS)
Ciaramella, A.; de Lauro, E.; de Martino, S.; di Lieto, B.; Falanga, M.; Tagliaferri, R.
2004-10-01
We apply Independent Component Analysis (ICA) to seismic signals recorded at Stromboli volcano. First, we show how ICA works on synthetic signals generated by dynamical systems. We then show that Strombolian signals, both tremor and explosions, are similar in the time domain in the high frequency band (>0.5 Hz). This seems to give some insight into the organ pipe model for the generation of the source of these events. Moreover, we are able to recognize in the tremor signals a low frequency component (<0.5 Hz), with a well-defined peak corresponding to a period of 30 s.
NASA Technical Reports Server (NTRS)
Didlake, Anthony C., Jr.; Heymsfield, Gerald M.; Tian, Lin; Guimond, Stephen R.
2015-01-01
The coplane analysis technique for mapping the three-dimensional wind field of precipitating systems is applied to the NASA High Altitude Wind and Rain Airborne Profiler (HIWRAP). HIWRAP is a dual-frequency Doppler radar system with two downward pointing and conically scanning beams. The coplane technique interpolates radar measurements to a natural coordinate frame, directly solves for two wind components, and integrates the mass continuity equation to retrieve the unobserved third wind component. This technique is tested using a model simulation of a hurricane and compared to a global optimization retrieval. The coplane method produced lower errors for the cross-track and vertical wind components, while the global optimization method produced lower errors for the along-track wind component. Cross-track and vertical wind errors were dependent upon the accuracy of the estimated boundary condition winds near the surface and at nadir, which were derived by making certain assumptions about the vertical velocity field. The coplane technique was then applied successfully to HIWRAP observations of Hurricane Ingrid (2013). Unlike the global optimization method, the coplane analysis allows for a transparent connection between the radar observations and specific analysis results. With this ability, small-scale features can be analyzed more adequately and erroneous radar measurements can be identified more easily.
Surzhikov, V D; Surzhikov, D V
2014-01-01
The search for and measurement of causal relationships between exposure to air pollution and the health state of the population is based on system analysis and risk assessment to improve the quality of research. For this purpose, modern statistical analysis is applied, using tests of independence, principal component analysis and discriminant function analysis. As a result of the analysis, four main components were separated from all atmospheric pollutants: for diseases of the circulatory system, the main principal component is associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component is closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and charcoal black. The discriminant function was shown to be usable as a measure of the level of air pollution.
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in these tools identifies root causes in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
USDA-ARS?s Scientific Manuscript database
Soil science research is increasingly applying Fourier transform infrared (FTIR) spectroscopy for analysis of soil organic matter (SOM). However, the compositional complexity of soils and the dominance of the mineral component can limit spectroscopic resolution of SOM and other minor components. The...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Timbie, Peter T.; Bunn, Emory F.
In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat
2010-08-01
Banana peel flour (BPF) prepared from green or ripe Cavendish and Dream banana fruits were assessed for their total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF) and insoluble dietary fibre (IDF). Principal component analysis (PCA) identified that only 1 component was responsible for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to similar data obtained two statistically significant clusters (green and ripe bananas) to indicate difference in behaviours according to the stages of ripeness based on starch and dietary fibre components. We concluded that the starch and dietary fibre components could be used to discriminate between flours prepared from peels obtained from fruits of different ripeness. The results were also suggestive of the potential of green and ripe BPF as functional ingredients in food.
Analyzing coastal environments by means of functional data analysis
NASA Astrophysics Data System (ADS)
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transformation to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
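On a common measurement grid, FPCA reduces to PCA on discretized curves, which makes for a compact sketch; real FDA implementations work with basis expansions, and the bell-shaped "PSD" curves below are simulated placeholders.

    import numpy as np

    rng = np.random.default_rng(5)
    grid = np.linspace(-3, 3, 120)                 # e.g. log grain-size grid

    def psd(mu, sig):                              # simulated frequency curve
        return np.exp(-0.5 * ((grid - mu) / sig) ** 2) / sig

    curves = np.array([psd(rng.normal(0, 0.5), rng.uniform(0.5, 1.0))
                       for _ in range(80)])
    centered = curves - curves.mean(axis=0)

    # SVD of centered curves = discretized eigenfunctions and their weights
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:2].T                   # 2-D summary per sample
    print((s[:2] ** 2).sum() / (s ** 2).sum())     # variance captured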
Pigatto, Andre V; Moura, Karina O A; Favieiro, Gabriela W; Balbinot, Alexandre
2016-08-01
This report describes the development of a force platform based on instrumented load cells with a built-in conditioning circuit and strain gages to measure and acquire the components of the force applied to the bike crank arm during pedaling in real conditions, and to save them on an SD card. To accomplish this, a completely new crank arm 3D solid model was developed in SolidWorks, with dimensions equivalent to a commercial crankset and compatible with a conventional road bike, but with a compartment to support all the electronics necessary to measure three components of the force applied to the pedal during pedaling. After that, a 6082-T6 aluminum crankset based on the solid model was made and instrumented with three Wheatstone bridges each. The signals were conditioned on a printed circuit board, made with SMD technology, and acquired using a microcontroller with a DAC. Static deformation analysis showed a linearity error below 0.6% for all six channels. Dynamic analysis showed a natural frequency above 136 Hz. A one-factor experimental design was performed with 5 amateur cyclists. ANOVA showed that the cyclist's weight causes significant variation in the force applied to the bicycle pedal and in its bilateral symmetry.
NASA Technical Reports Server (NTRS)
Thomas, F. P.
2006-01-01
Aerospace structures utilize innovative, lightweight composite materials for exploration activities. These structural components, for various reasons including size limitations, manufacturing facilities, contractual obligations, or particular design requirements, will have to be joined. The common methodologies for joining composite components are adhesively bonded and mechanically fastened joints and, in certain instances, both methods are incorporated into the design simultaneously. Guidelines and recommendations exist for engineers to develop design criteria and to analyze and test composites. However, there are no guidelines or recommendations based on analysis or test data to specify a torque or torque range to apply to metallic mechanical fasteners used to join composite components. Utilizing the torque tension machine at NASA's Marshall Space Flight Center, an initial series of tests was conducted to determine the maximum torque that could be applied to a composite specimen. Acoustic emissions were used to nondestructively assess the specimens during the tests, and thermographic imaging was used after the tests.
1988-06-01
AMERICAN POWER JET COMPANY, RIDGEFIELD, NJ / FALLS CHURCH... The logic is applied to each reparable item in the system/equipment. When the components have been analyzed, an overall system/equipment analysis is... in the AMSDL as applicable to the referenced DIDs of interest. 5. Apply staff experience in logistics support analysis to assure that the intent of the...
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
NASA Astrophysics Data System (ADS)
Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.
2017-07-01
Micro X-ray fluorescence (micro-XRF) analysis is performed repeatedly as a means of producing elemental maps. In some cases, however, the XRF images of trace elements that are obtained are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken. The XRF intensities for the dried residue were analyzed before and after PCA. The standard deviations of the XRF intensities in the PCA-filtered images were improved, leading to clear contrast in the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
Differentiation of tea varieties using UV-Vis spectra and pattern recognition techniques
NASA Astrophysics Data System (ADS)
Palacios-Morillo, Ana; Alcázar, Ángela.; de Pablos, Fernando; Jurado, José Marcos
2013-02-01
Tea, one of the most widely consumed beverages in the world, is of great importance to the economies of a number of countries. Several methods have been developed to classify tea varieties or origins based on pattern recognition techniques applied to chemical data, such as metal profiles, amino acids, catechins and volatile compounds. Some of these analytical methods are tedious and expensive to apply in routine work. The use of UV-Vis spectral data as discriminant variables, highly influenced by the chemical composition, can be an alternative to these methods. UV-Vis spectra of methanol-water extracts of tea have been obtained in the interval 250-800 nm. Absorbances were used as input variables. Principal component analysis was used to reduce the number of variables, and several pattern recognition methods, such as linear discriminant analysis, support vector machines and artificial neural networks, were applied in order to differentiate the most common tea varieties. A successful classification model was built by combining principal component analysis and multilayer perceptron artificial neural networks, allowing the differentiation between tea varieties. This rapid and simple methodology can be applied to solve classification problems in the food industry, saving economic resources.
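The winning pipeline, PCA feeding a multilayer perceptron, maps directly onto a few lines of scikit-learn; the spectra below are simulated random walks standing in for UV-Vis absorbances, and the layer sizes and component count are arbitrary choices, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    n_per_class, n_wavelengths = 30, 551            # 250-800 nm at 1 nm steps
    X, y = [], []
    for label in range(3):                          # three "tea varieties"
        base = rng.normal(size=n_wavelengths).cumsum()  # smooth class spectrum
        X.append(base + 0.5 * rng.normal(size=(n_per_class, n_wavelengths)))
        y += [label] * n_per_class
    X, y = np.vstack(X), np.array(y)

    clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                        MLPClassifier(hidden_layer_sizes=(16,),
                                      max_iter=2000, random_state=0))
    print(cross_val_score(clf, X, y, cv=5).mean())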
[Identification of two varieties of Citri Fructus by fingerprint and chemometrics].
Su, Jing-hua; Zhang, Chao; Sun, Lei; Gu, Bing-ren; Ma, Shuang-cheng
2015-06-01
Citri Fructus identification by fingerprint and chemometrics was investigated in this paper. Twenty-three Citri Fructus samples were collected, covering the two varieties, Citrus wilsonii and C. medica, recorded in the Chinese Pharmacopoeia. HPLC chromatograms were obtained. The components were partly identified by reference substances, and a common pattern was then established for chemometric analysis. Similarity analysis, principal component analysis (PCA), partial least squares-discriminant analysis (PLS-DA) and hierarchical cluster analysis heatmaps were applied. The results indicated that C. wilsonii and C. medica could be ideally classified with the common pattern containing twenty-five characteristic peaks. In addition, preliminary pattern recognition verified the chemometric analytical results. Absolute peak area (APA) was used for the relevant quantitative analysis; the results showed the differences between the two varieties and are valuable for further quality control, such as the selection of characteristic components.
Mujica Ascencio, Saul; Choe, ChunSik; Meinke, Martina C; Müller, Rainer H; Maksimov, George V; Wigger-Alberti, Walter; Lademann, Juergen; Darvin, Maxim E
2016-07-01
Propylene glycol is one of the known substances added in cosmetic formulations as a penetration enhancer. Recently, nanocrystals have been employed also to increase the skin penetration of active components. Caffeine is a component with many applications and its penetration into the epidermis is controversially discussed in the literature. In the present study, the penetration ability of two components, caffeine nanocrystals and propylene glycol, applied topically on porcine ear skin in the form of a gel, was investigated ex vivo using two confocal Raman microscopes operated at different excitation wavelengths (785 nm and 633 nm). Several depth profiles were acquired in the fingerprint region and different spectral ranges, i.e., 526-600 cm(-1) and 810-880 cm(-1), were chosen for independent analysis of caffeine and propylene glycol penetration into the skin, respectively. Multivariate statistical methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) combined with Student's t-test were employed to calculate the maximum penetration depths of each substance (caffeine and propylene glycol). The results show that propylene glycol penetrates significantly deeper than caffeine (20.7-22.0 μm versus 12.3-13.0 μm) without any penetration enhancement effect on caffeine. The results confirm that different substances, even if applied onto the skin as a mixture, can penetrate differently. The penetration depths of caffeine and propylene glycol obtained using two different confocal Raman microscopes are comparable, showing that both types of microscopes are well suited for such investigations and that multivariate statistical PCA-LDA methods combined with Student's t-test are very useful for analyzing the penetration of different substances into the skin. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mitsutake, Ayori; Takano, Hiroshi
2015-09-01
It is important to extract reaction coordinates or order parameters from protein simulations in order to investigate the local minimum-energy states and the transitions between them. The most popular method to obtain such data is principal component analysis, which extracts modes of large conformational fluctuations around an average structure. We recently applied relaxation mode analysis for protein systems, which approximately estimates the slow relaxation modes and times from a simulation and enables investigations of the dynamic properties underlying the structural fluctuations of proteins. In this study, we apply this relaxation mode analysis to extract reaction coordinates for a system in which there are large conformational changes such as those commonly observed in protein folding/unfolding. We performed a 750-ns simulation of chignolin protein near its folding transition temperature and observed many transitions between the most stable, misfolded, intermediate, and unfolded states. We then applied principal component analysis and relaxation mode analysis to the system. In the relaxation mode analysis, we could automatically extract good reaction coordinates. The free-energy surfaces provide a clearer understanding of the transitions not only between local minimum-energy states but also between the folded and unfolded states, even though the simulation involved large conformational changes. Moreover, we propose a new analysis method called Markov state relaxation mode analysis. We applied the new method to states with slow relaxation, which are defined by the free-energy surface obtained in the relaxation mode analysis. Finally, the relaxation times of the states obtained with a simple Markov state model and the proposed Markov state relaxation mode analysis are compared and discussed.
Tipping point analysis of ocean acoustic noise
NASA Astrophysics Data System (ADS)
Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen
2018-02-01
We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
Electromagnetic fields in curved spacetimes
NASA Astrophysics Data System (ADS)
Tsagas, Christos G.
2005-01-01
We consider the evolution of electromagnetic fields in curved spacetimes and calculate the exact wave equations for the associated electric and magnetic components. Our analysis is fully covariant, applies to a general spacetime and isolates all the sources that affect the propagation of these waves. Among others, we explicitly show how the different components of the gravitational field act as driving sources of electromagnetic disturbances. When applied to perturbed Friedmann Robertson Walker cosmologies, our results argue for a superadiabatic-type amplification of large-scale cosmological magnetic fields in Friedmann models with open spatial curvature.
Groundwater quality assessment of urban Bengaluru using multivariate statistical techniques
NASA Astrophysics Data System (ADS)
Gulgundi, Mohammad Shahid; Shetty, Amba
2018-03-01
Groundwater quality deterioration due to anthropogenic activities has become a subject of prime concern. The objective of the study was to assess the spatial and temporal variations in groundwater quality and to identify the sources in the western half of Bengaluru city using multivariate statistical techniques. A water quality index rating was calculated for the pre- and post-monsoon seasons to quantify overall water quality for human consumption. The post-monsoon samples show poorer quality for drinking purposes than the pre-monsoon samples. Cluster analysis (CA), principal component analysis (PCA) and discriminant analysis (DA) were applied to the groundwater quality data measured on 14 parameters from 67 sites distributed across the city. Hierarchical cluster analysis grouped the 67 sampling stations into two groups, cluster 1 having high pollution and cluster 2 having lower pollution. Discriminant analysis was applied to delineate the most meaningful parameters accounting for temporal and spatial variations in the groundwater quality of the study area. Temporal DA identified pH as the most important parameter discriminating between water quality in the pre-monsoon and post-monsoon seasons, accounting for 72% seasonal assignation of cases. Spatial DA identified Mg, Cl and NO3 as the three most important parameters discriminating between the two clusters, accounting for 89% spatial assignation of cases. Principal component analysis was applied to the datasets obtained from the two clusters, which yielded three factors in each cluster, explaining 85.4 and 84% of the total variance, respectively. The varifactors obtained from principal component analysis showed that groundwater quality variation is mainly explained by the dissolution of minerals from rock-water interactions in the aquifer, the effect of anthropogenic activities and ion exchange processes in the water.
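The discriminant-analysis step can be sketched with scikit-learn's LDA on two simulated clusters; the parameter names follow the abstract, while the cluster means, spreads, and sample sizes are invented.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(7)
    params = ["pH", "Mg", "Cl", "NO3"]
    # Two simulated clusters: higher-pollution vs lower-pollution sites
    hi = rng.normal([7.3, 60, 150, 50], [0.2, 10, 30, 10], size=(35, 4))
    lo = rng.normal([7.0, 30, 80, 20], [0.2, 10, 30, 10], size=(32, 4))
    X = np.vstack([hi, lo])
    y = np.r_[np.ones(35), np.zeros(32)]

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(dict(zip(params, lda.coef_[0].round(3))))  # discriminating weights
    print(lda.score(X, y))       # fraction of cases assigned correctly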
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
NASA Technical Reports Server (NTRS)
Chatterjee, Sharmista
1993-01-01
Our first goal in this project was to perform a systems analysis of a closed-loop Environmental Control and Life Support System (ECLSS). This pertains to the development of a model of an existing real system from which to assess the state or performance of that system. Systems analysis is applied to conceptual models obtained from a system design effort. For our modelling purposes we used a simulator tool called ASPEN (Advanced System for Process Engineering). Our second goal was to evaluate the thermodynamic efficiency of the different components comprising an ECLSS. Use is made of the second law of thermodynamics to determine the amount of irreversibility, or energy loss, of each component. This will aid design scientists in selecting the components generating the least entropy, as our ultimate goal is to keep the entropy generation of the whole system at a minimum.
NASA Technical Reports Server (NTRS)
Williams, R. E.; Kruger, R.
1980-01-01
Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
Technology in Nonformal Education: A Critical Appraisal. Issues in Nonformal Education No. 2.
ERIC Educational Resources Information Center
Evans, David R.
In analyzing efforts to utilize technology in nonformal education programs, the applied communications aspects of instructional technology are most relevant, and locus of control and the technology of educational organization are two major components of analysis. Growing out of these components is the increasing recognition that educational…
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1998-05-01
Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives as well as cost and manufacturability. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for the investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental solution methodologies in the form of computational modeling, noninvasive optical techniques, and fringe prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.
Independent component analysis applied to long bunch beams in the Los Alamos Proton Storage Ring
NASA Astrophysics Data System (ADS)
Kolski, Jeffrey S.; Macek, Robert J.; McCrady, Rodney C.; Pang, Xiaoying
2012-11-01
Independent component analysis (ICA) is a powerful blind source separation (BSS) method. Compared to the typical BSS method, principal component analysis, ICA is more robust to noise, coupling, and nonlinearity. The conventional ICA application to turn-by-turn position data from multiple beam position monitors (BPMs) yields information about cross-BPM correlations. With this scheme, multi-BPM ICA has been used to measure the transverse betatron phase and amplitude functions, dispersion function, linear coupling, sextupole strength, and nonlinear beam dynamics. We apply ICA in a new way to slices along the bunch revealing correlations of particle motion within the beam bunch. We digitize beam signals of the long bunch at the Los Alamos Proton Storage Ring with a single device (BPM or fast current monitor) for an entire injection-extraction cycle. ICA of the digitized beam signals results in source signals, which we identify to describe varying betatron motion along the bunch, locations of transverse resonances along the bunch, measurement noise, characteristic frequencies of the digitizing oscilloscopes, and longitudinal beam structure.
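The following sketch illustrates the generic ICA step on simulated slice signals; the source waveforms, mixing matrix, and the use of scikit-learn's FastICA are stand-in assumptions rather than the PSR analysis chain.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
turns = np.arange(2000)

# Hypothetical source signals along the bunch: betatron-like oscillation,
# a slow longitudinal drift, and digitizer noise
betatron = np.sin(2 * np.pi * 0.22 * turns)
drift = np.linspace(0, 1, turns.size) ** 2
noise = 0.3 * rng.standard_normal(turns.size)
S = np.c_[betatron, drift, noise]

A = rng.random((60, 3))            # mixing into 60 longitudinal slices
X = S @ A.T                        # slice signals, shape (turns, slices)

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(X)     # recovered source signals
print(sources.shape)               # (2000, 3)
```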
NASA Astrophysics Data System (ADS)
Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.
The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method which have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem solving, specification, and system approaches to EMC control are summarized, and the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components are summarized. Examples of EMC design are mentioned, and the EMC design process and selection method for EMC critical points are shown along with sample EMC test results.
Engine Data Interpretation System (EDIS)
NASA Technical Reports Server (NTRS)
Cost, Thomas L.; Hofmann, Martin O.
1990-01-01
A prototype of an expert system was developed which applies qualitative or model-based reasoning to the task of post-test analysis and diagnosis of data resulting from a rocket engine firing. A combined component-based and process theory approach is adopted as the basis for system modeling. Such an approach provides a framework for explaining both normal and deviant system behavior in terms of individual component functionality. The diagnosis function is applied to digitized sensor time-histories generated during engine firings. The generic system is applicable to any liquid rocket engine but was adapted specifically in this work to the Space Shuttle Main Engine (SSME). The system is applied to idealized data resulting from turbomachinery malfunction in the SSME.
Yang, Guang; Zhao, Xin; Wen, Jun; Zhou, Tingting; Fan, Guorong
2017-04-01
An analytical approach including fingerprint, quantitative analysis and rapid screening of anti-oxidative components was established and successfully applied to the comprehensive quality control of Rhizoma Smilacis Glabrae (RSG), a well-known Traditional Chinese Medicine with the homology of medicine and food. Thirteen components were tentatively identified based on their retention behavior, UV absorption and MS fragmentation patterns. Chemometric analysis based on coulometric array data was performed to evaluate the similarity and variation between fifteen batches. Eight discriminating components were quantified using single-compound calibration. The unit responses of those components in coulometric array detection were calculated and compared with those of several compounds reported to possess antioxidant activity, and four of them were tentatively identified as main contributors to the total anti-oxidative activity. The main advantage of the proposed approach is that it realizes simultaneous fingerprinting, quantitative analysis and screening of anti-oxidative components, providing comprehensive information for quality assessment of RSG. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Boyd, R. K.; Brumfield, J. O.; Campbell, W. J.
1984-01-01
Three feature extraction methods, canonical analysis (CA), principal component analysis (PCA), and band selection, have been applied to Thematic Mapper Simulator (TMS) data in order to evaluate the relative performance of the methods. The results obtained show that CA is capable of providing a transformation of TMS data which leads to better classification results than provided by all seven bands, by PCA, or by band selection. A second conclusion drawn from the study is that TMS bands 2, 3, 4, and 7 (thermal) are most important for landcover classification.
NASA Astrophysics Data System (ADS)
Li, Qian; Tang, Yongjiao; Yan, Zhiwei; Zhang, Pudun
2017-06-01
Although multivariate curve resolution (MCR) has been applied to the analysis of Fourier transform infrared (FTIR) imaging, determining the number of components remains problematic: the methods reported to date tend to miss components of low concentration. In this paper a new idea is proposed to resolve this problem. First, the MCR calculation is repeated while increasing the number of components sequentially; then each retrieved pure spectrum of the resulting MCR component is directly compared with a real-world pixel spectrum of locally high concentration in the corresponding MCR map. A component is confirmed only if the characteristic bands of the MCR component are included in its pixel spectrum. This idea was applied to attenuated total reflection (ATR)/FTIR mapping for identifying trace additives in blind polymer materials, and satisfactory results were acquired. The successful demonstration of this novel approach opens up new possibilities for analyzing additives in polymer materials.
Steingass, Christof Björn; Jutzi, Manfred; Müller, Jenny; Carle, Reinhold; Schmarr, Hans-Georg
2015-03-01
Ripening-dependent changes of pineapple volatiles were studied in a nontargeted profiling analysis. Volatiles were isolated via headspace solid phase microextraction and analyzed by comprehensive 2D gas chromatography and mass spectrometry (HS-SPME-GC×GC-qMS). Profile patterns presented in the contour plots were evaluated by applying image processing techniques and subsequent multivariate statistical data analysis. Statistical methods comprised unsupervised hierarchical cluster analysis (HCA) and principal component analysis (PCA) to classify the samples. Supervised partial least squares discriminant analysis (PLS-DA) and partial least squares (PLS) regression were applied to discriminate different ripening stages and to describe the development of volatiles during postharvest storage, respectively. In this way, chemical markers allowing for class separation were revealed. The workflow permitted the rapid distinction between premature green-ripe pineapples and postharvest-ripened sea-freighted fruits. When PCA was restricted to the first two principal components, volatile profiles of fully ripe air-freighted pineapples were similar to those of green-ripe fruits postharvest ripened for 6 days after simulated sea freight export. However, a PCA also considering the third principal component allowed differentiation between air-freighted fruits and the four progressing postharvest maturity stages of sea-freighted pineapples.
NASA Astrophysics Data System (ADS)
Pacholski, Michaeleen L.
2004-06-01
Principal component analysis (PCA) has been successfully applied to time-of-flight secondary ion mass spectrometry (TOF-SIMS) spectra, images and depth profiles. Although SIMS spectral data sets can be small (in comparison to data sets typically discussed in the literature for other analytical techniques such as gas or liquid chromatography), each spectrum has thousands of ions, resulting in what can be a difficult comparison of samples. Analysis of industrially derived samples means the identities of most surface species are unknown a priori, and samples must be analyzed rapidly to satisfy customer demands. PCA enables rapid assessment of spectral differences (or lack thereof) between samples and, for images, identification of chemically different areas on sample surfaces. Depth profile analysis helps define interfaces and identify low-level components in the system.
Probabilistic Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Strack, William C.; Nagpal, Vinod K.
2010-01-01
PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages like System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
A finite element model of the human head for auditory bone conduction simulation.
Taschke, Henning; Hudde, Herbert
2006-01-01
In order to investigate the mechanisms of bone conduction, a finite element model of the human head was developed. The most important steps of the modelling process are described. The model was excited by means of percutaneously applied forces in order to gain deeper insight into the way the parts of the peripheral hearing organ and the surrounding tissue vibrate. The analysis is based on dividing the bone conduction mechanisms into components. The frequency-dependent vibration patterns of the components are analyzed. Furthermore, the model allows for the calculation of the contribution of each component to the overall bone-conducted sound. The components interact in a complicated way, which strongly depends on the nature of the excitation and the spatial region to which it is applied.
Dynamical Analysis of Stock Market Instability by Cross-correlation Matrix
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2016-08-01
We study stock market instability by using cross-correlations constructed from the return time series of 366 stocks traded on the Tokyo Stock Exchange from January 5, 1998 to December 30, 2013. To investigate the dynamical evolution of the cross-correlations, cross-correlation matrices are calculated with a rolling window of 400 days. To quantify the volatile market stages where the potential risk is high, we apply principal component analysis and measure the cumulative risk fraction (CRF), which is the system variance associated with the first few principal components. From the CRF, we detected three volatile market stages in the study period, corresponding to the bankruptcy of Lehman Brothers, the 2011 Tohoku Region Pacific Coast Earthquake, and the observation of the FRB QE3 reduction. We further apply random matrix theory for the risk analysis and find that the first eigenvector is more uniformly delocalized when the market is volatile.
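A compact illustration of the CRF computation, assuming synthetic returns and a plain NumPy eigendecomposition of the rolling correlation matrix (window length 400 as in the paper; the data here are random stand-ins):

```python
import numpy as np

def cumulative_risk_fraction(returns, window=400, n_pc=2):
    """CRF: share of total variance carried by the first n_pc principal
    components of the rolling cross-correlation matrix."""
    T, N = returns.shape
    crf = []
    for start in range(0, T - window):
        corr = np.corrcoef(returns[start:start + window].T)
        eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
        crf.append(eigvals[:n_pc].sum() / eigvals.sum())
    return np.asarray(crf)

rng = np.random.default_rng(2)
fake_returns = rng.standard_normal((3000, 50))   # stand-in for 366 stocks
print(cumulative_risk_fraction(fake_returns)[:5])
```

Spikes in this series mark windows in which a few collective modes dominate the correlation structure, the signature of a volatile market stage.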
Quantitation of flavonoid constituents in citrus fruits.
Kawaii, S; Tomono, Y; Katase, E; Ogawa, K; Yano, M
1999-09-01
Twenty-four flavonoids have been determined in 66 Citrus species and near-citrus relatives, grown in the same field and year, by means of reversed phase high-performance liquid chromatography analysis. Statistical methods have been applied to find relations among the species. The F ratios of 21 flavonoids obtained by ANOVA are significant, indicating that a classification of the species using these variables is reasonable to pursue. Principal component analysis revealed that the distributions of Citrus species belonging to different classes were largely in accordance with Tanaka's classification system.
Fetal ECG Extraction From Maternal Body Surface Measurement Using Independent Component Analysis
2001-10-25
A method applying independent component analysis (ICA) to detect the electrocardiogram of a prenatal cattle foetus is … monitoring the health status of an unborn cattle foetus is indispensable in preventing natural abortion and premature birth [3]. One of the applicable … and Y. Honda, "ECG and Heart Rate Detection of Prenatal Cattle Foetus Using Adaptive Digital Filtering," World Congress on Med. Phys. & Biomed. Eng., Chicago, TU-CXH-75, pp. 1-4, 2000.
A managerial accounting analysis of hospital costs.
Frank, W G
1976-01-01
Variance analysis, an accounting technique, is applied to an eight-component model of hospital costs to determine the contribution each component makes to cost increases. The method is illustrated by application to data on total costs from 1950 to 1973 for all U.S. nongovernmental not-for-profit short-term general hospitals. The costs of a single hospital are analyzed and compared to the group costs. The potential uses and limitations of the method as a planning and research tool are discussed. PMID:965233
Design and optimization of the CFRP mirror components
NASA Astrophysics Data System (ADS)
Wei, Lei; Zhang, Lei; Gong, Xiaoxue
2017-09-01
As carbon fiber reinforced polymer (CFRP) has recently been developed and demonstrated as an effective material for lightweight telescope reflector manufacturing, the authors of this article have extended the application of this material to lightweight space camera mirror design and fabrication. Through CFRP composite laminate design and optimization using finite element method (FEM) analysis, a spherical mirror of φ316 mm diameter, whose core cell reinforcement is an isogrid configuration, was fabricated. Compared with the traditional way of applying ultra-low-expansion glass (ULE) on the CFRP mirror surface, the method of nickel electroplating on the surface effectively reduces the processing cost and difficulty of the CFRP mirror. FEM analysis shows that the first resonance frequency of the CFRP mirror components reaches 652.3 Hz. Under gravity loading coupled with a +5 °C temperature rise, the root-mean-square (RMS) surface shape error with the optical axis horizontal is 5.74 nm, which meets the mechanical and optical requirements of the mirror components on a space camera.
Coastal modification of a scene employing multispectral images and vector operators.
Lira, Jorge
2017-05-01
Changes in sea level, wind patterns, sea current patterns, and tide patterns have produced morphologic transformations in the coastline area of Tamaulipas State in northeast Mexico. Such changes generated a modification of the coastline and variations of the texture-relief and texture of the continental area of Tamaulipas. Two high-resolution multispectral Satellite Pour l'Observation de la Terre (SPOT) images, covering a time span of close to 10 years, were employed to quantify the morphologic change of this continental area. A variant of principal component analysis was used to delineate the modification of the land-water line. To quantify changes in texture-relief and texture, principal component analysis was applied to the multispectral images. The first principal components of each image were modeled as a discrete bidimensional vector field. The divergence and Laplacian vector operators were applied to the discrete vector field. The divergence provided the change of texture, while the Laplacian produced the change of texture-relief in the area of study.
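The vector-operator step might look like the following sketch, where the finite differences come from numpy.gradient and the two fields u and v stand in for the first principal components of the two image dates (all values here are synthetic):

```python
import numpy as np

def divergence(u, v):
    """Divergence of the field (u, v) on a unit grid: du/dx + dv/dy."""
    dudx = np.gradient(u, axis=1)
    dvdy = np.gradient(v, axis=0)
    return dudx + dvdy

def laplacian(f):
    """Laplacian via repeated gradients: d2f/dx2 + d2f/dy2."""
    fx = np.gradient(f, axis=1)
    fy = np.gradient(f, axis=0)
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

# u, v stand in for the first principal components of the two SPOT dates
rng = np.random.default_rng(3)
u = rng.random((256, 256))
v = rng.random((256, 256))
texture_change = divergence(u, v)                 # change of texture
relief_change = laplacian(u) + laplacian(v)       # change of texture-relief
```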
Aznar, Margarita; Arroyo, Teresa
2007-09-21
The purge-and-trap extraction method, coupled to a gas chromatograph with mass spectrometric detection, has been applied to the determination of 26 aromatic volatiles in wine. The method was optimized, validated and applied to the analyses of 40 red and white wines from 7 different Spanish regions. Principal component analysis of the data showed the correlation between wines of similar origin.
Regional Morphology Empirical Analysis Package (RMAP): Orthogonal Function Analysis, Background and Examples. ERDC TN-SWWRP-07-9.
2007-10-01
Cited references include: complex principal component analysis: theory and examples, Journal of Climate and Applied Meteorology 23: 1660-1673 (1984); Hotelling (1933); and Von Storch, H., and A. Navarra (1995), Analysis of climate variability: applications of statistical techniques, Berlin.
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, independent component analysis is used to extract two types of features from the historical closing prices and 39 technical variables. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features that improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than the other two similar models. PMID:24971455
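A hedged sketch of such a pipeline using scikit-learn's FastICA, CCA, and SVR; the lag window, component counts, and random stand-in data are assumptions, not the paper's exact configuration:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n_days = 500
close = np.cumsum(rng.standard_normal(n_days)) + 100.0
technical = rng.standard_normal((n_days, 39))    # stand-ins for 39 indicators

# Features from lagged prices and from technical variables, each cleaned by ICA
lags = np.column_stack([np.roll(close, k) for k in range(1, 11)])[10:]
tech = technical[10:]
y = close[10:]
f_price = FastICA(n_components=5, random_state=0).fit_transform(lags)
f_tech = FastICA(n_components=5, random_state=0).fit_transform(tech)

# CCA fuses the two feature sets into intrinsic features
cca = CCA(n_components=3)
zp, zt = cca.fit_transform(f_price, f_tech)
features = np.hstack([zp, zt])

model = SVR(C=10.0).fit(features[:-1], y[1:])    # predict next-day close
print(model.predict(features[-1:]))
```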
Component analysis and synthesis of dark circles under the eyes using a spectral image
NASA Astrophysics Data System (ADS)
Akaho, Rina; Hirose, Misa; Ojima, Nobutoshi; Igarashi, Takanori; Tsumura, Norimichi
2017-02-01
This paper proposes to apply nonlinear estimation of chromophore concentrations (melanin, oxy-hemoglobin, and deoxy-hemoglobin) and shading to a real hyperspectral image of skin. Skin reflectance is captured at wavelengths between 400 nm and 700 nm by a hyperspectral scanner, and five-band wavelength data are selected from the reflectance. Using a cubic function obtained from Monte Carlo simulation of light transport in multi-layered tissue, chromophore concentrations and shading are determined by minimizing the residual sum of squares of the reflectance. When dark circles appear under the eyes, the subject looks tired and older; for this reason, women apply cosmetic care to remove dark circles. The relationship between color and chromophore distribution in dark circles is not yet clear. Here, we applied the four-component skin separation method to a hyperspectral image of a dark circle, and the separated components were modulated and synthesized. The synthesized images were evaluated to determine which components contribute to the appearance of dark circles. The evaluation shows that for the one subject examined, the main cause of dark circles was melanin pigmentation.
1988-09-01
applies to a one Air Transport Rack (ATR) volume LRU in an airborne, uninhabited, fighter environment.) The goal is to have a 2000 hour mean time between … benefits of applying reliability and maintainability improvements to these weapon systems or components. Examples will be given in this research of … where the Pareto Principle applies. The Pareto analysis applies to field failure types as well as to shop defect types. In the following automotive …
Hu, Lianghai; Li, Xin; Feng, Shun; Kong, Liang; Su, Xingye; Chen, Xueguo; Qin, Feng; Ye, Mingliang; Zou, Hanfa
2006-04-01
A mode of comprehensive 2-D LC was developed by coupling a silica-bonded HSA column to a silica monolithic ODS column. This system combined the affinity property of the HSA column and the high-speed separation ability of the monolithic ODS column. In the first dimension, affinity chromatography with the HSA-immobilized stationary phase was applied to study the interaction of multiple components in traditional Chinese medicines (TCMs) with HSA according to their affinity to the protein. The unresolved components retained on the HSA column were then further separated on the silica monolithic ODS column in the second dimension. By hyphenating the 2-D separation system to diode array and MS detectors, the UV and molecular weight information of the separated compounds could also be obtained. The developed separation system was applied to the analysis of the extract of Rheum palmatum L.; after normalization of peak heights, a number of low-abundance components eluting in a single peak from the HSA column could be separated. Six compounds were preliminarily identified according to their UV and MS spectra. This shows that the system is very useful for biological fingerprinting analysis of the components in TCMs and natural products.
Development of a software tool to support chemical and biological terrorism intelligence analysis
NASA Astrophysics Data System (ADS)
Hunt, Allen R.; Foreman, William
1997-01-01
AKELA has developed a software tool which uses a systems analytic approach to model the critical processes which support the acquisition of biological and chemical weapons by terrorist organizations. This tool has four major components. The first is a procedural expert system which describes the weapon acquisition process. It shows the relationship between the stages a group goes through to acquire and use a weapon, and the activities in each stage required to be successful. It applies to both state-sponsored and small-group acquisition. An important part of this expert system is an analysis of the acquisition process which is embodied in a list of observables of weapon acquisition activity. These observables are cues for intelligence collection. The second component is a detailed glossary of technical terms which helps analysts with a non-technical background understand the potential relevance of collected information. The third component is a linking capability which shows where technical terms apply to the parts of the acquisition process. The final component is a simple, intuitive user interface which shows a picture of the entire process at a glance and lets the user move quickly to get more detailed information. This paper explains each of these model components.
An Automated Method for Identifying Artifact in Independent Component Analysis of Resting-State fMRI
Bhaganagarapu, Kaushik; Jackson, Graeme D.; Abbott, David F.
2013-01-01
An enduring issue with data-driven analysis and filtering methods is the interpretation of results. To assist, we present an automatic method for identification of artifact in independent components (ICs) derived from functional MRI (fMRI). The method was designed with the following features: does not require temporal information about an fMRI paradigm; does not require the user to train the algorithm; requires only the fMRI images (additional acquisition of anatomical imaging not required); is able to identify a high proportion of artifact-related ICs without removing components that are likely to be of neuronal origin; can be applied to resting-state fMRI; is automated, requiring minimal or no human intervention. We applied the method to a MELODIC probabilistic ICA of resting-state functional connectivity data acquired in 50 healthy control subjects, and compared the results to a blinded expert manual classification. The method identified between 26 and 72% of the components as artifact (mean 55%). About 0.3% of components identified as artifact were discordant with the manual classification; retrospective examination of these ICs suggested the automated method had correctly identified these as artifact. We have developed an effective automated method which removes a substantial number of unwanted noisy components in ICA analyses of resting-state fMRI data. Source code of our implementation of the method is available. PMID:23847511
Screening of polar components of petroleum products by electrospray ionization mass spectrometry
Rostad, Colleen E.
2005-01-01
The polar components of fuels may enable differentiation between fuel types or commercial fuel sources. Screening for these components in the hydrocarbon product is difficult due to their very low concentrations in such a complex matrix. Various commercial fuels from several sources were analyzed by flow injection analysis/electrospray ionization/mass spectrometry without extensive sample preparation, separation, or chromatography. This technique enabled screening for unique polar components at very low concentrations in commercial hydrocarbon products. This analysis was then applied to hydrocarbon samples collected from the subsurface with a different extent of biodegradation or weathering. Although the alkane and isoprenoid portion had begun to biodegrade or weather, the polar components had changed little over time. Because these polar compounds are unique in different fuels, this screening technique can provide source information on hydrocarbons released into the environment.
Appliance of Independent Component Analysis to System Intrusion Analysis
NASA Astrophysics Data System (ADS)
Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji
In order to analyze the output of an intrusion detection system and a firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for the evaluation of intrusion analysis methods. The simulator consists of a network model of an information system, a service model and a vulnerability model of each server, and an action model of the behavior of clients and intruders. We applied ICA to analyze the audit trail of the simulated information system and report the evaluation results of ICA for intrusion analysis. In the simulated case, ICA separated two attacks correctly, and related an attack to the abnormalities of a normal application produced under the influence of that attack.
Yi, YaXiong; Zhang, Yong; Ding, Yue; Lu, Lu; Zhang, Tong; Zhao, Yuan; Xu, XiaoJun; Zhang, YuXin
2016-11-01
Yinchenhao decoction (YCHD) is a famous Chinese herbal formula recorded in the Shang Han Lun, prescribed by Zhongjing Zhang during 150-219 AD. A novel quantitative analysis method was developed, based on ultra-high-performance liquid chromatography coupled with a diode array detector, for the simultaneous determination of 14 main active components in Yinchenhao decoction. Furthermore, the method was applied to a compositional difference analysis of the 14 components in eight normal extraction samples of Yinchenhao decoction, with the aid of hierarchical clustering analysis and similarity analysis. The present research could help hospitals, factories and laboratories choose the best way to prepare Yinchenhao decoction with better efficacy. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nuclear Forensics Applications of Principal Component Analysis on Micro X-ray Fluorescence Images
analysis on quantified micro X-ray fluorescence intensity values. This method is then applied to address goals of nuclear forensics. The first … researchers in the development and validation of nuclear forensics methods. A method for determining material homogeneity is developed and demonstrated.
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
In traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, which shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
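A minimal power-iteration PageRank over a hypothetical component-propagation graph illustrates how failure-influence degrees might be scored; the adjacency matrix and damping factor below are illustrative assumptions (the paper combines the adjacency matrix with its transpose):

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-10):
    """Power-iteration PageRank on a directed cascading-failure graph.
    adjacency[i, j] = 1 if a failure of component i can propagate to j."""
    n = adjacency.shape[0]
    out = adjacency.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                       # dangling nodes kept as sinks
    M = (adjacency / out).T                   # column-stochastic transition
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * M @ r
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Hypothetical 5-component machine-center propagation graph
A = np.array([[0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 0],
              [1, 0, 0, 0, 0]], dtype=float)
print(pagerank(A))      # higher score = larger failure-influence degree
```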
An FT-raman study of softwood, hardwood, and chemically modified black spruce MWLS
Umesh P. Agarwal; James D. McSweeny; Sally A. Ralph
1999-01-01
Raman spectroscopy is being increasingly used to carry out in situ analysis of wood and other lignocellulosics. To obtain useful information from the spectra, the vibrational bands need to be assigned in terms of contributions from various chemical components and component sub-structures. In addition, so that the technique can be better applied as an analytical...
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for quantitative periodicity analysis of non-stationary time series. The principal component determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the series analyzed, in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
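A greedy variant of the idea can be sketched as follows, fitting one sine at a time by least squares with an FFT-seeded frequency; the toy sunspot-like series and scipy's curve_fit are assumptions standing in for the authors' Scilab implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase):
    return amp * np.sin(2 * np.pi * freq * t + phase)

def iterative_sine_fit(t, series, n_waves=5):
    """Greedy least-squares extraction of sine functions, strongest first."""
    residual = series.copy()
    waves = []
    for _ in range(n_waves):
        # Seed the frequency from the residual's dominant FFT peak
        spectrum = np.abs(np.fft.rfft(residual - residual.mean()))
        freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
        f0 = freqs[np.argmax(spectrum[1:]) + 1]
        p, _ = curve_fit(sine, t, residual, p0=[residual.std(), f0, 0.0])
        waves.append(p)
        residual = residual - sine(t, *p)
    return waves, residual

t = np.linspace(0, 254, 255)                    # yearly sampling, 1750-2004
rng = np.random.default_rng(5)
y = 80 * np.sin(2 * np.pi * t / 11.0) + 20 * np.sin(2 * np.pi * t / 80.0) \
    + 5 * rng.standard_normal(t.size)           # toy sunspot-like series
waves, resid = iterative_sine_fit(t, y, n_waves=2)
print([f"period ~ {1 / p[1]:.1f} yr" for p in waves])
```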
Study on fast discrimination of varieties of yogurt using Vis/NIR-spectroscopy
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Deng, Xunfei; Li, Xiaoli
2006-09-01
A new approach for discriminating varieties of yogurt by means of Vis/NIR spectroscopy is presented in this paper. First, through principal component analysis (PCA) of the spectroscopy curves of 5 typical kinds of yogurt, the clustering of yogurt varieties was processed. The analysis showed that the cumulative reliability of PC1 and PC2 (the first two principal components) was more than 98.956%, and the cumulative reliability from PC1 to PC7 (the first seven principal components) was 99.97%. Secondly, a discrimination model based on an artificial neural network (ANN-BP) was set up. The first seven principal components of the samples were used as ANN-BP inputs, and the yogurt variety values were used as outputs, building a three-layer ANN-BP model. In this model, each variety of yogurt included 27 samples, for a total of 135 samples, and the remaining 25 samples were used as the prediction set. The results showed that the recognition rate for the five yogurt varieties was 100%, indicating that the model is reliable and practicable. A new approach for the rapid and nondestructive discrimination of yogurt varieties was thus put forward.
Decomposing the Apoptosis Pathway Into Biologically Interpretable Principal Components
Wang, Min; Kornblau, Steven M; Coombes, Kevin R
2018-01-01
Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automates a graphical Bayesian method. Using simulations, we compared the methods. Our newly automated procedure is competitive with the best methods when considering both accuracy and speed and is the most accurate when the number of objects is small compared with the number of attributes. We applied the method to a proteomics data set from patients with acute myeloid leukemia. Proteins in the apoptosis pathway could be explained using 6 PCs. By clustering the proteins in PC space, we were able to replace the PCs by 6 “biological components,” 3 of which could be immediately interpreted from the current literature. We expect this approach combining PCA with clustering to be widely applicable. PMID:29881252
Model-free fMRI group analysis using FENICA.
Schöpf, V; Windischberger, C; Robinson, S; Kasess, C H; Fischmeister, F PhS; Lanzenberger, R; Albrecht, J; Kleemann, A M; Kopietz, R; Wiesmann, M; Moser, E
2011-03-01
Exploratory analysis of functional MRI data allows activation to be detected even if the time course differs from that which is expected. Independent Component Analysis (ICA) has emerged as a powerful approach, but current extensions to the analysis of group studies suffer from a number of drawbacks: they can be computationally demanding, results are dominated by technical and motion artefacts, and some methods require that time courses be the same for all subjects or that templates be defined to identify common components. We have developed a group ICA (gICA) method which is based on single-subject ICA decompositions and the assumption that the spatial distribution of signal changes in components which reflect activation is similar between subjects. This approach, which we have called Fully Exploratory Network Independent Component Analysis (FENICA), identifies group activation in two stages. ICA is performed on the single-subject level, then consistent components are identified via spatial correlation. Group activation maps are generated in a second-level GLM analysis. FENICA is applied to data from three studies employing a wide range of stimulus and presentation designs. These are an event-related motor task, a block-design cognition task and an event-related chemosensory experiment. In all cases, the group maps identified by FENICA as being the most consistent over subjects correspond to task activation. There is good agreement between FENICA results and regions identified in prior GLM-based studies. In the chemosensory task, additional regions are identified by FENICA and temporal concatenation ICA that we show is related to the stimulus, but exhibit a delayed response. FENICA is a fully exploratory method that allows activation to be identified without assumptions about temporal evolution, and isolates activation from other sources of signal fluctuation in fMRI. It has the advantage over other gICA methods that it is computationally undemanding, spotlights components relating to activation rather than artefacts, allows the use of familiar statistical thresholding through deployment of a higher level GLM analysis and can be applied to studies where the paradigm is different for all subjects. Copyright © 2010 Elsevier Inc. All rights reserved.
Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong
2015-11-17
We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small-volume samples. Pulsed-dc-ESI utilizes a constant high voltage to remotely induce the generation of single-polarity pulsed electrospray. This method significantly boosts sample economy, yielding several minutes of MS signal duration from a sample of merely picoliter volume. The elongated MS signal duration enabled us to collect abundant MS(2) information on components of interest in a small-volume sample for systematic analysis. This method was successfully applied to single-cell metabolomics analysis. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples.
Steinhauser, Marco; Hübner, Ronald
2009-10-01
It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
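A small sketch of an ex-Gaussian fit using scipy.stats.exponnorm; the parameter values are invented for illustration, and the mapping K = tau/sigma recovers the exponential component:

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(6)
mu, sigma, tau = 450.0, 60.0, 150.0             # ms; Gaussian + exponential
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy's exponnorm is parameterized by K = tau / sigma, loc = mu, scale = sigma
K_hat, mu_hat, sigma_hat = exponnorm.fit(rts)
tau_hat = K_hat * sigma_hat
print(f"mu ~ {mu_hat:.0f} ms, sigma ~ {sigma_hat:.0f} ms, tau ~ {tau_hat:.0f} ms")
# Comparing condition effects on mu/sigma versus tau separates Gaussian-
# and exponential-component contributions, as in the Stroop analysis above.
```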
Almeida, Mariana R; Correa, Deleon N; Zacca, Jorge J; Logrado, Lucio Paulo Lima; Poppi, Ronei J
2015-02-20
The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant uses. After acquisition of the hyperspectral image, independent component analysis (ICA) was applied to extract the pure spectra and the distribution of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with the reference spectra using correlation coefficients, and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve resolution method, achieving performance equivalent to multivariate curve resolution with alternating least squares (MCR-ALS). At low concentrations, MCR-ALS presents some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm⁻². Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Norinder, Ulf
1990-12-01
An experimental design based 3-D QSAR analysis using a combination of principal component and PLS analysis is presented and applied to human corticosteroid-binding globulin complexes. The predictive capability of the created model is good. The technique can also be used as guidance when selecting new compounds to be investigated.
Least-dependent-component analysis based on mutual information
NASA Astrophysics Data System (ADS)
Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter
2004-12-01
We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand, it has the advantage, compared to other implementations of “independent” component analysis (ICA), some of which are based on crude approximations for MI, that the numerical values of the MI can be used for (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output by comparing the pairwise MIs with those of remixed components; and (iii) clustering the output according to the residual interdependencies. For the MI estimator, we use a recently proposed k -nearest-neighbor-based algorithm. For time sequences, we combine this with delay embedding, in order to take into account nontrivial time correlations. After several tests with artificial data, we apply the resulting MILCA (mutual-information-based least dependent component analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.
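The reliability check based on pairwise MI between output components can be sketched with scikit-learn's k-NN mutual information estimator (itself of the Kraskov type); the sources, mixing matrix, and interpretation threshold below are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(7)
n = 5000
S = np.c_[np.sign(rng.standard_normal(n)) * rng.random(n),  # non-Gaussian
          rng.laplace(size=n)]
X = S @ rng.random((2, 2)).T                                 # linear mixing

Y = FastICA(n_components=2, random_state=0).fit_transform(X)

# k-NN MI between the two output components: values near zero indicate
# successful separation; larger values flag residual dependence, which
# MILCA uses as a reliability measure
mi = mutual_info_regression(Y[:, [0]], Y[:, 1], n_neighbors=4)
print("residual MI:", mi[0])
```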
A Preliminary Analysis of a Behavioral Classrooms Needs Assessment
ERIC Educational Resources Information Center
Leaf, Justin B.; Leaf, Ronald; McCray, Cynthia; Lamkins, Carol; Taubman, Mitchell; McEachin, John; Cihon, Joseph H.
2016-01-01
Today many special education classrooms implement procedures based upon the principles of Applied Behavior Analysis (ABA) to establish educationally relevant skills and decrease aberrant behaviors. However, it is difficult for school staff and consultants to evaluate the implementation of various components of ABA and general classroom set up. In…
NASA Astrophysics Data System (ADS)
Silva, N.; Esper, A.
2012-01-01
The work presented in this article represents the results of applying RAMS analysis to a critical space control system, at both system and software levels. The system-level RAMS analysis allowed the assignment of criticalities to the high-level components, which was further refined by a tailored software-level RAMS analysis. The importance of the software-level RAMS analysis in the identification of new failure modes and its impact on the system-level RAMS analysis are discussed. Changes in the software architecture have also been recommended in order to reduce the criticality of the SW components to an acceptable minimum. The dependability analysis was performed in accordance with ECSS-Q-ST-80, which had to be tailored and complemented in some aspects. This tailoring is also detailed in the article, and lessons learned from its application are shared, underlining its importance for space system safety evaluations. The paper presents the applied techniques, the relevant results obtained, the effort required for performing the tasks, the planned strategy for ROI estimation, and the soft skills required and acquired during these activities.
NASA Astrophysics Data System (ADS)
Struzik, Zbigniew R.; van Wijngaarden, Willem J.
We introduce a special-purpose cumulative indicator, capturing in real time the cumulative deviation from the reference level of the exponent h (local roughness, Hölder exponent) of the fetal heartbeat during labour. We verify that the indicator, applied to the variability component of the heartbeat, coincides with the fetal outcome as determined by blood samples. The variability component is obtained from a running real-time decomposition of the fetal heartbeat into independent components using an adaptation of an oversampled Haar wavelet transform. The particular filters used and resolutions applied are motivated by obstetrical insight and practice. The methodology described has the potential for real-time monitoring of the fetus during labour and for the prediction of the fetal outcome, alerting the attending staff in the case of (threatening) hypoxia.
Principal component greenness transformation in multitemporal agricultural Landsat data
NASA Technical Reports Server (NTRS)
Abotteen, R. A.
1978-01-01
A data compression technique for multitemporal Landsat imagery which extracts phenological growth pattern information for agricultural crops is described. The principal component greenness transformation was applied to multitemporal agricultural Landsat data for information retrieval. The transformation was favorable for applications in agricultural Landsat data analysis because of its physical interpretability and its relation to the phenological growth of crops. It was also found that the first and second greenness eigenvector components define a temporal small-grain trajectory and nonsmall-grain trajectory, respectively.
Multi-component separation and analysis of bat echolocation calls.
DiCecco, John; Gaudette, Jason E; Simmons, James A
2013-01-01
The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
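The Hilbert-transform step for a single isolated component might look like this sketch, assuming a synthetic downward FM sweep loosely resembling a bat call (the sampling rate and sweep law are invented for illustration):

```python
import numpy as np
from scipy.signal import hilbert

fs = 250_000                                    # 250 kHz sampling
t = np.arange(0, 0.003, 1 / fs)                 # 3 ms FM sweep

# Toy single FM component: hyperbolic-like downward sweep, ~55 -> 25 kHz
f_inst_true = 55e3 / (1 + 400 * t)
phase = 2 * np.pi * np.cumsum(f_inst_true) / fs
call = np.sin(phase)

analytic = hilbert(call)                        # analytic signal
envelope = np.abs(analytic)                     # instantaneous amplitude
f_inst = np.gradient(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
print(f_inst[:5] / 1e3, "kHz")                  # instantaneous frequency
```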
Nontidal Loading Applied in VLBI Geodetic Analysis
NASA Astrophysics Data System (ADS)
MacMillan, D. S.
2015-12-01
We investigate the application of nontidal atmospheric pressure, hydrology, and ocean loading series in the analysis of VLBI data. The annual amplitude of VLBI scale variation is reduced to less than 0.1 ppb, a result of the annual components of the vertical loading series. VLBI site vertical scatter and baseline length scatter are reduced when these loading models are applied. We operate nontidal loading services for hydrology loading (GLDAS model), atmospheric pressure loading (NCEP), and nontidal ocean loading (JPL ECCO model). As an alternative validation, we compare these loading series with corresponding series generated by other analysis centers.
Principal Component Analysis of Thermographic Data
NASA Technical Reports Server (NTRS)
Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.
2015-01-01
Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
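A minimal sketch of the fixed-eigenvector variant: eigenvectors are computed once from an analytic stand-in for the model thermal responses (toy decaying exponentials here) and then reused to project measured pixel time histories, avoiding a per-dataset eigendecomposition:

```python
import numpy as np

# Eigenvectors from an analytic model of the thermal response (here a toy
# bank of decaying exponentials standing in for the model), computed once
times = np.linspace(0.01, 5.0, 200)
model_responses = np.exp(-np.outer(1.0 / np.array([0.2, 0.5, 1.0, 2.0]), times))
_, _, Vt = np.linalg.svd(model_responses - model_responses.mean(axis=0),
                         full_matrices=False)
fixed_eigvecs = Vt[:3]                          # reused for every inspection

def project(thermal_data):
    """Project pixel time histories onto the fixed eigenvectors.
    thermal_data: (n_pixels, n_frames) array of cooling curves."""
    centered = thermal_data - thermal_data.mean(axis=1, keepdims=True)
    return centered @ fixed_eigvecs.T           # (n_pixels, 3) PCA images

rng = np.random.default_rng(8)
data = np.exp(-np.outer(rng.uniform(0.4, 2.5, 10_000), times))
pc_images = project(data)
print(pc_images.shape)
```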
NASA Astrophysics Data System (ADS)
Kolski, Jeffrey
The linear lattice properties of the Proton Storage Ring (PSR) at the Los Alamos Neutron Science Center (LANSCE) in Los Alamos, NM were measured and applied to determine a better linear model of the accelerator. We found that the initial model was deficient in predicting the vertical focusing strength. The additional vertical focusing was located through fundamental understanding of the experiment and statistically rigorous analysis. An improved model was constructed and compared against the initial model and measurement, at the operational set points and at set points far from nominal, and was shown to indeed be an enhanced model. Independent component analysis (ICA) is a tool for data mining in many fields of science. Traditionally, ICA is applied to turn-by-turn beam position data as a means to measure the lattice functions of the real machine. Due to the diagnostic setup of the PSR, this method is not applicable. A new application of ICA is derived: ICA applied along the length of the bunch. The ICA modes represent motions within the beam pulse. Several of the dominant ICA modes are experimentally identified.
Concepts of formal concept analysis
NASA Astrophysics Data System (ADS)
Žáček, Martin; Homola, Dan; Miarka, Rostislav
2017-07-01
The aim of this article is to apply Formal Concept Analysis to the concept of the world. Formal concept analysis (FCA), as a methodology of data analysis, information management, and knowledge representation, has the potential to be applied to a variety of linguistic problems. FCA is a mathematical theory for concepts and concept hierarchies that reflects an understanding of "concept". Formal concept analysis explicitly formalizes the extension and intension of a concept and their mutual relationships. A distinguishing feature of FCA is the inherent integration of three components of conceptual processing of data and knowledge, namely, the discovery of and reasoning with concepts in data, the discovery of and reasoning with dependencies in data, and the visualization of data, concepts, and dependencies with folding/unfolding capabilities.
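A tiny worked example of FCA's central object, the formal concept, computed by brute-force closure over a toy context; the objects, attributes, and incidence relation are invented for illustration:

```python
from itertools import combinations

# Toy formal context: objects x attributes (binary incidence)
objects = ["earth", "mars", "jupiter"]
attributes = ["rocky", "has_moon", "gas_giant"]
incidence = {("earth", "rocky"), ("earth", "has_moon"),
             ("mars", "rocky"), ("mars", "has_moon"),
             ("jupiter", "has_moon"), ("jupiter", "gas_giant")}

def extent(attrs):      # objects sharing all given attributes (derivation ')
    return frozenset(o for o in objects
                     if all((o, a) in incidence for a in attrs))

def intent(objs):       # attributes shared by all given objects (derivation ')
    return frozenset(a for a in attributes
                     if all((o, a) in incidence for o in objs))

# A formal concept is a pair (A, B) with A' = B and B' = A; enumerate by
# closing every attribute subset (fine for small contexts)
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(attributes, r):
        A = extent(attrs)
        concepts.add((A, intent(A)))
for A, B in sorted(concepts, key=lambda c: -len(c[0])):
    print(set(A) or "{}", "<->", set(B) or "{}")
```

Ordering these concepts by extent inclusion yields the concept lattice, the hierarchy that FCA visualizes with folding/unfolding.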
NASA Astrophysics Data System (ADS)
Das, Atanu; Mukhopadhyay, Chaitali
2007-10-01
We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide—ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, principal component analysis method was applied in order to give a new insight to protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.
Component pattern analysis of chemicals using multispectral THz imaging system
NASA Astrophysics Data System (ADS)
Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki
2004-04-01
We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing the component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
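The component spatial pattern analysis step can be sketched as per-pixel nonnegative unmixing against previously measured absorption spectra; the spectra matrix and image cube below are synthetic assumptions, with scipy's nnls doing the fit:

```python
import numpy as np
from scipy.optimize import nnls

# Absorption spectra of known chemicals at the measured THz frequencies
# (rows: frequencies, columns: chemicals); hypothetical values
A = np.array([[1.2, 0.3],
              [0.4, 1.1],
              [0.8, 0.7],
              [0.2, 1.5]])

def unmix(absorbance_cube):
    """Per-pixel nonnegative unmixing: absorbance = A @ abundance."""
    n_freq, h, w = absorbance_cube.shape
    maps = np.zeros((A.shape[1], h, w))
    for i in range(h):
        for j in range(w):
            maps[:, i, j], _ = nnls(A, absorbance_cube[:, i, j])
    return maps                    # spatial pattern of each chemical

rng = np.random.default_rng(9)
truth = rng.random((2, 16, 16))
cube = np.einsum("fc,chw->fhw", A, truth) + 0.01 * rng.random((4, 16, 16))
print(np.abs(unmix(cube) - truth).max())
```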
Space-time latent component modeling of geo-referenced health data.
Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun
2010-08-30
Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.
Sun, Li-Li; Wang, Meng; Zhang, Hui-Jie; Liu, Ya-Nan; Ren, Xiao-Liang; Deng, Yan-Ru; Qi, Ai-Di
2018-01-01
Polygoni Multiflori Radix (PMR) is increasingly being used not just as a traditional herbal medicine but also as a popular functional food. In this study, multivariate chemometric methods and mass spectrometry were combined to analyze the ultra-high-performance liquid chromatography (UPLC) fingerprints of PMR from six different geographical origins. A chemometric strategy based on multivariate curve resolution-alternating least squares (MCR-ALS) and three classification methods is proposed to analyze the UPLC fingerprints obtained. Common chromatographic problems, including the background contribution, baseline contribution, and peak overlap, were handled by the established MCR-ALS model. A total of 22 components were resolved. Moreover, relative species concentrations were obtained from the MCR-ALS model, and these were then used for multivariate classification analysis. Principal component analysis (PCA) and Ward's method have been applied to classify 72 PMR samples from six different geographical regions. The PCA score plot showed that the PMR samples fell into four clusters, which related to the geographical location and climate of the source areas. The results were then corroborated by Ward's method. In addition, according to the variance-weighted distance between cluster centers obtained from Ward's method, five components were identified as the most significant variables (chemical markers) for cluster discrimination. A counter-propagation artificial neural network has been applied to confirm and predict the effects of chemical markers on different samples. Finally, the five chemical markers were identified by UPLC-quadrupole time-of-flight mass spectrometry. Components 3, 12, 16, 18, and 19 were identified as 2,3,5,4'-tetrahydroxy-stilbene-2-O-β-d-glucoside, emodin-8-O-β-d-glucopyranoside, emodin-8-O-(6'-O-acetyl)-β-d-glucopyranoside, emodin, and physcion, respectively. In conclusion, the proposed method can be applied for the comprehensive analysis of natural samples. Copyright © 2016. Published by Elsevier B.V.
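As a rough sketch of the classification step described above (a PCA score plot corroborated by Ward's method), the snippet below runs both on a simulated stand-in for the 72 x 22 matrix of resolved component concentrations; the matrix values are invented, and scikit-learn and SciPy are assumed to be available.

```python
# Sketch of the classification step: PCA scores plus Ward's hierarchical
# clustering on a simulated 72-sample x 22-component concentration matrix.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 72 samples x 22 resolved components, with four built-in clusters
means = np.repeat(np.eye(4), 18, axis=0) @ rng.normal(size=(4, 22)) * 3.0
X = means + rng.normal(size=(72, 22))

scores = PCA(n_components=2).fit_transform(X)       # coordinates for a score plot
Z = linkage(X, method="ward")                       # Ward's method on the same data
clusters = fcluster(Z, t=4, criterion="maxclust")   # cut the dendrogram at 4 groups
print(scores[:3], clusters)
```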
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
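The probabilistic-design workflow sketched in the abstract (fit statistical strength parameters from specimen rupture data, then predict failure probability) can be illustrated with a two-parameter Weibull fit. The sketch below uses invented rupture stresses and median-rank plotting positions; it is not CARES/Life code.

```python
# Two-parameter Weibull strength fit from (invented) specimen rupture
# stresses, then a probability-of-failure prediction at a design stress.
import numpy as np

ruptures = np.sort(np.array([312., 340., 355., 368., 380., 395., 410., 430.]))  # MPa
n = len(ruptures)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # median-rank plotting positions

# Linearized Weibull CDF: ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0)
m, c = np.polyfit(np.log(ruptures), np.log(-np.log(1.0 - F)), 1)
sigma0 = np.exp(-c / m)                         # characteristic strength

def p_failure(stress_mpa):
    return 1.0 - np.exp(-(stress_mpa / sigma0) ** m)

print(f"m = {m:.1f}, sigma0 = {sigma0:.0f} MPa, Pf(250 MPa) = {p_failure(250.0):.2%}")
```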
SCADA alarms processing for wind turbine component failure detection
NASA Astrophysics Data System (ADS)
Gonzalez, E.; Reder, M.; Melero, J. J.
2016-09-01
Wind turbine failure and downtime can often compromise the profitability of a wind farm due to their high impact on the operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information to assess component status. Then, different alarm analysis techniques are applied for two purposes: the evaluation of the SCADA alarm system's capability to detect failures, and the investigation of the relation between faults in some components and subsequent failure occurrences in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components and between failures and adverse environmental conditions.
NASA Astrophysics Data System (ADS)
Schelkanova, Irina; Toronov, Vladislav
2011-07-01
Although near infrared spectroscopy (NIRS) is now widely used both in emerging clinical techniques and in cognitive neuroscience, the development of the apparatuses and signal processing methods for these applications is still a hot research topic. The main unresolved problem in functional NIRS is the separation of functional signals from the contaminations by systemic and local physiological fluctuations. This problem was approached by using various signal processing methods, including blind signal separation techniques. In particular, principal component analysis (PCA) and independent component analysis (ICA) were applied to the data acquired at the same wavelength and at multiple sites on the human or animal heads during functional activation. These signal processing procedures resulted in a number of principal or independent components that could be attributed to functional activity but their physiological meaning remained unknown. On the other hand, the best physiological specificity is provided by broadband NIRS. Also, a comparison with functional magnetic resonance imaging (fMRI) allows determining the spatial origin of fNIRS signals. In this study we applied PCA and ICA to broadband NIRS data to distill the components correlating with the breath hold activation paradigm and compared them with the simultaneously acquired fMRI signals. Breath holding was used because it generates blood carbon dioxide (CO2) which increases the blood-oxygen-level-dependent (BOLD) signal as CO2 acts as a cerebral vasodilator. Vasodilation causes increased cerebral blood flow which washes deoxyhaemoglobin out of the cerebral capillary bed thus increasing both the cerebral blood volume and oxygenation. Although the original signals were quite diverse, we found very few different components which corresponded to fMRI signals at different locations in the brain and to different physiological chromophores.
Carlsen, Lars; Bruggemann, Rainer
2018-06-03
In chemistry there is a long tradition of classification. Usually methods are adopted from the wide field of cluster analysis. Here, based on the example of 21 alkyl anilines, we show that concepts from the mathematical discipline of partially ordered sets may also be applied. The chemical compounds are described by a multi-indicator system. For the present study, four indicators, mainly taken from the field of environmental chemistry, were applied and a Hasse diagram was constructed. A Hasse diagram is an acyclic, transitively reduced, triangle-free graph that may have several components. The crucial question is whether or not the Hasse diagram can be interpreted from a structural chemical point of view. This is indeed the case, but it must be clearly stated that a guarantee for meaningful results cannot be given in general. For that, further theoretical work is needed. Two cluster analysis methods are applied (K-means and a hierarchical cluster method). In both cases, the partitioning of the set of 21 compounds by the component structure of the Hasse diagram appears to be better interpretable. Copyright © Bentham Science Publishers.
Tailored multivariate analysis for modulated enhanced diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni
2015-10-21
Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, OCCR was able to supply only the latter information, as the former was hindered by changes in abundances of different crystal phases, which occurred besides structural variations in the specific case considered. To develop a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.
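A much-simplified, hedged sketch of the OCCR idea follows: run ordinary PCA, then rotate within the leading principal plane to maximize a figure of merit, here taken to be the correlation of the rotated scores with a known stimulus. The synthetic data, the choice of merit function, and the brute-force angle search are illustrative assumptions, not the published algorithm.

```python
# Simplified stand-in for OCCR: PCA first, then rotate the leading
# principal directions to maximize a stimulus-correlation figure of merit.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0 * np.pi, 200)
stimulus = np.sin(t)                                   # external stimulus
pattern = rng.normal(size=50)                          # "active atom" pattern
X = np.outer(stimulus, pattern) + 0.5 * rng.normal(size=(200, 50))

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]                              # first two PCA scores

def merit(theta):
    rotated = scores[:, 0] * np.cos(theta) + scores[:, 1] * np.sin(theta)
    return abs(np.corrcoef(rotated, stimulus)[0, 1])

thetas = np.linspace(0.0, np.pi, 360)
best = thetas[np.argmax([merit(th) for th in thetas])]
print(f"best rotation {np.degrees(best):.1f} deg, merit {merit(best):.3f}")
```

The rotated scores play the role of the system's time response, and the correspondingly rotated loadings play the role of the active-atom diffraction pattern.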
Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong
2016-04-29
Integrating droplet-based microfluidics with mass spectrometry is essential to high-throughput and multiple analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrated droplet-based microextraction with single-cell mass spectrometry. Specific extraction solvent was used to selectively obtain intracellular components of interest and remove interference of other components. Using this method, UDP-Glc-NAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites in the biological process of dysfunctional oxidative phosphorylation. The method could not only realize matrix-free, selective and sensitive detection of metabolites in single cells, but also have the capability for reliable and high-throughput single-cell analysis.
ERIC Educational Resources Information Center
Pacot, Giselle Mae M.; Lee, Lyn May; Chin, Sung-Tong; Marriott, Philip J.
2016-01-01
Gas chromatography-mass spectrometry (GC-MS) and GC-tandem MS (GC-MS/MS) are useful in many separation and characterization procedures. GC-MS is now a common tool in industry and research, and increasingly, GC-MS/MS is applied to the measurement of trace components in complex mixtures. This report describes an upper-level undergraduate experiment…
Modal analysis applied to circular, rectangular, and coaxial waveguides
NASA Technical Reports Server (NTRS)
Hoppe, D. J.
1988-01-01
Recent developments in the analysis of various waveguide components and feedhorns using Modal Analysis (Mode Matching Method) are summarized. A brief description of the theory is presented, and the important features of the method are pointed out. Specific examples in circular, rectangular, and coaxial waveguides are included, with comparisons between the theory and experimental measurements. Extensions to the methods are described.
A CAD Approach to Integrating NDE With Finite Element
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Downey, James; Ghosn, Louis J.; Baaklini, George Y.
2004-01-01
Nondestructive evaluation (NDE) is one of several technologies applied at NASA Glenn Research Center to determine atypical deformities, cracks, and other anomalies experienced by structural components. NDE consists of applying high-quality imaging techniques (such as x-ray imaging and computed tomography (CT)) to discover hidden manufactured flaws in a structure. Efforts are in progress to integrate NDE with the finite element (FE) computational method to perform detailed structural analysis of a given component. This report presents the core outlines for an in-house technical procedure that incorporates this combined NDE-FE interrelation. An example is presented to demonstrate the applicability of this analytical procedure. FE analysis of a test specimen is performed, and the resulting von Mises stresses and the stress concentrations near the anomalies are observed, which indicates the fidelity of the procedure. Additional information elaborating on the steps needed to perform such an analysis is clearly presented in the form of mini step-by-step guidelines.
Clerici, Nicola; Bodini, Antonio; Ferrarini, Alessandro
2004-10-01
In order to achieve improved sustainability, local authorities need to use tools that adequately describe and synthesize environmental information. This article illustrates a methodological approach that organizes a wide suite of environmental indicators into few aggregated indices, making use of correlation, principal component analysis, and fuzzy sets. Furthermore, a weighting system, which includes stakeholders' priorities and ambitions, is applied. As a case study, the described methodology is applied to the Reggio Emilia Province in Italy, by considering environmental information from 45 municipalities. Principal component analysis is used to condense an initial set of 19 indicators into 6 fundamental dimensions that highlight patterns of environmental conditions at the provincial scale. These dimensions are further aggregated in two indices of environmental performance through fuzzy sets. The simple form of these indices makes them particularly suitable for public communication, as they condensate a wide set of heterogeneous indicators. The main outcomes of the analysis and the potential applications of the method are discussed.
Gouvinhas, Irene; Machado, Nelson; Carvalho, Teresa; de Almeida, José M M M; Barros, Ana I R N A
2015-01-01
Extra virgin olive oils produced from three cultivars at different maturation stages were characterized using Raman spectroscopy. Chemometric methods (principal component analysis, discriminant analysis, principal component regression and partial least squares regression) applied to Raman spectral data were utilized to evaluate and quantify the statistical differences between cultivars and their ripening process. The models for predicting the peroxide value and free acidity of olive oils showed good calibration and prediction values and presented high coefficients of determination (>0.933). Both the R(2) and the correlation equations between the measured chemical parameters and the values predicted by each approach are presented; these comprise both PCR and PLS, used to assess SNV-normalized Raman data, as well as the first and second derivatives of the spectra. This study demonstrates that a combination of Raman spectroscopy with multivariate analysis methods can be useful for rapidly predicting olive oil chemical characteristics during the maturation process. Copyright © 2014 Elsevier B.V. All rights reserved.
Receiver function analysis applied to refraction survey data
NASA Astrophysics Data System (ADS)
Subaru, T.; Kyosuke, O.; Hitoshi, M.
2008-12-01
For the estimation of the thickness of the oceanic crust or for petrophysical investigation of subsurface material, refraction or reflection seismic exploration is one of the methods frequently practiced. These surveys use four-component seismometers (x, y, z components of acceleration, plus pressure), but only the compressional wave, or the vertical component of the seismometers, tends to be used in the analyses. Hence, shear waves, or the lateral components of the seismograms, are needed for a more precise estimate of the thickness of the oceanic crust. The receiver function is a function at a given site that can be used to estimate the depth of velocity interfaces from incoming teleseismic signals, including shear waves. Receiver function analysis uses both the vertical and horizontal components of seismograms and deconvolves the horizontal with the vertical to estimate the spectral signature of P-to-S converted waves arriving after the direct P wave. Once the phase information of the receiver function is obtained, one can estimate the depth of the velocity interface. This analysis has the advantage of estimating the depth of velocity interfaces, including the Mohorovicic discontinuity, from two components of seismograms whenever P-to-S converted waves are generated at the interface. This study presents results of a preliminary investigation using synthetic seismograms. First, we use three types of geological models, composed of a single sediment layer, a crust layer, and a sloped Moho, respectively, with underground sources. The receiver function estimates the depth and shape of the Moho interface precisely for all three models. Second, we applied the method to synthetic refraction survey data generated not by earthquakes but by artificial sources on the ground or sea surface. Compressional seismic waves propagate beneath the velocity interface and radiate converted shear waves there, as well as at the other deep underground layer interfaces. However, the receiver function analysis applied to the second model cannot clearly resolve the velocity interface behind the S-P converted wave or the multiply reflected waves in the sediment layer. One cause is that the incidence angles of the upcoming waves are too large compared to the underground-source model, due to the slanted interface. As a result, incident converted shear waves have non-negligible energy contaminating the vertical component of the seismometers. Therefore, the recorded refraction waves need to be transformed from the depth-lateral coordinate system into radial-tangential coordinates, after which the Ps converted waves can be observed clearly. Finally, we applied the receiver function analysis to a more realistic model, which has a sloping Mohorovicic discontinuity and surface source locations similar to the second model, plus a surface water layer, with receivers aligned on the sea bottom (the Ocean Bottom Seismometer, OBS, survey case). Owing to intricately bounced reflections, the simulated seismic section becomes more complex than in the other models. In spite of this complexity, we could pick the refracted waves from the Moho interface after stacking more than 20 receiver functions independently produced from each shot gather. After this processing, receiver function analysis is justified as a method to estimate the depths of velocity interfaces and should be applicable to refraction wave analysis.
Further study will address more realistic models containing, for example, inhomogeneous sediment layers, with the final aim of inverting for the depths of velocity interfaces such as the Moho.
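As a sketch of the deconvolution step described above (the horizontal component deconvolved by the vertical), the snippet below builds a synthetic vertical/radial trace pair and estimates a receiver function by water-level spectral division; the trace shapes, the conversion delay, and the water-level fraction are invented for illustration.

```python
# Water-level spectral division on a synthetic vertical/radial trace pair:
# deconvolving the horizontal by the vertical isolates the delayed P-to-S
# conversion as a peak in the receiver function.
import numpy as np

rng = np.random.default_rng(2)
n, dt = 1024, 0.01
src = np.zeros(n); src[100] = 1.0                       # direct P arrival
vert = np.convolve(src, np.exp(-0.1 * np.arange(50)))[:n]
horiz = 0.4 * np.roll(vert, 80) + 0.01 * rng.normal(size=n)  # conversion, 0.8 s later

V, H = np.fft.rfft(vert), np.fft.rfft(horiz)
power = np.abs(V) ** 2
wl = 1e-3 * power.max()                                 # water-level regularization
rf = np.fft.irfft(H * np.conj(V) / np.maximum(power, wl), n)

print(f"estimated P-to-S delay: {np.argmax(rf) * dt:.2f} s")  # ~0.80 s
```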
Finger crease pattern recognition using Legendre moments and principal component analysis
NASA Astrophysics Data System (ADS)
Luo, Rongfang; Lin, Tusheng
2007-03-01
The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent the principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to achieve the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% was obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
Background Information and User’s Guide for MIL-F-9490
1975-01-01
requirements, although different analysis results will apply to each requirement. Basic differences between the two reliability requirements are: MIL-F-8785B...provides the rationale for establishing such limits. The specific risk analysis comprises the same data which formed the average risk analysis, except...statistical analysis will be based on statistical data taken using limited exposure times of components and equipment. The exposure times and resulting
Mapping brain activity in gradient-echo functional MRI using principal component analysis
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Singh, Manbir; Don, Manuel
1997-05-01
The detection of sites of brain activation in functional MRI has been a topic of immense research interest and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus correlated changes in the presence of other interfering signals.
2006-04-21
C. M., and Prendergast, J. P., 2002, "Thermal Analysis of Hypersonic Inlet Flow with Exergy-Based Design Methods," International Journal of Applied...parametric study of the PS and its components is first presented in order to show the type of detailed information on internal system losses which an exergy...Thermoeconomic Isolation Applied to the Optimal Synthesis/Design of an Advanced Fighter Aircraft System," International Journal of Thermodynamics, ICAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Karisa M.; Wood, Lianna F.; Wright, Bob W.
2005-12-01
A comprehensive two-dimensional (2D) retention time alignment algorithm was developed using a novel indexing scheme. The algorithm is termed comprehensive because it functions to correct the entire chromatogram in both dimensions and it preserves the separation information in both dimensions. Although the algorithm is demonstrated by correcting comprehensive two-dimensional gas chromatography (GC x GC) data, the algorithm is designed to correct shifting in all forms of 2D separations, such as LC x LC, LC x CE, CE x CE, and LC x GC. This 2D alignment algorithm was applied to three different data sets composed of replicate GC x GC separations of (1) three 22-component control mixtures, (2) three gasoline samples, and (3) three diesel samples. The three data sets were collected using slightly different temperature or pressure programs to engender significant retention time shifting in the raw data and then demonstrate subsequent corrections of that shifting upon comprehensive 2D alignment of the data sets. Thirty 12-min GC x GC separations from three 22-component control mixtures were used to evaluate the 2D alignment performance (10 runs/mixture). The average standard deviation of the first column retention time improved 5-fold from 0.020 min (before alignment) to 0.004 min (after alignment). Concurrently, the average standard deviation of second column retention time improved 4-fold from 3.5 ms (before alignment) to 0.8 ms (after alignment). Alignment of the 30 control mixture chromatograms took 20 min. The quantitative integrity of the GC x GC data following 2D alignment was also investigated. The mean integrated signal was determined for all components in the three 22-component mixtures for all 30 replicates. The average percent difference in the integrated signal for each component before and after alignment was 2.6%. Singular value decomposition (SVD) was applied to the 22-component control mixture data before and after alignment to show the restoration of trilinearity to the data, since trilinearity benefits chemometric analysis. By applying comprehensive 2D retention time alignment to all three data sets (control mixtures, gasoline samples, and diesel samples), classification by principal component analysis (PCA) substantially improved, resulting in 100% accurate scores clustering.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form by processing the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
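A minimal sketch of the PLS-1 calibration/validation split described above, under stated assumptions: simulated UV spectra that are linear in one drug's concentration, 15 calibration and 10 validation mixtures, and scikit-learn's PLSRegression standing in for the authors' implementation.

```python
# PLS-1 sketch: simulated UV spectra linear in one analyte's concentration,
# calibrated on 15 mixtures and validated on the remaining 10.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
conc = rng.uniform(2.0, 10.0, size=25)                     # invented, e.g. ug/mL
pure = np.exp(-0.5 * ((np.arange(100) - 40) / 8.0) ** 2)   # pure-component band
spectra = np.outer(conc, pure) + 0.02 * rng.normal(size=(25, 100))

pls = PLSRegression(n_components=3).fit(spectra[:15], conc[:15])  # calibration set
pred = pls.predict(spectra[15:]).ravel()                          # validation set
print(np.round(np.c_[conc[15:], pred], 2))
```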
Navy High-Strength Steel Corrosion-Fatigue Modeling Program
2006-10-01
interest. In the global analysis, the axial loading and residual stress (via the temperature profile discussed in the previous section) were applied to...developed based on observations from analyses of axial load components with sinusoidally varying surface geometries. These observations indicated that...profile parameters (height and wavelength in each surface direction) and the applied axial loading. [Figure label: stress varies sinusoidally, 180° out of phase.]
Determination of butter adulteration with margarine using Raman spectroscopy.
Uysal, Reyhan Selin; Boyaci, Ismail Hakki; Genis, Hüseyin Efe; Tamer, Ugur
2013-12-15
In this study, adulteration of butter with margarine was analysed using Raman spectroscopy combined with chemometric methods (principal component analysis (PCA), principal component regression (PCR), partial least squares (PLS)) and artificial neural networks (ANNs). Different butter and margarine samples were mixed at various concentrations ranging from 0% to 100% w/w. PCA analysis was applied for the classification of butters, margarines and mixtures. PCR, PLS and ANN were used for the detection of adulteration ratios of butter. Models were created using a calibration data set and developed models were evaluated using a validation data set. The coefficient of determination (R(2)) values between actual and predicted values obtained for PCR, PLS and ANN for the validation data set were 0.968, 0.987 and 0.978, respectively. In conclusion, a combination of Raman spectroscopy with chemometrics and ANN methods can be applied for testing butter adulteration. Copyright © 2013 Elsevier Ltd. All rights reserved.
Weight estimation techniques for composite airplanes in general aviation industry
NASA Technical Reports Server (NTRS)
Paramasivam, T.; Horn, W. J.; Ritter, J.
1986-01-01
Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
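The coefficient-fitting step lends itself to a short illustration: ordinary least squares regression of component weight on candidate predictors, one of which is an explicit material-property term. The twelve-airplane "data base" below is entirely synthetic, and the predictors are hypothetical stand-ins for the variables actually used.

```python
# Least-squares fit of a component-weight equation with an explicit
# material-property term, over a synthetic twelve-airplane data base.
import numpy as np

rng = np.random.default_rng(4)
n = 12
span = rng.uniform(8.0, 15.0, n)         # wing span, m (hypothetical predictor)
area = rng.uniform(10.0, 25.0, n)        # wing area, m^2
spec_stiff = rng.uniform(20.0, 40.0, n)  # specific stiffness (material property)
wing_wt = 4.0 * area + 1.5 * span - 0.8 * spec_stiff + rng.normal(0.0, 1.0, n)

A = np.column_stack([np.ones(n), area, span, spec_stiff])
coef, *_ = np.linalg.lstsq(A, wing_wt, rcond=None)
print(coef)  # the material-property coefficient lets one equation cover
             # metallic or composite construction
```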
Modeling Hydraulic Components for Automated FMEA of a Braking System
2014-12-23
Struss, Peter; Fraracci, Alessandro (Tech. Univ. of Munich, 85748 Garching, Germany)
This paper presents work on model-based automation of failure-modes-and-effects analysis (FMEA) applied to the hydraulic part of a vehicle braking system. We describe the FMEA task and the application problem and outline the foundations for automating the…
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang
2018-04-01
A novel technique is developed to level airborne geophysical data using principal component analysis based on flight line differences. In this paper, the flight line difference is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight line difference data instead of directly to the original AEM data. Pseudo tie lines are selected distributively across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines show high correlations, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, we can obtain the levelling errors of the original AEM data through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. The effectiveness of the method is demonstrated by the levelling results on survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
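A hedged sketch of the central idea (levelling errors live in the low-order principal components) is given below; for brevity it applies PCA directly to a synthetic multi-line dataset with per-line offsets rather than to flight line differences, so it simplifies the published workflow.

```python
# Levelling-error removal by low-order PCA reconstruction: per-line offsets
# (the levelling error) dominate the first principal component and are
# subtracted. Synthetic data, not an AEM survey.
import numpy as np

rng = np.random.default_rng(5)
n_lines, n_samples = 40, 200
geology = np.sin(np.linspace(0.0, 6.0, n_samples))           # shared signal
offsets = rng.normal(size=(n_lines, 1))                      # levelling error
data = geology + offsets + 0.05 * rng.normal(size=(n_lines, n_samples))

X = data - data.mean(axis=0)                  # remove the line-averaged profile
U, S, Vt = np.linalg.svd(X, full_matrices=False)
err_est = (U[:, :1] * S[:1]) @ Vt[:1]         # rank-1 (low-order) reconstruction
levelled = data - err_est

print("line-to-line scatter before/after:",
      data.std(axis=0).mean(), levelled.std(axis=0).mean())
```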
Software For Graphical Representation Of A Network
NASA Technical Reports Server (NTRS)
Mcallister, R. William; Mclellan, James P.
1993-01-01
System Visualization Tool (SVT) computer program developed to provide systems engineers with means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.
Rostad, C.E.
2006-01-01
Polar components in fuels may enable differentiation between fuel types or commercial fuel sources. A range of commercial fuels from numerous sources were analyzed by flow injection analysis/electrospray ionization/mass spectrometry without extensive sample preparation, separation, or chromatography. This technique enabled screening for unique polar components at parts-per-million levels in commercial hydrocarbon products, including a range of products from a variety of commercial sources and locations. Because these polar compounds are unique in different fuels, their presence may provide source information on hydrocarbons released into the environment. This analysis was then applied to mixtures of various products, as might be found in accidental releases into the environment. Copyright © Taylor & Francis Group, LLC.
Roopwani, Rahul; Buckner, Ira S
2011-10-14
Principal component analysis (PCA) was applied to pharmaceutical powder compaction. A solid fraction parameter (SF(c/d)) and a mechanical work parameter (W(c/d)) representing irreversible compression behavior were determined as functions of applied load. Multivariate analysis of the compression data was carried out using PCA. The first principal component (PC1) showed loadings for the solid fraction and work values that agreed with changes in the relative significance of plastic deformation to consolidation at different pressures. The PC1 scores showed the same rank order as the relative plasticity ranking derived from the literature for common pharmaceutical materials. The utility of PC1 in understanding deformation was extended to binary mixtures using a subset of the original materials. Combinations of brittle and plastic materials were characterized using the PCA method. The relationships between PC1 scores and the weight fractions of the mixtures were typically linear showing ideal mixing in their deformation behaviors. The mixture consisting of two plastic materials was the only combination to show a consistent positive deviation from ideality. The application of PCA to solid fraction and mechanical work data appears to be an effective means of predicting deformation behavior during compaction of simple powder mixtures. Copyright © 2011 Elsevier B.V. All rights reserved.
Tipping point analysis of atmospheric oxygen concentration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livina, V. N.; Forbes, A. B.; Vaz Martins, T. M.
2015-03-15
We apply tipping point analysis to nine observational oxygen concentration records around the globe, analyse their dynamics and perform projections under possible future scenarios, leading to oxygen deficiency in the atmosphere. The analysis is based on statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the observed data using Bayesian and wavelet techniques.
Chemical information obtained from Auger depth profiles by means of advanced factor analysis (MLCFA)
NASA Astrophysics Data System (ADS)
De Volder, P.; Hoogewijs, R.; De Gryse, R.; Fiermans, L.; Vennik, J.
1993-01-01
The advanced multivariate statistical technique "maximum likelihood common factor analysis (MLCFA)" is shown to be superior to "principal component analysis (PCA)" for decomposing overlapping peaks into their individual component spectra of which neither the number of components nor the peak shape of the component spectra is known. An examination of the maximum resolving power of both techniques, MLCFA and PCA, by means of artificially created series of multicomponent spectra confirms this finding unambiguously. Substantial progress in the use of AES as a chemical-analysis technique is accomplished through the implementation of MLCFA. Chemical information from Auger depth profiles is extracted by investigating the variation of the line shape of the Auger signal as a function of the changing chemical state of the element. In particular, MLCFA combined with Auger depth profiling has been applied to problems related to steelcord-rubber tyre adhesion. MLCFA allows one to elucidate the precise nature of the interfacial layer of reaction products between natural rubber vulcanized on a thin brass layer. This study reveals many interesting chemical aspects of the oxi-sulfidation of brass undetectable with classical AES.
Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold
NASA Astrophysics Data System (ADS)
Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong
2010-03-01
The functional networks, extracted from fMRI images using independent component analysis, have been demonstrated informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
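The subspace comparison at the core of this method can be illustrated with the standard SVD computation of principal angles between two column spaces (the Bjorck-Golub approach). The bases below are random stand-ins for two subjects' independent-component networks.

```python
# Principal angles between two subspaces via the SVD of Qa^T Qb; the bases
# are random stand-ins for two subjects' functional-connectivity patterns.
import numpy as np

rng = np.random.default_rng(6)
Qa, _ = np.linalg.qr(rng.normal(size=(200, 5)))   # subject A: 5 components
Qb, _ = np.linalg.qr(rng.normal(size=(200, 5)))   # subject B: 5 components

cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)   # cosines of principal angles
angles = np.arccos(np.clip(cosines, -1.0, 1.0))
print("principal angles:", angles, " Grassmann distance:", np.linalg.norm(angles))
```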
A first application of independent component analysis to extracting structure from stock returns.
Back, A D; Weigend, A S
1997-08-01
This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).
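A rough sketch of the pipeline, with simulated returns in place of the 28 Japanese stocks and scikit-learn's FastICA standing in for the authors' ICA: unmix the returns, zero out all but the large shocks in each independent component, and reconstruct one stock's cumulative price path.

```python
# FastICA on simulated daily returns; keep only the large shocks in each IC
# and reconstruct a cumulative price path from the thresholded components.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
returns = rng.laplace(scale=0.01, size=(750, 28))   # ~3 years x 28 stocks

ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(returns)                      # independent components
S_big = np.where(np.abs(S) > 2.0 * S.std(axis=0), S, 0.0)  # large shocks only

recon = S_big @ ica.mixing_.T + ica.mean_           # back to return space
price, price_hat = np.cumsum(returns[:, 0]), np.cumsum(recon[:, 0])
print("price-path correlation:", np.corrcoef(price, price_hat)[0, 1])
```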
Analysis and visualization of single-trial event-related potentials
NASA Technical Reports Server (NTRS)
Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.
2001-01-01
In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.
Error analysis and correction of discrete solutions from finite element codes
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.
1984-01-01
Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
NASA Astrophysics Data System (ADS)
Oweis, Khalid J.; Berl, Madison M.; Gaillard, William D.; Duke, Elizabeth S.; Blackstone, Kaitlin; Loew, Murray H.; Zara, Jason M.
2010-03-01
This paper describes the development of novel computer-aided analysis algorithms to identify language activation patterns in a given Region of Interest (ROI) in functional Magnetic Resonance Imaging (fMRI). Previous analysis techniques have been used to compare typical and pathologic activation patterns in fMRI images resulting from identical tasks, but none of them analyzed activation topographically in a quantitative manner. This paper presents new analysis techniques and algorithms capable of identifying a pattern of language activation associated with localization-related epilepsy. fMRI images of 64 healthy individuals and 31 patients with localization-related epilepsy have been studied and analyzed on an ROI basis. All subjects are right-handed with normal MRI scans and have been classified into three age groups (4-6, 7-9, 10-12 years). Our initial efforts have focused on investigating activation in the Left Inferior Frontal Gyrus (LIFG). A number of volumetric features have been extracted from the data. The LIFG has been cut into slices and the activation has been investigated topographically on a slice-by-slice basis. Overall, a total of 809 features have been extracted, and correlation analysis was applied to eliminate highly correlated features. Principal component analysis was then applied to account only for the major components in the data, and one-way analysis of variance (ANOVA) was applied to test for significantly different features between the normal and patient groups. Twenty-nine features were found to be significantly different (p < 0.05) between the patient and control groups.
NASA Astrophysics Data System (ADS)
Raju, B. S.; Sekhar, U. Chandra; Drakshayani, D. N.
2017-08-01
The paper investigates optimization of the stereolithography process for the SL5530 epoxy resin material to enhance part quality. The major performance characteristics selected to evaluate the process are tensile strength, flexural strength, impact strength and density, and the corresponding process parameters are layer thickness, orientation and hatch spacing. In this study, the process intrinsically involves tuning multiple parameters, so grey relational analysis, which uses the grey relational grade as a performance index, is adopted to determine the optimal combination of process parameters. Moreover, principal component analysis is applied to evaluate the weighting values corresponding to the various performance characteristics, so that their relative importance can be properly and objectively determined. The results of confirmation experiments reveal that grey relational analysis coupled with principal component analysis can effectively acquire the optimal combination of process parameters. Hence, this confirms that the proposed approach can be a useful tool to improve the process parameters in the stereolithography process, which is very useful information for machine designers as well as RP machine users.
NASA Astrophysics Data System (ADS)
Mahmoudishadi, S.; Malian, A.; Hosseinali, F.
2017-09-01
The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones, distinguished from the igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from the bedrock, the corresponding independent components IC2, IC3 and IC6 separate these zones more accurately than the noisier PCA bands. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between the different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models that will be used in order to segment acute leukemia images. First, partial contrast stretching is applied to the leukemia images to increase the visual aspect of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models for the purpose of segmenting the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model has proven to be the best at segmenting the nuclei of the blast cells in acute leukemia images, as compared to the other colour components of the RGB and HSI colour models.
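The segmentation step can be sketched as follows; note that plain k-means from scikit-learn is used here as a stand-in for the paper's moving k-means variant, a synthetic image replaces the leukemia micrographs, and scikit-image is assumed for the RGB-to-HSV (HSI-style) conversion.

```python
# Cluster the saturation channel with (plain) k-means; a synthetic image
# with one saturated "nucleus" patch stands in for a leukemia micrograph.
import numpy as np
from sklearn.cluster import KMeans
from skimage.color import rgb2hsv   # assumed available

rng = np.random.default_rng(8)
rgb = rng.uniform(size=(64, 64, 3))
rgb[20:40, 20:40] = [0.4, 0.1, 0.5]                  # strongly saturated patch

sat = rgb2hsv(rgb)[..., 1]                           # saturation component
labels = KMeans(n_clusters=3, n_init=10).fit_predict(sat.reshape(-1, 1))
segmented = labels.reshape(sat.shape)
print("pixels per cluster:", np.bincount(labels))
```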
Changes among Israeli Youth Movements: A Structural Analysis Based on Kahane's Code of Informality
ERIC Educational Resources Information Center
Cohen, Erik H.
2015-01-01
Multi-dimensional data analysis tools are applied to Reuven Kahane's data on the informality of youth organizations, yielding a graphic portrayal of Kahane's code of informality. This structure helps address the question of whether the eight structural components exhaustively cover the field without redundancy. Further, the structure is used to…
ERIC Educational Resources Information Center
Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.
2009-01-01
In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…
Zeemering, Stef; Bonizzi, Pietro; Maesen, Bart; Peeters, Ralf; Schotten, Ulrich
2015-01-01
Spatiotemporal complexity of atrial fibrillation (AF) patterns is often quantified by annotated intracardiac contact mapping. We introduce a new approach that applies recurrence plot (RP) construction followed by recurrence quantification analysis (RQA) to epicardial atrial electrograms, recorded with a high-density grid of electrodes. In 32 patients with no history of AF (aAF, n=11), paroxysmal AF (PAF, n=12) and persistent AF (persAF, n=9), RPs were constructed using a phase space electrogram embedding dimension equal to the estimated AF cycle length. Spatial information was incorporated by 1) averaging the recurrence over all electrodes, and 2) by applying principal component analysis (PCA) to the matrix of embedded electrograms and selecting the first principal component as a representation of spatial diversity. Standard RQA parameters were computed on the constructed RPs and correlated to the number of fibrillation waves per AF cycle (NW). Averaged RP RQA parameters showed no correlation with NW. Correlations improved when applying PCA, with maximum correlation achieved between RP threshold and NW (RR1%, r=0.68, p < 0.001) and RP determinism (DET, r=-0.64, p < 0.001). All studied RQA parameters based on the PCA RP were able to discriminate between persAF and aAF/PAF (DET persAF 0.40 ± 0.11 vs. 0.59 ± 0.14/0.62 ± 0.16, p < 0.01). RP construction and RQA combined with PCA provide a quick and reliable tool to visualize dynamical behaviour and to assess the complexity of contact mapping patterns in AF.
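A compact sketch of RP construction plus one RQA measure is given below; the embedding dimension, the 10% distance threshold, and the crude determinism proxy (recurrent points whose diagonal neighbour also recurs) are illustrative simplifications of standard RQA, not the authors' parameter choices.

```python
# Recurrence plot from a delay embedding plus a crude determinism proxy.
import numpy as np

rng = np.random.default_rng(9)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 600)) + 0.2 * rng.normal(size=600)

m, tau = 10, 1                                     # illustrative embedding
emb = np.stack([x[i:len(x) - (m - 1) * tau + i:tau] for i in range(m)], axis=1)
D = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
R = (D < np.percentile(D, 10)).astype(int)         # 10% recurrence threshold

det_proxy = (R[:-1, :-1] * R[1:, 1:]).sum() / max(R.sum(), 1)
print(f"recurrence rate {R.mean():.3f}, determinism proxy {det_proxy:.3f}")
```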
Genome-wide selection components analysis in a fish with male pregnancy.
Flanagan, Sarah P; Jones, Adam G
2017-04-01
A major goal of evolutionary biology is to identify the genome-level targets of natural and sexual selection. With the advent of next-generation sequencing, whole-genome selection components analysis provides a promising avenue in the search for loci affected by selection in nature. Here, we implement a genome-wide selection components analysis in the sex-role-reversed Gulf pipefish, Syngnathus scovelli. Our approach involves a double-digest restriction-site associated DNA sequencing (ddRAD-seq) technique, applied to adult females, nonpregnant males, pregnant males, and their offspring. An F_ST comparison of allele frequencies among these groups reveals 47 genomic regions putatively experiencing sexual selection, as well as 468 regions showing a signature of differential viability selection between males and females. A complementary likelihood ratio test identifies similar patterns in the data as the F_ST analysis. Sexual selection and viability selection both tend to favor the rare alleles in the population. Ultimately, we conclude that genome-wide selection components analysis can be a useful tool to complement other approaches in the effort to pinpoint genome-level targets of selection in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing
NASA Astrophysics Data System (ADS)
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS) smoothing (STL). The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
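As a hedged illustration of the decomposition step, the snippet below applies statsmodels' STL (a LOESS-based seasonal-trend decomposition) to a simulated daily pollen series with a yearly cycle; the residual is the stochastic component that the study goes on to model with PLSR. The series parameters are invented.

```python
# STL (LOESS-based seasonal-trend decomposition) on a simulated daily
# pollen series with a yearly cycle peaking near day 150.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(10)
idx = pd.date_range("2006-01-01", "2013-12-31", freq="D")
doy = idx.dayofyear.to_numpy()
pollen = 50.0 * np.exp(-0.5 * ((doy - 150) / 20.0) ** 2) \
         + rng.gamma(2.0, 1.0, size=len(idx))

res = STL(pd.Series(pollen, index=idx), period=365).fit()
seasonal, trend, resid = res.seasonal, res.trend, res.resid
print(resid.std())   # the stochastic component later fitted against weather
```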
Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua
2010-08-01
In this paper we present a new method combining independent component analysis (ICA) and a wavelet de-noising algorithm to extract event-related potentials (ERPs). First, the extended Infomax ICA algorithm is used to analyze the EEG signals and obtain the independent components (ICs). Then, the wavelet shrinkage (WS) method is applied to the demixed ICs as an intermediate step, and the EEG data are rebuilt by applying the inverse ICA to the new ICs; the ERPs are extracted from the de-noised EEG data after averaging over several trials. The experimental results showed that both the combined method and the ICA method could remove eye and muscle artifacts mixed in the ERPs, while the combined method could also retain the brain neural activity mixed into the noisy ICs and could efficiently extract weak ERPs from strong background artifacts.
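A minimal sketch of this ICA-plus-wavelet-shrinkage idea follows; note it substitutes FastICA for the extended Infomax algorithm used by the authors, and the wavelet, decomposition level, and threshold rule are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def ica_wavelet_denoise(eeg, wavelet="db4", level=4):
    """ICA -> soft wavelet shrinkage of each IC -> inverse ICA.

    eeg : (n_samples, n_channels) array of multi-channel EEG
    """
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    ics = ica.fit_transform(eeg)                      # demixed ICs
    cleaned = np.empty_like(ics)
    for k in range(ics.shape[1]):
        coeffs = pywt.wavedec(ics[:, k], wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale estimate
        thr = sigma * np.sqrt(2 * np.log(ics.shape[0]))     # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        cleaned[:, k] = pywt.waverec(coeffs, wavelet)[: ics.shape[0]]
    return ica.inverse_transform(cleaned)             # rebuilt EEG channels
```

Averaging the rebuilt EEG over trials would then yield the ERP estimate.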
Stashenko, Elena E; Martínez, Jairo R; Ruíz, Carlos A; Arias, Ginna; Durán, Camilo; Salgar, William; Cala, Mónica
2010-01-01
Chromatographic (GC/flame ionization detection, GC/MS) and statistical analyses were applied to the study of essential oils and extracts obtained from flowers, leaves, and stems of Lippia origanoides plants, growing wild in different Colombian regions. Retention indices, mass spectra, and standard substances were used in the identification of 139 substances detected in these essential oils and extracts. Principal component analysis allowed L. origanoides classification into three chemotypes, characterized according to their essential oil major components. Alpha- and beta-phellandrenes, p-cymene, and limonene distinguished chemotype A; carvacrol and thymol were the distinctive major components of chemotypes B and C, respectively. Pinocembrin (5,7-dihydroxyflavanone) was found in the L. origanoides chemotype A supercritical fluid (CO2) extract at a concentration of 0.83 ± 0.03 mg/g of dry plant material, which makes this plant an interesting source of an important bioactive flavanone with diverse potential applications in cosmetic, food, and pharmaceutical products.
Heart sound segmentation of pediatric auscultations using wavelet analysis.
Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T
2013-01-01
Auscultation is widely applied in clinical activity; nonetheless, sound interpretation depends on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract the heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3%, respectively. The median difference between annotated and detected events was 33.9 ms.
SCGICAR: Spatial concatenation based group ICA with reference for fMRI data analysis.
Shi, Yuhu; Zeng, Weiming; Wang, Nizhuan
2017-09-01
With the rapid development of big data, multi-subject functional magnetic resonance imaging (fMRI) data analysis is becoming more and more important. As a kind of blind source separation technique, group independent component analysis (GICA) has been widely applied for multi-subject fMRI data analysis. However, spatially concatenated GICA is rarely used compared with temporally concatenated GICA due to its disadvantages. In this paper, in order to overcome these issues, and considering that the ability of GICA for fMRI data analysis can be improved by adding prior information, we propose a novel spatial concatenation based GICA with reference (SCGICAR) method that takes advantage of the prior information extracted from the group subjects; a multi-objective optimization strategy is then used to implement the method. Finally, the post-processing means of principal component analysis and anti-reconstruction are used to obtain the group spatial component and the individual temporal component in the group, respectively. The experimental results show that the proposed SCGICAR method achieves a better performance on both single-subject and multi-subject fMRI data analysis compared with classical methods. It not only can detect more accurate spatial and temporal components for each subject of the group, but also can obtain a better group component in both the temporal and spatial domains. These results demonstrate that the proposed SCGICAR method has its own advantages in comparison with classical methods, and it can better reflect the commonness of subjects in the group.
Determination of domain wall chirality using in situ Lorentz transmission electron microscopy
Chess, Jordan J.; Montoya, Sergio A.; Fullerton, Eric E.; ...
2017-02-23
Control of domain wall chirality is increasingly seen in non-centrosymmetric materials. Mapping chiral magnetic domains requires knowledge of all the vector components of the magnetization, which poses a problem for conventional Lorentz transmission electron microscopy (LTEM), which is only sensitive to magnetic fields perpendicular to the electron beam's direction of travel. The standard approach in LTEM for determining the third component of the magnetization is to tilt the sample to some angle and record a second image. However, this presents a problem for any domain structures that are stabilized by an applied external magnetic field (e.g. skyrmions), because the standard LTEM setup does not allow independent control of the angle of the applied magnetic field and the sample tilt angle. Here we show that, by applying a modified transport-of-intensity-equation analysis to LTEM images collected during an applied field sweep, we can determine the domain wall chirality of labyrinth domains in a perpendicularly magnetized material, avoiding the need to tilt the sample.
Calculation and Analysis of Magnetic Gradient Tensor Components of Global Magnetic Models
NASA Astrophysics Data System (ADS)
Schiffler, M.; Queitsch, M.; Schneider, M.; Goepel, A.; Stolz, R.; Krech, W.; Meyer, H. G.; Kukowski, N.
2014-12-01
Global Earth's magnetic field models like the International Geomagnetic Reference Field (IGRF), the World Magnetic Model (WMM) or the High Definition Geomagnetic Model (HDGM) are harmonic analysis regressions to available magnetic observations stored as spherical harmonic coefficients. Input data combine recordings from magnetic observatories, airborne magnetic surveys and satellite data. Recent magnetic satellite missions like SWARM and its predecessors like CHAMP offer high-resolution measurements while providing full global coverage. This motivates expansion of the theoretical framework of harmonic synthesis to magnetic gradient tensor components. Measurement setups for Full Tensor Magnetic Gradiometry equipped with highly sensitive gradiometers like the JeSSY STAR system can directly measure the gradient tensor components, which requires precise knowledge of the background regional gradients that can be calculated with this extension. In this study we develop the theoretical framework for calculation of the magnetic gradient tensor components from the harmonic series expansion and apply our approach to the IGRF and HDGM. The gradient tensor component maps for the entire Earth's surface produced for the IGRF show low gradients reflecting the variation from the dipolar character, whereas maps for the HDGM (up to degree N=729) reveal new information about crustal structure, especially across the oceans, and deeply situated ore bodies. From the gradient tensor components, the rotational invariants, the eigenvalues, and the normalized source strength (NSS) are calculated. The NSS focuses on shallower and stronger anomalies. Euler deconvolution using either the tensor components or the NSS applied to the HDGM reveals an estimate of the average source depth for the entire magnetic crust as well as individual plutons and ore bodies. The NSS reveals the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern European Craton.
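The gradient tensor itself, and derived quantities such as the eigenvalues and the NSS, can be illustrated outside of the spherical harmonic framework; the sketch below differentiates a simple dipole field by central finite differences rather than performing the harmonic synthesis developed in the study, and the dipole moment and the NSS convention used are assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_field(r, m=np.array([0.0, 0.0, 8.0e22])):
    """Magnetic field (T) of a point dipole with moment m (A m^2) at position r (m)."""
    rn = np.linalg.norm(r)
    return MU0 / (4 * np.pi) * (3 * r * (m @ r) / rn**5 - m / rn**3)

def gradient_tensor(r, h=1.0):
    """3x3 tensor dB_i/dx_j by central differences; trace ~ 0 since div B = 0."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_field(r + dr) - dipole_field(r - dr)) / (2 * h)
    return G

G = gradient_tensor(np.array([6.371e6, 0.0, 0.0]))   # point on the magnetic equator
lam = np.linalg.eigvalsh(0.5 * (G + G.T))            # eigenvalues, ascending order
nss = np.sqrt(-lam[1] ** 2 - lam[0] * lam[2])        # one common NSS definition
print(G, nss)
```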
Ozone data and mission sampling analysis
NASA Technical Reports Server (NTRS)
Robbins, J. L.
1980-01-01
A methodology was developed to analyze discrete data obtained from the global distribution of ozone. Statistical analysis techniques were applied to describe the distribution of data variance in terms of empirical orthogonal functions and components of spherical harmonic models. The effects of uneven data distribution and missing data were considered. Data fill based on the autocorrelation structure of the data is described. Computer coding of the analysis techniques is included.
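The empirical orthogonal function part of such an analysis reduces, in modern terms, to a singular value decomposition of the space-time data matrix; a generic sketch follows, with random numbers standing in for gridded ozone values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))   # rows = time samples, cols = grid points
X -= X.mean(axis=0)                   # remove the time mean at each grid point

# EOFs are the right singular vectors; PC time series are U * S
U, s, Vt = np.linalg.svd(X, full_matrices=False)
variance_fraction = s**2 / np.sum(s**2)
eofs, pcs = Vt, U * s
print(variance_fraction[:5])          # variance captured by the leading modes
```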
Brown, C. Erwin
1993-01-01
Correlation analysis in conjunction with principal-component and multiple-regression analyses was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine dimensions of property variation of samples, and to filter the variables containing similar information. Principal-component and correlation analyses showed that porosity is related to other measured variables and that permeability is most related to porosity and grain size. Four principal components are found to be significant in explaining the variance of data. Stepwise multiple-regression analysis was used to see how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not totally predicted by the other variables, but the regression is significant at the 5% significance level.
Socaci, Sonia A; Socaciu, Carmen; Tofană, Maria; Raţi, Ioan V; Pintea, Adela
2013-01-01
The health benefits of sea buckthorn (Hippophae rhamnoides L.) are well documented due to its rich content of bioactive phytochemicals (pigments, phenolics and vitamins) as well as volatiles responsible for specific flavours and bacteriostatic action. The volatile compounds are good biomarkers of berry freshness, quality and authenticity. The aim was to develop a fast and efficient GC-MS method, including a minimal sample preparation technique (in-tube extraction, ITEX), for the discrimination of sea buckthorn varieties based on their chromatographic volatile fingerprint. Twelve sea buckthorn varieties (wild and cultivated) were collected from forestry departments and experimental fields, respectively. The extraction of volatile compounds was performed using the ITEX technique, whereas separation and identification were performed using a GC-MS QP-2010. Principal component analysis (PCA) was applied to discriminate the differences among sample composition. Using GC-MS analysis, 46 volatile compounds were separated from the headspace of the sea buckthorn samples, of which 43 were identified. The most abundant derivatives were ethyl esters of 2-methylbutanoic acid, 3-methylbutanoic acid, hexanoic acid, octanoic acid and butanoic acid, as well as 3-methylbutyl 3-methylbutanoate, 3-methylbutyl 2-methylbutanoate and benzoic acid ethyl ester (over 80% of all volatile compounds). Principal component analysis showed that the first two components explain 79% of data variance, demonstrating a good discrimination between samples. A reliable, fast and eco-friendly ITEX/GC-MS method was applied to fingerprint the volatile profile and to discriminate between wild and cultivated sea buckthorn berries originating from the Carpathians, with relevance to food science and technology.
Confirmatory Factor Analysis of the Delirium Rating Scale Revised-98 (DRS-R98).
Thurber, Steven; Kishi, Yasuhiro; Trzepacz, Paula T; Franco, Jose G; Meagher, David J; Lee, Yanghyun; Kim, Jeong-Lan; Furlanetto, Leticia M; Negreiros, Daniel; Huang, Ming-Chyi; Chen, Chun-Hsin; Kean, Jacob; Leonard, Maeve
2015-01-01
Principal components analysis applied to the Delirium Rating Scale-Revised-98 contributes to understanding the delirium construct. Using a multisite pooled international delirium database, the authors applied confirmatory factor analysis to Delirium Rating Scale-Revised-98 scores from 859 adult patients evaluated by delirium experts (delirium, N=516; nondelirium, N=343). Confirmatory factor analysis found all diagnostic features and core symptoms (cognitive, language, thought process, sleep-wake cycle, motor retardation), except motor agitation, loaded onto factor 1. Motor agitation loaded onto factor 2 with noncore symptoms (delusions, affective lability, and perceptual disturbances). Factor 1 loading supports delirium as a single construct, but when accompanied by psychosis, motor agitation's role may not be solely as a circadian activity indicator.
Carbon-carbon primary structure for SSTO vehicles
NASA Astrophysics Data System (ADS)
Croop, Harold C.; Lowndes, Holland B.
1997-01-01
A hot structures development program is nearing completion to validate use of carbon-carbon composite structure for primary load carrying members in a single-stage-to-orbit, or SSTO, vehicle. A four phase program was pursued which involved design development and fabrication of a full-scale wing torque box demonstration component. The design development included vehicle and component selection, design criteria and approach, design data development, demonstration component design and analysis, test fixture design and analysis, demonstration component test planning, and high temperature test instrumentation development. The fabrication effort encompassed fabrication of structural elements for mechanical property verification as well as fabrication of the demonstration component itself and associated test fixturing. The demonstration component features 3D woven graphite preforms, integral spars, oxidation inhibited matrix, chemical vapor deposited (CVD) SiC oxidation protection coating, and ceramic matrix composite fasteners. The demonstration component has been delivered to the United States Air Force (USAF) for testing in the Wright Laboratory Structural Test Facility, WPAFB, OH. Multiple thermal-mechanical load cycles will be applied simulating two atmospheric cruise missions and one orbital mission. This paper discusses the overall approach to validation testing of the wing box component and presents some preliminary analytical test predictions.
Study on nondestructive discrimination of genuine and counterfeit wild ginsengs using NIRS
NASA Astrophysics Data System (ADS)
Lu, Q.; Fan, Y.; Peng, Z.; Ding, H.; Gao, H.
2012-07-01
A new approach for the nondestructive discrimination between genuine wild ginsengs and counterfeit ones by near infrared spectroscopy (NIRS) was developed. Both discriminant analysis and a back propagation artificial neural network (BP-ANN) were applied to build the discrimination models. Optimal modeling wavelengths were determined based on the anomalous spectral information of counterfeit samples. Through principal component analysis (PCA) of various wild ginseng samples, genuine and counterfeit, the cumulative percentages of variance of the principal components were obtained, serving as a reference for principal component (PC) factor determination. Discriminant analysis achieved an identification ratio of 88.46%. With the samples' truth values as its outputs, a three-layer BP-ANN model was built, which yielded a higher discrimination accuracy of 100%. The overall results sufficiently demonstrate that NIRS combined with a BP-ANN classification algorithm performs better for ginseng discrimination than discriminant analysis, and can be used as a rapid and nondestructive method for the detection of counterfeit wild ginsengs in the food and pharmaceutical industries.
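A hedged sketch of the PCA-plus-neural-network classifier described above follows, using scikit-learn's MLPClassifier as a stand-in for the three-layer BP-ANN; the spectra, labels, component count, and layer size are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((52, 700))      # placeholder NIR spectra (rows = samples)
y = rng.integers(0, 2, 52)              # 1 = genuine, 0 = counterfeit (placeholder)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),               # PC factors chosen from cumulative variance
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))
```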
Warren, Ruth M L; Thompson, Deborah; Pointon, Linda J; Hoff, Rebecca; Gilbert, Fiona J; Padhani, Anwar R; Easton, Douglas F; Lakhani, Sunil R; Leach, Martin O
2006-06-01
The aim was to evaluate prospectively the accuracy of a lesion classification system designed for use in a magnetic resonance (MR) imaging high-breast-cancer-risk screening study. All participating patients provided written informed consent. Ethics committee approval was obtained. The results of 1541 contrast material-enhanced breast MR imaging examinations were analyzed; 1441 screening examinations were performed in 638 women aged 24-51 years at high risk for breast cancer, and 100 examinations were performed in 100 women aged 23-81 years. Lesion analysis was performed in 991 breasts, which were divided into design (491 breasts) and testing (500 breasts) sets. The reference standard was histologic analysis of biopsy samples, fine-needle aspiration cytology, or a minimum follow-up of 24 months. The scoring system involved the use of five features: morphology (MOR), pattern of enhancement (POE), percentage of maximal focal enhancement (PMFE), maximal signal intensity-time ratio (MITR), and pattern of contrast material washout (POCW). The system was evaluated by means of (a) assessment of interreader agreement, as expressed in kappa statistics, for 315 breasts in which both readers analyzed the same lesion, (b) assessment of the diagnostic accuracy of the scored components with receiver operating characteristic curve analysis, and (c) logistic regression analysis to determine which components of the scoring system were critical to the final score. A new simplified scoring system developed with the design set was applied to the testing set. There was moderate reader agreement regarding overall lesion outcome (i.e., malignant, suspicious, or benign) (kappa=0.58) and less agreement regarding the scored components. The area under the receiver operating characteristic curve (AUC) for the overall lesion score, 0.88, was higher than the AUC for any one component. The components MOR, POE, and POCW yielded the best overall result. PMFE and MITR did not contribute to diagnostic utility. Applying the simplified scoring system to the testing set yielded a nonsignificantly (P=.2) higher AUC than did applying the original scoring system (sensitivity, 84%; specificity, 86.0%). Good diagnostic accuracy can be achieved by using simple qualitative descriptors of lesion enhancement, including POCW. In the context of screening, quantitative enhancement parameters appear to be less useful for lesion characterization.
Madeo, Andrea; Piras, Paolo; Re, Federica; Gabriele, Stefano; Nardinocchi, Paola; Teresi, Luciano; Torromeo, Concetta; Chialastri, Claudia; Schiariti, Michele; Giura, Geltrude; Evangelista, Antonietta; Dominici, Tania; Varano, Valerio; Zachara, Elisabetta; Puddu, Paolo Emilio
2015-01-01
The assessment of left ventricular shape changes during the cardiac revolution may be a new step in clinical cardiology to ease early diagnosis and treatment. To quantify these changes, previous approaches adopted only point registration, applying neither Generalized Procrustes Analysis nor Principal Component Analysis as we did previously to study a group of healthy subjects. Here, we extend the original approach to patients affected by hypertrophic cardiomyopathy and preliminarily include genotype-positive/phenotype-negative individuals to explore the potential that incumbent pathology might also be detected. Using 3D Speckle Tracking Echocardiography, we recorded the left ventricular shape of 48 healthy subjects, 24 patients affected by hypertrophic cardiomyopathy and 3 genotype-positive/phenotype-negative individuals. We then applied Generalized Procrustes Analysis and Principal Component Analysis, and inter-individual differences were cleaned by Parallel Transport performed on the tangent space, along the horizontal geodesic, between the per-subject consensuses and the grand mean. Endocardial and epicardial layers were evaluated separately, unlike in many echocardiographic applications. Under a common Principal Component Analysis, we then evaluated left ventricle morphological changes (at both layers) explained by first Principal Component scores. Trajectories' shape and orientation were investigated and contrasted. Logistic regression and Receiver Operating Characteristic curves were used to compare these morphometric indicators with traditional 3D Speckle Tracking Echocardiography global parameters. Geometric morphometrics indicators performed better than 3D Speckle Tracking Echocardiography global parameters in recognizing pathology both in systole and diastole. Genotype-positive/phenotype-negative individuals clustered with patients affected by hypertrophic cardiomyopathy during diastole, suggesting that incumbent pathology may indeed be foreseen by these methods. Left ventricle deformation in patients affected by hypertrophic cardiomyopathy compared to healthy subjects may be assessed by modern shape analysis better than by traditional 3D Speckle Tracking Echocardiography global parameters. Hypertrophic cardiomyopathy pathophysiology was unveiled in a new manner, whereby diastolic-phase abnormalities, which are more difficult to investigate by traditional echocardiographic techniques, are also evident.
Functional data analysis of sleeping energy expenditure.
Lee, Jong Soo; Zakeri, Issa F; Butte, Nancy F
2017-01-01
Adequate sleep is crucial during childhood for metabolic health, and physical and cognitive development. Inadequate sleep can disrupt metabolic homeostasis and alter sleeping energy expenditure (SEE). Functional data analysis methods were applied to SEE data to elucidate the population structure of SEE and to discriminate SEE between obese and non-obese children. Minute-by-minute SEE in 109 children, ages 5-18, was measured in room respiration calorimeters. A smoothing spline method was applied to the calorimetric data to extract the true smoothing function for each subject. Functional principal component analysis was used to capture the important modes of variation of the functional data and to identify differences in SEE patterns. Combinations of functional principal component analysis and classifier algorithms were used to classify SEE. Smoothing effectively removed instrumentation noise inherent in the room calorimeter data, providing more accurate data for analysis of the dynamics of SEE. SEE exhibited declining but subtly undulating patterns throughout the night. Mean SEE was markedly higher in obese than non-obese children, as expected due to their greater body mass. SEE was higher among the obese than non-obese children (p<0.01); however, the weight-adjusted mean SEE was not statistically different (p>0.1, after post hoc testing). Functional principal component scores for the first two components explained 77.8% of the variance in SEE and also differed between groups (p = 0.037). Logistic regression, support vector machine or random forest classification methods were able to distinguish weight-adjusted SEE between obese and non-obese participants with good classification rates (62-64%). Our results implicate other factors, yet to be uncovered, that affect the weight-adjusted SEE of obese and non-obese children. Functional data analysis revealed differences in the structure of SEE between obese and non-obese children that may contribute to disruption of metabolic homeostasis.
Wavelet analysis applied to the IRAS cirrus
NASA Technical Reports Server (NTRS)
Langer, William D.; Wilson, Robert W.; Anderson, Charles H.
1994-01-01
The structure of infrared cirrus clouds is analyzed with Laplacian pyramid transforms, a form of non-orthogonal wavelets. Pyramid and wavelet transforms provide a means to decompose images into their spatial frequency components such that all spatial scales are treated in an equivalent manner. The multiscale transform analysis is applied to IRAS 100 micrometer maps of cirrus emission in the north Galactic pole region to extract features on different scales. In the maps we identify filaments, fragments and clumps by separating all connected regions. These structures are analyzed with respect to their Hausdorff dimension for evidence of the scaling relationships in the cirrus clouds.
Testing for intracycle determinism in pseudoperiodic time series.
Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A
2008-06-01
A determinism test is proposed based on the well-known method of surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in pseudoperiodic time series, for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series and the results show the applicability of the proposed test.
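After the de-seasonalizing step, the "standard tests of surrogate analysis" referred to above typically look like the following: generate phase-randomized surrogates that preserve the power spectrum, then compare a discriminating statistic against the surrogate null distribution. The statistic chosen here (time-reversal asymmetry) is an illustrative assumption, not necessarily the authors' choice.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    X = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = phases[-1] = 0.0          # keep mean and Nyquist bins real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x)) + x.mean()

def time_asymmetry(x):
    """Third-order statistic; zero in expectation for linear Gaussian processes."""
    return np.mean((x[1:] - x[:-1]) ** 3)

rng = np.random.default_rng(0)
t = np.arange(2048)
x = np.sin(0.07 * t) ** 3 + 0.3 * rng.standard_normal(t.size)  # toy series

stat = time_asymmetry(x)
null = [time_asymmetry(phase_randomized_surrogate(x, rng)) for _ in range(99)]
p = (1 + np.sum(np.abs(null) >= abs(stat))) / 100              # rank-based p-value
print(stat, p)
```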
NASA Technical Reports Server (NTRS)
Parada, N. D. J.; Novo, E. M. L. M.
1983-01-01
Two sets of MSS/LANDSAT data with solar elevation ranging from 22 deg to 41 deg were used at the Image-100 System to implement the Eliason et al. technique for extracting the topographic modulation component. An unsupervised cluster analysis was used to obtain an average brightness image for each channel. Analysis of the enhanced images shows that the technique for extracting the topographic modulation component is more appropriate for MSS data obtained under high sun elevation angles. Low sun elevation increases the variance of each cluster, so that the average brightness does not represent its albedo properties. The topographic modulation component applied at low sun elevation angles degrades rather than enhances topographic information. Better results were produced for channels 4 and 5 than for channels 6 and 7.
A stable systemic risk ranking in China's banking sector: Based on principal component analysis
NASA Astrophysics Data System (ADS)
Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing
2018-02-01
In this paper, we compare five popular systemic risk rankings, and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA combined ranking is mainly based on fundamentals instead of market price data. We clearly find that price-based rankings are not as practical as fundamentals-based ones. This PCA combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and reminds banks to prevent and cope with financial crises in advance.
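The combination step can be sketched as follows: standardize the individual risk measures across banks and score each bank on the first principal component. This is a generic reconstruction under assumed data shapes, not the paper's exact estimation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
scores = rng.standard_normal((16, 5))      # rows = banks, cols = five risk measures

z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=1).fit(z)
combined = z @ pca.components_[0]          # first-PC composite risk score
# orient the sign so that larger values mean more systemic risk, then rank
ranking = np.argsort(-combined)
print(pca.explained_variance_ratio_[0], ranking)
```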
Support vector machine based classification of fast Fourier transform spectroscopy of proteins
NASA Astrophysics Data System (ADS)
Lazarevic, Aleksandar; Pokrajac, Dragoljub; Marcano, Aristides; Melikechi, Noureddine
2009-02-01
Fast Fourier transform spectroscopy has proved to be a powerful method for the study of the secondary structure of proteins, since peak positions and their relative amplitudes are affected by the number of hydrogen bridges that sustain this secondary structure. However, to the best of our knowledge, the method has not yet been used for identification of proteins within a complex matrix like a blood sample. The principal reason is the apparent similarity of protein infrared spectra, with actual differences usually masked by the solvent contribution and other interactions. In this paper, we propose a novel machine-learning-based method that uses protein spectra for classification and identification of such proteins within a given sample. The proposed method uses principal component analysis (PCA) to identify the most important linear combinations of original spectral components and then employs a support vector machine (SVM) classification model applied to such identified combinations to categorize proteins into one of the given groups. Our experiments have been performed on a set of four different proteins, namely: Bovine Serum Albumin, Leptin, Insulin-like Growth Factor 2 and Osteopontin. Our proposed method of applying principal component analysis along with support vector machines exhibits excellent classification accuracy when identifying proteins using their infrared spectra.
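The PCA-then-SVM pipeline maps directly onto a few lines of scikit-learn; the spectra, class labels, and tuning values below are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((80, 1200))   # placeholder FTIR spectra (rows = samples)
y = rng.integers(0, 4, 80)            # four protein classes (placeholder labels)

clf = make_pipeline(PCA(n_components=15), SVC(kernel="rbf", C=10.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```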
Saravanan, V S; Ayessa Idenal, Marissa; Saiyed, Shahin; Saxena, Deepak; Gerke, Solvay
2016-10-01
Diseases are rapidly urbanizing. Ageing infrastructures, high levels of inequality, poor urban governance, rapidly growing economies and highly dense and mobile populations all create environments rife for water-borne diseases. This article analyzes the role of institutions as crosscutting entities among a myriad of factors that breed water-borne diseases in the city of Ahmedabad, India. It applies 'path dependency' and a 'rational choice' perspective to understand the factors facilitating the breeding of diseases. This study is based on household surveys of approximately 327 households in two case study wards and intermittent interviews with key informants over a period of 2 years. Principal component analysis is applied to reduce the data and convert a set of observations, which potentially correlate with each other, into components. Institutional analyses behind these components reveal the role of social actors in exploiting the deeply rooted inefficiencies affecting urban health. This has led to a vicious cycle; breaking this cycle requires understanding the political dynamics that underlie the exposure and prevalence of diseases to improve urban health.
Seismic Analysis of the 2017 Oroville Dam Spillway Erosion Crisis
NASA Astrophysics Data System (ADS)
Goodling, P.; Lekic, V.; Prestegaard, K. L.
2017-12-01
The outflow channel of the northern California (USA) Oroville Dam suffered catastrophic erosion damage in February and March, 2017. High discharges released through the spillway (up to 3,000 m3/s) caused rapid spillway erosion, forming a deep chasm. A repeat LiDAR survey obtained from the California Department of Water Resources indicates that the chasm eroded to a depth of 48 meters. A three-component broadband seismometer (STS-1) operated by the Berkeley Digital Seismological Network recorded microseismic energy produced by the flowing water, providing a natural laboratory to test methods for seismically monitoring sudden catastrophic floods and erosion. In this study, we evaluate the three-component waveforms recorded during five constant-discharge periods - before, during, and after the spillway crisis - each of which had a different channel geometry. We apply frequency-dependent polarization analysis (FDPA; following Park, 1987), which characterizes particle motion at each frequency. The method is based on principal component analysis on a spectral covariance matrix in one-hour windows and it produces the horizontal azimuth, vertical tilt, horizontal phase, and vertical phase of the dominant particle motion. The results indicate a greater vertical component (perhaps roughness-induced) of power at a broad range of frequencies at a given discharge after the formation of the chasm. As the outflow crater developed, the back-azimuth of the primary source of seismic energy changed from the nearby Thermalito Diversion Pool (188 degrees) to the center of the outflow channel (170 degrees). To further analyze FDPA results, we apply the 2D spectral-element solver package SPECFEM2D (Tromp et al. 2008), and find that local topography should be considered when interpreting the surface waveforms predicted by FDPA results. This research suggests that monitoring changing channel geometry and erosion in large-scale flood events may be enhanced by seismic FDPA analysis. The results of this work are compared and contrasted with 3-component seismic observations of cobble-bed stream floods in Maryland.
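The core of FDPA, eigen-decomposition of a spectral covariance matrix at each frequency, can be sketched as below; this is a simplified reconstruction (Welch cross-spectra rather than the study's exact windowing), and the azimuth convention and toy data are assumptions.

```python
import numpy as np
from scipy.signal import csd

def fdpa_azimuth(z, n, e, fs, nperseg=1024):
    """Eigen-decompose the 3x3 spectral covariance matrix at each frequency
    and return the horizontal azimuth of the dominant particle motion."""
    comps = [z, n, e]
    S = None
    for i in range(3):
        for j in range(3):
            f, Sij = csd(comps[i], comps[j], fs=fs, nperseg=nperseg)
            if S is None:
                S = np.zeros((len(f), 3, 3), dtype=complex)
            S[:, i, j] = Sij
    az = np.empty(len(f))
    for k in range(len(f)):
        _, vecs = np.linalg.eigh(S[k])     # Hermitian eigen-decomposition
        dom = vecs[:, -1]                  # eigenvector of the largest eigenvalue
        az[k] = np.degrees(np.arctan2(np.abs(dom[2]), np.abs(dom[1])))  # E over N
    return f, az

rng = np.random.default_rng(7)
src = rng.standard_normal(100_000)                    # common noise source
z, n, e = 0.4 * src, 0.9 * src, 0.5 * src             # toy 3-component record
f, az = fdpa_azimuth(z + 0.1 * rng.standard_normal(z.size), n, e, fs=100.0)
print(az[:5])
```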
NASA Astrophysics Data System (ADS)
Li, Zhixiong; Yan, Xinping; Wang, Xuping; Peng, Zhongxiao
2016-06-01
In complex gear transmission systems, such as those in wind turbines, a crack is one of the most common failure modes and can be fatal to the wind turbine power system. A single sensor may suffer from issues relating to its installation position and direction, resulting in the collection of weak dynamic responses of the cracked gear. A multi-channel sensor system is hence applied in the signal acquisition, and blind source separation (BSS) technologies are employed to optimally process the information collected from multiple sensors. However, a literature review finds that most BSS-based fault detectors do not address the dependence/correlation between different moving components in gear systems; in particular, the popular independent component analysis (ICA) assumes mutual independence of the different vibration sources. The fault detection performance may be significantly influenced by the dependence/correlation between vibration sources. In order to address this issue, this paper presents a new method based on supervised order tracking bounded component analysis (SOTBCA) for gear crack detection in wind turbines. Bounded component analysis (BCA) is a state-of-the-art technique for dependent source separation that has so far been applied mainly to communication signals. To make it applicable to vibration analysis, in this work order tracking has been appropriately incorporated into the BCA framework to eliminate noise and disturbance signal components. Then an autoregressive (AR) model built with prior knowledge about the crack fault is employed to supervise the reconstruction of the crack vibration source signature. The SOTBCA outputs only one source signal, the one that has the closest distance to the AR model. Owing to the dependence tolerance ability of the BCA framework, interfering vibration sources that are dependent on or correlated with the crack vibration source can be recognized by the SOTBCA, and hence only useful fault information is preserved in the reconstructed signal. The crack failure thus can be precisely identified by cyclic spectral correlation analysis. A series of numerical simulations and experimental tests have been conducted to illustrate the advantages of the proposed SOTBCA method for fatigue crack detection. Comparisons with three representative techniques, i.e. Erdogan's BCA (E-BCA), joint approximate diagonalization of eigen-matrices (JADE), and FastICA, have demonstrated the effectiveness of the SOTBCA. Hence the proposed approach is suitable for accurate gear crack detection in practical applications.
Yi, YaXiong; Zhang, Yong; Ding, Yue; Lu, Lu; Zhang, Tong; Zhao, Yuan; Xu, XiaoJun; Zhang, YuXin
2016-11-01
We developed a novel quantitative analysis method based on ultra-high-performance liquid chromatography coupled with diode array detection for the simultaneous determination of the 14 main active components in Yinchenhao decoction. All components were separated on an Agilent SB-C18 column using a gradient solvent system of acetonitrile/0.1% phosphoric acid solution at a flow rate of 0.4 mL/min for 35 min. Subsequently, linearity, precision, repeatability, and accuracy tests were implemented to validate the method. Furthermore, the method was applied to a compositional difference analysis of the 14 components in eight normal-extraction Yinchenhao decoction samples, accompanied by hierarchical clustering analysis and similarity analysis. All samples were divided into three groups based on different component contents, demonstrating that the extraction method (decocting, refluxing, or ultrasonication) and the extraction solvent (water or ethanol) affected component differentiation, which should be related to the decoction's clinical applications. The results also indicated that a sample prepared by patients at home by water extraction in a casserole was almost the same as one prepared in a stainless-steel kettle, which is mostly used in pharmaceutical factories. This research should help patients select the best and most convenient method for preparing Yinchenhao decoction.
ERIC Educational Resources Information Center
Steinhauser, Marco; Hubner, Ronald
2009-01-01
It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were…
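Ex-Gaussian decomposition fits a response-time distribution as a Gaussian (mu, sigma) convolved with an exponential (tau); in Python this corresponds to scipy's exponnorm distribution, parameterized by K = tau/sigma. The parameter values below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu, sigma, tau = 450.0, 40.0, 120.0      # ms; hypothetical true parameters
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = stats.exponnorm.fit(rt)  # exponnorm: K = tau / sigma
print(f"mu={loc:.1f}  sigma={scale:.1f}  tau={K * scale:.1f}")
```

In the Stroop setting of the abstract, shifts in the Gaussian component versus the exponential component are then interpreted against the two conflict types.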
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
There have been many recent developments in patient-specific models, given their potential to provide more information on human pathophysiology and the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study, PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of parametric foot models for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features in the foot, and evaluate differences in the shape of healthy and diabetic subjects.
Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang
2015-01-01
Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of computed principal components visually illustrated the differing spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had similar discrimination performance to using selected spectral lines as input variables. The down-selection of spectral lines was specifically focused on the major elements of B. balsamifera samples. Results indicated that LIBS could be used to rapidly analyze elements and to perform provenance study of B. balsamifera.
NASA Astrophysics Data System (ADS)
Vítková, Gabriela; Prokeš, Lubomír; Novotný, Karel; Pořízka, Pavel; Novotný, Jan; Všianský, Dalibor; Čelko, Ladislav; Kaiser, Jozef
2014-11-01
From a historical perspective, during archaeological excavations or restoration work on buildings and other structures built from bricks, it is important to determine, preferably in situ and in real time, the locality of the bricks' origin. Fast classification of bricks on the basis of Laser-Induced Breakdown Spectroscopy (LIBS) spectra is possible using multivariate statistical methods. A combination of principal component analysis (PCA) and linear discriminant analysis (LDA) was applied in this case. LIBS was used to classify altogether 29 brick samples from 7 different localities. In a comparative study using two different LIBS setups - stand-off and table-top - it is shown that stand-off LIBS has a big potential for archaeological in-field measurements.
Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J
2003-09-01
As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.
"Dateline NBC"'s Persuasive Attack on Wal-Mart.
ERIC Educational Resources Information Center
Benoit, William L.; Dorries, Bruce
1996-01-01
Develops a typology of persuasive attack strategies. Identifies two key components of persuasive attack: responsibility and offensiveness. Describes several strategies for intensifying each of these elements. Applies this analysis to "Dateline NBC"'s allegations that Wal-Mart's "Buy American" campaign was deceptive. Concludes…
Analytic Shielding Optimization to Reduce Crew Exposure to Ionizing Radiation Inside Space Vehicles
NASA Technical Reports Server (NTRS)
Gaza, Razvan; Cooper, Tim P.; Hanzo, Arthur; Hussein, Hesham; Jarvis, Kandy S.; Kimble, Ryan; Lee, Kerry T.; Patel, Chirag; Reddell, Brandon D.; Stoffle, Nicholas;
2009-01-01
A sustainable lunar architecture provides capabilities for leveraging out-of-service components for alternate uses. Discarded architecture elements may be used to provide ionizing radiation shielding to the crew habitat in case of a Solar Particle Event. The specific location relative to the vehicle where the additional shielding mass is placed, as corroborated with particularities of the vehicle design, has a large influence on protection gain. This effect is caused by the exponential-like decrease of radiation exposure with shielding mass thickness, which in turn determines that the most benefit from a given amount of shielding mass is obtained by placing it so that it preferentially augments protection in under-shielded areas of the vehicle exposed to the radiation environment. A novel analytic technique to derive an optimal shielding configuration was developed by Lockheed Martin during Design Analysis Cycle 3 (DAC-3) of the Orion Crew Exploration Vehicle (CEV) [1]. Based on a detailed Computer Aided Design (CAD) model of the vehicle including a specific crew positioning scenario, a set of under-shielded vehicle regions can be identified as candidates for placement of additional shielding. Analytic tools are available to allow capturing an idealized supplemental shielding distribution in the CAD environment, which in turn is used as a reference for deriving a realistic shielding configuration from available vehicle components. While the analysis referenced in this communication applies particularly to the Orion vehicle, the general method can be applied to a large range of space exploration vehicles, including but not limited to lunar and Mars architecture components. In addition, the method can be immediately applied for optimization of radiation shielding provided to sensitive electronic components.
Technical Note: Independent component analysis for quality assurance in functional MRI.
Astrakas, Loukas G; Kallistis, Nikolaos S; Kalef-Ezra, John A
2016-02-01
Independent component analysis (ICA) is an established method of analyzing human functional MRI (fMRI) data. Here, an ICA-based fMRI quality control (QC) tool, to be used with a commercial phantom, was developed and used. In an attempt to assess the performance of the tool relative to preexisting alternative tools, it was used seven weeks before and eight weeks after repair of a faulty gradient amplifier of a non-state-of-the-art MRI unit. More specifically, its performance was compared with the AAPM Report No. 100 acceptance testing and quality assurance protocol and two fMRI QC protocols, proposed by Friedman et al. ["Report on a multicenter fMRI quality assurance protocol," J. Magn. Reson. Imaging 23, 827-839 (2006)] and Stocker et al. ["Automated quality assurance routines for fMRI data applied to a multicenter study," Hum. Brain Mapp. 25, 237-246 (2005)], respectively. The easily developed and applied ICA-based QC protocol provided fMRI QC indices and maps equally as sensitive to fMRI instabilities as the indices and maps of the other established protocols. The ICA fMRI QC indices were highly correlated with the indices of the other fMRI QC protocols and were in some cases theoretically related to them. Three or four independent components with slowly varying time series are detected under normal conditions. ICA applied to phantom measurements is an easy and efficient tool for fMRI QC. Additionally, it can protect against misinterpretation of artifact components as human brain activations. Evaluating fMRI QC indices in the central region of a phantom is not always the optimal choice.
Application of multivariable statistical techniques in plant-wide WWTP control strategies analysis.
Flores, X; Comas, J; Roda, I R; Jiménez, L; Gernaey, K V
2007-01-01
The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques make it possible to i) determine natural groups or clusters of control strategies with similar behaviour, ii) find and interpret hidden, complex and causal relation features in the data set and iii) identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation of complex multicriteria data sets and allows an improved use of information for effective evaluation of control strategies.
[Study of beta-turns in globular proteins].
Amirova, S R; Milchevskiĭ, Iu V; Filatov, I V; Esipova, N G; Tumanian, V G
2005-01-01
The formation of beta-turns in globular proteins has been studied by the method of molecular mechanics. The statistical method of discriminant analysis was applied to the calculated energy components and sequences of oligopeptide segments, after which type I beta-turns were predicted. The accuracy of true positive prediction is 65%. Components of conformational energy considerably affecting beta-turn formation were delineated: the torsional energy, the energy of hydrogen bonds, and the van der Waals energy.
Discrete Event Simulation Modeling and Analysis of Key Leader Engagements
2012-06-01
to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader...HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component...whether the local Green player has resource critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing
Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch
2014-03-01
Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.
NASA Astrophysics Data System (ADS)
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The results suggest the importance of multiple-component metrics, because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
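Following the published definition of SPAEF (correlation alpha, coefficient-of-variation ratio beta, and histogram overlap gamma of z-scored maps, combined as a Euclidean distance from the ideal point), a compact sketch is given below; the synthetic "maps" are flattened placeholders, and masking of no-data cells is omitted.

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """SPAtial EFficiency = 1 - sqrt((a-1)^2 + (b-1)^2 + (g-1)^2)."""
    a = np.corrcoef(obs, sim)[0, 1]                                  # correlation
    b = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # CV ratio
    zo = (obs - obs.mean()) / obs.std()          # z-scoring makes the overlap
    zs = (sim - sim.mean()) / sim.std()          # bias-insensitive and unit-free
    lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    g = np.minimum(ho, hs).sum() / ho.sum()      # histogram intersection
    return 1.0 - np.sqrt((a - 1) ** 2 + (b - 1) ** 2 + (g - 1) ** 2)

rng = np.random.default_rng(5)
obs = rng.gamma(4.0, 1.0, 10_000)            # flattened observed spatial pattern
sim = obs + rng.normal(0.0, 0.8, obs.size)   # flattened simulated pattern
print(spaef(obs, sim))
```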
NASA Astrophysics Data System (ADS)
Zhao, Yan-Ru; Yu, Ke-Qiang; Li, Xiaoli; He, Yong
2016-12-01
Infected petals are often regarded as the source for the spread of fungi Sclerotinia sclerotiorum in all growing process of rapeseed (Brassica napus L.) plants. This research aimed to detect fungal infection of rapeseed petals by applying hyperspectral imaging in the spectral region of 874-1734 nm coupled with chemometrics. Reflectance was extracted from regions of interest (ROIs) in the hyperspectral image of each sample. Firstly, principal component analysis (PCA) was applied to conduct a cluster analysis with the first several principal components (PCs). Then, two methods including X-loadings of PCA and random frog (RF) algorithm were used and compared for optimizing wavebands selection. Least squares-support vector machine (LS-SVM) methodology was employed to establish discriminative models based on the optimal and full wavebands. Finally, area under the receiver operating characteristics curve (AUC) was utilized to evaluate classification performance of these LS-SVM models. It was found that LS-SVM based on the combination of all optimal wavebands had the best performance with AUC of 0.929. These results were promising and demonstrated the potential of applying hyperspectral imaging in fungus infection detection on rapeseed petals.
Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie
2014-01-01
Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool, such as support vector regression (SVR), is a useful way of constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.
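A hedged sketch of the general feature-extraction-plus-SVR idea follows, on synthetic data; plain FastICA from scikit-learn stands in here for the sICA/tICA/stICA variants actually compared in the paper.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(1)
sources = rng.laplace(size=(200, 4))          # hidden weekly driving signals (synthetic)
sales = sources @ rng.normal(size=(4, 10))    # 200 weeks x 10 branches

# Feature extraction: recover independent components from branch sales
ica = FastICA(n_components=4, random_state=0)
features = ica.fit_transform(sales)           # one feature vector per week

# One-step-ahead forecast of aggregate sales from the extracted features
target = sales.sum(axis=1)
model = SVR(kernel="rbf").fit(features[:-1], target[1:])
print(model.predict(features[-1:]))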
Real-space analysis of radiation-induced specific changes with independent component analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borek, Dominika; Bromberg, Raquel; Hattne, Johan
A method of analysis is presented that allows for the separation of specific radiation-induced changes into distinct components in real space. The method relies on independent component analysis (ICA) and can be effectively applied to electron density maps and other types of maps, provided that they can be represented as sets of numbers on a grid. Here, for glucose isomerase crystals, ICA was used in a proof-of-concept analysis to separate temperature-dependent and temperature-independent components of specific radiation-induced changes for data sets acquired from multiple crystals across multiple temperatures. ICA identified two components, with the temperature-independent component being responsible for the majority of specific radiation-induced changes at temperatures below 130 K. The patterns of specific temperature-independent radiation-induced changes suggest a contribution from the tunnelling of electron holes as a possible explanation. In the second case, where a group of 22 data sets was collected on a single thaumatin crystal, ICA was used in another type of analysis to separate specific radiation-induced effects happening on different exposure-level scales. Here, ICA identified two components of specific radiation-induced changes that likely result from radiation-induced chemical reactions progressing with different rates at different locations in the structure. In addition, ICA unexpectedly identified the radiation-damage state corresponding to reduced disulfide bridges rather than the zero-dose extrapolated state as the highest contrast structure. The application of ICA to the analysis of specific radiation-induced changes in real space and the data pre-processing for ICA that relies on singular value decomposition, which was used previously in data space to validate a two-component physical model of X-ray radiation-induced changes, are discussed in detail. This work lays a foundation for a better understanding of protein-specific radiation chemistries and provides a framework for analysing effects of specific radiation damage in crystallographic and cryo-EM experiments.
DREEM on: validation of the Dundee Ready Education Environment Measure in Pakistan.
Khan, Junaid Sarfraz; Tabasum, Saima; Yousafzai, Usman Khalil; Fatima, Mehreen
2011-09-01
To validate the DREEM in the medical education environment of Punjab, Pakistan. The DREEM questionnaire was anonymously collected from final-year Baccalaureate of Medicine and Baccalaureate of Surgery students in the private and public medical colleges affiliated with the University of Health Sciences, Lahore. Data were analyzed using principal component analysis with varimax rotation. The response rate was 84.14%. The average DREEM score was 125. Confirmatory and exploratory factor analyses were applied under the conditions of eigenvalues > 1 and loadings ≥ 0.3. In the confirmatory factor analysis, five components were extracted, accounting for 40.10% of the variance; in the exploratory factor analysis, ten components were extracted, accounting for 52.33% of the variance. The 50 items together had an internal consistency reliability of 0.91 (Cronbach's alpha). The Spearman-Brown coefficient was 0.868, supporting the reliability of the analysis. In both analyses the subscales produced were sensible, and the mismatch with the original instrument was largely due to English-Pakistani contextual and cultural differences. The DREEM is a generic instrument that will do well with regional modifications to suit individual, contextual and cultural settings.
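For reference, Cronbach's alpha, the internal-consistency statistic quoted above, follows directly from the item and total-score variances: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on synthetic questionnaire data (all names and numbers hypothetical):

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) matrix of questionnaire scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                   # shared trait
answers = latent + 0.8 * rng.normal(size=(300, 50))  # 50 correlated items
print(round(cronbach_alpha(answers), 2))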
Perturbation analyses of intermolecular interactions
NASA Astrophysics Data System (ADS)
Koyama, Yohei M.; Kobayashi, Tetsuya J.; Ueda, Hiroki R.
2011-08-01
Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because the intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. To compare the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and the PPII (polyproline II) + β states) for larger cutoff lengths, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, for large cutoff values, the DIPA eigenvalues converge faster than those of the IPA, and the top two DIPA principal components clearly identify the three states. Using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. Since the DIPA improves the state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method than the IPA. To test the feasibility of the DIPA for larger molecules, we apply it to the folding of the ten-residue chignolin in explicit water. The top three principal components identify the four states (native state, two misfolded states, and unfolded state), and their corresponding eigenfunctions identify the chignolin-water interactions important to each state. Thus, the DIPA provides a practical method to identify conformational states and their corresponding important intermolecular interactions with distance information.
NASA Astrophysics Data System (ADS)
Farsadnia, Farhad; Ghahreman, Bijan
2016-04-01
Hydrologic homogeneous group identification is considered both fundamental and applied research in hydrology. Clustering methods are among the conventional methods used to identify hydrologically homogeneous regions. Recently, the Self-Organizing feature Map (SOM) method has been applied in some studies. However, the main problem with this method is the interpretation of its output map. Therefore, the SOM output is used as input to other clustering algorithms. The aim of this study is to apply a two-level Self-Organizing feature map and the Ward hierarchical clustering method to determine the hydrologically homogeneous regions in North and Razavi Khorasan provinces. First, principal component analysis was used to reduce the dimension of the SOM input matrix; the SOM was then used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, the SOM output nodes were used as input to the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous; consequently, they have to be adjusted to improve their homogeneity. After adjustment of the regions by L-moment tests, five hydrologically homogeneous regions were identified. Finally, adjusted regions were created by the two-level SOM, and the best regional distribution function and associated parameters were selected by the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering with principal components as input is more effective than the hierarchical method applied to principal components or standardized inputs in delineating hydrologically homogeneous regions.
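A minimal sketch of such a two-level approach on synthetic data follows; it assumes the third-party MiniSom package for the SOM step, and the L-moment homogeneity adjustment described above is not included.

import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster
from minisom import MiniSom   # third-party package: pip install minisom

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))                # 60 catchments x 12 attributes (synthetic)

# Step 1: PCA to reduce the dimension of the SOM input matrix
pcs = PCA(n_components=4).fit_transform(X)

# Step 2: first level - train a two-dimensional feature map on the scores
som = MiniSom(6, 6, pcs.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(pcs, 1000)
nodes = som.get_weights().reshape(-1, pcs.shape[1])   # 36 prototype vectors

# Step 3: second level - Ward clustering of the SOM nodes into candidate regions
labels = fcluster(linkage(nodes, method="ward"), t=5, criterion="maxclust")
print(labels)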
Han, Sheng-Nan
2014-07-01
Chemometrics is a new branch of chemistry that is widely applied to various fields of analytical chemistry. Chemometrics uses theories and methods from mathematics, statistics, computer science and other related disciplines to optimize the chemical measurement process and to maximize the chemical and other information that can be extracted from measurement data on material systems. In recent years, traditional Chinese medicine has attracted widespread attention. In traditional Chinese medicine research, a key problem has been how to interpret the relationship between the various chemical components and the efficacy of a medicine, and this problem seriously restricts the modernization of Chinese medicine. Because chemometrics brings multivariate analysis methods into chemical research, it has been applied as an effective research tool in composition-activity relationship studies of Chinese medicine. This article reviews the applications of chemometric methods in composition-activity relationship research in recent years. The applications of multivariate statistical analysis methods (such as regression analysis, correlation analysis, principal component analysis, etc.) and artificial neural networks (such as the back-propagation artificial neural network, the radial basis function neural network, the support vector machine, etc.) are summarized, including their basic principles, research contents, and advantages and disadvantages. Finally, the main outstanding problems and prospects for future research are discussed.
Dong, Fengxia; Mitchell, Paul D; Colquhoun, Jed
2015-01-01
Measuring farm sustainability performance is a crucial component of improving agricultural sustainability. While extensive assessments and indicators exist to reflect the different facets of agricultural sustainability, because of the relatively large number of measures and the interactions among them, a composite indicator that integrates and aggregates over all variables is particularly useful. This paper describes and empirically evaluates a method for constructing a composite sustainability indicator that individually scores and ranks farm sustainability performance. The method first uses non-negative polychoric principal component analysis to reduce the number of variables, to remove correlation among variables, and to transform categorical variables into continuous variables. Next, the method applies common-weight data envelopment analysis to these principal components to score each farm individually. The method solves for the weights endogenously and allows the identification of practices that are important in the sustainability evaluation. An empirical application to Wisconsin cranberry farms finds heterogeneity in sustainability practice adoption, implying that some farms could adopt relevant practices to improve the overall sustainability performance of the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quantification of acidic compounds in complex biomass-derived streams
Karp, Eric M.; Nimlos, Claire T.; Deutch, Steve; ...
2016-05-10
Biomass-derived streams that contain acidic compounds from the degradation of lignin and polysaccharides (e.g. black liquor, pyrolysis oil, pyrolytic lignin, etc.) are chemically complex solutions prone to instability and degradation during analysis, making quantification of compounds within them challenging. Here we present a robust analytical method to quantify acidic compounds in complex biomass-derived mixtures using ion exchange, sample reconstitution in pyridine and derivatization with BSTFA. The procedure is based on an earlier method originally reported for kraft black liquors and, in this work, is applied to identify and quantify a large slate of acidic compounds in corn stover derived alkaline pretreatment liquor (APL) as a function of pretreatment severity. Analysis of the samples is conducted with GCxGC-TOFMS to achieve good resolution of the components within the complex mixture. The results reveal the dominant low molecular weight components and their concentrations as a function of pretreatment severity. Application of this method is also demonstrated in the context of lignin conversion technologies by applying it to track the microbial conversion of an APL substrate. Here as well excellent results are achieved, and the appearance and disappearance of compounds is observed in agreement with the known metabolic pathways of two bacteria, indicating the sample integrity was maintained throughout analysis. Finally, it is shown that this method applies more generally to lignin-rich materials by demonstrating its usefulness in analysis of pyrolysis oil and pyrolytic lignin.
NASA Astrophysics Data System (ADS)
Geminale, A.; Grassi, D.; Altieri, F.; Serventi, G.; Carli, C.; Carrozzo, F. G.; Sgavetti, M.; Orosei, R.; D'Aversa, E.; Bellucci, G.; Frigeri, A.
2015-06-01
The aim of this work is to extract the surface contribution in martian visible/near-infrared spectra by removing the atmospheric components by means of Principal Component Analysis (PCA) and target transformation (TT). The developed technique is suitable for separating spectral components in a data set large enough to enable an effective use of statistical methods, in support of the more common approaches to removing the gaseous component. In this context, a key role is played by the estimation, from the spectral population, of the covariance matrix that describes the statistical correlation of the signal among different points in the spectrum. As a general rule, the covariance matrix becomes more and more meaningful as the size of the initial population increases, therefore justifying the importance of sizable datasets. Data collected by imaging spectrometers, such as the OMEGA (Observatoire pour la Minéralogie, l'Eau, les Glaces et l'Activité) instrument on board the ESA Mars Express (MEx) mission, are particularly suitable for this purpose, since each observation session includes a large number of spectra with different contents of aerosols, gases and mineralogy. The methodology presented in this work has first been validated on a simulated dataset of spectra to evaluate its accuracy. It has then been applied to the analysis of OMEGA sessions over the Nili Fossae and Mawrth Vallis regions, which have already been widely studied because of the presence of hydrated minerals. These minerals are key components of the surface for investigating the presence of liquid water flowing on the martian surface in the Noachian period. Moreover, since a correction for the atmospheric aerosol (dust) component is also applied to these observations, the present work is able to completely remove the atmospheric contribution from the analysed spectra. Once the surface reflectance, free from atmospheric contributions, has been obtained, the Modified Gaussian Model (MGM) is applied to spectra showing the hydrated phase. Silicates and iron-bearing hydrated minerals have been identified by means of the electronic transitions of Fe2+ between 0.8 and 1.2 μm, while at longer wavelengths the hydrated mineralogy is identified by overtones of the OH group. Surface reflectance spectra, as derived through the method discussed in this paper, clearly show a lower level of atmospheric residuals in the 1.9 μm hydration band, thus resulting in a better match with the MGM deconvolution parameters found for laboratory spectra of martian hydrated mineral analogues and allowing a deeper investigation of this spectral range.
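A minimal sketch of the target-transformation idea: project a candidate end-member spectrum onto the subspace spanned by the leading eigenvectors of the spectral population, and judge the match by the reconstruction residual. The names and the component count below are illustrative only.

import numpy as np

def target_transform(spectra, target, n_components=10):
    # Leading eigenvectors of the (mean-centered) spectral population
    mean = spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    basis = Vt[:n_components]
    # Least-squares projection of the candidate spectrum onto that subspace
    coeffs = basis @ (target - mean)
    fit = mean + coeffs @ basis
    residual = np.linalg.norm(target - fit) / np.linalg.norm(target)
    return fit, residual   # a small residual supports the candidate end-member

rng = np.random.default_rng(0)
population = rng.normal(size=(5000, 256))            # stand-in spectral data set
fit, res = target_transform(population, population[0])
print(round(res, 3))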
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael
2017-12-01
In recent decades, decomposition techniques have enabled a growing number of applications in dimension reduction, as well as in the extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of time series, respectively. PCA and ICA can be classified as stationary signal decomposition techniques, since they are based on decomposing the autocovariance matrix and on diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part, and their Hilbert transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework, with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from the GRACE TWS data with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that the CICA is more effective than CEOF in separating non-stationary patterns.
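Step (a) of the method, building the complex dataset from the observed series and their Hilbert transforms, can be sketched with SciPy as follows (data are synthetic; the fourth-order-cumulant complex ICA of steps (b)-(c) needs a dedicated implementation and is not reproduced here):

import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.arange(408)                                    # e.g. monthly samples, 34 years
grids = np.sin(0.2 * t)[:, None] + 0.1 * rng.normal(size=(408, 500))

# Augment each centered series with its Hilbert transform; scipy returns
# the analytic signal x + i*H(x) directly, which is exactly the complex
# dataset of step (a)
centered = grids - grids.mean(axis=0)
complex_data = hilbert(centered, axis=0)
print(complex_data.dtype, complex_data.shape)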
Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba
2003-01-01
A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. The principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used for the selection of the best set of extracted principal components. A feed forward artificial neural network with a back-propagation of error algorithm was used to process the nonlinear relationship between the selected principal components and biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.
NASA Technical Reports Server (NTRS)
Gao, Shou-Ting; Ping, Fan; Li, Xiao-Fan; Tao, Wei-Kuo
2004-01-01
Although dry/moist potential vorticity is a useful physical quantity for meteorological analysis, it cannot be applied to the analysis of 2D simulations. A convective vorticity vector (CVV) is introduced in this study to analyze 2D cloud-resolving simulation data associated with 2D tropical convection. The cloud model is forced by the vertical velocity, zonal wind, horizontal advection, and sea surface temperature obtained from the TOGA COARE, and is integrated for a selected 10-day period. The CVV has zonal and vertical components in the 2D x-z frame. Analysis of zonally-averaged and mass-integrated quantities shows that the correlation coefficient between the vertical component of the CVV and the sum of the cloud hydrometeor mixing ratios is 0.81, whereas the correlation coefficient between the zonal component and the sum of the mixing ratios is only 0.18. This indicates that the vertical component of the CVV is closely associated with tropical convection. The tendency equation for the vertical component of the CVV is derived and the zonally-averaged and mass-integrated tendency budgets are analyzed. The tendency of the vertical component of the CVV is determined by the interaction between the vorticity and the zonal gradient of cloud heating. The results demonstrate that the vertical component of the CVV is a cloud-linked parameter and can be used to study tropical convection.
NASA Astrophysics Data System (ADS)
Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.
2017-12-01
This paper focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing applied to the ceramic motorized spindle. A dynamic model incorporating the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is used to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The radiation noise spectra of the different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise and a reference for the impact of different components on the radiation noise of the bearing under different rotation speeds.
Dawidowicz, Andrzej L; Czapczyńska, Natalia B; Wianowska, Dorota
2012-05-30
The influence of different Purge Times on the effectiveness of Pressurized Liquid Extraction (PLE) of volatile oil components from a cypress plant matrix (Cupressus sempervirens) was investigated, applying solvents of diverse extraction efficiencies. The results show a decrease in the mass yields of essential oil components with increased Purge Time. The loss of extracted components depends on the extractant type: the greatest mass yield loss occurred with non-polar solvents, whereas the smallest was found in polar extracts. Comparisons of the PLE method with the Sea Sand Disruption Method (SSDM), the Matrix Solid-Phase Dispersion Method (MSPD) and Steam Distillation (SD) were performed to assess the method's accuracy. Independent of the solvent and Purge Time applied in the PLE process, the total mass yield was lower than that obtained with the simple, short and relatively cheap low-temperature matrix disruption procedures MSPD and SSDM. Thus, in the case of volatile oil analysis, the application of these methods is advisable. Copyright © 2012 Elsevier B.V. All rights reserved.
Summary of engine design and analytical studies to mature the 1137400E engine baseline
NASA Technical Reports Server (NTRS)
Kleinert, D. E.; Lester, W. A.
1972-01-01
Activities in packaging components into integral module arrangements compatible with engine design requirements for the 1137400E flight engine baseline are summarized along with the applied mechanics and thermal analysis. Revisions to drawings, configurations, and support structures are discussed.
Lu, Chi-Jie; Chang, Chi-Chang
2014-01-01
Sales forecasting plays an important role in operating a business since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue in operating a business. This study proposes a hybrid sales forecasting scheme by combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses ICA to extract hidden information from the observed sales data. The extracted features are then applied to the K-means algorithm to cluster the sales data into several disjoint clusters. Finally, SVR forecasting models are applied to each cluster to generate the final forecasting results. Experimental results from information technology (IT) product agent sales data reveal that the proposed sales forecasting scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting.
Escudero, Javier; Hornero, Roberto; Abásolo, Daniel; Fernández, Alberto; Poza, Jesús
2007-01-01
The aim of this study was to improve the diagnosis of Alzheimer's disease (AD) patients by applying a blind source separation (BSS) and component selection procedure to their magnetoencephalogram (MEG) recordings. MEGs from 18 AD patients and 18 control subjects were decomposed with the Algorithm for Multiple Unknown Signals Extraction (AMUSE). MEG channels and components were characterized by their mean frequency, spectral entropy, approximate entropy, and Lempel-Ziv complexity. Using Student's t-test, the components that accounted for the most significant differences between groups were selected. Then, these relevant components were used to partially reconstruct the MEG channels. By means of a linear discriminant analysis, we found that the BSS-preprocessed MEGs classified the subjects with an accuracy of 80.6%, whereas 72.2% accuracy was obtained without the BSS and component selection procedure.
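A hedged sketch of the component-selection-and-partial-reconstruction idea, with scikit-learn's FastICA standing in for AMUSE and with the discriminative components simply asserted rather than selected by an actual group t-test:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
sources = rng.laplace(size=(2000, 16))
X = sources @ rng.normal(size=(16, 16))   # synthetic stand-in for samples x MEG channels

# Blind source separation of the channel signals
ica = FastICA(n_components=16, random_state=0)
S = ica.fit_transform(X)                  # estimated source activations
A = ica.mixing_                           # X is approximately S @ A.T + ica.mean_

# Suppose a group-level t-test on component features flagged these sources
keep = np.array([1, 4, 7])                # hypothetical indices

# Partial reconstruction of the channels from the relevant sources only
X_partial = S[:, keep] @ A[:, keep].T + ica.mean_
print(X_partial.shape)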
Centrality Evolution of pt and yt Spectra from Au-Au Collisions at √ {sNN} = 200 GeV
NASA Astrophysics Data System (ADS)
Trainor, Thomas A.
A two-component analysis of spectra to pt = 12 GeV/c for identified pions and protons from 200 GeV Au-Au collisions is presented. The method is similar to an analysis of the nch dependence of pt spectra from p-p collisions at 200 GeV, but applied to Au-Au centrality dependence. The soft-component reference is a Lévy distribution on transverse mass mt. The hard-component reference is a Gaussian on transverse rapidity yt with exponential (pt power-law) tail. Deviations of data from the reference are described by hard-component ratio rAA, which generalizes nuclear modification factor RAA. The analysis suggests that centrality evolution of pion and proton spectra is dominated by changes in parton fragmentation. The structure of rAA suggests that parton energy loss produces a negative boost Δyt of a large fraction (but not all) of the minimum-bias fragment distribution, and that lower-energy partons suffer relatively less energy loss, possibly due to color screening. The analysis also suggests that the anomalous p/π ratio may be due to differences in the parton energy-loss process experienced by the two hadron species. This analysis provides no evidence for radial flow.
The Local Wind Pump for Marginal Societies in Indonesia: A Perspective of Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Gunawan, Insan; Taufik, Ahmad
2007-10-01
There have been many efforts to reduce the investment cost of well-established hybrid wind pumps applied to rural areas. A recent study on a local wind pump (LWP) for marginal societies in Indonesia (traditional farmers, peasants and tribes) was one such effort, reporting a new application area. The objectives of the study were to measure the reliability of the LWP under fluctuating wind intensity and low wind speed, to account for the economic situation of a prolonged economic crisis and the availability of local components for the LWP, and to sustain the economic productivity (agricultural products) of the society. In the study, a fault tree analysis (FTA) was deployed as one of three methods used for assessing the LWP. In this article, the FTA is discussed thoroughly in order to improve the performance of the LWP applied in the dry-land watering system of the Mesuji district of Lampung province, Indonesia. In the early stage, all local components of the LWP were classified in terms of their function, yielding four groups of components. Moreover, all of the sub-components of each group were subjected to the failure modes of the FTA, namely (1) primary failure modes, (2) secondary failure modes and (3) common failure modes. In the data processing stage, an available software package, ITEM, was deployed. It was observed that the components attained a relatively long operational life cycle of 1,666 hours. Moreover, to enhance the performance of the LWP, the maintenance schedule, the critical sub-components suffering from failure and an overhaul priority were identified in quantitative terms. From the year-long pilot project, it can be concluded that the LWP is a reliable product for these societies, enhancing their economic productivity.
Widjaja, Effendi; Tan, Boon Hong; Garland, Marc
2006-03-01
Two-dimensional (2D) correlation spectroscopy has been extensively applied to analyze various vibrational spectroscopic data, especially infrared and Raman. However, when it is applied to real-world experimental data, which often contains various imperfections (such as noise interference, baseline fluctuations, and band-shifting) and highly overlapping bands, many artifacts and misleading features in synchronous and asynchronous maps will emerge, and this will lead to difficulties with interpretation. Therefore, an approach that counters many artifacts and therefore leads to simplified interpretation of 2D correlation analysis is certainly useful. In the present contribution, band-target entropy minimization (BTEM) is employed as a spectral pretreatment to handle many of the artifact problems before the application of 2D correlation analysis. BTEM is employed to elucidate the pure component spectra of mixtures and their corresponding concentration profiles. Two alternate forms of analysis result. In the first, the normally v×v problem is converted to an equivalent nv×nv problem, where n represents the number of species present. In the second, the pure component spectra are transformed into simple distributions, and an equivalent and less computationally intensive nv'×nv' problem results (v'
Advanced methods in NDE using machine learning approaches
NASA Astrophysics Data System (ADS)
Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank
2018-04-01
Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new algorithms or leverage existing ones that learn from training data and give accurate predictions, or find patterns, particularly in new and unseen but similar data, fits perfectly with Non-Destructive Evaluation. The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonics or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis, and the approach has been applied to a variety of quality assessment tasks. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. The sensor signals from unknown samples can then be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB): algorithms are still trained on an ordinary PC, but the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in the image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can subsequently be done by the machine. By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
Transforming Graph Data for Statistical Relational Learning
2012-10-01
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; LaBel, Kenneth; Kim, Hak
2014-01-01
An informative session regarding SRAM FPGA basics. The session presents a framework for fault injection techniques applied to Xilinx Field Programmable Gate Arrays (FPGAs), introduces an overlooked time component which illustrates that fault injection is impractical as a stand-alone characterization tool for most real designs, and demonstrates procedures that benefit from fault injection error analysis.
NASA Astrophysics Data System (ADS)
Pujiwati, Arie; Nakamura, K.; Watanabe, N.; Komai, T.
2018-02-01
Multivariate analysis is applied to investigate the geochemistry of several trace elements in topsoils and their relation to contamination sources associated with the coal mines in Jorong, South Kalimantan. The total concentrations of Cd, V, Co, Ni, Cr, Zn, As, Pb, Sb, Cu and Ba were determined in 20 soil samples by bulk analysis. Pearson correlation is applied to specify the linear correlations among the elements. Principal Component Analysis (PCA) and Cluster Analysis (CA) were applied to observe the classification of the trace elements and contamination sources. The results suggest that the contamination loading is contributed by Cr, Cu, Ni, Zn, As, and Pb. The elemental loading mostly affects the non-coal-mining area, for instance areas near settlements and agricultural land. Moreover, the contamination sources are classified into areas influenced by coal mining activity, agricultural practices, and the river mixing zone. Multivariate analysis could thus elucidate the elemental loading and the contamination sources of trace elements in the vicinity of the coal mine area.
NASA Astrophysics Data System (ADS)
Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.
2000-08-01
Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
Sciutto, Giorgia; Oliveri, Paolo; Catelli, Emilio; Bonacini, Irene
2017-01-01
In the field of applied research in heritage science, the use of multivariate approaches is still quite limited, and the chemometric results obtained are often underinterpreted. Within this scenario, the present paper is aimed at disseminating the use of suitable multivariate methodologies and proposes a procedural workflow, applied to a representative group of case studies of considerable importance for conservation purposes, as a guideline for the processing and interpretation of FTIR data. Initially, principal component analysis (PCA) is performed and the score values are converted into chemical maps. Subsequently, the brushing approach is applied, demonstrating its usefulness for a deep understanding of the relationships between the multivariate map and the PC score space, as well as for the identification of the spectral bands mainly involved in the definition of each area localised within the score maps. PMID:29333162
Product Quality Improvement Using FMEA for Electric Parking Brake (EPB)
NASA Astrophysics Data System (ADS)
Dumitrescu, C. D.; Gruber, G. C.; Tişcă, I. A.
2016-08-01
One of the most frequently used methods to improve product quality is FMEA (Failure Mode and Effects Analysis). Several variants of FMEA are known in the literature, depending on the mode of application and on the targets; among them are the Process Failure Mode and Effects Analysis and the Failure Mode, Effects and Criticality Analysis (FMECA). Whatever variant the work team adopts, the goal of the method is the same: to optimize product design activities in research and design processes, the implementation of manufacturing processes, and the exploitation of the product by its beneficiaries. According to a market survey of parts suppliers to vehicle manufacturers, the FMEA method is used by 75% of them. One purpose of applying the method is to detect any errors that remain after research and product development are considered complete; another is to initiate appropriate measures to avoid mistakes. Achieving these two goals favors widespread application of the method, since errors are avoided already in the design phase of the product, thereby avoiding the emergence of additional costs in later stages of product manufacturing. The application of the FMEA method uses standardized forms; with their help, the initial assemblies of the product structure, in which all components are initially assumed error-free, are established. This work applies the FMEA method to optimize the quality of the components of the electric parking brake (Electric Parking Brake, EPB). The EPB is a component attached to the roller system which replaces the conventional mechanical parking brake in automobiles while ensuring comfort, functionality and durability, and saving space in the passenger compartment. The paper describes the levels addressed in applying FMEA, the working arrangements at the four distinct levels of analysis, and how the Risk Priority Number is determined; it also presents the analysis of the risk factors and the measures the authors imposed to reduce or completely eliminate risk in exploiting this complex product.
Three dimensional empirical mode decomposition analysis apparatus, method and article manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena, such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPCs) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPCs. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
From measurements to metrics: PCA-based indicators of cyber anomaly
NASA Astrophysics Data System (ADS)
Ahmed, Farid; Johnson, Tommy; Tsui, Sonia
2012-06-01
We present a framework for the application of Principal Component Analysis (PCA) to automatically obtain meaningful metrics from intrusion detection measurements. In particular, we report the progress made in applying PCA to analyze the behavioral measurements of malware and provide some preliminary results on selecting dominant attributes from an arbitrary number of malware attributes. The results will be useful in formulating an optimal detection threshold in the principal component space, which can both validate and augment existing malware classifiers.
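One common way to turn PCA into an anomaly indicator with a detection threshold, in the spirit of the abstract above, is the squared prediction error against a PCA subspace fitted on benign measurements; a minimal sketch on synthetic data (the attributes and threshold choice are illustrative, not the authors' exact metric):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
baseline = rng.normal(size=(500, 12))        # benign behavioral measurements (synthetic)
scaler = StandardScaler().fit(baseline)
pca = PCA(n_components=3).fit(scaler.transform(baseline))

def spe(measurements):
    # Squared prediction error: distance from the baseline PCA subspace
    Z = scaler.transform(measurements)
    recon = pca.inverse_transform(pca.transform(Z))
    return ((Z - recon) ** 2).sum(axis=1)

# Flag samples whose error exceeds a high quantile of the baseline scores
threshold = np.quantile(spe(baseline), 0.99)
suspect = rng.normal(size=(10, 12)) + 4.0    # shifted, anomalous measurements
print(spe(suspect) > threshold)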
Electromagnetic field scattering by a triangular aperture.
Harrison, R E; Hyman, E
1979-03-15
The multiple Laplace transform has been applied to analysis and computation of scattering by a double triangular aperture. Results are obtained which match far-field intensity distributions observed in experiments. Arbitrary polarization components, as well as in-phase and quadrature-phase components, may be determined, in the transform domain, as a continuous function of distance from near to far-field for any orientation, aperture, and transformable waveform. Numerical results are obtained by application of numerical multiple inversions of the fully transformed solution.
Binding Isotherms and Time Courses Readily from Magnetic Resonance.
Xu, Jia; Van Doren, Steven R
2016-08-16
Evidence is presented that binding isotherms, simple or biphasic, can be extracted directly from noninterpreted, complex 2D NMR spectra using principal component analysis (PCA) to reveal the largest trend(s) across the series. This approach renders peak picking unnecessary for tracking population changes. In 1:1 binding, the first principal component captures the binding isotherm from NMR-detected titrations in fast, slow, and even intermediate and mixed exchange regimes, as illustrated for phospholigand associations with proteins. Although the sigmoidal shifts and line broadening of intermediate exchange distort binding isotherms constructed conventionally, applying PCA directly to these spectra, along with Pareto scaling, overcomes the distortion. Applying PCA to time-domain NMR data also yields binding isotherms from titrations in fast or slow exchange. The algorithm readily extracts time courses such as breathing and heart rate from magnetic resonance imaging movies of the chest. Similarly, two-step binding processes detected by NMR are easily captured by principal components 1 and 2. PCA obviates the customary focus on specific peaks or regions of images. Applied directly to a series of complex data, it will easily delineate binding isotherms, equilibrium shifts, and time courses of reactions or fluctuations.
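A minimal sketch of the core idea, assuming each spectrum has been unfolded into a row vector: Pareto scaling (dividing each mean-centered variable by the square root of its standard deviation) followed by SVD, with the first principal component scores read off as the isotherm. Data and names below are synthetic.

import numpy as np

def isotherm_from_series(spectra):
    # spectra: (n_titration_points, n_spectral_points), e.g. unfolded 2D spectra
    X = spectra - spectra.mean(axis=0)          # center each spectral point
    X = X / np.sqrt(X.std(axis=0) + 1e-12)      # Pareto scaling: divide by sqrt(std)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, 0] * s[0]                       # PC1 scores track the bound population
                                                # (sign is arbitrary, may need flipping)

conc = np.linspace(0, 5, 15)
bound = conc / (conc + 1.0)                     # ideal 1:1 isotherm, Kd = 1
spectra = np.outer(bound, np.sin(np.linspace(0, 6, 200)))
print(abs(np.corrcoef(isotherm_from_series(spectra), bound)[0, 1]))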
Stakeholder Perceptions of Cyberbullying Cases: Application of the Uniform Definition of Bullying.
Moreno, Megan A; Suthamjariya, Nina; Selkie, Ellen
2018-04-01
The Uniform Definition of Bullying was developed to address bullying and cyberbullying, and to promote consistency in measurement and policy. The purpose of this study was to understand community stakeholder perceptions of typical cyberbullying cases, and to evaluate how these case descriptions align with the Uniform Definition. In this qualitative case analysis we recruited stakeholders commonly involved in cyberbullying. We used purposeful sampling to identify and recruit adolescents and young adults, parents, and professionals representing education and health care. Participants were asked to write a typical case of cyberbullying and descriptors in the context of a group discussion. We applied content analysis to case excerpts using inductive and deductive approaches, and chi-squared tests for mixed methods analyses. A total of 68 participants contributed; participants included 73% adults and 27% adolescents and young adults. A total of 650 excerpts were coded from participants' example cases and 362 (55.6%) were consistent with components of the Uniform Definition. The most frequently mentioned component of the Uniform Definition was Aggressive Behavior (n = 218 excerpts), whereas Repeated was mentioned infrequently (n = 19). Most participants included two to three components of the Uniform Definition within an example case; none of the example cases included all components of the Uniform Definition. We found that most participants described cyberbullying cases using few components of the Uniform Definition. Findings can be applied toward considering refinement of the Uniform Definition to ensure stakeholders find it applicable to cyberbullying. Copyright © 2017 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
An evaluation of independent component analyses with an application to resting-state fMRI
Matteson, David S.; Ruppert, David; Eloyan, Ani; Caffo, Brian S.
2013-01-01
Summary: We examine differences between independent component analyses (ICAs) arising from different assumptions, measures of dependence, and starting points of the algorithms. ICA is a popular method with diverse applications including artifact removal in electrophysiology data, feature extraction in microarray data, and identifying brain networks in functional magnetic resonance imaging (fMRI). ICA can be viewed as a generalization of principal component analysis (PCA) that takes into account higher-order cross-correlations. Whereas the PCA solution is unique, there are many ICA methods, whose solutions may differ. Infomax, FastICA, and JADE are commonly applied to fMRI studies, with FastICA being arguably the most popular. Hastie and Tibshirani (2003) demonstrated that ProDenICA outperformed FastICA in simulations with two components. We introduce the application of ProDenICA to simulations with more components and to fMRI data. ProDenICA was more accurate in simulations, and we identified differences between biologically meaningful ICs from ProDenICA versus other methods in the fMRI analysis. ICA methods require nonconvex optimization, yet current practices do not recognize the importance of, nor adequately address sensitivity to, initial values. We found that local optima led to dramatically different estimates in both simulations and group ICA of fMRI, and we provide evidence that the global optimum from ProDenICA is the best estimate. We applied a modification of the Hungarian (Kuhn-Munkres) algorithm to match ICs from multiple estimates, thereby gaining novel insights into how brain networks vary in their sensitivity to initial values and ICA method. PMID:24350655
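The matching step can be sketched with SciPy's implementation of the Kuhn-Munkres assignment, pairing components across two runs by maximal absolute correlation (a standard construction; the details of the authors' modification are not reproduced here):

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(S1, S2):
    # Pair columns of two IC estimates by maximal absolute correlation
    k = S1.shape[1]
    corr = np.corrcoef(S1.T, S2.T)[:k, k:]             # cross-correlation block
    rows, cols = linear_sum_assignment(-np.abs(corr))  # maximize total |r|
    return cols, np.abs(corr)[rows, cols]              # permutation, match quality

rng = np.random.default_rng(0)
S1 = rng.laplace(size=(1000, 5))
S2 = S1[:, [3, 0, 4, 1, 2]] * np.array([1, -1, 1, -1, 1])  # permuted, sign-flipped
print(match_components(S1, S2)[0])    # recovers the permutation [1, 3, 4, 0, 2]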
Time series analysis of ozone data in Isfahan
NASA Astrophysics Data System (ADS)
Omidvari, M.; Hassanzadeh, S.; Hosseinibalam, F.
2008-07-01
Time series analysis was used to investigate stratospheric ozone formation and decomposition processes. Different time series methods were applied to detect the reason for extremely high ozone concentrations in each season. The data were decomposed into a seasonal component and transformed into the frequency domain, the latter being evaluated by spectral analysis using the Fast Fourier Transform (FFT). The power density spectrum estimated from the ozone data showed peaks at cycle durations of 22, 20, 36, 186, 365 and 40 days. According to the seasonal component analysis, the largest fluctuations were in 1999 and 2000, and the smallest in 2003. The best correlation between ozone and solar radiation was found in 2000. Other variables, which were not available, may account for the fluctuations in 1999 and 2001. The trend of ozone is increasing in 1999 and decreasing in the other years.
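A minimal sketch of how such cycle durations fall out of an FFT power spectrum, on synthetic daily data with a planted annual cycle (all values hypothetical):

import numpy as np

rng = np.random.default_rng(0)
n = 4 * 365                                     # four years of daily values
t = np.arange(n)
ozone = 300 + 20 * np.sin(2 * np.pi * t / 365) + 5 * rng.normal(size=n)

# Power density spectrum of the centered series
power = np.abs(np.fft.rfft(ozone - ozone.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)               # cycles per day

# Translate the dominant spectral peaks back into cycle durations in days
top = power[1:].argsort()[::-1][:3] + 1         # skip the zero-frequency bin
print("dominant periods (days):", np.round(1.0 / freqs[top], 1))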
Comparative study of human blood Raman spectra and biochemical analysis of patients with cancer
NASA Astrophysics Data System (ADS)
Shamina, Lyudmila A.; Bratchenko, Ivan A.; Artemyev, Dmitry N.; Myakinin, Oleg O.; Moryatov, Alexander A.; Orlov, Andrey E.; Kozlov, Sergey V.; Zakharov, Valery P.
2018-04-01
In this study we measured spectral features of blood by Raman spectroscopy. The correlation between the obtained spectral data and the results of biochemical studies is investigated. Analysis of specific spectra allows for the identification of informative spectral bands proportional to components whose content is associated with changes in body-fluid homeostasis under various pathological conditions. Regression analysis of the obtained spectral data allows for discriminating lung cancer from other tumors with a posterior probability of 88.3%. The potential of applying surface-enhanced Raman spectroscopy with the experimental setup used for further studies of the component composition of body fluids was estimated. The greatest signal amplification was achieved for the gold substrate with a surface roughness of 1 μm. In general, the developed approach to body fluid analysis provides the basis for a useful, minimally invasive method of pathology screening.
NASA Astrophysics Data System (ADS)
Davis, D. D., Jr.; Krishnamurthy, T.; Stroud, W. J.; McCleary, S. L.
1991-05-01
State-of-the-art nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkinson, V.K.; Young, J.M.
1995-07-01
The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.
Fractal analysis of scatter imaging signatures to distinguish breast pathologies
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.
2013-02-01
Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in its being a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has previously been explored in the understanding of tumor vasculature networks. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical model) and principal components (as a statistical model) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed by applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples exhibits separated ranges of fractal dimension, which could help a classifier that combines the fractal results with other morphological features. This contrast would aid the discrimination of tissues in the intraoperative context and may serve as a useful adjunct for surgeons.
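The box-counting technique mentioned above is compact enough to sketch: tile a binary map with boxes of side s, count the occupied boxes N(s), and estimate the fractal dimension as the negative slope of log N(s) versus log s. The test image here is an assumption for illustration, not the study's scattering maps.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 2D array by box counting."""
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes touching the set
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square should give a dimension close to 2
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
print(round(box_counting_dimension(img), 2))
```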
NASA Astrophysics Data System (ADS)
Ye, M.; Pacheco Castro, R. B.; Pacheco Avila, J.; Cabrera Sansores, A.
2014-12-01
The karstic aquifer of Yucatan is a vulnerable and complex system. The first fifteen meters of this aquifer have been polluted; protecting this resource is therefore important, as it is the only source of potable water for the entire State. Through the assessment of groundwater quality we can gain knowledge about the main processes governing water chemistry as well as spatial patterns, which are important for establishing protection zones. In this work multivariate statistical techniques are used to assess the groundwater quality of the supply wells (30 to 40 meters deep) in the hydrogeologic region of the Ring of Cenotes, located in Yucatan, Mexico. Cluster analysis and principal component analysis are applied to groundwater chemistry data of the study area. Results of principal component analysis show that the main sources of variation in the data are due to seawater intrusion, the interaction of the water with the carbonate rocks of the system, and some pollution processes. The cluster analysis shows that the data can be divided into four clusters. The spatial distribution of the clusters appears random, but is consistent with seawater intrusion and pollution with nitrates. The overall results show that multivariate statistical analysis can be successfully applied in the groundwater quality assessment of this karstic aquifer.
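A hedged sketch of this kind of workflow (standardize the analyte concentrations, extract principal components, then cluster the wells) follows; the data matrix, the analyte count, and the choice of four clusters are placeholders, not the study's values.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical matrix: rows = wells, columns = ion concentrations
rng = np.random.default_rng(2)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(40, 6))

Z = StandardScaler().fit_transform(X)          # unit variance per analyte
pca = PCA(n_components=2).fit(Z)
print("explained variance:", pca.explained_variance_ratio_.round(2))
print("PC1 loadings:", pca.components_[0].round(2))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print("wells per cluster:", np.bincount(labels))
```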
Scientific Elitism and the Information System of Science
ERIC Educational Resources Information Center
Amick, Daniel James
1973-01-01
Scientific elitism must be viewed as a multidimensional phenomenon. Ten variables of elitism are considered and a principal components factor analysis is used to scale this multivariate domain. Two significant dimensions of elitism were found; one in basic and one in applied science. (20 references) (Author)
40 CFR 408.11 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS CANNED AND PRESERVED SEAFOOD PROCESSING POINT SOURCE CATEGORY Farm-Raised Catfish Processing... apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971...
Multiple Hypnotizabilities: Differentiating the Building Blocks of Hypnotic Response
ERIC Educational Resources Information Center
Woody, Erik Z.; Barnier, Amanda J.; McConkey, Kevin M.
2005-01-01
Although hypnotizability can be conceptualized as involving component subskills, standard measures do not differentiate them from a more general unitary trait, partly because the measures include limited sets of dichotomous items. To overcome this, the authors applied full-information factor analysis, a sophisticated analytic approach for…
40 CFR 408.141 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971, Environmental Protection Agency, Analytical Quality Control Laboratory, page 217. (c) The term seafood shall...
40 CFR 408.11 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971, Environmental Protection Agency, Analytical Quality Control Laboratory, page 217. (c) The term seafood shall...
40 CFR 408.51 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971, Environmental Protection Agency, Analytical Quality Control Laboratory, page 217. (c) The term seafood shall...
Construction of a Physician Skills Inventory
ERIC Educational Resources Information Center
Richard, George V.; Zarconi, Joseph; Savickas, Mark L.
2012-01-01
The current study applied Holland's RIASEC typology to develop a "Physician Skills Inventory". We identified the transferable skills and abilities that are critical to effective performance in medicine and had 140 physicians in 25 different specialties rate the importance of those skills. Principal component analysis of their responses produced…
Applying reliability analysis to design electric power systems for More-electric aircraft
NASA Astrophysics Data System (ADS)
Zhang, Baozhu
The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use the traditional method of reliability block diagrams to analyze the reliability levels of different system topologies. We next propose a new methodology in which system topologies, constrained to meet a specified reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
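The reliability block diagram step can be illustrated with the usual series/parallel composition rules: a series chain works only if every block works, and redundant parallel blocks fail only if all of them fail. The topology and reliability figures below are hypothetical, not taken from the thesis.

```python
from functools import reduce

def series(*r):
    """All components must work: R = product of block reliabilities."""
    return reduce(lambda a, b: a * b, r)

def parallel(*r):
    """At least one must work: R = 1 - product of failure probabilities."""
    return 1.0 - reduce(lambda a, b: a * b, (1.0 - x for x in r))

# Hypothetical topology: two redundant generators feeding a bus and a load
r_gen, r_bus, r_load = 0.97, 0.999, 0.995
r_system = series(parallel(r_gen, r_gen), r_bus, r_load)
print(f"system reliability: {r_system:.5f}")
```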
NASA Astrophysics Data System (ADS)
Liu, Wen; Zhang, Yuying; Yang, Si; Han, Donghai
2018-05-01
A new technique to identify the floral origins of honeys is needed. Terahertz time-domain attenuated total reflection spectroscopy combined with chemometric methods was applied to discriminate different categories (Medlar honey, Vitex honey, and Acacia honey). Principal component analysis (PCA), cluster analysis (CA) and partial least squares-discriminant analysis (PLS-DA) were used to extract information on the botanical origins of the honeys. The spectral range was also examined to increase the precision of the PLS-DA model. An accuracy of 88.46% for the validation set was obtained using the PLS-DA model in the 0.5-1.5 THz range. This work indicated that terahertz time-domain attenuated total reflection spectroscopy is a viable approach for rapidly evaluating the quality of honey.
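PLS-DA is commonly implemented as PLS regression onto one-hot class labels followed by an argmax over the predicted scores; the sketch below follows that recipe on synthetic stand-in "spectra" and illustrates the technique rather than the paper's pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic spectra: three classes with shifted mean intensities
rng = np.random.default_rng(3)
n_per, n_vars = 30, 200
X = np.vstack([rng.normal(loc=0.3 * c, scale=1.0, size=(n_per, n_vars))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per)
Y = np.eye(3)[y]                               # one-hot class membership

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3,
                                          random_state=0, stratify=y)
pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)        # class = largest score
print("validation accuracy:", (pred == Y_te.argmax(axis=1)).mean())
```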
Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Spinelli, Antonello E.; Boschi, Federico
2011-12-01
Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and was implemented using Interactive Data Language (IDL) 8.1. We show that cluster analysis allows us to obtain good agreement between the clustered regions and the corresponding emission regions such as the bladder, the liver, and the tumor. We also show a good correspondence between the time activity curves of the different regions obtained by using CA and manual region-of-interest analysis on dCLI and PCA images. We conclude that CA provides an automatic unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence imaging data.
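The clustering idea translates directly into code: each pixel of the dynamic stack contributes one time-activity curve, and k-means groups curves with similar kinetics. The study used IDL 8.1; the Python sketch below, on a fabricated two-region stack, is a stand-in rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic dynamic stack: T frames of a 32x32 image, two kinetic regions
T, H, W = 20, 32, 32
t = np.arange(T, dtype=float)
stack = np.zeros((T, H, W))
stack[:, :, :16] = np.exp(-t / 5.0)[:, None, None]          # fast washout
stack[:, :, 16:] = (1.0 - np.exp(-t / 8.0))[:, None, None]  # slow uptake
stack += 0.05 * np.random.default_rng(4).standard_normal(stack.shape)

curves = stack.reshape(T, H * W).T       # one time-activity curve per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
print("pixels per cluster:", np.bincount(labels))   # ~512 each
```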
A novel approach to analyzing fMRI and SNP data via parallel independent component analysis
NASA Astrophysics Data System (ADS)
Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas
2007-03-01
There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in the cingulate gyrus, multiple frontal gyri, and the superior temporal gyrus. The related SNP component receives significant contributions from 9 SNPs located in genes including those coding for apolipoproteins A-I and C-III, malate dehydrogenase 1, and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presence of this SNP component was found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify the interactions between brain functional and genetic information; our findings provide new insight into understanding genetic influences on brain function in a common mental disorder.
Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica
2016-04-19
The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm⁻¹). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R² > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.
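The model-selection logic used here (pick the settings that minimize the cross-validated root-mean-square error) can be sketched for principal component regression as follows; the mock spectra, the component grid, and five-fold cross-validation are assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((80, 60))              # mock spectra
beta = np.zeros(60)
beta[:5] = rng.standard_normal(5)
y = X @ beta + 0.1 * rng.standard_normal(80)   # mock mixture composition

best = None
for k in range(1, 11):                         # candidate component counts
    pcr = make_pipeline(PCA(n_components=k), LinearRegression())
    mse = -cross_val_score(pcr, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    rmsecv = float(np.sqrt(mse))
    if best is None or rmsecv < best[1]:
        best = (k, rmsecv)
print(f"best n_components = {best[0]}, RMSECV = {best[1]:.3f}")
```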
NASA Astrophysics Data System (ADS)
Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi
2017-08-01
In this paper, principal component analysis is applied to the distribution of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared younger than the apparent age of the initial images. We believe that this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structures, fine asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.
NASA Astrophysics Data System (ADS)
Lin, Jyh-Woei
2012-09-01
This paper uses Nonlinear Principal Component Analysis (NLPCA) and Principal Component Analysis (PCA) to determine Total Electron Content (TEC) anomalies in the ionosphere for the Nakri Typhoon on 29 May, 2008 (UTC). NLPCA, PCA and image processing are applied to the global ionospheric map (GIM), with transforms conducted for the time period 12:00-14:00 UT on 29 May 2008 when the wind was most intense. Results show that at a height of approximately 150-200 km the TEC anomaly found using NLPCA is more localized; however, its intensity increases with height and it becomes more widespread. The TEC anomalies are not found by PCA. Potential causes of the results are discussed, with emphasis given to vertical acoustic gravity waves. The approximate position of the typhoon's eye can be detected if the GIM is divided into fine enough maps with adequate spatial resolution at the GPS-TEC receivers. This implies that the trace of the typhoon in the regional GIM can be captured using NLPCA.
Using independent component analysis for electrical impedance tomography
NASA Astrophysics Data System (ADS)
Yan, Peimin; Mo, Yulong
2004-05-01
Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations in the electric conductivity of the human body. Because there are variations in the conductivity distributions inside the body, EIT produces multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA, the signal subspace is then decomposed into statistically independent components. The individual conductivity distribution is reconstructed using the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.
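As a minimal illustration of the decomposition step, the sketch below unmixes linearly mixed sources from multichannel "measurements" with scikit-learn's FastICA; the algorithm choice and toy signals are assumptions, since the paper does not specify an implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent sources mixed into three measurement channels
rng = np.random.default_rng(6)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]    # smooth + square wave
S += 0.05 * rng.standard_normal(S.shape)
A = np.array([[1.0, 0.5], [0.5, 1.0], [0.8, 0.3]])  # mixing matrix
X = S @ A.T                                         # observed channels

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
corr = np.abs(np.corrcoef(S.T, S_hat.T))[:2, 2:]    # sources vs estimates
print(corr.round(2))   # near-1 entries: sources recovered up to order/sign
```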
Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen
2013-01-01
Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development. Currently there is no fully coupled computational tool to analyze this fluid/structure interaction process. The objective of this study was to develop a fully coupled aeroelastic modeling capability to describe the fluid/structure interaction process during transient nozzle operations. The aeroelastic model is composed of three components: the computational fluid dynamics component, based on an unstructured-grid, pressure-based computational fluid dynamics formulation; the computational structural dynamics component, developed in the framework of modal analysis; and the fluid-structural interface component. The developed aeroelastic model was applied to the transient nozzle startup process of the Space Shuttle Main Engine at sea level. The computed nozzle side loads and axial nozzle wall pressure profiles from the aeroelastic nozzle are compared with published rigid nozzle results, and the impact of the fluid/structure interaction on nozzle side loads is interrogated and presented.
An analysis method for multi-component airfoils in separated flow
NASA Technical Reports Server (NTRS)
Rao, B. M.; Duorak, F. A.; Maskew, B.
1980-01-01
The multi-component airfoil program (Langley-MCARF) for attached flow is modified to accept the free vortex sheet separation-flow model program (Analytical Methods, Inc.-CLMAX). The viscous effects are incorporated into the calculation by representing the boundary layer displacement thickness with an appropriate source distribution. The separation flow model incorporated into MCARF was applied to single component airfoils. Calculated pressure distributions for angles of attack up to the stall are in close agreement with experimental measurements. Even at higher angles of attack beyond the stall, correct trends of separation, decrease in lift coefficients, and increase in pitching moment coefficients are predicted.
Rosales, Alirio
2017-04-01
Theories are composed of multiple interacting components. I argue that some theories have narratives as essential components, and that narratives function as integrative devices of the mathematical components of theories. Narratives represent complex processes unfolding in time as a sequence of stages, and hold the mathematical elements together as pieces in the investigation of a given process. I present two case studies from population genetics: R. A. Fisher's "mass selection" theory, and Sewall Wright's shifting balance theory. I apply my analysis to an early episode of the "R. A. Fisher - Sewall Wright controversy." Copyright © 2017 Elsevier Ltd. All rights reserved.
Source separation on hyperspectral cube applied to dermatology
NASA Astrophysics Data System (ADS)
Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.
2010-03-01
This paper proposes a method of quantification of the components underlying the human skin that are supposed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength range. The method is based on independent component analysis, assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is corrected using a polynomial fit, and the quantifications associated with it are re-estimated. The results produce feasible quantifications of each source component in the examined skin patch.
Structural health monitoring apparatus and methodology
NASA Technical Reports Server (NTRS)
Giurgiutiu, Victor (Inventor); Yu, Lingyu (Inventor); Bottai, Giola Santoni (Inventor)
2011-01-01
Disclosed is an apparatus and methodology for structural health monitoring (SHM) in which smart devices interrogate structural components to predict failure, expedite needed repairs, and thus increase the useful life of those components. Piezoelectric wafer active sensors (PWAS) are applied to or integrated with structural components, and the various data collected therefrom provide the ability to detect and locate cracking, corrosion, and disbonding through use of pitch-catch, pulse-echo, electro/mechanical impedance, and phased array technology. Stand-alone hardware and an associated software program are provided that allow selection of multiple types of SHM investigations as well as multiple types of data analysis to perform a comprehensive investigation of a structure.
Thermal analysis and optimization of the EAST ICRH antenna
NASA Astrophysics Data System (ADS)
Qingxi, YANG; Wei, SONG; Qunshan, DU; Yuntao, SONG; Chengming, QIN; Xinjun, ZHANG; Yanping, ZHAO
2018-02-01
Ion cyclotron resonance frequency heating (ICRH) plays an important role in plasma heating. Two ICRH antennas were designed and applied on the EAST tokamak. In order to meet the requirements imposed by high-power and long-pulse operation of EAST in the future, an active cooling system must be designed to remove the heat load deposited on the components. Thermal analyses of the high heat-load components have been carried out, which yielded clear temperature distributions for each component and provided reference data for optimization. Meanwhile, heat pipes were designed to satisfy the demanding requirements imposed by the Faraday shield and the lateral limiter.
NASA Astrophysics Data System (ADS)
Nishizawa, Tomoaki; Sugimoto, Nobuo; Shimizu, Atsushi; Uno, Itsushi; Hara, Yukari; Kudo, Rei
2018-04-01
We deployed multi-wavelength Mie-Raman lidars (MMRL) at three sites of the AD-Net and have conducted continuous measurements with them since 2013. To analyze the MMRL data and better understand the external mixing state of the main aerosol components (e.g., dust, sea salt, and black carbon) in the atmosphere, we developed an integrated package of aerosol component retrieval algorithms, which have already been developed or are being developed, to estimate vertical profiles of the aerosol components. This package is also applicable to other ground-based lidar network data (e.g., EARLINET) and satellite-borne lidar data (e.g., CALIOP/CALIPSO and ATLID/EarthCARE), as well as the other lidar data of the AD-Net.
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G
2017-03-01
Wastewater-based epidemiology is an alternative method for estimating the collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting of generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between average midweek level and a peak during the weekend, representing possible recreational use of a drug in the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not for methadone and oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of the use of prescription drugs detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.
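The trigonometric modelling of the weekly cycle can be sketched with an ordinary harmonic regression: sine/cosine pairs at the weekly frequency and its first harmonic, fitted by least squares. The daily loads below are fabricated to mimic a weekend peak; they are not the Oslo measurements.

```python
import numpy as np

# Fabricated daily loads (arbitrary units) with a weekend excess
rng = np.random.default_rng(7)
days = np.arange(8 * 7)                       # eight weeks of daily samples
y = 10 + 1.5 * (days % 7 >= 5) + 0.3 * rng.standard_normal(days.size)

# Weekly cycle modeled by sine/cosine pairs (fundamental + first harmonic)
w = 2 * np.pi / 7.0
X = np.column_stack([np.ones(days.size),
                     np.sin(w * days), np.cos(w * days),
                     np.sin(2 * w * days), np.cos(2 * w * days)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
weekly = X @ coef                             # smooth weekly component
print("fitted weekend minus midweek:",
      round(weekly[days % 7 == 5].mean() - weekly[days % 7 == 2].mean(), 2))
```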
NASA Astrophysics Data System (ADS)
Chitraningrum, Nidya; Chu, Ting-Yi; Huang, Ping-Tsung; Wen, Ten-Chin; Guo, Tzung-Fang
2018-02-01
We fabricate phenyl-substituted poly(p-phenylene vinylene) copolymer (super yellow, SY-PPV)-based polymer light-emitting diodes (PLEDs) with different device architectures to modulate the injection of opposite charge carriers and investigate the corresponding magnetoconductance (MC) responses. At first glance, we find that all PLEDs exhibit positive MC responses. By applying mathematical analysis to fit the curves with two empirical equations, a non-Lorentzian and a Lorentzian function, we are able to extract the hidden negative MC component from the positive MC curve. By analyzing the MC responses of PLEDs with charge-unbalanced and hole-blocking device configurations, we attribute the growth of the negative MC component to the reduced interaction of triplet excitons with charges to generate free charge carriers, as modulated by the applied magnetic field, known as the triplet exciton-charge reaction. The negative MC component causes the broadening of the line shape in MC curves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, C.D.; Pirkle, F.L.; Schmidt, J.S.
1981-01-01
A Principal Components Analysis (PCA) program has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From this analysis a geologist may begin to interpret the data structure. Insight into geological processes underlying the data may be obtained.
Yamada, Masanori; Yao, Ikuko; Hayasaka, Takahiro; Ushijima, Masaru; Matsuura, Masaaki; Takada, Hideho; Shikata, Nobuaki; Setou, Mitsutoshi; Kwon, A-Hon; Ito, Seiji
2012-02-01
Direct tissue analysis using matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) provides the means for in situ molecular analysis of a wide variety of biomolecules. This technology--known as imaging mass spectrometry (IMS)--allows the measurement of biomolecules in their native biological environments without the need for target-specific reagents such as antibodies. In this study, we applied the IMS technique to formalin-fixed paraffin-embedded samples to identify a substance(s) responsible for the intestinal obstruction caused by an unidentified foreign body. In advance of IMS analysis, some pretreatments were applied. After the deparaffinization of sections, samples were subjected to enzyme digestion. The sections co-crystallized with matrix were desorbed and ionized by a laser pulse with scanning. A combination of α-amylase digestion and the 2,5-dihydroxybenzoic acid matrix gave the best mass spectrum. With the IMS Convolution software, which we developed, we could automatically extract meaningful signals from the IMS datasets. The representative peak values were m/z 1,013, 1,175, 1,337, 1,499, 1,661, 1,823, and 1,985. Thus, it was revealed that the material was a polymer with a 162-Da unit size, calculated from the even intervals. In comparison with the mass spectra of the histopathological specimen and authentic materials, the main component coincided with amylopectin rather than amylose. Tandem MS analysis proved that the main components were oligosaccharides. Finally, we confirmed the identification of amylopectin by staining with periodic acid-Schiff and iodine. These results for the first time show the advantages of MALDI-IMS in combination with enzyme digestion for the direct analysis of oligosaccharides as a major component of histopathological samples.
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or spatial domain and applies ICA decomposition only once to the combined fMRI data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected among the resulting independent components from the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of the spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, the voxels at the same coordinates represent exactly the same functional or structural brain anatomies across different subjects. These two assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows grouping more appropriate multi-subject BOLD effects in the group analysis.
Söhn, Matthias; Alber, Markus; Yan, Di
2007-09-01
The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as "eigenmodes," which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe approximately 94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (approximately 40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches.
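A hedged sketch of this pipeline (each patient's cumulative DVH as a vector over a dose grid, PCA scores per patient, then logistic regression of a binary toxicity label on the scores) follows; the DVH shapes and labels are simulated for illustration only, not the study's cohort.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Mock cumulative DVHs: fraction of rectal wall receiving >= each dose bin
rng = np.random.default_rng(8)
dose_bins = np.linspace(0, 80, 81)                  # Gy
d50 = rng.normal(45, 6, 120)                        # patient-specific knee
dvh = 1.0 / (1.0 + np.exp((dose_bins - d50[:, None]) / 5.0))

pca = PCA(n_components=3).fit(dvh)
scores = pca.transform(dvh)                         # PC1..PC3 per patient
print("variance explained:", pca.explained_variance_ratio_.round(3))

# Hypothetical toxicity labels loosely tied to PC1, for illustration only
p = 1.0 / (1.0 + np.exp(-(scores[:, 0] - scores[:, 0].mean())))
tox = rng.random(p.size) < p
model = LogisticRegression().fit(scores, tox)
print("logistic coefficients per PC:", model.coef_.round(2))
```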
Source localization of temporal lobe epilepsy using PCA-LORETA analysis on ictal EEG recordings.
Stern, Yaki; Neufeld, Miriam Y; Kipervasser, Svetlana; Zilberstein, Amir; Fried, Itzhak; Teicher, Mina; Adi-Japha, Esther
2009-04-01
Localizing the source of an epileptic seizure using noninvasive EEG suffers from inaccuracies produced by other generators not related to the epileptic source. The authors isolated the ictal epileptic activity and applied a source localization algorithm to identify its estimated location. Ten ictal EEG scalp recordings from five different patients were analyzed. The patients were known to have temporal lobe epilepsy with a single epileptic focus that had a concordant MRI lesion. The patients had become seizure-free following partial temporal lobectomy. A mid-interval period (approximately 5 seconds) of ictal activity, starting at ictal onset, was used for principal component analysis. The level of epileptic activity at each electrode (i.e., the eigenvector of the component that manifested epileptic characteristics) was used as input for low-resolution tomography analysis for the EEG inverse solution (Zilberstein et al., 2004). The algorithm accurately and robustly identified the epileptic focus in these patients. Principal component analysis and source localization methods can be used in the future to monitor the progression of an epileptic seizure and its expansion to other areas.
Correlation between the pattern volatiles and the overall aroma of wild edible mushrooms.
de Pinho, P Guedes; Ribeiro, Bárbara; Gonçalves, Rui F; Baptista, Paula; Valentão, Patrícia; Seabra, Rosa M; Andrade, Paula B
2008-03-12
Volatile and semivolatile components of 11 wild edible mushrooms, Suillus bellini, Suillus luteus, Suillus granulatus, Tricholomopsis rutilans, Hygrophorus agathosmus, Amanita rubescens, Russula cyanoxantha, Boletus edulis, Tricholoma equestre, Fistulina hepatica, and Cantharellus cibarius, were determined by headspace solid-phase microextraction (HS-SPME) and by liquid extraction combined with gas chromatography-mass spectrometry (GC-MS). Fifty volatile and nonvolatile components were formally identified, and 13 others were tentatively identified. Using sensorial analysis, the descriptors "mushroomlike", "farm-feed", "floral", "honeylike", "hay-herb", and "nutty" were obtained. A correlation between sensory descriptors and volatiles was observed by applying multivariate analysis (principal component analysis and agglomerative hierarchical cluster analysis) to the sensorial and chemical data. The studied edible mushrooms can be divided into three groups. One of them is rich in C8 derivatives, such as 3-octanol, 1-octen-3-ol, trans-2-octen-1-ol, 3-octanone, and 1-octen-3-one; another one is rich in terpenic volatile compounds; and the last one is rich in methional. The presence and contents of these compounds make a considerable contribution to the sensory characteristics of the analyzed species.
Study on Web-Based Tool for Regional Agriculture Industry Structure Optimization Using Ajax
NASA Astrophysics Data System (ADS)
Huang, Xiaodong; Zhu, Yeping
Given the state of research on regional agriculture industry structure adjustment information systems and current developments in information technology, this paper takes a web-based regional agriculture industry structure optimization tool as its research target. The paper introduces Ajax technology and related application frameworks to build an auxiliary toolkit of a decision support system for agricultural policy makers and economy researchers. The toolkit includes a "one page" style component for regional agriculture industry structure optimization, which provides an agile argument-setting method enabling sensitivity analysis and the use of data and comparative advantage analysis results, and a component that can solve the linear programming model and its dual problem by the simplex method.
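The linear programming component can be illustrated with a toy land-allocation model solved by SciPy's HiGHS backend, which also returns the dual values mentioned above; the crops, coefficients, and constraints are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical model: maximize profit over three crops
profit = np.array([300.0, 450.0, 380.0])   # per hectare
A_ub = [[1, 1, 1],                         # total land  <= 100 ha
        [2, 4, 3]]                         # labor units <= 320
b_ub = [100, 320]
res = linprog(-profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3, method="highs")
print("hectares per crop:", res.x.round(1), "max profit:", round(-res.fun, 1))
# Lagrange multipliers of the constraints (shadow prices up to sign)
print("constraint marginals:", res.ineqlin.marginals.round(1))
```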
NASA Astrophysics Data System (ADS)
LIN, JYH-WOEI
2012-08-01
Principal Component Analysis (PCA) and image processing are used to determine Total Electron Content (TEC) anomalies in the F-layer of the ionosphere relating to Typhoon Nakri for 29 May, 2008 (UTC). PCA and image processing are applied to the global ionospheric map (GIM) with transforms conducted for the time period 12:00-14:00 UT on 29 May, 2008 when the wind was most intense. Results show that at a height of approximately 150-200 km the TEC anomaly is highly localized; however, it becomes more intense and widespread with height. Potential causes of these results are discussed with emphasis given to acoustic gravity waves caused by wind force.
Modular thought in the circuit analysis
NASA Astrophysics Data System (ADS)
Wang, Feng
2018-04-01
Modular thinking, when applied to problem solving, provides a method for simplifying a whole into parts, so that complex problems become manageable. The study of circuits poses a similar problem: the complex connections between components make the solution of the whole circuit appear complicated, yet these connections actually follow rules. This article mainly describes the application of modular thinking to the study of circuits. First, it introduces the definition of a two-terminal network and the concept of equivalent conversion between two-terminal networks; it then summarizes modular approaches for common source-resistance hybrid networks and processing methods for networks containing controlled sources, lists common modules, and analyzes typical examples.
Iris recognition based on robust principal component analysis
NASA Astrophysics Data System (ADS)
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
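Robust PCA in this sense usually means principal component pursuit: split the data matrix into a low-rank part plus a sparse error part by minimizing the nuclear norm plus a weighted L1 norm. The sketch below uses a basic fixed-penalty augmented-Lagrangian iteration with the standard parameter choices, which may differ from the authors' solver.

```python
import numpy as np

def shrink(X, tau):
    """Soft thresholding: proximal operator of the L1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    """Decompose M into low-rank L plus sparse S (principal component
    pursuit) with a simple augmented-Lagrangian iteration."""
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = M.size / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
    return L, S

# Toy check: a rank-1 matrix plus sparse corruption is separated again
rng = np.random.default_rng(9)
L0 = np.outer(rng.standard_normal(60), rng.standard_normal(60))
S0 = np.zeros((60, 60))
S0[rng.random((60, 60)) < 0.05] = 5.0
L, S = rpca(L0 + S0)
print("relative low-rank error:",
      round(np.linalg.norm(L - L0) / np.linalg.norm(L0), 3))
```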
Colniță, Alia; Dina, Nicoleta Elena; Leopold, Nicolae; Vodnar, Dan Cristian; Bogdan, Diana; Porav, Sebastian Alin; David, Leontin
2017-09-01
Raman scattering and its particular effect, surface-enhanced Raman scattering (SERS), are whole-organism fingerprinting spectroscopic techniques that gain more and more popularity in bacterial detection. In this work, two relevant Gram-positive bacteria species, Lactobacillus casei ( L. casei ) and Listeria monocytogenes ( L. monocytogenes ) were characterized based on their Raman and SERS spectral fingerprints. The SERS spectra were used to identify the biochemical structures of the bacterial cell wall. Two synthesis methods of the SERS-active nanomaterials were used and the recorded spectra were analyzed. L. casei and L. monocytogenes were successfully discriminated by applying Principal Component Analysis (PCA) to their specific spectral data.
NASA Astrophysics Data System (ADS)
Kowalski, Dariusz; Grzyl, Beata; Kristowski, Adam
2017-09-01
Steel materials, due to their numerous advantages - high availability, ease of processing, and the possibility of almost arbitrary shaping - are commonly applied in construction for primary load-bearing systems and auxiliary structures. However, the major disadvantage of this material is its high susceptibility to corrosion, which depends strictly on the local conditions at the facility and the type of corrosion protection system applied. The paper presents an analysis of the life cycle costs of barrier structures installed on bridges and used in road lane conditions. Three anti-corrosion protection systems were considered, and their essential cost components analyzed. The possibility of significantly reducing the costs associated with anti-corrosion protection at the stage of steel barrier maintenance over a period of 30 years is indicated. The possibility of using a new approach based on life cycle cost estimation in the anti-corrosion protection of steel elements is presented. The relationship between the method of steel barrier protection, the scope of repair and renewal work, and costs is shown. The article proposes an optimal solution which, while reducing the cost of maintaining road infrastructure components in the area of corrosion protection, allows certain safety standards to be maintained for steel barriers installed on bridges.
Pandžić, Elvis; Abu-Arish, Asmahan; Whan, Renee M; Hanrahan, John W; Wiseman, Paul W
2018-02-16
Molecular, vesicular and organellar flows are of fundamental importance for the delivery of nutrients and essential components used in cellular functions such as motility and division. With recent advances in fluorescence/super-resolution microscopy modalities we can resolve the movements of these objects at higher spatio-temporal resolutions and with better sensitivity. Previously, spatio-temporal image correlation spectroscopy has been applied to map molecular flows by correlation analysis of fluorescence fluctuations in image series. However, an underlying assumption of this approach is that the sampled time windows contain one dominant flowing component. Although this was true for most of the cases analyzed earlier, in some situations two or more different flowing populations can be present in the same spatio-temporal window. We introduce an approach, termed velocity landscape correlation (VLC), which detects and extracts multiple flow components present in a sampled image region via an extension of the correlation analysis of fluorescence intensity fluctuations. First we demonstrate theoretically how this approach works, then test the performance of the method on a range of computer-simulated image series with varying flow dynamics. Finally we apply VLC to study the variable fluxing of STIM1 proteins on microtubules connected to the plasma membrane of Cystic Fibrosis Bronchial Epithelial (CFBE) cells. Copyright © 2018 Elsevier Inc. All rights reserved.
A Multimodal Emotion Detection System during Human-Robot Interaction
Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.
2013-01-01
In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy to achieve a greater degree of user satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598
A Typology of Students Based on Academic Entitlement
ERIC Educational Resources Information Center
Luckett, Michael; Trocchia, Philip J.; Noel, Noel Mark; Marlin, Dan
2017-01-01
Two hundred ninety-three university business students were surveyed using an academic entitlement (AE) scale updated to include new technologies. Using factor analysis, three components of AE were identified: grade entitlement, behavioral entitlement, and service entitlement. A k-means clustering procedure was then applied to identify four groups…
A Factor Analytic Validation of Holland's Vocational Preference Inventory
ERIC Educational Resources Information Center
Di Scipio, William J.
1974-01-01
A principal components analysis was applied to a 135-item pool of the Holland Vocational Preference Inventory, Sixth Revision. The a priori clinical scales were partially upheld with differences attributed to the characteristics of the sample and sociopolitical time context during which the test was administered. (Author)
The Incredible Shrinking Institution: A Five-Component Downsizing Model.
ERIC Educational Resources Information Center
Dawson, Bradley L.
1991-01-01
Most colleges and universities need to reduce expenditures and downsize. Such a project is difficult and emotionally charged. Many immediate remedies can be applied to reduce and better control expenditures, but a more thorough analysis of the institution's organizational structure, operating procedures, automated systems, strategic planning, and…
Wear studies made of slip rings and gas bearing components
NASA Technical Reports Server (NTRS)
Furr, A. K.
1967-01-01
Neutron activation analysis techniques were employed for the study of the wear and performance characteristics of slip ring and rotor assemblies and of the problems arising from environmental conditions with special reference to surface contamination. Results showed that the techniques could be successfully applied to measurement of wear parameters.
Cold Spray Technology for Repair of Magnesium Rotorcraft Components (Briefing Charts)
2007-01-01
Briefing charts describing the ARL portable cold spray system (helium tank, powder feeder, control valve, braided flex hose, and spray nozzle), parameters for applying CP-Al to ZE41A-Mg, the advantages of cold spray, test results to date, coating integrity and microstructural analysis, and adhesion, hardness and corrosion tests.
40 CFR 408.41 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... shall apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971, Environmental Protection Agency, Analytical Quality Control Laboratory, page 217. (c) The term seafood shall...
Microcircuit Modeling and Simulation beyond Ohm's Law
ERIC Educational Resources Information Center
Saxena, T.; Chek, D. C. Y.; Tan, M. L. P.; Arora, V. K.
2011-01-01
Circuit theory textbooks rely heavily on the applicability of Ohm's law, which collapses as electronic components reach micro- and nanoscale dimensions. Circuit analysis is examined in the regime where the applied voltage V is greater than the critical voltage V[subscript c], which triggers the nonlinear behavior. The critical voltage is infinity…
Risk and value analysis of SETI
NASA Technical Reports Server (NTRS)
Billingham, J.
1990-01-01
This paper attempts to apply a traditional risk and value analysis to the Search for Extraterrestrial Intelligence--SETI. In view of the difficulties of assessing the probability of success, a comparison is made between SETI and a previous search for extraterrestrial life, the biological component of Project Viking. Our application of simple Utility Theory, given some reasonable assumptions, suggests that SETI is at least as worthwhile as the biological experiment on Viking.
NASA Technical Reports Server (NTRS)
Tesch, W. A.; Moszee, R. H.; Steenken, W. G.
1976-01-01
NASA-developed stability and frequency response analysis techniques were applied to a dynamic blade row compression component stability model to provide a more economic approach to surge line and frequency response determination than that provided by time-dependent methods. This blade row model was linearized and the Jacobian matrix was formed. The clean-inlet-flow stability characteristics of the compressors of two J85-13 engines were predicted by applying the alternate Routh-Hurwitz stability criterion to the Jacobian matrix. The predicted surge line agreed with the clean-inlet-flow surge line predicted by the time-dependent method to a high degree except for one engine at 94% corrected speed. No satisfactory explanation of this discrepancy was found. The frequency response of the linearized system was determined by evaluating its Laplace transfer function. The results of the linearized-frequency-response analysis agree with the time-dependent results when the time-dependent inlet total-pressure and exit-flow function amplitude boundary conditions are less than 1 percent and 3 percent, respectively. The stability analysis technique was extended to a two-sector parallel compressor model with and without interstage crossflow, and predictions were carried out for total-pressure distortion extents of 180 deg, 90 deg, 60 deg, and 30 deg.
Analysis of Minor Component Segregation in Ternary Powder Mixtures
NASA Astrophysics Data System (ADS)
Asachi, Maryam; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew
2017-06-01
In many powder handling operations, inhomogeneity in powder mixtures caused by segregation can have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component of a highly active substance could have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky while not adversely affecting the flowability of the samples. The extent of segregation is evaluated using image processing of photographic records taken from the front face of the heap after the pouring process. The optimum average sieve cut size of components for which segregation could be reduced is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component, promoting an ordered mixture.
Dumont, Martine; Jurysta, Fabrice; Lanquart, Jean-Pol; Noseda, André; van de Borne, Philippe; Linkowski, Paul
2007-12-01
To investigate the dynamics of the synchronization between heart rate variability and sleep electroencephalogram power spectra and the effect of sleep apnea-hypopnea syndrome. Heart rate and sleep electroencephalogram signals were recorded in controls and in patients with sleep apnea-hypopnea syndrome matched for age, gender, sleep parameters, and blood pressure. Spectral analysis was applied to electrocardiogram and electroencephalogram sleep recordings to obtain power values every 20 s. Synchronization likelihood was computed between time series of the normalized high-frequency spectral component of RR intervals and all electroencephalographic frequency bands. Detrended fluctuation analysis was applied to the synchronizations in order to qualify their dynamic behaviors. For all sleep bands, the fluctuations of the synchronization between the sleep EEG and heart activity appear scale-free, and the scaling exponent is close to one, as for 1/f noise. We could not detect any effect due to sleep apnea-hypopnea syndrome. The synchronizations between the high-frequency component of heart rate variability and all sleep power bands exhibited robust fluctuations characterized by self-similar temporal behavior of the 1/f noise type. No effects of sleep apnea-hypopnea syndrome were observed in these synchronizations. Sleep apnea-hypopnea syndrome does not affect the interdependence between the high-frequency component of heart rate variability and the sleep power bands as measured by synchronization likelihood.
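Detrended fluctuation analysis, as used on the synchronization time series above, fits the scaling of detrended root-mean-square fluctuations against window size; an exponent near one indicates 1/f-type noise. A minimal numpy sketch, with window sizes chosen arbitrarily for illustration:

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256), order=1):
    """Detrended fluctuation analysis: slope of log F(s) vs log s.
    alpha ~ 0.5 for white noise, ~ 1.0 for 1/f noise."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, order),
                                                 t)) ** 2)) for seg in segs]
        flucts.append(np.mean(rms))               # mean RMS fluctuation F(s)
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(4096)))    # ~0.5 for white noise
```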
Rein, Thomas R; Harvati, Katerina; Harrison, Terry
2015-01-01
Uncovering links between skeletal morphology and locomotor behavior is an essential component of paleobiology because it allows researchers to infer the locomotor repertoire of extinct species based on preserved fossils. In this study, we explored ulnar shape in anthropoid primates using 3D geometric morphometrics to discover novel aspects of shape variation that correspond to observed differences in the relative amount of forelimb suspensory locomotion performed by species. The ultimate goal of this research was to construct an accurate predictive model that can be applied to infer the significance of these behaviors. We studied ulnar shape variation in extant species using principal component analysis. Species mainly clustered into phylogenetic groups along the first two principal components. Upon closer examination, the results showed that the position of species within each major clade corresponded closely with the proportion of forelimb suspensory locomotion that they have been observed to perform in nature. We used principal component regression to construct a predictive model for the proportion of these behaviors that would be expected to occur in the locomotor repertoire of anthropoid primates. We then applied this regression analysis to Pliopithecus vindobonensis, a stem catarrhine from the Miocene of central Europe, and found strong evidence that this species was adapted to perform a proportion of forelimb suspensory locomotion similar to that observed in the extant woolly monkey, Lagothrix lagothricha. Copyright © 2014 Elsevier Ltd. All rights reserved.
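Principal component regression of the kind used here can be sketched as a PCA step feeding a linear regression. Everything below (array shapes, the five-component choice, the random stand-in data) is hypothetical, standing in for the real Procrustes shape coordinates and observed locomotor proportions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 60))   # stand-in for Procrustes shape coords
y = rng.uniform(0, 1, 40)           # stand-in suspensory proportions

# PCA compresses the shape space; regression maps scores to behavior
pcr = make_pipeline(PCA(n_components=5), LinearRegression())
pcr.fit(X, y)

# A fossil's shape coordinates project through the same components:
fossil = rng.standard_normal((1, 60))
print(pcr.predict(fossil))
```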
NASA Astrophysics Data System (ADS)
Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.
2013-10-01
This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally, and at two specific locations, the GRIP site at Summit, Central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30 yr time slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 BP and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather- or climate-driven noise in the 10Be deposition flux during different stages of climate. The production signal varies at lower frequencies, dominated by the 11 yr solar cycle within the 30 yr time scale of these experiments. The climatic noise is of higher frequencies. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. Then, we apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, which is an effective method for adaptively decomposing the time series into different frequency components. The low-frequency components and the long-term trend represent production and have reduced noise compared to the entire frequency spectrum of the deposition. The high-frequency components represent climate-driven noise related to the seasonal cycle of, e.g., precipitation, and are closely connected to the high frequencies of precipitation. These results firstly show that the 10Be atmospheric production signal is preserved in the deposition flux to the surface even during climates very different from today's, both in global data and at two specific locations. Secondly, noise can be effectively reduced from 10Be deposition data either by simply applying the EOF analysis when a reasonably large number of data sets is available, or by decomposing the individual data sets to filter out high-frequency fluctuations.
NASA Astrophysics Data System (ADS)
Molina-Aguilera, A.; Mancilla, F. D. L.; Julià, J.; Morales, J.
2017-12-01
Joint inversion techniques for P-receiver functions and wave dispersion data implicitly assume an isotropic, radially stratified earth. The conventional approach inverts stacked radial-component receiver functions from different back-azimuths to obtain a laterally homogeneous single-velocity model. However, in the presence of strong lateral heterogeneities such as anisotropic layers and/or dipping interfaces, receiver functions are considerably perturbed, and both the radial and transverse components exhibit back-azimuthal dependence. Harmonic analysis methods exploit these azimuthal periodicities to separate the effects due to the isotropic flat-layered structure from those caused by lateral heterogeneities. We implement a harmonic analysis method based on the radial and transverse receiver function components and carry out a synthetic study to illuminate the capabilities of the method in isolating the isotropic flat-layered part of receiver functions and constraining the geometry and strength of lateral heterogeneities. The back-azimuth-independent P-receiver functions are jointly inverted with phase and group dispersion curves using a linearized inversion procedure. We apply this approach to densely spaced seismic profiles (about 2 km inter-station distance) located in the central Betics (western Mediterranean region), a region which has experienced complex geodynamic processes and exhibits strong variations in Moho topography. The technique presented here is robust and can be applied systematically to construct a 3-D model of the crust and uppermost mantle across large networks.
Evaluation of FTIR spectroscopy as diagnostic tool for colorectal cancer using spectral analysis
NASA Astrophysics Data System (ADS)
Dong, Liu; Sun, Xuejun; Chao, Zhang; Zhang, Shiyun; Zheng, Jianbao; Gurung, Rajendra; Du, Junkai; Shi, Jingsen; Xu, Yizhuang; Zhang, Yuanfu; Wu, Jinguang
2014-03-01
The aim of this study is to confirm FTIR spectroscopy as a diagnostic tool for colorectal cancer. 180 freshly removed colorectal samples were collected from 90 patients for spectral analysis. The ratios of spectral intensity and relative intensity (I/I1460) were calculated. Principal component analysis (PCA) and Fisher's discriminant analysis (FDA) were applied to distinguish the malignant samples from the normal. The FTIR parameters of colorectal cancer and normal tissues could be distinguished on the basis of the contents or configurations of nucleic acids, proteins, lipids and carbohydrates. Nitrogen-containing components, water, protein and nucleic acid were significantly increased in the malignant group. Six parameters were selected as independent factors to construct the discriminant functions. The sensitivity of FTIR in diagnosing colorectal cancer was 96.6% by discriminant analysis. Our study demonstrates that FTIR can be a useful technique for the detection of colorectal cancer and may be applied in clinical colorectal cancer diagnosis.
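The PCA-then-FDA pipeline described above maps directly onto a standard chemometrics sketch: reduce the spectral ratios to a few components, then apply a linear discriminant. The data below are random stand-ins (the six-component choice mirrors the six selected parameters; real inputs would be the FTIR intensity ratios):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((180, 30))   # stand-in FTIR intensity ratios
y = rng.integers(0, 2, 180)          # 0 = normal, 1 = malignant (random)

# PCA compresses the spectra; LDA plays the role of Fisher's discriminant
clf = make_pipeline(PCA(n_components=6), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())   # accuracy proxy
```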
Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images
Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049
A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis
Wagatsuma, Hiroaki
2017-01-01
EEG signals contain a large number of ocular artifacts with different time-frequency properties mixed into the EEG of interest. Artifact removal has largely been addressed by existing decomposition methods, PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on signal morphology and proposed a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain, using Morphological Component Analysis (MCA), which guarantees accurate reconstruction by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with iEEGs recorded intracranially, those signals were successfully decomposed into their original types by a linear expansion of waveforms using redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our results demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities, respectively, as representative types of EEG morphology. PMID:28194221
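The core MCA idea, representing each morphological type sparsely in its own dictionary, can be illustrated with just two dictionaries: a DCT for oscillatory content and the DIRAC (identity) basis for spikes. A toy sketch, not the paper's UDWT/DST/DIRAC setup, using a decreasing-threshold alternating scheme:

```python
import numpy as np
from scipy.fft import dct, idct

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mca_dct_dirac(x, n_iter=200, lam=1.0):
    """Split x into an oscillatory part (sparse under the DCT) and a
    spiky part (sparse under the DIRAC/identity basis) by alternately
    thresholding each component's estimate of the residual."""
    smooth = np.zeros_like(x)
    spikes = np.zeros_like(x)
    for k in range(n_iter):
        t = lam * (1.0 - k / n_iter)          # decreasing threshold
        smooth = idct(soft(dct(x - spikes, norm='ortho'), t), norm='ortho')
        spikes = soft(x - smooth, t)
    return smooth, spikes

n = 512
s = np.sin(2 * np.pi * 8 * np.arange(n) / n)  # "baseline" oscillation
x = s.copy()
x[[100, 300]] += 3.0                          # spiking "artifacts"
smooth, spikes = mca_dct_dirac(x)             # spikes isolates the two events
```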
Dawidowicz, Andrzej L; Czapczyńska, Natalia B; Wianowska, Dorota
2013-02-01
The Sea Sand Disruption Method (SSDM) is a simple and cheap sample-preparation procedure allowing the reduction of organic solvent consumption, the exclusion of sample component degradation, the improvement of extraction efficiency and selectivity, and the elimination of additional sample clean-up and pre-concentration steps before chromatographic analysis. This article deals with the possibility of applying SSDM for the differentiation of essential-oil components occurring in Scots pine (Pinus sylvestris L.) and cypress (Cupressus sempervirens L.) needles from Madrid (Spain), Laganas (Zakhyntos, Greece), Cala Morell (Menorca, Spain), Lublin (Poland), Helsinki (Finland), and Oradea (Romania). The SSDM results are compared with analogous results obtained by applying two other sample-preparation methods, steam distillation and Pressurized Liquid Extraction (PLE). The results establish that the total amount and the composition of essential-oil components revealed by SSDM are equivalent to or higher than those obtained by PLE, one of the most effective extraction techniques. Moreover, SSDM seems to provide the most representative profile of all essential-oil components, as no heat is applied. Thus, this environmentally friendly method is suggested as the main extraction procedure for the differentiation of essential-oil components in conifers for scientific and industrial purposes. Copyright © 2013 Verlag Helvetica Chimica Acta AG, Zürich.
Descriptive Characteristics of Surface Water Quality in Hong Kong by a Self-Organising Map
An, Yan; Zou, Zhihong; Li, Ranran
2016-01-01
In this study, principal component analysis (PCA) and a self-organising map (SOM) were used to analyse a complex dataset obtained from the river water monitoring stations in the Tolo Harbor and Channel Water Control Zone (Hong Kong), covering the period of 2009-2011. PCA was initially applied to identify the principal components (PCs) among the nonlinear and complex surface water quality parameters. SOM followed PCA, and was implemented to analyse the complex relationships and behaviors of the parameters. The results reveal that PCA reduced the multidimensional parameters to four significant PCs which are combinations of the original ones. The positive and inverse relationships of the parameters were shown explicitly by pattern analysis in the component planes. It was found that PCA and SOM are efficient tools to capture and analyse the behavior of multivariable, complex, and nonlinearly related surface water quality data. PMID:26761018
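A PCA-then-SOM workflow of this kind can be sketched with scikit-learn and the third-party minisom package; the grid size, training length, and random stand-in data below are all illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from minisom import MiniSom   # third-party package: pip install minisom

# Stand-in matrix: rows = monitoring samples, columns = water-quality
# parameters; real data would replace this
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.standard_normal((500, 12)))

scores = PCA(n_components=4).fit_transform(X)   # the four significant PCs

som = MiniSom(8, 8, scores.shape[1], sigma=1.5, learning_rate=0.5,
              random_seed=0)
som.train_random(scores, 5000)                  # map samples to an 8x8 grid

# One weight slice per PC acts as a component plane, exposing positive
# and inverse relationships among the parameters
planes = som.get_weights()                      # shape (8, 8, 4)
```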
Variability search in M 31 using principal component analysis and the Hubble Source Catalogue
NASA Astrophysics Data System (ADS)
Moretti, M. I.; Hatzidimitriou, D.; Karampelas, A.; Sokolovsky, K. V.; Bonanos, A. Z.; Gavras, P.; Yang, M.
2018-06-01
Principal component analysis (PCA) is being extensively used in Astronomy but not yet exhaustively exploited for variability search. The aim of this work is to investigate the effectiveness of using the PCA as a method to search for variable stars in large photometric data sets. We apply PCA to variability indices computed for light curves of 18 152 stars in three fields in M 31 extracted from the Hubble Source Catalogue. The projection of the data into the principal components is used as a stellar variability detection and classification tool, capable of distinguishing between RR Lyrae stars, long-period variables (LPVs) and non-variables. This projection recovered more than 90 per cent of the known variables and revealed 38 previously unknown variable stars (about 30 per cent more), all LPVs except for one object of uncertain variability type. We conclude that this methodology can indeed successfully identify candidate variable stars.
Operation and maintenance results from ISFOC CPV plants
NASA Astrophysics Data System (ADS)
Gil, Eduardo; Martinez, María; de la Rubia, Oscar
2017-09-01
The analysis of field operation and maintenance data collected over a period of more than eight years, from CPV installations comprising three different CPV technologies (including the second generation of one of these technologies), has allowed us to obtain valuable information about the long-term degradation of CPV systems. Through the study of the previously defined maintenance control ratio and by applying the root cause analysis methodology, the components responsible for the most unplanned interventions for each technology were identified. By focusing maintenance efforts on these components, a reduction of unplanned interventions and of the total cost of maintenance has been achieved over the years. Therefore, the deployment of an effective maintenance plan, identifying critical components, is essential to minimize the risk for investors and maximize CPV power plant lifetime and energy output, increasing the availability of CPV installations and boosting market confidence in CPV systems.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of a system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut-set, the repetition count of the i-th event in the Fault Tree, and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a Fault Tree, which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
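Birnbaum's structural measure, the first of the measures listed above, is the sensitivity of system reliability to a component's own reliability: I_B(i) = h(1_i, p) - h(0_i, p). A brute-force sketch for a small illustrative structure (the series pump feeding a redundant pair is an invented example, not the paper's Fault Tree):

```python
from itertools import product

def birnbaum_importance(structure, n, p):
    """I_B(i) = h(1_i, p) - h(0_i, p), with h the system reliability
    function; computed by brute force, so for small n only."""
    def h(fixed):
        total = 0.0
        for states in product([0, 1], repeat=n):
            if any(states[i] != v for i, v in fixed.items()):
                continue                    # count each free config once
            prob = 1.0
            for i, s in enumerate(states):
                if i not in fixed:
                    prob *= p[i] if s else 1 - p[i]
            total += prob * structure(list(states))
        return total
    return [h({i: 1}) - h({i: 0}) for i in range(n)]

# Invented structure: pump 0 in series with a redundant pair (1, 2)
phi = lambda s: s[0] * (1 - (1 - s[1]) * (1 - s[2]))
print(birnbaum_importance(phi, 3, [0.95, 0.9, 0.9]))  # [0.99, 0.095, 0.095]
```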
Fast, Exact Bootstrap Principal Component Analysis for p > 1 million
Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim
2015-01-01
Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
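The subspace trick described above can be sketched in a few lines: with X = U diag(s) Vt, every bootstrap sample shares the column space U, so resampling and the repeated SVDs operate only on the small n x n coordinate matrix. A minimal sketch (no per-sample recentering, which the exact method would include):

```python
import numpy as np

def bootstrap_pca_lowdim(X, n_boot=1000, seed=0):
    """Exact bootstrap PCA confined to the n-dimensional sample subspace:
    X (p x n, columns = subjects, rows centered) factors as U @ A with
    A = diag(s) @ Vt, so resampling subjects and re-running the SVD only
    ever touches the small n x n matrix A."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = s[:, None] * Vt                            # n x n coordinates
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    out = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                # resample subjects
        Ub, sb, _ = np.linalg.svd(A[:, idx], full_matrices=False)
        out.append((Ub, sb))   # p-dim bootstrap PCs would be U @ Ub
    return U, out

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 40))
X -= X.mean(axis=1, keepdims=True)                 # center rows
U, draws = bootstrap_pca_lowdim(X, n_boot=200)
top_eig = [sb[0] ** 2 / (X.shape[1] - 1) for _, sb in draws]
```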
Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie
2011-11-01
The present study investigates the feasibility of multi-element analysis in determining the geographical origin of the sea cucumber Apostichopus japonicus, and identifies effective tracers for geographical origin assessment. The contents of the elements Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in sea cucumber samples from seven places of geographical origin were determined by means of ICP-MS. The results were used for the development of an element database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate the geographical origins. Three principal components which accounted for over 89% of the total variance were extracted from the standardized data. The results of Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups; the classification results were significantly associated with the marine distribution of the samples. CA and PCA were effective methods for the element analysis of sea cucumber samples. The mineral element contents of the sea cucumber samples were good chemical descriptors for differentiating their geographical origins.
Portable XRF and principal component analysis for bill characterization in forensic science.
Appoloni, C R; Melquiades, F L
2014-02-01
Several modern techniques have been applied to prevent the counterfeiting of money bills. The objective of this study was to demonstrate the potential of the portable X-ray fluorescence (PXRF) technique and the multivariate analysis method of principal component analysis (PCA) for the classification of bills for use in forensic science. Bills of Dollar, Euro and Real (Brazilian currency) were measured directly at different colored regions, without any previous preparation. Spectrum interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA separated the bills into three groups, with subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for the differentiation and the basic pigment composition. PXRF combined with multivariate discriminant methods is a promising technique for the rapid and non-destructive identification of false bills in forensic science. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bhushan, A.; Sharker, M. H.; Karimi, H. A.
2015-07-01
In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data due to various reasons such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such type of spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
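One plausible shape for an IPCA-based detector of the kind compared here (a sketch, not any of the paper's specific methods): maintain a low-rank model of the stream with scikit-learn's IncrementalPCA and flag records whose reconstruction error is anomalously large relative to the running error history.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=3)
rng = np.random.default_rng(0)
alpha, errors = 3.0, []

for batch_no in range(50):
    batch = rng.standard_normal((64, 10))       # stand-in sensor stream
    if batch_no > 0:                            # score against current model
        recon = ipca.inverse_transform(ipca.transform(batch))
        err = np.linalg.norm(batch - recon, axis=1)
        if errors:                              # flag large reconstruction
            mu, sd = np.mean(errors), np.std(errors)
            outliers = np.where(err > mu + alpha * sd)[0]
        errors.extend(err.tolist())
    ipca.partial_fit(batch)                     # then absorb the batch
```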
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
SCBUCKLE user's manual: Buckling analysis program for simply supported and clamped panels
NASA Technical Reports Server (NTRS)
Cruz, Juan R.
1993-01-01
The program SCBUCKLE calculates the buckling loads and mode shapes of cylindrically curved, rectangular panels. The panel is assumed to have no imperfections. SCBUCKLE is capable of analyzing specially orthotropic symmetric panels (i.e., A16 = A26 = 0, D16 = D26 = 0, Bij = 0). The analysis includes first-order transverse shear theory and is capable of modeling sandwich panels. The analysis supports two types of boundary conditions: either simply supported or clamped on all four edges. The panel can be subjected to linearly varying normal loads Nx and Ny in addition to a constant shear load Nxy. The applied loads can be divided into two parts: a preload component and a variable (eigenvalue-dependent) component. The analysis is based on the modified Donnell's equations for shallow shells. The governing equations are solved by Galerkin's method.
Classification of adulterated honeys by multivariate analysis.
Amiry, Saber; Esmaiili, Mohsen; Alizadeh, Mohammad
2017-06-01
In this research, honey samples were adulterated with date syrup (DS) and invert sugar syrup (IS) at three concentrations (7%, 15% and 30%). 102 adulterated samples were prepared in six batches with 17 replications for each batch. For each sample, 32 parameters including color indices and rheological, physical, and chemical parameters were determined. To classify the samples based on the type and concentration of adulterant, a multivariate analysis was applied using principal component analysis (PCA) followed by linear discriminant analysis (LDA). Then, 21 principal components (PCs) were selected in five sets. Approximately two-thirds of the samples were identified correctly using color indices (62.75%) or rheological properties (67.65%). Powerful discrimination was obtained using physical properties (97.06%), and the best separations were achieved using two sets of chemical properties (set 1: lactone, diastase activity, sucrose: 100%; set 2: free acidity, HMF, ash: 95%). Copyright © 2016 Elsevier Ltd. All rights reserved.
Characterizing resonant component in speech: A different view of tracking fundamental frequency
NASA Astrophysics Data System (ADS)
Dong, Bin
2017-05-01
Motivated by the nonlinearity, nonstationarity, and modulations present in speech, the Hilbert-Huang transform and cyclostationarity analysis are employed in sequence to investigate speech resonance in vowels. Cyclostationarity analysis is applied not to the target vowel directly, but to its intrinsic mode functions one by one. Thanks to the equivalence between the fundamental frequency in speech and the cyclic frequency in cyclostationarity analysis, the modulation intensity distributions of the intrinsic mode functions provide much information for the estimation of the fundamental frequency. To highlight the relationship between frequency and time, the pseudo-Hilbert spectrum is proposed here as a replacement for the Hilbert spectrum. After contrasting the pseudo-Hilbert spectra and the modulation intensity distributions of the intrinsic mode functions, we find that there is usually one intrinsic mode function that serves as the fundamental component of the vowel. Furthermore, the fundamental frequency of the vowel can be determined by tracing the pseudo-Hilbert spectrum of its fundamental component along the time axis. The latter method is more robust for estimating the fundamental frequency in the presence of nonlinear components. Two vowels, [a] and [i], taken from the FAU Aibo Emotion Corpus speech database, are used to validate these findings.
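The pipeline sketched below follows the same outline (decompose into intrinsic mode functions, pick the fundamental one, trace its instantaneous frequency via the analytic signal), with two hedges: it relies on the third-party PyEMD package, and it picks the fundamental IMF by a crude dominant-frequency heuristic rather than the paper's modulation-intensity criterion:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # third-party "EMD-signal" package (an assumption)

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
# Stand-in "vowel": 150 Hz fundamental plus a formant-like component
x = np.sin(2 * np.pi * 150 * t) + 0.4 * np.sin(2 * np.pi * 900 * t)

imfs = EMD().emd(x)                    # IMFs, ordered fast to slow

def dominant_freq(sig):
    spec = np.abs(np.fft.rfft(sig))
    return np.fft.rfftfreq(sig.size, 1 / fs)[np.argmax(spec)]

# Heuristic pick of the fundamental IMF; 50-400 Hz is an assumed F0 range
cand = [m for m in imfs if 50 < dominant_freq(m) < 400]
fundamental = cand[-1] if cand else imfs[-1]

phase = np.unwrap(np.angle(hilbert(fundamental)))
f0_track = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency
print(np.median(f0_track))                     # ~150 Hz for this toy signal
```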
NASA Astrophysics Data System (ADS)
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of crude oil prices has attracted academic and commercial attention. Many methods exist for forecasting the trend of crude oil prices, but traditional models have failed to predict accurately. We therefore propose a hybrid method that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. Firstly, applying VMD to the original signal (the crude oil price), the mode functions are decomposed adaptively. Secondly, independent components are separated by ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
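A sketch of the three-step pipeline, under explicit assumptions: the third-party vmdpy package (its VMD(f, alpha, tau, K, DC, init, tol) signature is assumed), scikit-learn's FastICA, and statsmodels' ARIMA. The random-walk series, K = 5 and the (1, 1, 1) order are illustrative, and recombining the per-component forecasts into a price forecast would additionally use the ICA mixing matrix:

```python
import numpy as np
from sklearn.decomposition import FastICA
from statsmodels.tsa.arima.model import ARIMA
from vmdpy import VMD   # third-party package; signature assumed

rng = np.random.default_rng(0)
price = np.cumsum(rng.standard_normal(500)) + 60.0   # stand-in oil price

# 1) VMD: adaptively decompose the series into K mode functions
K = 5
modes, _, _ = VMD(price, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-6)

# 2) ICA: separate statistically independent drivers from the modes
ica = FastICA(n_components=K, random_state=0)
sources = ica.fit_transform(modes.T)                 # shape (time, K)

# 3) ARIMA: model and forecast each independent component; a full price
# forecast would map these back through ica.mixing_ and sum the modes
forecasts = [ARIMA(sources[:, k], order=(1, 1, 1)).fit().forecast(steps=10)
             for k in range(K)]
```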
NASA Technical Reports Server (NTRS)
Cirillo, William M.; Earle, Kevin D.; Goodliff, Kandyce E.; Reeves, J. D.; Stromgren, Chel; Andraschko, Mark R.; Merrill, R. Gabe
2008-01-01
NASA's Constellation Program employs a strategic analysis methodology to provide an integrated analysis capability for lunar exploration scenarios and to support strategic decision-making regarding those scenarios. The strategic analysis methodology integrates the assessment of the major contributors to strategic objective satisfaction (performance, affordability, and risk) and captures the linkages and feedbacks between all three components. Strategic analysis supports strategic decision-making by senior management through comparable analysis of alternative strategies, provision of a consistent set of high-level value metrics, and the enabling of cost-benefit analysis. The tools developed to implement the strategic analysis methodology are not element design and sizing tools. Rather, these models evaluate strategic performance using predefined elements, imported into a library from expert-driven design/sizing tools or expert analysis. Specific components of the strategic analysis tool set include scenario definition, requirements generation, mission manifesting, scenario lifecycle costing, crew time analysis, objective satisfaction benefit, risk analysis, and probabilistic evaluation. Results from all components of strategic analysis are evaluated against a set of pre-defined figures of merit (FOMs). These FOMs capture the high-level strategic characteristics of all scenarios and facilitate direct comparison of options. The strategic analysis methodology described in this paper has previously been applied to the Space Shuttle and International Space Station Programs and is now being used to support the development of the baseline Constellation Program lunar architecture. This paper presents an overview of the strategic analysis methodology and sample results from its application to the Constellation Program lunar architecture.
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
Effect of noise in principal component analysis with an application to ozone pollution
NASA Astrophysics Data System (ADS)
Tsakiri, Katerina G.
This thesis analyzes the effect of independent noise on the principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can be essentially different from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results on the eigenvalues and eigenvectors in the two-dimensional case with models examined previously. This theory can be applied in any field for the decomposition of components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor of ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global term component, which describes the long-term trend and the seasonal variations, and a synoptic scale component, which describes the short-term variations. Using Canonical Correlation Analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the global and synoptic scale components of ozone. The global term components are modeled by a linear regression model, and the synoptic scale components by a vector autoregressive model and the Kalman filter. The coefficient of determination, R2, for the prediction of the synoptic scale ozone component was found to be highest when we consider the synoptic scale components of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
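The eigenvalue-gap condition stated above can be illustrated with a toy simulation: corrupt a sample with independent noise and measure the rotation of the leading principal component as the gap varies. All numbers below are illustrative:

```python
import numpy as np

def leading_pc_rotation(gap, noise_sd, n=500, seed=0):
    """Angle (degrees) between the leading PC of a clean sample and of
    the same sample corrupted by independent noise."""
    rng = np.random.default_rng(seed)
    cov = np.diag([1.0 + gap, 1.0, 0.5])       # eigenvalue gap under test
    X = rng.multivariate_normal(np.zeros(3), cov, size=n)
    noise = noise_sd * rng.standard_normal(X.shape)
    pc1 = lambda M: np.linalg.svd(M - M.mean(0), full_matrices=False)[2][0]
    c = abs(pc1(X) @ pc1(X + noise))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

for gap in (0.05, 1.0, 5.0):                   # small gap -> large rotation
    print(gap, leading_pc_rotation(gap, noise_sd=1.0))
```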
NMR apparatus for in situ analysis of fuel cells
Gerald, II, Rex E; Rathke, Jerome W
2012-11-13
The subject apparatus is a fuel cell toroid cavity detector for the in situ analysis of samples by nuclear magnetic resonance. The toroid cavity detector comprises a gas-tight housing forming a toroid cavity, where the housing is exposed to an externally applied magnetic field B0 and contains the fuel cell component samples to be analyzed. An NMR spectrometer is electrically coupled to the detector and applies a radiofrequency excitation pulse to produce a radiofrequency magnetic field B1 in the samples and in the toroid cavity. Embedded coils modulate the static external magnetic field to provide a means for spatial selection of the recorded NMR signals.
Li, Jian-Ping; Liu, Yang; Guo, Jian-Ming; Shang, Er-Xin; Zhu, Zhen-Hua; Zhu, Kevin Y; Tang, Yu-Ping; Zhao, Bu-Chang; Tang, Zhi-Shu; Duan, Jin-Ao
2017-01-01
The stability of traditional Chinese medicine injections (TCMIs) is an important issue for their clinical application. A TCMI is composed of multiple components; therefore, when evaluating TCMI stability, a few marker compounds cannot represent the global components or biological activities of the TCMI. To date, no method involving global components or biological activities has been reported for evaluating TCMI stability. In this paper, we established a comprehensive strategy composed of three different methods to evaluate the chemical and biological stability of a typical TCMI, Danhong injection (DHI). UHPLC-TQ/MS was used to analyze the stability of marker compounds (SaA, SaB, RA, DSS, PA, CA, and SG) in DHI, UHPLC-QTOF/MS was used to analyze the stability of global components (MW 80-1000 Da) in DHI, and a cell-based antioxidant capability assay was used to evaluate the bioactivity of DHI. We applied this strategy to assess the compatible stability of DHI with six infusion solutions (GS, NS, GNS, FI, XI, and DGI), which are commonly used in combination with DHI in the clinic. GS was the best infusion solution for DHI, and DGI was the worst, based on marker compound analysis. Based on global component analysis, XI and DGI were the worst infusion solutions for DHI. Based on the bioactivity assay, GS was the best infusion solution for DHI, and XI was the worst. In conclusion, as evaluated by the established comprehensive strategy, GS was the best infusion solution, whereas XI and DGI were the worst infusion solutions for DHI. In the compatibility of DHI with XI or DGI, salvianolic acids in DHI were degraded, resulting in the reduction of the original composition, the generation of new components, and changes in biological activities. This is the essence of the incompatible stability of DHI with some infusion solutions. Our study provides references for choosing reasonable infusion solutions for DHI, which could contribute to the improvement of the safety and efficacy of DHI. Moreover, the established strategy may be applied to the compatible stability evaluation of other TCMIs.
Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.
de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard
2018-02-01
Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., ℓ1 and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as N-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or a structured approach (such as GraphNet PCA) are significant, since SPCA-TV reveals the variability within a data set in the form of intelligible brain patterns that are easier to interpret and more stable across different samples.
Multilayer neural networks for reduced-rank approximation.
Diamantaras, K I; Kung, S Y
1994-01-01
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption on the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. Their analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), they find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation and therefore cannot be applied to this case.
A Geometric Interpretation of the Effective Uniaxial Anisotropy Field in Magnetic Films
NASA Astrophysics Data System (ADS)
Kozlov, V. I.
2018-01-01
It is shown that the effective uniaxial anisotropy field usually applied in thin magnetic films (TMFs), which is noncollinear with the magnetization vector, explains many physical processes in films but is insufficient for a deeper understanding of these processes. The analysis of the magnetization discontinuity in films under certain conditions yields the component of the effective uniaxial anisotropy field collinear with the magnetization vector. This component explains the magnetization discontinuity and allows one to speak of the total effective uniaxial anisotropy field in TMFs.
NASA Astrophysics Data System (ADS)
Xu, Roger; Stevenson, Mark W.; Kwan, Chi-Man; Haynes, Leonard S.
2001-07-01
At Ford Motor Company, thrust bearings in drill motors are often damaged by metal chips. Since the vibration frequency is only a few hertz, it is very difficult to use accelerometers to pick up the vibration signals. With the support of Ford and NASA, we propose to use a piezo film as a sensor to pick up the slow vibrations of the bearing. A neural-net-based fault detection algorithm is then applied to differentiate normal bearings from bad ones. The first step involves a Fast Fourier Transform, which extracts the significant frequency components of the sensor signal. Principal Component Analysis is then used to further reduce the dimension of the frequency components by extracting their principal features. These features can then be used to indicate the status of the bearing. Experimental results are very encouraging.
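The FFT-to-PCA-to-neural-net chain can be sketched end to end; the synthetic "normal" and "damaged" records, sampling rate, bin count, and network size below are all invented stand-ins for the piezo-film data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
fs, n = 100.0, 1024
t = np.arange(n) / fs

# Stand-in piezo-film records: a 3 Hz shaft tone, with damaged bearings
# adding a 7 Hz component (all frequencies/amplitudes are invented)
normal = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal((50, n))
damaged = (np.sin(2 * np.pi * 3 * t) + 0.8 * np.sin(2 * np.pi * 7 * t)
           + 0.3 * rng.standard_normal((50, n)))

# FFT step: keep low-frequency magnitude bins, where the slow vibration lives
X = np.abs(np.fft.rfft(np.vstack([normal, damaged]), axis=1))[:, :32]
y = np.r_[np.zeros(50), np.ones(50)]

# PCA step feeds a small neural-net classifier
clf = make_pipeline(PCA(n_components=5),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
clf.fit(X, y)
print(clf.score(X, y))
```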
Chen, Gengsheng; de las Fuentes, Lisa; Gu, Chi C; He, Jiang; Gu, Dongfeng; Kelly, Tanika; Hixson, James; Jacquish, Cashell; Rao, D C; Rice, Treva K
2015-06-20
Hypertension is a complex trait that often co-occurs with other conditions such as obesity and is affected by genetic and environmental factors. Aggregate indices such as principal components among these variables, and their responses to environmental interventions, may represent novel information that is potentially useful for genetic studies. In this study of families participating in the Genetic Epidemiology Network of Salt Sensitivity (GenSalt) Study, blood pressure (BP) responses to dietary sodium interventions are explored. Independent component analysis (ICA) was applied to 20 variables indexing obesity and BP measured at baseline and during low sodium, high sodium and high sodium plus potassium dietary intervention periods. A "heat map" protocol that classifies subjects based on risk for hypertension is used to interpret the extracted components. ICA and the heat map suggest that four components best describe the data: (1) systolic hypertension, (2) general hypertension, (3) response to sodium intervention and (4) obesity. The largest heritabilities are for the systolic (64%) and general hypertension (56%) components. There is a pattern of higher heritability for the component response to intervention (40-42%) compared to the traditional intervention responses computed as delta scores (24-40%). In summary, the present study provides intermediate phenotypes that are heritable. These derived components may prove useful in gene discovery applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, F. S.
Functionally graded components exhibit spatial variations of mechanical properties, in contrast with, and as an alternative to, purely homogeneous components. A large class of graded materials, however, is in fact mostly homogeneous materials with property variations (chemical or mechanical) restricted to a specific area or layer, produced for example by applying a coating or by introducing sub-surface residual stresses. However, it is also possible to obtain graded materials with a smooth transition of mechanical properties along the entire component, for example a 40 mm component, by using the centrifugal casting technique or the incremental melting and solidification technique. In this paper we study fully metallic functionally graded components with a smooth gradient, focusing on fatigue crack propagation. Fatigue propagation is assessed in the direction parallel to the gradation (in different homogeneous layers of the functionally graded component) to infer what fatigue crack propagation would be in the direction perpendicular to the gradation. The fatigue crack growth rate (standard mode I fatigue crack growth) is correlated with the mode I stress intensity factor range. Other mechanical properties of the different layers of the component (Young's modulus) are also considered in this analysis. The effect of residual stresses along the component gradation on crack propagation is also taken into account. A qualitative analysis of the effects of some important features present in functionally graded materials is made based on the obtained results.
Prediction of pH of fresh chicken breast fillets by VNIR hyperspectral imaging
USDA-ARS?s Scientific Manuscript database
Visible and near-infrared (VNIR) hyperspectral imaging (400–900 nm) was used to evaluate pH of fresh chicken breast fillets (pectoralis major muscle) from the bone (dorsal) side of individual fillets. After the principal component analysis (PCA), a band threshold method was applied to the first prin...
Coping With Pain: Studies in Stress Inoculation.
ERIC Educational Resources Information Center
Horan, John J.; And Others
The stress-inoculation paradigm for helping clients deal with pain consists of education about the psychological dimensions of pain, training in a number of coping skills relevant to each dimension, and practice in applying these skills to the noxious stimulus. Presented are two studies, the first of which represents a component analysis of stress…
Information Technologies in the System of Military Engineer Training of Cadets
ERIC Educational Resources Information Center
Khizhnaya, Anna V.; Kutepov, Maksim M.; Gladkova, Marina N.; Gladkov, Alexey V.; Dvornikova, Elena I.
2016-01-01
The necessity of enhancing the information component of military engineer training is determined by the results of a comparative analysis of global and national engineering education standards. The purpose is to substantiate the effectiveness and relevance of applying information technology in the system of military engineer training of…
Further Iterations on Using the Problem-Analysis Framework
ERIC Educational Resources Information Center
Annan, Michael; Chua, Jocelyn; Cole, Rachel; Kennedy, Emma; James, Robert; Markusdottir, Ingibjorg; Monsen, Jeremy; Robertson, Lucy; Shah, Sonia
2013-01-01
A core component of applied educational and child psychology practice is the skilfulness with which practitioners are able to rigorously structure and conceptualise complex real world human problems. This is done in such a way that when they (with others) jointly work on them, there is an increased likelihood of positive outcomes being achieved…
ERIC Educational Resources Information Center
Kalayci, Nurdan; Cimen, Orhan
2012-01-01
The aim of this study is to examine the questionnaires used to evaluate teaching performance in higher education institutions, called "Instructor and Course Evaluation Questionnaires (ICEQ)," in terms of questionnaire preparation techniques and components of curriculum. Obtaining at least one ICEQ belonging to any state and private…
In this study, temporal scale analysis is applied as a technique to evaluate an annual simulation of meteorology, O3, and PM2.5 and its chemical components over the continental U.S. utilizing two modeling systems. It is illustrated that correlations were ins...
Challenge in Enhancing the Teaching and Learning of Variable Measurements in Quantitative Research
ERIC Educational Resources Information Center
Kee, Chang Peng; Osman, Kamisah; Ahmad, Fauziah
2013-01-01
Statistical analysis is one component that cannot be avoided in quantitative research. Initial observations indicated that students in higher education institutions had difficulty analysing quantitative data, which was attributed to confusion over the various variable measurements. This paper aims to compare the outcomes of two approaches applied in…
Method for assessing motor insulation on operating motors
Kueck, John D.; Otaduy, Pedro J.
1997-01-01
A method for monitoring the condition of electrical-motor-driven devices. The method is achieved by monitoring electrical variables associated with the functioning of an operating motor, applying these electrical variables to a three-phase equivalent circuit, and determining non-symmetrical faults in the operating motor based upon symmetrical-components analysis techniques.
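Symmetrical-components analysis rests on the Fortescue transform: the three phase phasors map to zero-, positive- and negative-sequence components, and a growing negative-sequence fraction is a classic indicator of motor asymmetry or faults. A minimal sketch (the balanced example phasors are illustrative):

```python
import numpy as np

def symmetrical_components(ia, ib, ic):
    """Fortescue transform of three phase phasors into zero-, positive-
    and negative-sequence components; a healthy balanced motor shows
    almost no negative-sequence current."""
    a = np.exp(2j * np.pi / 3)
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a**2 * ic) / 3
    i2 = (ia + a**2 * ib + a * ic) / 3
    return i0, i1, i2

# Balanced example: pure positive sequence, so i0 and i2 vanish
a = np.exp(2j * np.pi / 3)
print(symmetrical_components(1 + 0j, a**2, a))   # (~0, 1, ~0)
```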
A Comparative Study of Parental and Filial Role Definitions
ERIC Educational Resources Information Center
Safilios Rothschild, Constantina; Georgiopoulos, John
1970-01-01
Data analysis for the American and Greek cultures indicates the following trends: (1) parents of both sexes tend to define roles in terms of instrumental and expressive components, suggesting that the Parsonian typology, if valid at all, applies more to the lower and working classes; (2) no significant social differences exist; and (3) family modernization along…
Structural analysis of gluten-free doughs by fractional rheological model
NASA Astrophysics Data System (ADS)
Orczykowska, Magdalena; Dziubiński, Marek; Owczarz, Piotr
2015-02-01
This study examines the effects of various components of tested gluten-free doughs, such as corn starch, amaranth flour, pea protein isolate, and cellulose in the form of plantain fibers, on the rheological properties of such doughs. The rheological properties of gluten-free doughs were assessed by using the rheological fractional standard linear solid model (FSLSM). Parameter analysis of the Maxwell-Wiechert fractional derivative rheological model indicates that gluten-free doughs present the typical behavior of viscoelastic quasi-solid bodies. We determined the contribution of each component used in preparing the gluten-free doughs to either a hard-gel or soft-gel structure. A detailed analysis of the mechanical structure of gluten-free dough was carried out by applying the FSLSM, which explains quite precisely the effects of individual ingredients of the dough on its rheological properties.
NASA Astrophysics Data System (ADS)
Huang, Liang; Ni, Xuan; Ditto, William L.; Spano, Mark; Carney, Paul R.; Lai, Ying-Cheng
2017-01-01
We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on-off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
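As an illustration of the Hilbert step described above, here is a minimal Python sketch that extracts instantaneous amplitude and frequency from a single intrinsic mode function. It assumes the IMFs have already been produced by an empirical mode decomposition (for instance via the PyEMD package), and the sampling rate is illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imf, fs):
    """Instantaneous amplitude and frequency of one intrinsic mode
    function, obtained from the analytic signal."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency, Hz
    return amplitude, freq

# Hypothetical usage on an EEG channel sampled at 1 kHz, with `imfs`
# taken from an empirical mode decomposition:
# for imf in imfs:
#     amp, freq = hilbert_spectrum(imf, fs=1000.0)
```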
Quantitative analysis of NMR spectra with chemometrics
NASA Astrophysics Data System (ADS)
Winning, H.; Larsen, F. H.; Bro, R.; Engelsen, S. B.
2008-01-01
The number of applications of chemometrics to series of NMR spectra is rapidly increasing due to an emerging interest in quantitative NMR spectroscopy, e.g. in the pharmaceutical and food industries. This paper analyses the advantages and limitations of applying the two most common chemometric procedures, Principal Component Analysis (PCA) and Multivariate Curve Resolution (MCR), to a designed set of 231 1H 400 MHz spectra of simple alcohol mixtures (propanol, butanol and pentanol). The study clearly demonstrates that the major advantage of chemometrics is the visualisation of larger data structures, which adds a new exploratory dimension to NMR research. While robustness and powerful data visualisation and exploration are the main qualities of the PCA method, the study demonstrates that the bilinear MCR method is an even more powerful method for resolving pure-component NMR spectra from mixtures when certain conditions are met.
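For readers unfamiliar with the PCA step, a minimal sketch in Python follows, with a placeholder spectra matrix standing in for the 231 designed mixture spectra (rows are spectra, columns are chemical-shift bins; all names and shapes are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.random((231, 4096))    # placeholder: 231 spectra x 4096 bins

pca = PCA(n_components=3)            # up to one component per alcohol
scores = pca.fit_transform(spectra)  # sample coordinates, used for visualisation
loadings = pca.components_           # spectral patterns behind each score
print(pca.explained_variance_ratio_)
```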
Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A
2011-09-26
The Bidirectional Reflectance Distribution Function (BRDF) is essential to characterize an object's reflectance properties. This function depends both on the illumination-observation geometry and on the wavelength. As a result, the comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariable analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each one of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America
Applications of HPLC/MS in the analysis of traditional Chinese medicines
Li, Miao; Hou, Xiao-Fang; Zhang, Jie; Wang, Si-Cen; Fu, Qiang; He, Lang-Chong
2012-01-01
In China, traditional Chinese medicines (TCMs) have been used in clinical applications for thousands of years. The successful hyphenation of high-performance liquid chromatography (HPLC) and mass spectrometry (MS) has been applied widely in the analysis of TCMs and biological samples. Undoubtedly, the HPLC/MS technique has facilitated the understanding of the treatment mechanism of TCMs. We reviewed more than 350 published papers within the last 5 years on HPLC/MS in the analysis of TCMs. The present review focuses on the applications of HPLC/MS in the component analysis, metabolite analysis, and pharmacokinetics of TCMs. 50% of the literature is related to the component analysis of TCMs, showing that this field is the most popular type of research. In metabolite analysis, HPLC coupled with electrospray ionization quadrupole time-of-flight tandem mass spectrometry has been demonstrated to be a powerful tool for the characterization of structural features and fragmentation behavior patterns. This paper presents a brief overview of the applications of HPLC/MS in the analysis of TCMs. HPLC/MS in fingerprint analysis is reviewed elsewhere. PMID:29403684
Program for plasma-sprayed self-lubricating coatings
NASA Technical Reports Server (NTRS)
Walther, G. C.
1979-01-01
A method for preparing composite powders of the three coating components was developed and a procedure that can be used in applying uniform coatings of the composite powders was demonstrated. Composite powders were prepared by adjusting particle sizes of the components and employing a small amount of monoaluminum phosphate as an inorganic binder. Quantitative microscopy (image analysis) was found to be a convenient method of characterizing the composition of the multiphase plasma-sprayed coatings. Area percentages and distribution of the components were readily obtained by this method. The adhesive strength of the coating to a nickel-chromium alloy substrate was increased by about 40 percent by a heat treatment of 20 hours at 650 °C.
Roberts, Miguel E; Han, Kyunghee; Weed, Nathan C
2006-09-01
This study documents the development of an MMPI-2 scale designed to assess features of the Korean culture-bound syndrome, Hwa-Byung (HB). An American research team and psychiatric practitioners in Korea created an 18-item HB scale via rational item selection and psychometric refinement. Principal components analysis of scale items revealed four components, reflecting content domains of general health, gastrointestinal symptoms, hopelessness, and anger. This four-component solution applied well to both Korean men and women, but not to an American sample. Although some findings were encouraging, future studies employing clinical samples are needed to provide further validation of this scale.
Planning Models for Tuberculosis Control Programs
Chorba, Ronald W.; Sanders, J. L.
1971-01-01
A discrete-state, discrete-time simulation model of tuberculosis is presented, with submodels of preventive interventions. The model allows prediction of the prevalence of the disease over the simulation period. Preventive and control programs and their optimal budgets may be planned by using the model for cost-benefit analysis: costs are assigned to the program components and disease outcomes to determine the ratio of program expenditures to future savings on medical and socioeconomic costs of tuberculosis. Optimization is achieved by allocating funds in successive increments to alternative program components in simulation and identifying those components that lead to the greatest reduction in prevalence for the given level of expenditure. The method is applied to four hypothetical disease prevalence situations. PMID:4999448
Sørensen, Hans Eibe; Slater, Stanley F
2008-08-01
Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. The analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha for establishing reliable and valid measures.
A novel principal component analysis for spatially misaligned multivariate air pollution data.
Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A
2017-01-01
We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.
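The predictive variant proposed above is not reproduced here, but the sparsity idea it builds on can be illustrated with scikit-learn's generic SparsePCA; the data shapes below are placeholders, and the spatial-prediction constraint is omitted:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))   # placeholder: 100 monitors x 20 pollutant species

spca = SparsePCA(n_components=3, alpha=1.0, random_state=1)
scores = spca.fit_transform(X)
# Sparse loadings contain many exact zeros, which eases interpretation
# of each component as a mixture of a few pollutants.
print((spca.components_ == 0).mean())
```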
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distributions models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present its application in risk analysis, where we apply the model to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
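A minimal sketch of the estimation idea, assuming a two-component Gaussian mixture fitted to returns and VaR/CVaR read off the fitted distribution by Monte Carlo (the return series here is synthetic, and the 5% level is illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
returns = 0.02 * rng.standard_t(df=4, size=1000)   # placeholder heavy-tailed returns

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(returns.reshape(-1, 1))

sims, _ = gmm.sample(100_000)                      # draws from the fitted mixture
sims = sims.ravel()
q = np.quantile(sims, 0.05)
var_95 = -q                                        # 95% value at risk (loss positive)
cvar_95 = -sims[sims <= q].mean()                  # expected shortfall beyond VaR
```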
Automating Risk Analysis of Software Design Models
Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.
2014-01-01
The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
Narváez-Rivas, M; Pablos, F; Jurado, J M; León-Camacho, M
2011-02-01
The composition of volatile components of subcutaneous fat from Iberian pig has been studied using purge-and-trap gas chromatography-mass spectrometry. The composition of the volatile fraction of subcutaneous fat has been used for authentication of different types of Iberian pig fat. Three types of this product have been considered: montanera, extensive cebo and intensive cebo. For classification purposes, several pattern recognition techniques have been applied. In order to find possible tendencies in the sample distribution as well as the discriminant power of the variables, principal component analysis was applied as a visualisation technique. Linear discriminant analysis (LDA) and soft independent modelling of class analogy (SIMCA) were used to obtain suitable classification models. LDA and SIMCA allowed the differentiation of the three fattening diets by using the contents of 2,2,4,6,6-pentamethyl-heptane, m-xylene, 2,4-dimethyl-heptane, 6-methyl-tridecane, 1-methoxy-2-propanol, isopropyl alcohol, o-xylene, 3-ethyl-2,2-dimethyl-oxirane, 2,6-dimethyl-undecane, 3-methyl-3-pentanol and limonene.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
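A compressed Python sketch of the first (ICA) step, under stated assumptions: FastICA stands in for the paper's ICA implementation, a rank-based quantile mapping stands in for the univariate bias correction, and the grid-scale second step is omitted; all array shapes are placeholders:

```python
import numpy as np
from sklearn.decomposition import FastICA

def quantile_map(sim, obs):
    """Illustrative univariate correction: map each simulated value
    onto the observed value of the same rank."""
    ranks = np.argsort(np.argsort(sim))
    return np.sort(obs)[ranks]

rng = np.random.default_rng(7)
model = rng.normal(size=(500, 30))              # time steps x grid cells (placeholder)
obs = rng.normal(loc=0.5, size=(500, 30))

ica = FastICA(n_components=10, random_state=0)
signals = ica.fit_transform(model)              # statistically independent series
obs_signals = ica.transform(obs)                # observations in the same basis
corrected = np.column_stack([
    quantile_map(signals[:, i], obs_signals[:, i])
    for i in range(signals.shape[1])
])
model_corrected = ica.inverse_transform(corrected)  # back to grid space
```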
Reliability considerations for the total strain range version of strainrange partitioning
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y. T.
1984-01-01
A proposed total strainrange version of strainrange partitioning (SRP), intended to enhance the manner in which SRP is applied to life prediction, is considered with emphasis on how advanced reliability technology can be applied to perform risk analysis and to derive safety check expressions. Uncertainties existing in the design factors associated with life prediction of a component which experiences the combined effects of creep and fatigue can be identified. Examples illustrate how reliability analyses of such a component can be performed when all design factors in the SRP model are random variables reflecting these uncertainties. The Rackwitz-Fiessler and Wu algorithms are used, and estimates of the safety index and the probability of failure are demonstrated for an SRP problem. Methods of analysis of creep-fatigue data with emphasis on procedures for producing synoptic statistics are presented. The importance of the contribution of the uncertainties associated with small sample sizes (fatigue data) to risk estimates is discussed. The procedure for deriving a safety check expression for possible use in a design criteria document is presented.
Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Chad
2006-01-01
This report investigates the utility of the Hilbert Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this report is to demonstrate the potential applications of the Hilbert Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F-18 Active Aeroelastic Wing airplane, an Aerostructures Test Wing, and pitch plunge simulation.
Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm
NASA Technical Reports Server (NTRS)
Brenner, Marty; Prazenica, Chad
2005-01-01
This paper investigates the utility of the Hilbert-Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert-Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert-Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this paper is to demonstrate the potential applications of the Hilbert-Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized/online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F/A-18 Active Aeroelastic Wing aircraft, an Aerostructures Test Wing, and pitch-plunge simulation.
Li, Yan; Zhang, Ji; Zhao, Yanli; Liu, Honggao; Wang, Yuanzhong; Jin, Hang
2016-01-01
In this study the geographical differentiation of dried sclerotia of the medicinal mushroom Wolfiporia extensa, obtained from different regions in Yunnan Province, China, was explored using Fourier-transform infrared (FT-IR) spectroscopy coupled with multivariate data analysis. The FT-IR spectra of 97 samples were obtained for wave numbers ranging from 4000 to 400 cm-1. Then, the fingerprint region of 1800-600 cm-1 of the FT-IR spectrum, rather than the full spectrum, was analyzed. Different pretreatments were applied to the spectra, and a discriminant analysis model based on the Mahalanobis distance was developed to select an optimal pretreatment combination. Two unsupervised pattern recognition procedures, principal component analysis and hierarchical cluster analysis, were applied to enhance the authenticity of discrimination of the specimens. The results showed that excellent classification could be obtained after optimizing spectral pretreatment. The tested samples were successfully discriminated according to their geographical locations. The chemical properties of dried sclerotia of W. extensa were clearly dependent on the mushroom's geographical origins. Furthermore, an interesting finding implied that the elevations of collection areas may have effects on the chemical components of wild W. extensa sclerotia. Overall, this study highlights the feasibility of FT-IR spectroscopy combined with multivariate data analysis, in particular for exploring the distinction of different regional W. extensa sclerotia samples. This research could also serve as a basis for the exploitation and utilization of medicinal mushrooms.
Nagasaka, Kei; Mizuno, Koji; Ito, Daisuke; Saida, Naoya
2017-05-29
In car crashes, the passenger compartment deceleration significantly influences the occupant loading. Hence, it is important to consider how each structural component deforms in order to control the passenger compartment deceleration. In frontal impact tests, the passenger compartment deceleration depends on the energy absorption properties of the front structures. However, few papers to date describe the components' quantitative contributions to the passenger compartment deceleration. Generally, the cross-sectional force is used to examine each component's contribution to passenger compartment deceleration. However, it is difficult to determine each component's contribution based on the cross-sectional forces, especially within segments of an individual member such as the front rails, because the force is transmitted continuously and the cross-sectional forces remain the same through the component. The deceleration of a particle can be determined from the derivative of the kinetic energy. Using this energy-derivative method, the contribution of each component to the passenger compartment deceleration can be determined. Using finite element (FE) car models, this method was applied to full-width and offset impact tests. This method was also applied to evaluate the deceleration of the powertrain. The finite impulse response (FIR) coefficient of the vehicle deceleration (input) and the driver chest deceleration (output) was calculated from Japan New Car Assessment Program (JNCAP) tests. These were applied to the components' contributions to the vehicle deceleration in FE analysis, and each component's contribution to the deceleration of the driver's chest was determined. The sum of the contributions of the components coincides with the passenger compartment deceleration in all types of impacts, confirming the validity of this method. In the full-width impact, the contribution of the crush box was large in the initial phase, and the contribution of the passenger compartment was large in the final phase. For the powertrain deceleration, the crush box had a positive contribution and the passenger compartment had a negative contribution. In the offset test, the contributions of the honeycomb and the passenger compartment deformation to the passenger compartment deceleration were large. Based on the FIR analysis, the passenger compartment deformation contributed the most to the chest deceleration of the driver dummy in the full-width impact. Based on the energy-derivative method, the contribution of the components' deformation to the deceleration of the passenger compartment can be calculated for various types of crash configurations more easily, directly, and quantitatively than by using conventional methods. In addition, by combining the energy-derivative method and FIR, each structure's contribution to the occupant deceleration can be obtained. The energy-derivative method is useful in investigating how the deceleration develops from component deformations and also in designing deceleration curves for various impact configurations.
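A plausible formalization of the energy-derivative idea, sketched under the assumption that the method differentiates the compartment's kinetic energy; the symbols are illustrative rather than taken from the paper:

```latex
% Compartment kinetic energy E = \tfrac{1}{2} m v^{2} gives \dot{E} = m v \dot{v}.
% If structural component i absorbs energy at rate \dot{E}_i, the deceleration
% attributable to that component follows from the energy balance:
a_i = \frac{1}{m\,v}\,\frac{\mathrm{d}E_i}{\mathrm{d}t},
\qquad
a = \sum_i a_i ,
% so the per-component contributions sum to the compartment deceleration.
```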
NASA Astrophysics Data System (ADS)
Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C.
2004-09-01
Time series analysis is applied on the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of α-amylase inhibitor tendamistat and immunity protein of colicin E7 based on the Cα coordinates history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs are surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 than those of α-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences of the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from the normal mode analysis. When different runs of α-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein has a moderate explanation power in other conformational regions and the local minima are similar to a certain extent, while the height of the energy barriers in between the minima significantly change. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins.
Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A
2006-01-01
Background: Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results: The preliminary results described here are very promising, and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion: We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of an underlying classification problem itself. PMID:17118151
Li, Yong-Wei; Qi, Jin; Wen-Zhang; Zhou, Shui-Ping; Yan-Wu; Yu, Bo-Yang
2014-07-01
Liriope muscari (Decne.) L. H. Bailey is a well-known traditional Chinese medicine used for treating cough and insomnia. There are few reports on the quality evaluation of this herb, partly because the major steroid saponins are not readily identified by UV detectors and are not easily isolated due to the existence of many similar isomers. In this study, a qualitative and quantitative method was developed to analyze the major components in L. muscari (Decne.) L. H. Bailey roots. Sixteen components were deduced and identified primarily from the information obtained by ultra high performance liquid chromatography with ion-trap time-of-flight mass spectrometry. The method demonstrated the desired specificity, linearity, stability, precision, and accuracy for simultaneous determination of 15 constituents (13 steroidal glycosides, 25(R)-ruscogenin, and pentylbenzoate) in 26 samples from different origins. The fingerprint was established, and the evaluation was achieved using similarity analysis and principal component analysis of 15 fingerprint peaks from 26 samples by ultra high performance liquid chromatography. The results from similarity analysis were consistent with those of principal component analysis. All results suggest that the established method could be applied effectively to the determination of multiple ingredients and fingerprint analysis of steroid saponins for quality assessment and control of L. muscari (Decne.) L. H. Bailey. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Cheek, Jimmy G.; McGhee, Max B.
An activity was undertaken to develop written criterion-referenced tests for the forestry component of Applied Principles of Agribusiness and Natural Resources. Intended for tenth grade students who have completed Fundamentals of Agribusiness and Natural Resources Occupations, applied principles were designed to consist of three components, with…
Espeland, Mark A; Bray, George A; Neiberg, Rebecca; Rejeski, W Jack; Knowler, William C; Lang, Wei; Cheskin, Lawrence J; Williamson, Don; Lewis, C Beth; Wing, Rena
2009-10-01
To demonstrate how principal components analysis can be used to describe patterns of weight changes in response to an intensive lifestyle intervention. Principal components analysis was applied to monthly percent weight changes measured on 2,485 individuals enrolled in the lifestyle arm of the Action for Health in Diabetes (Look AHEAD) clinical trial. These individuals were 45 to 75 years of age, with type 2 diabetes and body mass indices greater than 25 kg/m2. Associations between baseline characteristics and weight loss patterns were described using analyses of variance. Three components collectively accounted for 97.0% of total intrasubject variance: a gradually decelerating weight loss (88.8%), early versus late weight loss (6.6%), and a mid-year trough (1.6%). In agreement with previous reports, each of the baseline characteristics we examined had statistically significant relationships with weight loss patterns. As examples, males tended to have a steeper trajectory of percent weight loss and to lose weight more quickly than women. Individuals with higher glycosylated hemoglobin (HbA1c) tended to have a flatter trajectory of percent weight loss and to have mid-year troughs in weight loss compared to those with lower HbA1c. Principal components analysis provided a coherent description of characteristic patterns of weight changes and is a useful vehicle for identifying their correlates and potentially for predicting weight control outcomes.
Konaté, Ahmed Amara; Ma, Huolin; Pan, Heping; Qin, Zhen; Ahmed, Hafizullah Abba; Dembele, N'dji Dit Jacques
2017-10-01
The availability of a deep well that penetrates deep into the Ultra High Pressure (UHP) metamorphic rocks is unusual and consequently offers a unique chance to study the metamorphic rocks. One such borehole is located in the southern part of Donghai County in the Sulu UHP metamorphic belt of Eastern China, from the Chinese Continental Scientific Drilling Main Hole. This study reports the results obtained from the analysis of oxide log data. A geochemical logging tool provides in situ gamma-ray spectroscopy measurements of major and trace elements in the borehole. Dry weight percent oxide concentration logs obtained for this study were SiO2, K2O, TiO2, H2O, CO2, Na2O, Fe2O3, FeO, CaO, MnO, MgO, P2O5 and Al2O3. Cross-plot and Principal Component Analysis methods were applied for lithology characterization and mineralogy description, respectively. Cross-plot analysis allows lithological variations to be characterized. Principal Component Analysis shows that the oxide logs can be summarized by two components related to the feldspar and hydrous minerals. This study has shown that geochemical logging tool data are sufficiently accurate to be tremendously useful in the analysis of UHP metamorphic rocks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shambira, Gerald; Gombe, Notion Tafara; Hall, Casey Daniel; Park, Meeyoung Mattie; Frimpong, Joseph Asamoah
2017-01-01
The government of Zimbabwe began providing antiretroviral therapy (ART) to People Living with HIV/AIDS (PLHIV) in public institutions in 2004. In Midlands province two clinics constituted the most active HIV care service points, with patients being followed up through a comprehensive patient monitoring and tracking system which captured specific patient variables and outcomes over time. The data from 2006 to 2011 were subjected to analysis to answer specific research questions and this case study is based on that analysis. The goal of this case study is to build participants' capacity to undertake secondary data analysis and interpretation using a dataset for HIV antiretroviral therapy in Zimbabwe and to draw conclusions which inform recommendations. Case studies in applied epidemiology allow students to practice applying epidemiologic skills in the classroom to address real-world public health problems. Case studies as a vital component of an applied epidemiology curriculum are instrumental in reinforcing principles and skills covered in lectures or in background reading. The target audience includes Field Epidemiology and Laboratory Training Programs (FELTPs), university students, district health executives, and health information officers.
Dual-energy x-ray image decomposition by independent component analysis
NASA Astrophysics Data System (ADS)
Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang
2001-09-01
The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. This method can be applied because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) these take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorption processes. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energy magnitude used for imaging, while the results of the separation agree well with the conventional one.
Theoretical and experimental separation dynamics in capillary zone electrophoresis
NASA Technical Reports Server (NTRS)
Thormann, Wolfgang; Michaud, Jon-Pierre; Mosher, Richard A.
1986-01-01
The mathematical model of Bier et al. (1983) is used in a computer-aided analysis of the conditions in capillary zone electrophoresis (ZE) under which sample zones migrate noninteractively with the carrier electrolyte. The monitoring of sample zones with a capillary analyzer that features both on-line conductivity and UV detection at the end of the separation trough is discussed. Data from a ZE analysis of a 5-component mixture are presented, and it is noted that all five components can be monitored via their conductivity change if enough sample is present. The results suggest that the concentration ratio of background buffer to sample should be a minimum of 100:1 to effectively apply the plate concept to ZE.
NASA Astrophysics Data System (ADS)
Jia, S.
2015-12-01
As an effective method of extracting land cover fractions based on spectral endmembers, spectral mixture analysis (SMA) has been applied to remotely sensed imagery at different spatial, temporal, and spectral resolutions. A number of studies focused on arid/semiarid ecosystems have used SMA to obtain the land cover fractions of green vegetation (GV), non-photosynthetic vegetation (NPV)/litter, and bare soil (BS) using MODIS reflectance products to understand ecosystem phenology, track vegetation dynamics, and evaluate the impact of major disturbances. However, several challenges remain in the application of SMA to studying ecosystem phenology, including obtaining high-quality endmembers and achieving computational efficiency for long time series that cover a broad spatial extent. Okin (2007) proposed a variation of SMA, relative spectral mixture analysis (RSMA), to address the latter challenge by calculating the relative change of the GV, NPV/litter, and BS fractions compared with a baseline date. This approach assumes that the baseline image contains the spectral information of the bare soil that can be used as an endmember for spectral mixture analysis, though it is mixed with the spectral reflectance of other non-soil land cover types. Using the baseline image, one can obtain the change of the GV, NPV/litter, BS, and snow fractions relative to the baseline image. However, RSMA results depend on the selection of the baseline date and the fractional components on that date. In this study, we modified the strategy of implementing RSMA by introducing a step that obtains a soil map as the baseline image using multiple-endmember SMA (MESMA) before applying RSMA. The land cover fractions from this modified RSMA are also validated using field observations from two study areas in the semiarid savanna and grassland of Queensland, Australia.
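The linear unmixing at the core of SMA can be sketched in a few lines of Python with non-negative least squares; the endmember spectra and pixel reflectances below are placeholders, and RSMA's baseline-image step is not shown:

```python
import numpy as np
from scipy.optimize import nnls

# Placeholder endmember spectra (columns): GV, NPV/litter, bare soil,
# measured over four spectral bands.
E = np.array([[0.05, 0.20, 0.30],
              [0.45, 0.25, 0.35],
              [0.30, 0.35, 0.40],
              [0.25, 0.30, 0.45]])

pixel = np.array([0.20, 0.34, 0.35, 0.34])   # observed reflectance in each band

fractions, residual = nnls(E, pixel)         # non-negative cover fractions
fractions /= fractions.sum()                 # optional sum-to-one normalisation
```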
Prediction of River Flooding using Geospatial and Statistical Analysis in New York, USA and Kent, UK
NASA Astrophysics Data System (ADS)
Marsellos, A.; Tsakiri, K.; Smith, M.
2014-12-01
Flooding in rivers normally occurs during periods of excessive precipitation (e.g. New York, USA; Kent, UK) or ice jams during the winter period (New York, USA). For the prediction and mapping of river flooding, it is necessary to evaluate the spatial distribution of the water (volume) in the river as well as to study the interaction between the climatic and hydrological variables. Two study areas have been analyzed: one on the Mohawk River, New York and one in Kent, United Kingdom (UK). A high-resolution Digital Elevation Model (DEM) of the Mohawk River, New York has been used for a GIS flooding simulation to determine the maximum elevation of the water that can no longer be contained within the trunk stream, beyond which flooding in the river may be triggered. The Flooding Trigger Level (FTL) is determined by incremental volumetric and surface calculations from a Triangulated Irregular Network (TIN) with the use of GIS software and LiDAR data. The prediction of flooding in the river can also be improved by the statistical analysis of the hydrological and climatic variables in the Mohawk River and Kent, UK. A methodology of time series analysis has been applied for the decomposition of the hydrological (water flow and groundwater) and climatic data at both locations. The KZ (Kolmogorov-Zurbenko) filter is used for the decomposition of the time series into long-term, seasonal, and short-term components. The explanation of the long-term component of the water flow using the climatic variables has been improved up to 90% for both locations. Similar analysis has been performed for the prediction of the seasonal and short-term components. This methodology can be applied to river flooding at multiple sites.
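The KZ filter itself is simply an iterated moving average, as the following minimal Python sketch shows; the window lengths and iteration counts below are illustrative choices for daily data, not the ones used in the study:

```python
import numpy as np

def kz_filter(x, window, iterations):
    """Kolmogorov-Zurbenko filter: a centred moving average
    applied `iterations` times."""
    kernel = np.ones(window) / window
    for _ in range(iterations):
        x = np.convolve(x, kernel, mode="same")
    return x

# Illustrative decomposition of a daily flow series `q`:
# long_term  = kz_filter(q, 365, 3)
# seasonal   = kz_filter(q, 33, 3) - long_term
# short_term = q - kz_filter(q, 33, 3)
```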
Geed, Shashwati; van Kan, Peter L. E.
2017-01-01
How are appropriate combinations of forelimb muscles selected during reach-to-grasp movements in the presence of neuromotor redundancy and important task-related constraints? The authors tested whether grasp type or target location preferentially influence the selection and synergistic coupling between forelimb muscles during reach-to-grasp movements. Factor analysis applied to 14–20 forelimb electromyograms recorded from monkeys performing reach-to-grasp tasks revealed 4–6 muscle components that showed transport/preshape- or grasp-related features. Weighting coefficients of transport/preshape-related components demonstrated strongest similarities for reaches that shared the same grasp type rather than the same target location. Scaling coefficients of transport/preshape- and grasp-related components showed invariant temporal coupling. Thus, grasp type influenced strongly both transport/preshape- and grasp-related muscle components, giving rise to grasp-based functional coupling between forelimb muscles. PMID:27589010
Impact of multilayered compression bandages on sub-bandage interface pressure: a model.
Al Khaburi, J; Nelson, E A; Hutchinson, J; Dehghani-Sanij, A A
2011-03-01
Multi-component medical compression bandages are widely used to treat venous leg ulcers. The sub-bandage interface pressures induced by individual components of multi-component compression bandage systems are not always simply additive. Current models of compression bandage performance do not take account of the increase in leg circumference as each bandage is applied, and this may account for the difference between predicted and actual pressures. The objective was to calculate the interface pressure when a multi-component compression bandage system is applied to a leg, using thick-wall cylinder theory to estimate the sub-bandage pressure over the leg. A mathematical model was developed based on thick-cylinder theory to include bandage thickness in the calculation of the interface pressure in multi-component compression systems. In multi-component compression systems, the interface pressure corresponds to the sum of the pressures applied by the individual bandage layers. However, the change in limb diameter caused by additional bandage layers should be considered in the calculation. Adding the interface pressures produced by single components without considering bandage thickness will overestimate the overall interface pressure produced by the multi-component compression system. At the ankle (circumference 25 cm) this error can be 19.2% or more in the case of four-component bandaging systems. Bandage thickness should therefore be considered when calculating the pressure applied using multi-component compression systems.
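A minimal numeric sketch of the additive calculation with a growing radius, using Laplace's law per layer (P = T/r for a cylindrical layer with tension T per unit width); all tensions and thicknesses are illustrative, not taken from the paper:

```python
import numpy as np

def interface_pressure(tensions, thicknesses, circumference):
    """Sum per-layer Laplace pressures P_i = T_i / r_i, letting the
    effective radius grow as each bandage layer is added."""
    r = circumference / (2.0 * np.pi)   # limb radius in metres
    total = 0.0
    for T, t in zip(tensions, thicknesses):
        total += T / r                  # this layer's pressure contribution (Pa)
        r += t                          # the next layer wraps a larger cylinder
    return total

# Four-component system at the ankle (circumference 0.25 m), with
# illustrative tensions (N/m) and layer thicknesses (m):
p = interface_pressure([200, 150, 250, 180],
                       [0.002, 0.003, 0.002, 0.001], 0.25)
```

Keeping r fixed at the base radius reproduces the naive additive estimate, which is why ignoring bandage thickness overestimates the total pressure.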
Alizadeh Behbahani, Behrooz; Tabatabaei Yazdi, Farideh; Shahidi, Fakhri; Mortazavi, Seyed Ali; Mohebbi, Mohebbat
2017-04-01
Principal component analysis (PCA) was employed to examine the effect of the exerted treatments on the beef shelf life and to discover the correlations between the studied responses. Considering the variability of the dimensions of the responses, correlation coefficients were applied to form the matrix and extract the eigenvalues. Antimicrobial effect was evaluated on 10 pathogenic microorganisms through the hole-plate diffusion method, disk diffusion method, pour plate method, minimum inhibitory concentration and minimum bactericidal/fungicidal concentration. Antioxidant potential and total phenolic content were examined through the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and Folin-Ciocalteu methods, respectively. The components were identified through gas chromatography and gas chromatography/mass spectrometry. Barhang seed mucilage (BSM) based edible coatings containing 0, 0.5, 1 and 1.5% (w/w) Tarragon (T) essential oil mix were applied on beef slices to control the growth of pathogenic microorganisms. Microbiological (total viable count, psychrotrophic count, Escherichia coli, Staphylococcus aureus and fungi), chemical (thiobarbituric acid, peroxide value and pH) and sensory (odor, color and overall acceptability) measurements were made periodically during storage. The PCA showed that the properties of the uncoated meat samples on the 9th, 12th, 15th and 18th days of storage are continuously changing independent of the exerted treatments on the other samples. This reveals the effect of the exerted treatments on the samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Selective adsorption of flavor-active components on hydrophobic resins.
Saffarionpour, Shima; Sevillano, David Mendez; Van der Wielen, Luuk A M; Noordman, T Reinoud; Brouwer, Eric; Ottens, Marcel
2016-12-09
This work aims to propose an optimum resin that can be used in an industrial adsorption process for tuning flavor-active components or removing ethanol to produce an alcohol-free beer. A procedure is reported for selective adsorption of volatile aroma components from water/ethanol mixtures on synthetic hydrophobic resins. High-throughput 96-well microtiter-plate batch uptake experimentation is applied for screening resins for adsorption of esters (i.e. isoamyl acetate and ethyl acetate), higher alcohols (i.e. isoamyl alcohol and isobutyl alcohol), a diketone (diacetyl) and ethanol. The miniaturized batch uptake method is adapted for adsorption of volatile components and validated with column breakthrough analysis. The results of single-component adsorption tests on Sepabeads SP20-SS are expressed in single-component Langmuir, Freundlich, and Sips isotherm models, and multi-component versions of the Langmuir and Sips models are applied for expressing multi-component adsorption results obtained on several tested resins. The adsorption parameters are regressed and the selectivity over ethanol is calculated for each tested component and resin. Resin scores for four different scenarios of selective adsorption of esters, higher alcohols, diacetyl, and ethanol are obtained. The optimal resin for adsorption of esters is Sepabeads SP20-SS with a resin score of 87%; for selective removal of higher alcohols, XAD16N and XAD4 from the Amberlite resin series are proposed, with scores of 80 and 74% respectively. For adsorption of diacetyl, the XAD16N and XAD4 resins with a score of 86% are the optimum choice, and the Sepabeads SP2MGS and XAD761 resins showed the highest affinity towards ethanol. Copyright © 2016 Elsevier B.V. All rights reserved.
Effects of modulation phase on profile analysis in normal-hearing and hearing-impaired listeners
NASA Astrophysics Data System (ADS)
Rogers, Deanna; Lentz, Jennifer
2003-04-01
The ability to discriminate between sounds with different spectral shapes in the presence of amplitude modulation was measured in normal-hearing and hearing-impaired listeners. The standard stimulus was the sum of equal-amplitude modulated tones, and the signal stimulus was generated by increasing the level of half the tones (up components) and decreasing the level of half the tones (down components). The down components had the same modulation phase, and a phase shift was applied to the up components to encourage segregation from the down tones. The same phase shift was used in both standard and signal stimuli. Profile-analysis thresholds were measured as a function of the phase shift between up and down components. The phase shifts were 0, 30, 45, 60, 90, and 180 deg. As expected, thresholds were lowest when all tones had the same modulation phase and increased somewhat with increasing phase disparity. This small increase in thresholds was similar for both groups. These results suggest that hearing-impaired listeners are able to use modulation phase to group sounds in a manner similar to that of normal listeners. [Work supported by NIH (DC 05835).]
NASA Astrophysics Data System (ADS)
MacMillan, D. S.; van Dam, T. M.
2009-04-01
Variations in the horizontal distribution of atmospheric mass induce displacements of the Earth's surface. Theoretical estimates of the amplitude of the surface displacement indicate that the predicted surface displacement is often large enough to be detected by current geodetic techniques. In fact, the effects of atmospheric pressure loading have been detected in Global Positioning System (GPS) coordinate time series [van Dam et al., 1994; Dong et al., 2002; Scherneck et al., 2003; Zerbini et al., 2004] and very long baseline interferometry (VLBI) coordinates [Rabbel and Schuh, 1986; Manabe et al., 1991; van Dam and Herring, 1994; Schuh et al., 2003; MacMillan and Gipson, 1994; and Petrov and Boy, 2004]. Some of these studies applied the atmospheric displacement at the observation level, and in other studies the predicted atmospheric and observed geodetic surface displacements have been averaged over 24 hours. A direct comparison of observation-level and 24-hour corrections has not been carried out for VLBI to determine whether one or the other approach is superior. In this presentation, we address the following questions: 1) Is it better to correct geodetic data at the observation level rather than applying corrections averaged over 24 hours to estimated geodetic coordinates a posteriori? 2) At sub-daily periods, the atmospheric mass signal is composed of two components: a tidal component and a non-tidal component. If observation-level corrections reduce the scatter of VLBI data more than a posteriori corrections, is it sufficient to model only the atmospheric tides, or must the entire atmospheric load signal be incorporated into the corrections? 3) When solutions from different geodetic techniques (or analysis centers within a technique) are combined (e.g., for ITRF2008), not all solutions may have applied atmospheric loading corrections. Are any systematic effects on the estimated TRF introduced when atmospheric loading is applied?
Compliance analysis of a 3-DOF spindle head by considering gravitational effects
NASA Astrophysics Data System (ADS)
Li, Qi; Wang, Manxin; Huang, Tian; Chetwynd, Derek G.
2015-01-01
Compliance modeling is one of the most significant issues in the preliminary design stage of a parallel kinematic machine (PKM). Gravity, ignored in traditional compliance analysis, has a significant effect on the pose accuracy of the tool center point (TCP) when a PKM is horizontally placed. By taking gravity into account, this paper presents a semi-analytical approach for compliance analysis of a 3-DOF spindle head named the A3 head. The architecture behind the A3 head is a 3-RPS parallel mechanism having one translational and two rotational movement capabilities, which can be employed to form the main body of a 5-DOF hybrid kinematic machine especially designed for high-speed machining of large aircraft components. The force analysis is carried out by considering both the externally applied wrench imposed upon the platform and the gravity of all moving components. Then, the deflection analysis is carried out to establish the relationship between the deflection twist and the compliances of all joints and links using a semi-analytical method. The merit of this approach lies in that the platform deflection twist throughout the entire task workspace can be evaluated in a very efficient manner. The effectiveness of the proposed approach is verified by FEA and by experiment at different configurations; the results show that the discrepancy of the compliances is less than 0.04 μm/N and that of the deformations is less than 10 μm. The computational and experimental results show that the deflection twist induced by the gravity of the moving components has a significant bearing on the pose accuracy of the platform, providing informative guidance for the improvement of the current design. The proposed approach can easily be applied to the compliance analysis of PKMs considering gravitational effects and to evaluate the deformation caused by gravity throughout the entire workspace.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting the system reliability. A weakest t-norm based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault interval of system components by integrating expert knowledge and experience in terms of the possibility of failure of bottom events. It applies fault-tree analysis, the α-cut of intuitionistic fuzzy sets, and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the weapon system "automatic gun" is presented as an example. The result of the proposed method is compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Durigon, Angelica; Lier, Quirijn de Jong van; Metselaar, Klaas
2016-10-01
To date, measuring plant transpiration at the canopy scale is laborious, and numerical modelling can be used to estimate it at high time frequency. When the model by Jacobs (1994) is used to simulate transpiration of water-stressed plants, it needs to be reparametrized. We compare the importance of the model variables affecting simulated transpiration of water-stressed plants. A systematic literature review was performed to recover existing parameterizations to be tested in the model. Data from a field experiment with common bean under full and deficit irrigation were used to correlate estimations to forcing variables by applying principal component analysis. New parameterizations resulted in a moderate reduction of prediction errors and in an increase in model performance. The A-gs model was sensitive to changes in the mesophyll conductance and leaf angle distribution parameterizations, allowing model improvement. Simulated transpiration could be separated into temporal components. The daily, afternoon-depression and long-term components for the fully irrigated treatment were more related to atmospheric forcing variables (specific humidity deficit between stomata and air, relative air humidity and canopy temperature). The daily and afternoon-depression components for the deficit-irrigated treatment were related to both atmospheric forcing and soil dryness, and the long-term component was related to soil dryness.
Using recurrence plot analysis for software execution interpretation and fault detection
NASA Astrophysics Data System (ADS)
Mosdorf, M.
2015-09-01
This paper presents a method for software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are further processed with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. The method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, and SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms) while others are more difficult to distinguish.
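As a rough illustration of the pipeline (not the paper's implementation), the sketch below builds a recurrence plot from a scalar series via time-delay embedding and derives simple recurrence coefficients of the kind that could feed a PCA; the synthetic signal stands in for an opcode-valued execution trace, and the embedding parameters are arbitrary.

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=2, eps=0.2):
    """Binary recurrence matrix of a scalar series after time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(int)

# Toy stand-in for an execution trace (e.g., opcodes mapped to numbers).
trace = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.randn(400)
rp = recurrence_plot(trace)

# Simple recurrence-quantification coefficients that could feed a PCA.
recurrence_rate = rp.mean()
diag_density = np.mean(rp[:-1, :-1] * rp[1:, 1:])  # crude determinism proxy
print(recurrence_rate, diag_density)
```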
Dynamic of consumer groups and response of commodity markets by principal component analysis
NASA Astrophysics Data System (ADS)
Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo
2017-09-01
This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in a 6-month timeframe display similar values during 2010-2011, but decline after 2012. A sharp drop in the eigenvalues implies a significant change in the market state. Three commodity sectors, energy, metals and agriculture, are projected into a two-dimensional space spanned by the first two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled with one another and scattered during severe crises, such as the European sovereign debt crisis. We observe notable changes in the positions of the groups in the two-dimensional space during financial crises. By considering the first principal component (PC1) within a 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and the change of state of one group can be used as a warning for the other groups.
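A minimal sketch of the core computation, on synthetic stand-in returns rather than the study's futures data: build the cross-correlation matrix of daily returns in a moving window, inspect its leading eigenvalue, and project each commodity onto the first two principal components.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily returns for 20 commodity futures over a 6-month window
# (~126 trading days); real inputs would be log-returns of futures prices.
returns = rng.standard_normal((126, 20))

c = np.corrcoef(returns, rowvar=False)   # cross-correlation matrix
eigval, eigvec = np.linalg.eigh(c)       # eigenvalues in ascending order
pc = eigvec[:, ::-1][:, :2]              # two leading principal components

# Project each commodity onto the (PC1, PC2) plane; in the study, clusters
# of points correspond to the energy, metals and agriculture sectors.
coords = c @ pc
largest_eigenvalue = eigval[-1]          # drops sharply at market-state changes
print(largest_eigenvalue, coords[:3])
```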
Concurrent white matter bundles and grey matter networks using independent component analysis.
O'Muircheartaigh, Jonathan; Jbabdi, Saad
2018-04-15
Developments in non-invasive diffusion MRI tractography techniques have permitted the investigation of both the anatomy of white matter pathways connecting grey matter regions and their structural integrity. In parallel, there has been an expansion in automated techniques aimed at parcellating grey matter into distinct regions based on functional imaging. Here we apply independent component analysis to whole-brain tractography data to automatically extract brain networks based on their associated white matter pathways. This method decomposes the tractography data into components that consist of paired grey matter 'nodes' and white matter 'edges', and automatically separates major white matter bundles, including known cortico-cortical and cortico-subcortical tracts. We show how this framework can be used to investigate individual variations in brain networks (in terms of both nodes and edges) as well as their associations with individual differences in behaviour and anatomy. Finally, we investigate correspondences between tractography-based brain components and several canonical resting-state networks derived from functional MRI. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Using Networks To Understand Medical Data: The Case of Class III Malocclusions
Scala, Antonio; Auconi, Pietro; Scazzocchio, Marco; Caldarelli, Guido; McNamara, James A.; Franchi, Lorenzo
2012-01-01
A system of elements that interact or regulate each other can be represented by a mathematical object called a network. While network analysis has been successfully applied to high-throughput biological systems, less has been done regarding its application in more applied fields of medicine; here we show an application based on standard medical diagnostic data. We apply network analysis to Class III malocclusion, one of the most difficult orofacial anomalies to understand and treat. We hypothesize that different interactions of the skeletal components can contribute to pathological disequilibrium; to test this hypothesis, we apply network analysis to 532 Class III young female patients. The topology of Class III malocclusion obtained by network analysis shows a strong co-occurrence of abnormal skeletal features. The pattern of these occurrences influences the vertical and horizontal balance of disharmony in skeletal form and position. Patients with more unbalanced orthodontic phenotypes show a preponderance of pathological skeletal nodes and a minor relevance of adaptive dentoalveolar equilibrating nodes. Furthermore, by applying Power Graph analysis we identify some functional modules among the orthodontic nodes. These modules correspond to groups of tightly inter-related features and presumably constitute the key regulators of plasticity and the sites of unbalance of the growing dentofacial Class III system. The data of the present study show that, at their most basic abstraction level, orofacial characteristics can be represented as graphs using nodes to represent orthodontic characteristics and edges to represent their various types of interactions. The application of this mathematical model could improve the interpretation of quantitative, patient-specific information and help to better target therapy. Last but not least, the methodology we have applied in analyzing orthodontic features can easily be applied to other fields of medical science. PMID:23028552
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was then applied for classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and, for comparison, the commonly used LIBSVM library of support vector machines. WorldView-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, about 2% higher than the EAPs plus principal component analysis (PCA) method, and 6% higher than APs applied to the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIBSVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIBSVM library. This study should be helpful for the classification of high-resolution multispectral satellite remote sensing images.
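For concreteness, kernel ridge regression (mathematically equivalent to regularized least squares with an RBF kernel) can stand in for the GURLS-style classifier; the features, labels, and hyperparameters below are illustrative assumptions only.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(10)
# Stand-in pixel features: EAP attribute values (area, standard deviation,
# moment of inertia) computed on two independent components -> 6 per pixel.
X = rng.standard_normal((300, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)  # two hypothetical classes

# RLS classification via regression on 0/1 labels, thresholded at 0.5.
clf = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-2).fit(X, y)
pred = (clf.predict(X) > 0.5).astype(float)
print((pred == y).mean())  # training accuracy of the toy classifier
```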
NASA Astrophysics Data System (ADS)
Ichinose, G. A.; Saikia, C. K.
2007-12-01
We applied a moment tensor (MT) analysis scheme to identify seismic sources using regional seismograms, based on the representation theorem for the elastic wave displacement field. The method is applied to estimate the isotropic (ISO) and deviatoric MT components of earthquake, volcanic, and isotropic sources within the Basin and Range Province (BRP) and the western US. The ISO components from Hoya, Bexar, Montello and Junction were compared to recent well-recorded earthquakes near Little Skull Mountain, Scotty's Junction, Eureka Valley, and Fish Lake Valley in southern Nevada. We also examined "dilatational" sources near the Mammoth Lakes Caldera and two mine collapses, including the August 2007 event in Utah recorded by USArray. Using our formulation, we first implemented the full MT inversion method on long-period filtered regional data. We also applied a grid-search technique to solve for the percentages of deviatoric and ISO moments. With the grid-search technique, high-frequency waveforms can be used together with calibrated velocity models. We modeled the ISO and deviatoric components (spall and tectonic release) as separate events delayed in time or offset in space. Calibrated velocity models helped the resolution of the ISO components and decreased the variance relative to the average, initial, or background velocity models. The centroid location and time shifts are velocity-model dependent. Models can be improved, as was done in previously published work in which we used an iterative waveform inversion method with regional seismograms from four well-recorded and well-constrained earthquakes. The resulting velocity models reduced the variance of the predicted synthetics by about 50 to 80% for frequencies up to 0.5 Hz. Tests indicate that the individual path-specific models perform better at recovering the earthquake MT solutions, even with a sparser distribution of stations, than the average or initial models.
The analysis of ensembles of moderately saturated interstellar lines
NASA Technical Reports Server (NTRS)
Jenkins, E. B.
1986-01-01
It is shown that the combined equivalent widths for a large population of Gaussian-like interstellar line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity. Two parametric distribution functions for the line populations are considered: a bivariate Gaussian for tau(0) and b, and a power-law distribution for tau(0) combined with a Gaussian dispersion for b. First, COGs for populations having an extremely large number of nonoverlapping components are derived, and the implications are shown by focusing on the doublet-ratio analysis for a pair of lines whose f-values differ by a factor of two. The consequences of having, instead of an almost infinite number of lines, a relatively small collection of components added together for each member of a doublet are examined. The theory of how the equivalent widths grow for populations of overlapping Gaussian profiles is developed. Examples of the composite COG analysis applied to existing collections of high-resolution interstellar line data are presented.
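The underlying quantity is straightforward to compute numerically. The sketch below sums equivalent widths W = ∫(1 − e^−τ(v))dv over a synthetic population of non-overlapping components with power-law τ(0) and Gaussian-distributed b, and forms a doublet ratio by doubling every τ(0); all distribution parameters are illustrative.

```python
import numpy as np

def equivalent_width(tau0, b, v_grid):
    """Equivalent width (velocity units) of one Gaussian-like component."""
    tau = tau0 * np.exp(-(v_grid / b) ** 2)
    return np.trapz(1.0 - np.exp(-tau), v_grid)

v = np.linspace(-200, 200, 4001)  # km/s

# Population: tau0 from a power law, b from a Gaussian; widths of
# non-overlapping components simply add.
rng = np.random.default_rng(1)
tau0s = rng.pareto(1.5, 500) + 0.1
bs = np.abs(rng.normal(5.0, 1.5, 500))

w_weak = sum(equivalent_width(t, b, v) for t, b in zip(tau0s, bs))
# Doubling the f-value doubles every tau0; the ratio of the two summed
# widths is the doublet ratio examined in the paper.
w_strong = sum(equivalent_width(2 * t, b, v) for t, b in zip(tau0s, bs))
print(w_weak, w_strong / w_weak)
```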
Determination of molecular weight distributions in native and pretreated wood.
Leskinen, Timo; Kelley, Stephen S; Argyropoulos, Dimitris S
2015-03-30
The analysis of native wood components by size-exclusion chromatography (SEC) is challenging: isolation, derivatization and solubilization of the wood polymers are required prior to analysis. The present approach allowed the determination of the molecular weight distributions of the carbohydrates and of lignin in native and processed woods, without preparative component isolation steps. For the first time, a component-selective SEC analysis of sawdust preparations was made possible by the combination of two selective derivatization methods, namely ionic-liquid-assisted benzoylation of the carbohydrate fraction and acetobromination of the lignin in acetic acid media, both optimized for wood samples. The developed method was used to examine changes in softwood samples after degradative mechanical and/or chemical treatments, such as ball milling, steam explosion, green liquor pulping, and chemical oxidation with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ). The methodology can also be applied to examine changes in molecular weight and lignin-carbohydrate linkages that occur during wood-based biorefinery operations, such as pretreatments and enzymatic saccharification. Copyright © 2014 Elsevier Ltd. All rights reserved.
An enhanced trend surface analysis equation for regional-residual separation of gravity data
NASA Astrophysics Data System (ADS)
Obasi, A. I.; Onwuemesi, A. G.; Romanus, O. M.
2016-12-01
Trend surface analysis is a geological term for a mathematical technique which separates a given map data set into a regional component and a local component. This work extends the derivation of the constants in the trend surface analysis equation from the familiar matrix and simultaneous-equation form to a simpler, more easily applied format. To achieve this, matrix inversion was applied to the existing equations, and the outcome was tested for suitability using a large gravity data set acquired from the Anambra Basin, south-eastern Nigeria. The field data set was tabulated using a Microsoft Excel spreadsheet, while gravity maps were generated from the data set using Oasis Montaj software. A comparison of the residual gravity map produced using the new equations with its software-derived counterpart shows that the former has a higher enhancing capacity than the latter. The equation is thus well suited to separating gravity data sets into their regional and residual components.
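As an illustration of the matrix formulation (a first-order surface fitted to synthetic stations, not the paper's exact equations), the regional component can be obtained by solving the normal equations and the residual by subtraction:

```python
import numpy as np

def trend_surface(x, y, g):
    """Fit a first-order trend surface g = a + b*x + c*y by least squares.

    Solves the normal equations (A^T A) m = A^T g by matrix inversion,
    mirroring the matrix form of the trend surface analysis equation.
    """
    A = np.column_stack([np.ones_like(x), x, y])
    m = np.linalg.solve(A.T @ A, A.T @ g)  # constants of the trend equation
    regional = A @ m
    residual = g - regional                # local (residual) component
    return m, regional, residual

# Hypothetical gravity stations: a planar regional field plus local noise.
rng = np.random.default_rng(2)
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
g = 5.0 + 0.3 * x - 0.1 * y + rng.normal(0, 0.5, 200)
coeffs, reg, res = trend_surface(x, y, g)
print(coeffs)  # should recover roughly (5.0, 0.3, -0.1)
```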
Mao, Zhi-Hua; Yin, Jian-Hua; Zhang, Xue-Xi; Wang, Xiao; Xia, Yang
2016-01-01
The Fourier transform infrared spectroscopic imaging (FTIRI) technique can be used to obtain quantitative information on the content and spatial distribution of the principal components of cartilage when combined with chemometric methods. In this study, FTIRI combined with principal component analysis (PCA) and Fisher's discriminant analysis (FDA) was applied to distinguish healthy and osteoarthritic (OA) articular cartilage samples. Ten 10-μm thick sections of canine cartilage were imaged at 6.25 μm/pixel in FTIRI. The infrared spectra extracted from the FTIR images were imported into SPSS software for PCA and FDA. Based on the first two principal components, the healthy and OA cartilage samples were effectively discriminated by the FDA, with a high accuracy of 94% for the initial samples (training set) and cross-validation, and 86.67% for the prediction group. The study also showed that cartilage degeneration gradually weakened with increasing depth. FTIRI combined with chemometrics may become an effective method for distinguishing healthy and OA cartilage in the future. PMID:26977354
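A minimal sketch of the PCA-then-FDA workflow, using scikit-learn's linear discriminant analysis in place of SPSS and random stand-in spectra:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Stand-in data: 60 infrared spectra (rows) of 400 wavenumber points each,
# labelled healthy (0) or osteoarthritic (1).
X = rng.standard_normal((60, 400))
y = np.repeat([0, 1], 30)
X[y == 1, 100:120] += 0.8  # hypothetical band-intensity difference

# PCA reduces each spectrum to 2 scores, then Fisher's discriminant
# separates the classes, matching the PCA + FDA workflow of the study.
model = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy
```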
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power-constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster-head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error-bound guarantee to compress the data while retaining a definite level of variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
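One simple way to realize the error-bound idea (a sketch under assumed Gaussian sensor data, not the authors' exact scheme) is to keep the fewest principal components whose discarded variance stays below the bound:

```python
import numpy as np

def pca_compress(X, err_bound):
    """Keep the fewest principal components whose mean squared
    reconstruction error stays below err_bound."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / len(X)
    # variance of the discarded components equals the reconstruction MSE
    for k in range(1, len(s) + 1):
        if var[k:].sum() <= err_bound:
            break
    scores = Xc @ Vt[:k].T  # transmit these instead of the raw readings
    return scores, Vt[:k], mu

# Hypothetical cluster of 8 strongly correlated sensors, 500 readings.
rng = np.random.default_rng(4)
base = rng.standard_normal((500, 1))
X = base + 0.1 * rng.standard_normal((500, 8))
scores, basis, mu = pca_compress(X, err_bound=0.05)
print(scores.shape)  # e.g. (500, 1): large reduction in transmitted data
```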
Chavez, P.S.; Kwarteng, A.Y.
1989-01-01
A challenge encountered with Landsat Thematic Mapper (TM) data, which includes data from six reflective spectral bands, is displaying as much information as possible in a three-image set for color compositing or digital analysis. Principal component analysis (PCA) applied to the six TM bands simultaneously is often used to address this problem. However, two problems that can be encountered using the PCA method are that information of interest might be mathematically mapped to one of the unused components and that a color composite can be difficult to interpret. "Selective" PCA can be used to minimize both of these problems. The spectral contrast among several spectral regions was mapped for a northern Arizona site using Landsat TM data. Field investigations determined that most of the spectral contrast seen in this area was due to one of the following: the amount of iron and hematite in the soils and rocks, vegetation differences, standing and running water, or the presence of gypsum, which has a higher moisture retention capability than do the surrounding soils and rocks. -from Authors
Analysis of Alternatives for Dismantling of the Equipment in Building 117/1 at Ignalina NPP - 13278
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poskas, Povilas; Simonis, Audrius; Poskas, Gintautas
2013-07-01
Ignalina NPP operated two RBMK-1500 reactors, which are now under decommissioning. In this paper, dismantling alternatives for the equipment in Building 117/1 are analyzed. After analysis of the situation and collection of the primary information related to the components' physical and radiological characteristics, location and other data, two different alternatives for dismantling the equipment are formulated: the first (A1), in which major components (vessels and pipes of the Emergency Core Cooling System - ECCS) are segmented/halved in situ using flame cutting (oxy-acetylene), and the second (A2), in which these components are segmented/halved at the workshop using the CAMC (Contact Arc Metal Cutting) technique. To select the preferable alternative, the MCDA method AHP (Analytic Hierarchy Process) is applied. A hierarchical list of decision criteria, necessary for assessing the performance of the alternatives, is formulated. Quantitative decision criteria values for these alternatives are calculated using the DECRAD software, developed by the Nuclear Engineering Laboratory of the Lithuanian Energy Institute, while qualitative decision criteria are evaluated using expert judgement. The analysis results show that alternative A1 is better than alternative A2. (authors)
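For readers unfamiliar with AHP, the sketch below shows the standard eigenvector computation of criteria weights and a weighted scoring of two alternatives; the pairwise-comparison matrix and the alternative scores are purely illustrative, not values from the Ignalina study.

```python
import numpy as np

# Minimal AHP sketch: criteria weights are the normalized principal
# eigenvector of a pairwise-comparison matrix (Saaty scale).
P = np.array([[1.0, 3.0, 5.0],   # hypothetical: dose vs. cost vs. duration
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigval, eigvec = np.linalg.eig(P)
w = np.real(eigvec[:, np.argmax(np.real(eigval))])
w = w / w.sum()                   # criteria weights

# Consistency check: CI / RI, with random index RI = 0.58 for n = 3.
ci = (np.max(np.real(eigval)) - 3) / (3 - 1)
print(w, ci / 0.58)

scores = np.array([[0.7, 0.4, 0.6],   # alternative A1 on each criterion
                   [0.3, 0.6, 0.4]])  # alternative A2
print(scores @ w)                     # higher value = preferred alternative
```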
NASA Astrophysics Data System (ADS)
Naritomi, Yusuke; Fuchigami, Sotaro
2011-02-01
Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, the tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by the tICA represented not a closure motion described by a largest-amplitude mode determined by the principal component analysis but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.
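The tICA computation itself reduces to a generalized eigenvalue problem between the time-lagged and instantaneous covariance matrices. The sketch below implements it on a synthetic trajectory with two slow coordinates; the lag and dimensions are arbitrary, and the actual analysis was applied to rigid-body domain coordinates of the LAO.

```python
import numpy as np

def tica(X, lag=100):
    """Time-structure based ICA: solve C(lag) v = lambda C(0) v."""
    Xc = X - X.mean(axis=0)
    c0 = Xc.T @ Xc / len(Xc)
    ct = Xc[:-lag].T @ Xc[lag:] / (len(Xc) - lag)
    ct = (ct + ct.T) / 2                  # symmetrize the lagged covariance
    evals, evecs = np.linalg.eigh(c0)
    W = evecs / np.sqrt(evals)            # whitening transform: W.T c0 W = I
    lam, u = np.linalg.eigh(W.T @ ct @ W)
    order = np.argsort(lam)[::-1]         # slowest modes first
    return lam[order], (W @ u)[:, order]

# Stand-in trajectory: 2 slow oscillatory coordinates + 4 fast noise ones.
rng = np.random.default_rng(5)
t = np.arange(50000)
slow = np.column_stack([np.sin(2e-4 * t), np.cos(1e-4 * t)])
X = np.hstack([slow, 0.5 * rng.standard_normal((t.size, 4))])
lam, modes = tica(X)
print(lam[:3])  # lag-autocorrelations; values near 1 indicate slow modes
```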
Rapid analysis of controlled substances using desorption electrospray ionization mass spectrometry.
Rodriguez-Cruz, Sandra E
2006-01-01
The recently developed technique of desorption electrospray ionization (DESI) has been applied to the rapid analysis of controlled substances. Experiments have been performed using a commercial ThermoFinnigan LCQ Advantage MAX ion-trap mass spectrometer with limited modifications. Results from the ambient sampling of licit and illicit tablets demonstrate the ability of the DESI technique to detect the main active ingredient(s) or controlled substance(s), even in the presence of other higher-concentration components. Full-scan mass spectrometry data provide preliminary identification by molecular weight determination, while rapid analysis using the tandem mass spectrometry (MS/MS) mode provides fragmentation data which, when compared to the laboratory-generated ESI-MS/MS spectral library, provide structural information and final identification of the active ingredient(s). The consecutive analysis of tablets containing different active components indicates there is no cross-contamination or interference from tablet to tablet, demonstrating the reliability of the DESI technique for rapid sampling (one tablet/min or better). Active ingredients have been detected for tablets in which the active component represents less than 1% of the total tablet weight, demonstrating the sensitivity of the technique. The real-time sampling of cannabis plant material is also presented.
Designers workbench: toward real-time immersive modeling
NASA Astrophysics Data System (ADS)
Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu
2000-05-01
This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. Traditionally, however, a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.
Roblová, Vendula; Bittová, Miroslava; Kubáň, Petr; Kubáň, Vlastimil
2016-07-01
In this work aqueous infusions from ten Mentha herbal samples (four different Mentha species and six hybrids of Mentha x piperita) and 20 different peppermint teas were screened by capillary electrophoresis with UV detection. The fingerprint separation was accomplished in a 25 mM borate background electrolyte with 10% methanol at pH 9.3. The total polyphenolic content in the extracts was determined spectrophotometrically at 765 nm by a Folin-Ciocalteu phenol assay. Total antioxidant activity was determined by scavenging of 2,2-diphenyl-1-picrylhydrazyl radical at 515 nm. The peak areas of 12 dominant peaks from CE analysis, present in all samples, and the value of total polyphenolic content and total antioxidant activity obtained by spectrophotometry was combined into a single data matrix and principal component analysis was applied. The obtained principal component analysis model resulted in distinct clusters of Mentha and peppermint tea samples distinguishing the samples according to their potential protective antioxidant effect. Principal component analysis, using a non-targeted approach with no need for compound identification, was found as a new promising tool for the screening of herbal tea products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
1987-01-01
two nodes behave identically. In GRASP, these constraints are entirely invisible from the user's point of view. GRASP is the first program implementing a new method for dynamic analysis of structures, parts of which may undergo large discrete motions, based on virtual rotations and the Levi-Civita symbol. A natural coordinatization of flexible components is the basis for this methodology, which incorporates body flexibility with the large discrete motions previously...
Halouska, Steven; Chacon, Ofelia; Fenton, Robert J.; Zinniel, Denise K.; Barletta, Raul G.; Powers, Robert
2008-01-01
D-cycloserine (DCS) is only used with multi-drug-resistant strains of tuberculosis because of serious side effects. DCS is known to inhibit cell wall biosynthesis, but the in vivo lethal target is still unknown. We have applied NMR-based metabolomics combined with principal component analysis to monitor the in vivo effect of DCS on M. smegmatis. Our analysis suggests that DCS functions by inhibiting multiple protein targets. PMID:17979227
Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Caubet, Jordi; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In this paper we describe how to apply powerful performance analysis techniques to understand the behavior of multilevel parallel applications. We use the Paraver/OMPItrace performance analysis system for our study. This system consists of two major components: the OMPItrace dynamic instrumentation mechanism, which allows the tracing of processes and threads, and the Paraver graphical user interface for inspection and analysis of the generated traces. We describe how to use the system to conduct a detailed comparative study of a benchmark code implemented in five different programming paradigms applicable for shared memory architectures.
Peleato, Nicolas M; Legge, Raymond L; Andrews, Robert C
2018-06-01
The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high-dimensional fluorescence data was related to common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, distinct results due to the applied dimensionality reduction approaches were observed, dictating a need for considering the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracies, with improvements to DBP prediction best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the potential application of neural networks to best utilize fluorescence EEM data for prediction of organic matter reactivity. Copyright © 2018 Elsevier Ltd. All rights reserved.
Santos, Sónia A O; Vilela, Carla; Freire, Carmen S R; Neto, Carlos Pascoal; Silvestre, Armando J D
2013-11-01
Ultra-high performance liquid chromatography (UHPLC) was applied for the first time to the analysis of wood extracts. The potential of this technique coupled to ion trap mass spectrometry for the rapid and effective detection and identification of bioactive components in complex vegetal samples was demonstrated. Several dozen compounds were detected in less than 30 min of analysis time, corresponding to a more than 3-fold reduction in time compared to conventional HPLC analysis of similar extracts. The phenolic chemical composition of Eucalyptus grandis, Eucalyptus urograndis (E. grandis×E. urophylla) and Eucalyptus maidenii wood extracts was assessed for the first time, with the identification of 51 phenolic compounds in the three wood extracts. Twenty of these compounds are reported for the first time as Eucalyptus genus components. Ellagic acid and ellagic acid-pentoside are the major components in all extracts, followed by gallic and quinic acids in E. grandis and E. urograndis, and ellagic acid-pentoside isomer, isorhamnetin-hexoside and gallic acid in E. maidenii. The antioxidant scavenging activity of the extracts was evaluated, with the E. grandis wood extract showing the lowest IC50 value. Moreover, the antioxidant activity of these extracts was higher than that of the commercial antioxidant BHT and of the corresponding bark extracts. These results, together with the phenolic content values, open good perspectives for the exploitation of these renewable resources as a source of valuable phenolic compounds. Copyright © 2013 Elsevier B.V. All rights reserved.
Computational Flow Analysis of a Left Ventricular Assist Device
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan; Benkowski, Robert
1995-01-01
Computational fluid dynamics has been developed to a level where it has become an indispensable part of aerospace research and design. Technology developed for aerospace applications can also be utilized for the benefit of human health. For example, a flange-to-flange rocket engine fuel-pump simulation includes the rotating and non-rotating components: the flow straighteners, the impeller, and the diffusers. A Ventricular Assist Device developed by NASA Johnson Space Center and Baylor College of Medicine has a design similar to a rocket engine fuel pump in that it also consists of a flow straightener, an impeller, and a diffuser. Accurate and detailed knowledge of the flowfield obtained by incompressible flow calculations can be greatly beneficial to designers in their effort to reduce the cost and improve the reliability of these devices. In addition to the geometric complexities, a variety of flow phenomena are encountered in biofluid devices. These include turbulent boundary layer separation, wakes, transition, tip vortex resolution, three-dimensional effects, and Reynolds number effects. In order to increase the role of Computational Fluid Dynamics (CFD) in the design process, the CFD analysis tools must be evaluated and validated so that designers gain confidence in their use. The incompressible flow solver INS3D has been applied to the flow inside liquid rocket engine turbopump components and extensively validated. This paper details how the computational flow simulation capability developed for liquid rocket engine pump component analysis has been applied to the Left Ventricular Assist Device being developed jointly by NASA JSC and Baylor College of Medicine.
Ship Speed Retrieval From Single Channel TerraSAR-X Data
NASA Astrophysics Data System (ADS)
Soccorsi, Matteo; Lehner, Susanne
2010-04-01
A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine applications and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single-channel image corresponds to an image time series with reduced resolution. This allows change detection techniques to be applied to the time series to evaluate the range and azimuth velocity components of the ship. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. To apply the technique to TerraSAR-X SpotLight (SL) data, a further processing step is needed: the phase has to be corrected as presented in Eineder et al. (2009) because of the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, validated where possible with the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.
Applying Rasch model analysis in the development of the cantonese tone identification test (CANTIT).
Lee, Kathy Y S; Lam, Joffee H S; Chan, Kit T Y; van Hasselt, Charles Andrew; Tong, Michael C F
2017-01-01
Rasch analysis was applied to evaluate the internal structure of a lexical tone perception test known as the Cantonese Tone Identification Test (CANTIT). A 75-item pool (CANTIT-75) with pictures and sound tracks was developed. Respondents were required to make a four-alternative forced choice on each item. A short version of 30 items (CANTIT-30) was developed based on fit statistics, difficulty estimates, and content evaluation. Internal structure was evaluated by fit statistics and Rasch Factor Analysis (RFA). 200 children with normal hearing and 141 children with hearing impairment were recruited. For CANTIT-75, all infit and 97% of outfit values were < 2.0. RFA revealed that 40.1% of the total variance was explained by the Rasch measure; the first residual component explained 2.5% of the total variance, with an eigenvalue of 3.1. For CANTIT-30, all infit and outfit values were < 2.0. The Rasch measure explained 38.8% of the total variance; the first residual component explained 3.9% of the total variance, with an eigenvalue of 1.9. The Rasch model provides excellent guidance for the development of short forms. Both CANTIT-75 and CANTIT-30 possess a satisfactory internal structure, providing construct validity evidence for measuring the lexical tone identification ability of Cantonese speakers.
Lee, Jong-Hwan; Oh, Sungsuk; Jolesz, Ferenc A.; Park, Hyunwook; Yoo, Seung-Schik
2010-01-01
The simultaneous acquisition of electroencephalogram (EEG) and functional MRI (fMRI) signals is potentially advantageous because of the superior resolution achieved in the temporal and spatial domains, respectively. However, ballistocardiographic artifacts, along with ocular artifacts, are a major obstacle to the detection of the EEG signatures of interest. Since the sources corresponding to these artifacts are independent of those producing the EEG signatures, we applied the Infomax-based independent component analysis (ICA) technique to separate the EEG signatures from the artifacts. The isolated EEG signatures were further utilized to model the canonical hemodynamic response functions (HRFs). Subsequently, the brain areas from which these EEG signatures originated were identified as locales of activation patterns from the analysis of the fMRI data. Upon the identification and subsequent evaluation of brain areas generating interictal epileptic discharge (IED) spikes in an epileptic subject, the presented method was successfully applied to detect the theta and alpha rhythms that are sleep onset-related EEG signatures, along with the subsequent neural circuitries, in a sleep-deprived volunteer. These results suggest that the ICA technique may be useful for the preprocessing of simultaneous EEG-fMRI acquisitions, especially when a reference paradigm is unavailable. PMID:19922343
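The separation step can be sketched as follows; FastICA is substituted here for the Infomax algorithm used in the study, the mixed signals are synthetic, and which component corresponds to the cardiac artifact would in practice be identified from its periodicity rather than assumed.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 2500)                        # 250 Hz, 10 s
alpha = np.sin(2 * np.pi * 10 * t)                  # alpha-band signature
ballisto = np.sign(np.sin(2 * np.pi * 1.2 * t))     # cardiac-locked artifact
ocular = rng.standard_normal(t.size).cumsum() / 50  # slow ocular drift
S = np.column_stack([alpha, ballisto, ocular])

A = rng.standard_normal((8, 3))                     # mixing into 8 channels
X = S @ A.T + 0.05 * rng.standard_normal((t.size, 8))

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(X)
# Zero the artifact component (here assumed, for illustration, to be
# component 1) and back-project to obtain cleaned channels.
sources[:, 1] = 0.0
X_clean = ica.inverse_transform(sources)
```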
Hermida, Juan C; Flores-Hernandez, Cesar; Hoenecke, Heinz R; D'Lima, Darryl D
2014-03-01
This study undertook a computational analysis of a wedged glenoid component for correction of retroverted glenoid arthritic deformity to determine whether a wedge-shaped glenoid component design with a built-in version correction reduces excessive stresses in the implant, cement, and glenoid bone. Recommendations for correcting retroversion deformity are asymmetric reaming of the anterior glenoid, bone grafting of the posterior glenoid, or a glenoid component with posterior augmentation. Eccentric reaming has the disadvantages of removing normal bone, reducing structural support for the glenoid component, and increasing the risk of bone perforation by the fixation pegs. Bone grafting to correct retroverted deformity does not consistently generate successful results. Finite element models of two scapulae, representing a normal and an arthritic retroverted glenoid, were implanted with a standard glenoid component (in retroversion or neutral alignment) or a wedged component. Glenohumeral forces representing in vivo loading were applied, and stresses and strains were computed in the bone, cement, and glenoid component. The retroverted glenoid components generated the highest compressive stresses and the shortest cyclic fatigue life predictions for trabecular bone. Correction of retroversion by the wedged glenoid component significantly decreased stresses and predicted a longer bone fatigue life. The cement volume estimated to survive 10 million cycles was lowest for the retroverted components and highest for neutrally implanted glenoid components and for wedged components. A wedged glenoid implant is a viable option to correct severe arthritic retroversion, reducing the need for eccentric reaming and the risk of implant failure. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)
2002-01-01
When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks for the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
Splice assembly tool and method of splicing
Silva, Frank A.
1980-01-01
A splice assembly tool for assembling component parts of an electrical conductor while producing a splice connection between electrical cables therewith, comprises a first structural member adaptable for supporting force applying means thereon, said force applying means enabling a rotary force applied manually thereto to be converted to a longitudinal force for subsequent application against a first component part of said electrical connection, a second structural member adaptable for engaging a second component part in a manner to assist said first structural member in assembling the component parts relative to one another and transmission means for conveying said longitudinal force between said first and said second structural members, said first and said second structural members being coupled to one another by said transmission means, wherein at least one of said component parts comprises a tubular elastomeric sleeve and said force applying means provides a relatively high mechanical advantage when said rotary force is applied thereto so as to facilitate assembly of said at least one tubular elastomeric sleeve about said other component part in an interference fit manner.
40 CFR 60.2891 - Do all components of these new source performance standards apply at the same time?
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR Protection of Environment, § 60.2891 (Applicability): Do all components of these new source performance standards apply at the same time? No...
ICA model order selection of task co-activation networks.
Ray, Kimberly L; McKay, D Reese; Fox, Peter M; Riedel, Michael C; Uecker, Angela M; Beckmann, Christian F; Smith, Stephen M; Fox, Peter T; Laird, Angela R
2013-01-01
Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders.
Hyperspectral imaging and multivariate analysis in the dried blood spots investigations
NASA Astrophysics Data System (ADS)
Majda, Alicja; Wietecha-Posłuszny, Renata; Mendys, Agata; Wójtowicz, Anna; Łydżba-Kopczyńska, Barbara
2018-04-01
The aim of this study was to apply a new methodology combining hyperspectral imaging with dried blood spot (DBS) collection. Hyperspectral imaging is fast and non-destructive, while the DBS method offers the advantages of micro-invasive blood collection and the low volume of sample required. In the experimental step, the reflected light was recorded by two hyperspectral systems, collecting 776 spectral bands in the VIS-NIR range (400-1000 nm) and 256 spectral bands in the SWIR range (970-2500 nm), with pixel sizes of 8 × 8 and 30 × 30 µm for the VIS-NIR and SWIR cameras, respectively. The obtained data, in the form of hyperspectral cubes, were treated with chemometric methods, i.e., minimum noise fraction and principal component analysis. It has been shown that applying these methods to this type of data and analyzing the scatter plots allows a rapid assessment of the homogeneity of the DBS and the selection of representative areas for further analysis. It also gives the possibility of tracking the dynamics of changes occurring in biological traces applied to a surface. For the 28 blood samples analyzed, the described method allowed the blood stains to be distinguished according to the time of application.
An Economic View of Food Deserts in the United States
ERIC Educational Resources Information Center
Bitler, Marianne; Haider, Steven J.
2011-01-01
Considerable policy and academic attention has been focused on the topic of food deserts. We consider this topic from an economic perspective. First, we consider how the components of a standard economic analysis apply to the study of food deserts. Second, using this economic lens, we revisit the empirical literature on food deserts to assess the…
ERIC Educational Resources Information Center
Pasquier, Jacques; Sachse, Matthias
Costing principles are applied to a university by estimating unit costs and their component factors for the university's different inputs, activities, and outputs. The information system used is designed for Fribourg University but could be applicable to other Swiss universities and could serve Switzerland's universities policy. In general, it…
Daniel J. Yelle
2017-01-01
Resorcinol-formaldehyde adhesives can reinforce stress fractures that appear from wood surface preparation. Applying the resorcinol-formaldehyde prepolymer hydroxymethylated resorcinol is thought to plasticize lignin components and stabilize stress fractures through reactions with lignin subunits and hemicelluloses in wood. In this study, a...
Joseph McCollum; Dennis Jacobs
2005-01-01
The legal foundations of the FIA (Forest Inventory and Analysis) program are laid out. Upon those foundations are built a geographical definition of the United States and its components, and how applying that definition might change from decade to decade. Along the way, the American system of weights and measures as well as the unusual geography of the Commonwealth of...
Method for assessing motor insulation on operating motors
Kueck, J.D.; Otaduy, P.J.
1997-03-18
A method for monitoring the condition of electrical-motor-driven devices is disclosed. The method is achieved by monitoring electrical variables associated with the functioning of an operating motor, applying these electrical variables to a three-phase equivalent circuit, and determining non-symmetrical faults in the operating motor based upon symmetrical components analysis techniques. 15 figs.
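The symmetrical-components decomposition referred to here is a standard transform; a minimal numeric sketch, with hypothetical unbalanced phasors, is:

```python
import numpy as np

# Decompose three phase currents into zero-, positive- and negative-sequence
# components; a non-zero negative sequence flags a non-symmetrical fault or
# insulation asymmetry.
a = np.exp(2j * np.pi / 3)
A_inv = np.array([[1, 1,     1    ],
                  [1, a,     a**2 ],
                  [1, a**2,  a    ]]) / 3

# Hypothetical unbalanced phasors (magnitudes in A, angles in degrees).
Ia = 10.0 * np.exp(1j * np.deg2rad(0))
Ib = 9.2 * np.exp(1j * np.deg2rad(-122))
Ic = 10.5 * np.exp(1j * np.deg2rad(118))

I0, I1, I2 = A_inv @ np.array([Ia, Ib, Ic])
print(abs(I2) / abs(I1))  # negative-sequence ratio as an asymmetry indicator
```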
Yongqiang Liu
2003-01-01
The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a technique of principal component analysis similar to empirical orthogonal functions (EOFs). However, it is applied to two variables simultaneously and is...
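The coupled-pattern computation can be sketched as an SVD of the cross-covariance matrix between two anomaly fields; the field sizes and the shared signal below are synthetic stand-ins for soil moisture and precipitation.

```python
import numpy as np

rng = np.random.default_rng(8)
n_t, n_s, n_p = 240, 50, 60   # months, soil-moisture points, precip points
shared = rng.standard_normal((n_t, 1))  # common slow signal coupling the fields
soil = shared @ rng.standard_normal((1, n_s)) + 0.5 * rng.standard_normal((n_t, n_s))
prec = shared @ rng.standard_normal((1, n_p)) + 0.5 * rng.standard_normal((n_t, n_p))

soil -= soil.mean(axis=0)
prec -= prec.mean(axis=0)
C = soil.T @ prec / n_t            # cross-covariance matrix of the two fields
U, s, Vt = np.linalg.svd(C, full_matrices=False)
# Columns of U and rows of Vt are the paired spatial patterns; the squared
# covariance fraction measures how dominant the leading coupled mode is.
scf = s[0] ** 2 / (s ** 2).sum()
print(scf)                          # near 1 here: one dominant coupled mode
```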
Ugliano, Maurizio
2016-12-01
This work describes the application of disposable screen-printed carbon paste sensors to the analysis of the main oxidizable compounds of white wine, as well as to the rapid fingerprinting and classification of white wines from different grape varieties. The response of individual white wine antioxidants such as flavanols, flavanol derivatives, phenolic acids, SO2 and ascorbic acid was first assessed in model wine. Analysis of commercial white wines gave voltammograms featuring two unresolved anodic waves corresponding to the oxidation of different compounds, mostly phenolic antioxidants. Calculating the first-order derivative of the measured current versus applied potential resolved these two waves, highlighting the occurrence of several electrode processes corresponding to the oxidation of individual wine components. Through the application of Principal Component Analysis (PCA), the derivative voltammograms were used to discriminate among wines of different varieties. Copyright © 2016 Elsevier Ltd. All rights reserved.
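A toy version of the processing chain (synthetic sigmoidal waves standing in for real voltammograms) shows how differentiation resolves overlapping anodic waves before PCA:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
E = np.linspace(0.0, 1.2, 600)   # applied potential (V)

def wave(E, e_half, height):
    """Sigmoidal anodic wave; two overlapping waves mimic a wine voltammogram."""
    return height / (1 + np.exp(-(E - e_half) / 0.03))

# 30 hypothetical wines from two 'varieties' differing in the second wave.
currents, labels = [], []
for i in range(30):
    h2 = 1.0 if i < 15 else 1.6
    currents.append(wave(E, 0.45, 1.0) + wave(E, 0.75, h2)
                    + 0.01 * rng.standard_normal(E.size))
    labels.append(i < 15)
I = np.array(currents)
labels = np.array(labels)

dIdE = np.gradient(I, E, axis=1)   # first derivative resolves the two waves
scores = PCA(n_components=2).fit_transform(dIdE)
# In this toy case the two 'varieties' separate along PC1.
print(scores[labels, 0].mean(), scores[~labels, 0].mean())
```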
Methods proposed to achieve air quality standards for mobile sources and technology surveillance.
Piver, W T
1975-01-01
The methods proposed to meet the 1975 standards of the Clean Air Act for mobile sources are alternative antiknocks, exhaust emission control devices, and alternative engine designs. Technology surveillance analysis applied to this situation is an attempt to anticipate potential public and environmental health problems arising from these methods before they happen. Components of this analysis are exhaust emission characterization, environmental transport and transformation, levels of public and environmental exposure, and the influence of economics on the selection of alternative methods. The purpose of this presentation is to show trends resulting from the interaction of these different components. These trends cannot be interpreted as firm predictions of what will actually happen. Such an analysis is necessary so that public and environmental health officials have the opportunity to act on potential problems before they become manifest. PMID:50944
Isolating the anthropogenic component of Arctic warming
Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...
2014-05-28
Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling, to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variability from the observed temperature. We find that anthropogenic greenhouse gas and aerosol radiative forcing and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.
Multivariate statistical analysis of stream-sediment geochemistry in the Grazer Paläozoikum, Austria
Weber, L.; Davis, J.C.
1990-01-01
The Austrian reconnaissance study of stream-sediment composition — more than 30000 clay-fraction samples collected over an area of 40000 km2 — is summarized in an atlas of regional maps that show the distributions of 35 elements. These maps, rich in information, reveal complicated patterns of element abundance that are difficult to compare on more than a small number of maps at one time. In such a study, multivariate procedures such as simultaneous R-Q mode components analysis may be helpful. They can compress a large number of variables into a much smaller number of independent linear combinations. These composite variables may be mapped and relationships sought between them and geological properties. As an example, R-Q mode components analysis is applied here to the Grazer Paläozoikum, a tectonic unit northeast of the city of Graz, which is composed of diverse lithologies and contains many mineral deposits.
Giardina, M; Castiglia, F; Tomarchio, E
2014-12-01
Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
2012-05-01
In this article, we investigate the reliability of M-for-N (M:N) shared protection systems. We focus on the reliability that is perceived by an end user of one of N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner under the condition that the failed units are not repairable. Mathematical analysis gives the closed-form solution of the reliability and mean time to failure (MTTF). We also analyse several numerical examples of the reliability and MTTF. This result can be applied, for example, to the analysis and design of an integrated circuit consisting of redundant backup components. In such a device, repairing a failed component is unrealistic. The analysis provides useful information for the design for general shared protection systems in which the failed units are not repaired.
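Under the simplest reading of this setup (i.i.d. exponential unit failures, instant replacement from the spare pool, no repair), the perceived time to failure decomposes into an Erlang spare-consumption phase plus one residual exponential lifetime; a Monte Carlo sketch, with all parameter values hypothetical:

```python
import numpy as np

def mttf_shared_protection(N, M, lam, n_runs=200_000, seed=0):
    """MTTF perceived by one user of an M-for-N shared protection system.
    While spares remain, all N positions stay populated, so spares are
    consumed at combined rate N*lam (an Erlang(M) phase); afterwards the
    tagged unit's residual life is Exp(lam) by memorylessness."""
    rng = np.random.default_rng(seed)
    spare_phase = rng.gamma(shape=M, scale=1.0 / (N * lam), size=n_runs)
    residual = rng.exponential(1.0 / lam, size=n_runs)
    return (spare_phase + residual).mean()

# Agrees with the closed form M/(N*lam) + 1/lam under these assumptions:
print(mttf_shared_protection(N=8, M=2, lam=1e-4))  # ~ 2/8e-4 + 1e4 = 12500
```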
Bispectral analysis of equatorial spread F density irregularities
NASA Technical Reports Server (NTRS)
Labelle, J.; Lund, E. J.
1992-01-01
Bispectral analysis has been applied to density irregularities at frequencies 5-30 Hz observed with a sounding rocket launched from Peru in March 1983. Unlike the power spectrum, the bispectrum contains statistical information about the phase relations between the Fourier components which make up the waveform. In the case of spread F data from 475 km the 5-30 Hz portion of the spectrum displays overall enhanced bicoherence relative to that of the background instrumental noise and to that expected due to statistical considerations, implying that the observed f^(-2.5) power-law spectrum has a significant non-Gaussian component. This is consistent with previous qualitative analyses. The bicoherence has also been calculated for simulated equatorial spread F density irregularities in approximately the same wavelength regime, and the resulting bispectrum has some features in common with that of the rocket data. The implications of this analysis for equatorial spread F are discussed, and some future investigations are suggested.
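For readers unfamiliar with the quantity, a standard segment-averaged bicoherence estimator is sketched below (the authors' exact windowing and normalization may differ):

```python
import numpy as np

def bicoherence(x, nfft=128):
    """Squared bicoherence b^2(f1, f2) by segment averaging; values near 1
    indicate quadratic phase coupling between Fourier components, i.e. a
    non-Gaussian contribution the ordinary power spectrum cannot see."""
    nseg = len(x) // nfft
    segs = np.reshape(x[:nseg * nfft], (nseg, nfft)) * np.hanning(nfft)
    X = np.fft.rfft(segs, axis=1)
    n = X.shape[1] // 2          # restrict f1 + f2 below Nyquist
    b2 = np.zeros((n, n))
    for f1 in range(n):
        for f2 in range(n):
            t = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
            num = np.abs(t.mean()) ** 2
            den = ((np.abs(X[:, f1] * X[:, f2]) ** 2).mean()
                   * (np.abs(X[:, f1 + f2]) ** 2).mean())
            b2[f1, f2] = num / (den + 1e-30)
    return b2
```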
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-08-01
NREL Analysis Insights mines our body of analysis work to synthesize topical insights and key findings. In this issue, we examine transportation systems, alternative fuels, and implications of increasing electrification of transit. Moving people and goods from point A to B has never been easier, but our current transportation systems also take a toll on our environment. Transportation currently accounts for 71% of total U.S. petroleum use and 33% of the nation’s total carbon emissions. With new technology, can we make our transportation system cleaner and more cost effective? NREL is applying its analytical expertise and imagination to do just that. Solutions start with systems thinking. Connecting the dots between physical components - vehicles, fueling stations, and highways - and institutional components - traffic laws, regulations, and vehicle standards - helps illuminate solutions that address the needs of the transportation system's many stakeholders.
Dihedral angle principal component analysis of molecular dynamics simulations.
Altis, Alexandros; Nguyen, Phuong H; Hegger, Rainer; Stock, Gerhard
2007-06-28
It has recently been suggested by Mu et al. [Proteins 58, 45 (2005)] to use backbone dihedral angles instead of Cartesian coordinates in a principal component analysis of molecular dynamics simulations. Dihedral angles may be advantageous because internal coordinates naturally provide a correct separation of internal and overall motion, which was found to be essential for the construction and interpretation of the free energy landscape of a biomolecule undergoing large structural rearrangements. To account for the circular statistics of angular variables, a transformation from the space of dihedral angles {phi(n)} to the metric coordinate space {x(n)=cos phi(n),y(n)=sin phi(n)} was employed. To study the validity and the applicability of the approach, in this work the theoretical foundations underlying the dihedral angle principal component analysis (dPCA) are discussed. It is shown that the dPCA amounts to a one-to-one representation of the original angle distribution and that its principal components can readily be characterized by the corresponding conformational changes of the peptide. Furthermore, a complex version of the dPCA is introduced, in which N angular variables naturally lead to N eigenvalues and eigenvectors. Applying the methodology to the construction of the free energy landscape of decaalanine from a 300 ns molecular dynamics simulation, a critical comparison of the various methods is given.
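The angle transformation at the heart of dPCA is compact enough to sketch; `dihedrals` is a hypothetical (n_frames, n_angles) array of backbone angles in radians:

```python
import numpy as np
from sklearn.decomposition import PCA

def dpca(dihedrals, n_components=2):
    """dPCA sketch: map each angle onto the unit circle to respect
    circular statistics, then run ordinary PCA in the (cos, sin) space."""
    z = np.concatenate([np.cos(dihedrals), np.sin(dihedrals)], axis=1)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(z)   # coordinates for the free energy landscape
    return scores, pca.explained_variance_ratio_
```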
Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis
NASA Technical Reports Server (NTRS)
Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.;
2015-01-01
The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
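The PCA step itself reduces to an SVD of the pulse records; a minimal sketch assuming a hypothetical (n_pulses, n_samples) array of baseline-subtracted records:

```python
import numpy as np

def pulse_pca(pulses, n_keep=4):
    """Decompose TES pulse records into orthogonal shape components
    ordered by variance; the leading scores mix pulse height, arrival
    time, shape change, and temperature drift."""
    centered = pulses - pulses.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = U[:, :n_keep] * s[:n_keep]   # per-pulse component amplitudes
    return scores, Vt[:n_keep]            # pulse-shape basis vectors

# A pulse-energy estimate is then formed from a combination of the leading
# scores rather than from a single optimally filtered amplitude.
```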
Warp-averaging event-related potentials.
Wang, K; Begleiter, H; Porjesz, B
2001-10-01
To align the repeated single trials of the event-related potential (ERP) in order to obtain an improved estimate of the ERP, a new implementation of dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment caused by aligning random alpha waves, by explaining amplitude differences in terms of latency differences, or by the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
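A bare-bones version of the alignment idea is sketched below; it omits the trilinear filtering and derivative-based features the authors describe, and simply warps each trial onto a template (e.g. the plain average) before averaging:

```python
import numpy as np

def dtw_path(a, b):
    """Classic O(nm) dynamic time warping; returns the optimal alignment
    path between 1-D signals a and b as (i, j) index pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = (a[i - 1] - b[j - 1]) ** 2 + min(
                D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:                 # backtrack from the corner
        path.append((i - 1, j - 1))
        k = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if k == 0 else (i - 1, j) if k == 1 else (i, j - 1)
    return path[::-1]

def warp_average(trials, template):
    """Warp each single trial onto the template, then average the aligned
    samples -- a minimal warp-averaging sketch, not the paper's method."""
    acc = np.zeros(len(template))
    cnt = np.zeros(len(template))
    for trial in trials:
        for i, j in dtw_path(template, trial):
            acc[i] += trial[j]
            cnt[i] += 1
    return acc / np.maximum(cnt, 1)
```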
Gorgulho, B M; Pot, G K; Marchioni, D M
2017-05-01
The aim of this study was to evaluate the validity and reliability of the Main Meal Quality Index when applied to the UK population. The indicator was developed to assess meal quality in different populations and is composed of 10 components: fruit, vegetables (excluding potatoes), ratio of animal protein to total protein, fiber, carbohydrate, total fat, saturated fat, processed meat, sugary beverages and desserts, and energy density, resulting in a score range of 0-100 points. The performance of the indicator was measured using strategies for assessing content validity, construct validity, discriminant validity and reliability, including principal component analysis, linear regression models and Cronbach's alpha. The indicator presented good reliability. The Main Meal Quality Index has been shown to be valid for use as an instrument to evaluate, monitor and compare the quality of meals consumed by adults in the United Kingdom.
Controlled Microwave Heating Accelerates Rolling Circle Amplification.
Yoshimura, Takeo; Suzuki, Takamasa; Mineki, Shigeru; Ohuchi, Shokichi
2015-01-01
Rolling circle amplification (RCA) generates single-stranded DNA or RNA, and the diverse applications of this isothermal technique range from the sensitive detection of nucleic acids to analysis of single nucleotide polymorphisms. Microwave chemistry is widely applied to increase reaction rate as well as product yield and purity. The objectives of the present research were to apply microwave heating to RCA and to identify factors that contribute to the microwave selective heating effect. The microwave reaction temperature was strictly controlled using a microwave applicator optimized for enzymatic-scale reactions. Here, we showed that microwave-assisted RCA reactions catalyzed by any of the four thermostable DNA polymerases were accelerated more than 4-fold compared with conventional RCA. Furthermore, the temperatures of the individual buffer components were specifically influenced by microwave heating. We concluded that microwave heating accelerated isothermal RCA of DNA because of the differential heating mechanisms of microwaves on the temperatures of reaction components, although the overall reaction temperatures were the same.
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
Selection of solubility parameters for characterization of pharmaceutical excipients.
Adamska, Katarzyna; Voelkel, Adam; Héberger, Károly
2007-11-09
The solubility parameter (δ2), corrected solubility parameter (δT) and its components (δd, δp, δh) were determined for a series of pharmaceutical excipients by using inverse gas chromatography (IGC). Principal component analysis (PCA) was applied for the selection of the solubility parameters which assure the complete characterization of the examined materials. Application of PCA suggests that a complete description of the examined materials is achieved with four solubility parameters, i.e. δ2 and the Hansen solubility parameters (δd, δp, δh). Selection of excipients through PCA of their solubility parameter data can be used to predict their behavior in a multi-component system, e.g. for selection of the best materials to form stable pharmaceutical liquid mixtures or a stable coating formulation.
A baseline drift detrending technique for fast scan cyclic voltammetry.
DeWaele, Mark; Oh, Yoonbae; Park, Cheonho; Kang, Yu Min; Shin, Hojin; Blaha, Charles D; Bennet, Kevin E; Kim, In Young; Lee, Kendall H; Jang, Dong Pyo
2017-11-06
Fast scan cyclic voltammetry (FSCV) has been commonly used to measure extracellular neurotransmitter concentrations in the brain. Due to the unstable nature of the background currents inherent in FSCV measurements, analysis of FSCV data is limited to very short spans of time using traditional background subtraction. In this paper, we propose the use of a zero-phase high pass filter (HPF) as the means to remove the background drift. Instead of the traditional method of low pass filtering across voltammograms to increase the signal to noise ratio, a HPF with a low cutoff frequency was applied to the temporal dataset at each voltage point to remove the background drift. As a result, the HPF utilizing cutoff frequencies between 0.001 Hz and 0.01 Hz could be effectively applied to a set of FSCV data to remove drifting patterns while preserving the temporal kinetics of the phasic dopamine response recorded in vivo. In addition, the HPF was found to be significantly more effective in reducing the drift (unpaired t-test, p < 0.0001, t = 10.88) when applied to data collected from Tris buffer over 24 hours, although a drift removal method using principal component analysis also reduced the background drift effectively. The HPF was also applied to 5 hours of FSCV in vivo data. Electrically evoked dopamine peaks, observed in the nucleus accumbens, were clearly visible even without background subtraction. This technique provides a new, simple, and yet robust approach to analyse FSCV data with an unstable background.
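The detrending step maps directly onto SciPy's zero-phase filtering; a sketch assuming `data` holds one voltammogram per scan, row-wise, at a typical 10 Hz scan repetition rate (the names and the rate are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detrend_fscv(data, scan_rate_hz=10.0, cutoff_hz=0.005, order=2):
    """Remove FSCV background drift with a zero-phase high-pass filter.
    `data` is (n_scans, n_voltage_points); the filter runs along time
    (axis 0) independently at each voltage point, and filtfilt's
    forward-backward pass gives zero phase distortion."""
    b, a = butter(order, cutoff_hz / (scan_rate_hz / 2.0), btype='highpass')
    return filtfilt(b, a, data, axis=0)
```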
A review of machine learning in obesity.
DeGregory, K W; Kuiper, P; DeSilvio, T; Pleuss, J D; Miller, R; Roginski, J W; Fisher, C B; Harness, D; Viswanath, S; Heymsfield, S B; Dungan, I; Thomas, D M
2018-05-01
Rich sources of obesity-related data arising from sensors, smartphone apps, electronic medical health records and insurance data can bring new insights for understanding, preventing and treating obesity. For such large datasets, machine learning provides sophisticated and elegant tools to describe, classify and predict obesity-related risks and outcomes. Here, we review machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning and decision tree analysis. We also review methods that describe and characterize data, such as cluster analysis, principal component analysis, network science and topological data analysis. We introduce each method with a high-level overview followed by examples of successful applications. The algorithms were then applied to data from the National Health and Nutrition Examination Survey to demonstrate methodology, utility and outcomes. The strengths and limitations of each method were also evaluated. This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity.
Separation of mixtures of chemical elements in plasma
NASA Astrophysics Data System (ADS)
Dolgolenko, D. A.; Muromkin, Yu A.
2017-10-01
This paper reviews proposals on the plasma processing of radioactive waste (RW) and spent nuclear fuel (SNF). The chemical processing of SNF based on the extraction of its components from water solutions is rather expensive and produces new waste. The paper considers experimental research on plasma separation of mixtures of chemical elements and isotopes, whose results can help evaluate the plasma methods of RW and SNF reprocessing. The analysis identifies the difference between ionization levels of RW and SNF components at their transition to the plasma phase as a reason why all plasma methods are difficult to apply.
Parameters modelling of amaranth grain processing technology
NASA Astrophysics Data System (ADS)
Derkanosova, N. M.; Shelamova, S. A.; Ponomareva, I. N.; Shurshikova, G. V.; Vasilenko, O. A.
2018-03-01
The article presents a technique for calculating the structure of a multicomponent bakery mixture for the production of enriched products, taking into account the instability of nutrient content and ensuring the fulfilment of technological requirements while, at the same time, considering consumer preferences. The results of modelling and analysis of optimal solutions are given by the example of calculating the structure of a three-component mixture of wheat and rye flour with an enriching component, whole-hulled amaranth flour, applied to a technology for bread made from a rye and wheat flour mixture on a liquid leaven.
Service evaluation of aircraft composite structural components
NASA Technical Reports Server (NTRS)
Brooks, W. A., Jr.; Dow, M. B.
1973-01-01
The advantages of the use of composite materials in structural applications have been identified in numerous engineering studies. Technology development programs are underway to correct known deficiencies and to provide needed improvements. However, in the final analysis, flight service programs are necessary to develop broader acceptance of, and confidence in, any new class of materials such as composites. Such flight programs, initiated by NASA Langley Research Center, are reviewed. These programs which include the selectively reinforced metal and the all-composite concepts applied to both secondary and primary aircraft structural components, are described and current status is indicated.
NASA Astrophysics Data System (ADS)
Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.
2014-04-01
This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally, and at two specific locations, the GRIP site at Summit, Central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30 year time slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea-surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather- or climate-driven noise in the 10Be deposition flux during different stages of climate. The production signal varies at lower frequencies, dominated by the 11 year solar cycle within the 30 year timescale of these experiments. The climatic noise is of higher frequencies than 11 years during the 30 year period studied. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. Then, we apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, which is an effective method for adaptively decomposing the time series into different frequency components. The low-frequency components and the long-term trend represent production and have reduced noise compared to the entire frequency spectrum of the deposition. The high-frequency components represent climate-driven noise related to the seasonal cycle of e.g. precipitation and are closely connected to high frequencies of precipitation. These results firstly show that the 10Be atmospheric production signal is preserved in the deposition flux to the surface even during climates very different from today's, both in global data and at two specific locations. Secondly, noise can be effectively reduced from 10Be deposition data by simply applying the EOF analysis in the case of a reasonably large number of available data sets, or by decomposing the individual data sets to filter out high-frequency fluctuations.
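The EOF step is ordinary PCA of the space-time field; a minimal sketch assuming a hypothetical `field` array of annual 10Be deposition, (n_times, n_gridpoints):

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """EOFs of a space-time field via SVD of the time anomalies. Per the
    study, the first mode would carry the solar production signal and the
    trailing modes the precipitation-driven noise."""
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    explained = s**2 / np.sum(s**2)        # variance fraction per mode
    pcs = U[:, :n_modes] * s[:n_modes]     # principal component time series
    eofs = Vt[:n_modes]                    # spatial patterns
    return eofs, pcs, explained[:n_modes]
```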
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis.
Lotfy, Hayam M; Saleh, Sarah S; Hassan, Nagiba Y; Salem, Hesham
2015-01-01
Novel spectrophotometric methods were applied for the determination of the minor component tetryzoline HCl (TZH) in its ternary mixture with ofloxacin (OFX) and prednisolone acetate (PA) in the ratio of 1:5:7.5, and in its binary mixture with sodium cromoglicate (SCG) in the ratio of 1:80. The novel spectrophotometric methods determined the minor component (TZH) successfully in the two selected mixtures by computing the geometrical relationship of either standard addition or subtraction. The novel spectrophotometric methods are: geometrical amplitude modulation (GAM), geometrical induced amplitude modulation (GIAM), ratio H-point standard addition method (RHPSAM) and compensated area under the curve (CAUC). The proposed methods were successfully applied for the determination of the minor component TZH below its concentration range. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, and no significant difference was observed. No difference was observed between the obtained results and those of the reported HPLC method, which proved that the developed methods could serve as an alternative to HPLC techniques in quality control laboratories.
A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings
Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng
2016-01-01
The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform, based on decomposing the analyzed signals into transient impact components and high oscillation components, is utilized in this work. The former become sparser than the raw signals with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with the frequencies of interest are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples.
NASA Astrophysics Data System (ADS)
Sollberger, David; Greenhalgh, Stewart A.; Schmelzbach, Cedric; Van Renterghem, Cédéric; Robertsson, Johan O. A.
2018-04-01
We provide a six-component (6-C) polarization model for P-, SV-, SH-, Rayleigh-, and Love-waves both inside an elastic medium as well as at the free surface. It is shown that single-station 6-C data comprised of three components of rotational motion and three components of translational motion provide the opportunity to unambiguously identify the wave type, propagation direction, and local P- and S-wave velocities at the receiver location by use of polarization analysis. To extract such information by conventional processing of three-component (3-C) translational data would require large and dense receiver arrays. The additional rotational components allow the extension of the rank of the coherency matrix used for polarization analysis. This enables us to accurately determine the wave type and wave parameters (propagation direction and velocity) of seismic phases, even if more than one wave is present in the analysis time window. This is not possible with standard, pure-translational 3-C recordings. In order to identify modes of vibration and to extract the accompanying wave parameters, we adapt the multiple signal classification algorithm (MUSIC). Due to the strong nonlinearity of the MUSIC estimator function, it can be used to detect the presence of specific wave types within the analysis time window at very high resolution. We show how the extracted wavefield properties can be used, in a fully automated way, to separate the wavefield into its different wave modes using only a single 6-C recording station. As an example, we apply the method to remove surface wave energy while preserving the underlying reflection signal and to suppress energy originating from undesired directions, such as side-scattered waves.
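A narrowband MUSIC sketch for the single-station 6-C case, assuming one dominant source in the window; the steering vectors encoding the paper's polarization models (wave type, direction, velocity) are left as an input, and all names are hypothetical:

```python
import numpy as np

def music_pseudospectrum(window, steering):
    """`window`: (6, n_samples) 6-C record; `steering`: (n_candidates, 6)
    bank of model polarization vectors. Returns the MUSIC estimator
    1 / ||En^H a||^2, sharply peaked where a candidate's polarization
    matches the data's signal subspace."""
    C = window @ window.conj().T / window.shape[1]   # 6x6 coherency matrix
    w, V = np.linalg.eigh(C)                         # eigenvalues ascending
    En = V[:, :-1]                                   # noise subspace (rank-1 signal)
    a = steering / np.linalg.norm(steering, axis=1, keepdims=True)
    return 1.0 / (np.sum(np.abs(a.conj() @ En) ** 2, axis=1) + 1e-12)
```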
Valero, E; Sanz, J; Martínez-Castro, I
2001-06-01
Direct thermal desorption (DTD) has been used as a technique for extracting volatile components of cheese as a preliminary step to their gas chromatographic (GC) analysis. In this study, it is applied to different cheese varieties: Camembert, blue, Chaumes, and La Serena. Volatiles are also extracted using other techniques such as simultaneous distillation-extraction and dynamic headspace. Separation and identification of the cheese components are carried out by GC-mass spectrometry. Approximately 100 compounds are detected in the examined cheeses. The described results show that DTD is fast, simple, and easy to automate; requires only a small amount of sample (approximately 50 mg); and affords quantitative information about the main groups of compounds present in cheeses.
A topological multilayer model of the human body.
Barbeito, Antonio; Painho, Marco; Cabral, Pedro; O'Neill, João
2015-11-04
Geographical information systems deal with spatial databases in which topological models are described with alphanumeric information, and their graphical interfaces implement the multilayer concept and provide powerful interaction tools. In this study, we apply these concepts to the human body, creating a representation that allows an interactive, precise, and detailed anatomical study. A vector surface component of the human body is built using a three-dimensional (3-D) reconstruction methodology. The multilayer concept is implemented by associating raster components with the corresponding vector surfaces, which include neighbourhood topology enabling spatial analysis. A root mean square error of 0.18 mm validated the three-dimensional reconstruction technique for internal anatomical structures. The expanded identification capability and a new neighbourhood analysis function are the tools provided by this model.
Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu
2015-01-01
It is necessary to detect target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflective wave to a large extent. Then we use the region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results.
Modal Identification in an Automotive Multi-Component System Using HS 3D-DIC
López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.
2018-01-01
The modal characterization of automotive lighting systems is difficult with contact sensors due to the light weight of the elements that compose the component, as well as the intricate access required to attach them. In experimental modal analysis, high speed 3D digital image correlation (HS 3D-DIC) is attracting attention since it provides full-field contactless measurement of 3D displacements, its main advantage over other techniques. Different methodologies have been published that perform modal identification, i.e., natural frequencies, damping ratios, and mode shapes, using this full-field information. In this work, experimental modal analysis has been performed on a multi-component automotive lighting system using HS 3D-DIC. Base motion excitation was applied to simulate operating conditions. A recently validated methodology has been employed for modal identification using transmissibility functions, i.e., the transfer functions from base motion tests. The results make it possible to identify the local and global behavior of the different elements of injected polymeric and metallic materials.
Virtual Laboratories to Achieve Higher-Order Learning in Fluid Mechanics
NASA Astrophysics Data System (ADS)
Ward, A. S.; Gooseff, M. N.; Toto, R.
2009-12-01
Bloom’s higher-order cognitive skills (analysis, evaluation, and synthesis) are recognized as necessary in engineering education, yet these are difficult to achieve in traditional lecture formats. Laboratory components supplement traditional lectures in an effort to emphasize active learning and provide higher-order challenges, but these laboratories are often subject to the constraints of (a) increasing student enrollment, (b) limited funding for operational, maintenance, and instructional expenses and (c) increasing demands on undergraduate student credit requirements. Here, we present results from a pilot project implementing virtual (or online) laboratory experiences as an alternative to a traditional laboratory experience in Fluid Mechanics, a required third year course. Students and faculty were surveyed to identify the topics that were most difficult, and virtual laboratory and design components developed to supplement lecture material. Each laboratory includes a traditional lab component, requiring student analysis and evaluation. The lab concludes with a design exercise, which imposes additional problem constraints and allows students to apply their laboratory observations to a real-world situation.
Detailed analysis and test correlation of a stiffened composite wing panel
NASA Technical Reports Server (NTRS)
Davis, D. Dale, Jr.
1991-01-01
Nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings supplied by the Bell Helicopter Textron Corporation, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain (ANS) elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis. Strain predictions from both the linear and nonlinear stress analyses are shown to compare well with experimental data up through the Design Ultimate Load (DUL) of the panel. However, due to the extreme nonlinear response of the panel, the linear analysis was not accurate at loads above the DUL. The nonlinear analysis more accurately predicted the strain at high values of applied load, and even predicted complicated nonlinear response characteristics, such as load reversals, at the observed failure load of the test panel. In order to understand the failure mechanism of the panel, buckling and first-ply failure analyses were performed. The buckling load was 17 percent above the observed failure load, while first-ply failure analyses indicated significant material damage at and below the observed failure load.
NASA Technical Reports Server (NTRS)
Fiske, David R.
2004-01-01
In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.
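For orientation only, a straightforward interpolate-and-integrate baseline is sketched below (this is not Misner's algorithm, which is designed to avoid exactly this kind of spherical quadrature); the coordinate sphere must lie inside the grid:

```python
import numpy as np
from scipy.special import sph_harm
from scipy.interpolate import RegularGridInterpolator

def sph_components(grid_vals, axes, radius, lmax=4, ntheta=64, nphi=128):
    """Spherical harmonic components of cubic-grid data: interpolate onto
    a coordinate sphere, then integrate f * conj(Ylm) with a product rule
    (midpoint in the polar angle, periodic trapezoid in azimuth)."""
    interp = RegularGridInterpolator(axes, grid_vals)
    th = (np.arange(ntheta) + 0.5) * np.pi / ntheta          # polar angle
    ph = np.arange(nphi) * 2.0 * np.pi / nphi                # azimuth
    TH, PH = np.meshgrid(th, ph, indexing='ij')
    pts = radius * np.stack([np.sin(TH) * np.cos(PH),
                             np.sin(TH) * np.sin(PH),
                             np.cos(TH)], axis=-1)
    f = interp(pts.reshape(-1, 3)).reshape(TH.shape)
    dA = np.sin(TH) * (np.pi / ntheta) * (2.0 * np.pi / nphi)
    # SciPy's sph_harm takes the azimuthal angle first, then the polar angle.
    return {(l, m): np.sum(f * np.conj(sph_harm(m, l, PH, TH)) * dA)
            for l in range(lmax + 1) for m in range(-l, l + 1)}
```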
Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers
NASA Astrophysics Data System (ADS)
Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu
2018-02-01
Fault characteristics of power electronic components are of great importance for any power electronic device, and especially for those applied in power systems. The topology structure and control method of the Distribution Electronic Power Transformer (D-EPT) are introduced, and an exploration of the fault types and fault characteristics of IGBT and diode failures is presented. Analysis and simulation of the characteristics of the different fault types lead to the D-EPT fault location scheme.
Determination of micro amounts of iron, aluminum, and alkaline earth metals in silicon carbide
NASA Technical Reports Server (NTRS)
Hirata, H.; Arai, M.
1978-01-01
A colorimetric method for the analysis of microcomponents in silicon carbide used as the raw material for varistors is described. The microcomponents analyzed included iron soluble in hydrochloric acid, iron, aluminum, calcium and magnesium. Samples were analyzed by the method, and the results for iron and aluminum agreed well with the N.B.S. standard values and the values obtained by the other company. The method can therefore be applied to the analysis of actual samples.
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas, and are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
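One building block of such a workflow, sketched with synthetic stand-in data (the paper additionally couples areas through cross-correlations and PCA, which this omits):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
t = np.arange(500)
# Synthetic hourly load with a daily cycle (stand-in for one area's data).
load = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

fit = ARIMA(load, order=(2, 0, 2)).fit()
point = fit.forecast(steps=24)                       # 24-h point forecast
sims = fit.simulate(nsimulations=24, repetitions=200, anchor='end')
paths = np.asarray(sims).reshape(24, -1)             # one column per path
lo, hi = np.percentile(paths, [5, 95], axis=1)       # uncertainty band
```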
Dynamical density functional theory analysis of the laning instability in sheared soft matter.
Scacchi, A; Archer, A J; Brader, J M
2017-12-01
Using dynamical density functional theory (DDFT) methods we investigate the laning instability of a sheared colloidal suspension. The nonequilibrium ordering at the laning transition is driven by nonaffine particle motion arising from interparticle interactions. Starting from a DDFT which incorporates the nonaffine motion, we perform a linear stability analysis that enables identification of the regions of parameter space where lanes form. We illustrate our general approach by applying it to a simple one-component fluid of soft penetrable particles.
Turbine blade tip durability analysis
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Laflen, J. H.; Spamer, G. T.
1981-01-01
An air-cooled turbine blade from an aircraft gas turbine engine, chosen for its history of cracking, was subjected to advanced analytical and life-prediction techniques. The utility of advanced structural analysis techniques and advanced life-prediction techniques in the life assessment of hot section components is verified. Three-dimensional heat transfer and stress analyses were applied to the turbine blade mission cycle and the results were input into advanced life-prediction theories. Shortcut analytical techniques were developed. The proposed life-prediction theories are evaluated.
A transverse Kelvin-Helmholtz instability in a magnetized plasma
NASA Technical Reports Server (NTRS)
Kintner, P.; Dangelo, N.
1977-01-01
An analysis is conducted of the transverse Kelvin-Helmholtz instability in a magnetized plasma for unstable flute modes. The analysis makes use of a two-fluid model. Details regarding the instability calculation are discussed, taking into account the ion continuity and momentum equations, the solution of a zero-order and a first-order component, and the properties of the solution. It is expected that the linear calculation conducted will apply to situations in which the plasma has experienced no more than a few growth periods.
Yan, Caixia; Liu, Huihui; Sheng, Yanru; Huang, Xian; Nie, Minghua; Huang, Qi; Baalousha, Mohammed
2018-10-01
Characterization of natural colloids is key to understanding pollutant fate and transport in the environment. The present study investigates the relationship between size and fluorescence properties of colloidal organic matter (COM) from five tributaries of Poyang Lake. Colloids were size-fractionated using cross-flow ultrafiltration and their fluorescence properties were measured by three-dimensional excitation-emission matrix fluorescence spectroscopy (3D-EEM). Parallel factor analysis (PARAFAC) and/or self-organizing maps (SOM) were applied to assess fluorescence properties as proxy indicators for the different colloid size fractions. PARAFAC analysis identified four fluorescence components, including three humic-like components (C1-C3) and a protein-like component (C4). These four fluorescence components, and in particular the protein-like component, are primarily present in the <1 kDa phase. For the colloidal fractions (1-10 kDa, 10-100 kDa, and 100 kDa-0.7 μm), the majority of fluorophores are associated with the smallest size fraction. SOM analysis demonstrated that relatively high fluorescence intensity and aromaticity occur primarily in the <1 kDa phase, followed by 1-10 kDa colloids. Coupling PARAFAC and SOM facilitates the visualization and interpretation of the relationship between colloidal size and fluorescence properties with fewer input variables, shorter running time, higher reliability, and nondestructive results. Fluorescence indices analysis reveals that the smallest colloidal fraction (1-10 kDa) was dominated by more highly humified and less autochthonous COM.
Q-mode versus R-mode principal component analysis for linear discriminant analysis (LDA)
NASA Astrophysics Data System (ADS)
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2017-05-01
Much of the literature applies Principal Component Analysis (PCA) as a preliminary visualization method, a variable construction method, or both. The focus of PCA can be on the samples (R-mode PCA) or on the variables (Q-mode PCA). Traditionally, R-mode PCA has been the usual approach to reduce high-dimensional data before the application of Linear Discriminant Analysis (LDA) to solve classification problems. The output from PCA is composed of two new matrices known as the loadings and scores matrices. Each matrix can be used to produce a plot: the loadings plot aids identification of important variables, whereas the scores plot presents the spatial distribution of samples on new axes, also known as Principal Components (PCs). Fundamentally, the scores matrix always provides the input variables for building a classification model. A recent paper used Q-mode PCA, but the focus of the analysis was not on the variables but on the samples. As a result, the authors exchanged the use of the loadings and scores plots: clustering of samples was studied using the loadings plot, whereas the scores plot was used to identify important manifest variables. Therefore, the aim of this study is to statistically validate the proposed practice. Evaluation is based on the performance of the external error obtained from LDA models as a function of the number of PCs. In addition, bootstrapping was conducted to evaluate the external error of each of the LDA models. Results show that LDA models produced with PCs from R-mode PCA give logical performance with unbiased external errors, whereas those produced with Q-mode PCA show the opposite. We therefore conclude that PCs produced from Q-mode PCA are not statistically stable and should not be applied to problems of classifying samples, but rather variables. We hope this paper provides some insight into these disputable issues.
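The R-mode pipeline the authors endorse is easy to state in code; a sketch with stand-in data, evaluating the external (cross-validated) error as a function of the number of PCs:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X = np.random.rand(60, 200)            # 60 samples x 200 variables (stand-in)
y = np.repeat([0, 1, 2], 20)           # three classes

for n_pc in (2, 5, 10):
    # R-mode PCA: samples are projected onto PCs; the scores feed the LDA.
    model = make_pipeline(PCA(n_components=n_pc), LinearDiscriminantAnalysis())
    err = 1 - cross_val_score(model, X, y, cv=5).mean()   # external error
    print(n_pc, round(err, 3))

# Q-mode PCA (PCA of the transposed matrix) yields scores that describe
# the variables, not the samples, and so cannot feed a sample classifier.
```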
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang
2014-05-01
Higher-order ionospheric (HOI) corrections are proposed to become a standard part of precise GPS data analysis. In this study, we investigate in depth the impact of HOI corrections on coordinate time series by re-processing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in our three processing runs: (a) run NO, without HOI corrections; (b) run IG, with both second- and third-order corrections modeled using the International Geomagnetic Reference Field 11 (IGRF11) for the magnetic field; and (c) run ID, the same as IG but with a dipole magnetic model applied. Both spectral analysis and noise analysis are adopted to investigate these effects. Results show that for CMONOC stations, HOI corrections bring an overall improvement. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing a more remarkable variation. Low-latitude sites are more affected, and the impacts vary across the coordinate components. An analysis of stacked periodograms shows a good match between the seasonal amplitudes and the HOI corrections, and the observed variations in the coordinate time series are related to HOI effects. HOI delays partially explain the seasonal amplitudes in the coordinate time series, especially for the U component. The annual amplitudes of all components decrease for over one-half of the selected CMONOC sites, and the semi-annual amplitudes are affected even more strongly. However, when the dipole model is used, the results are not as good as with the IGRF model: HOI corrections based on the dipole model increase the noise amplitudes and can generate false periodic signals, introducing larger residuals and noise rather than effective improvements.
Application of blind source separation to real-time dissolution dynamic nuclear polarization.
Hilty, Christian; Ragavan, Mukundan
2015-01-20
The use of a blind source separation (BSS) algorithm is demonstrated for the analysis of time series of nuclear magnetic resonance (NMR) spectra. This type of data is commonly obtained from experiments where analytes are hyperpolarized using dissolution dynamic nuclear polarization (D-DNP), in both in vivo and in vitro contexts. High signal gains in D-DNP enable rapid measurement of data sets characterizing the time evolution of chemical or metabolic processes. BSS is based on an algorithm that can be applied to separate the different components contributing to the NMR signal and determine the time dependence of the signals from these components. This algorithm requires minimal prior knowledge of the data; notably, no reference spectra need to be provided, and it can therefore be applied rapidly. In a time-resolved measurement of the enzymatic conversion of hyperpolarized oxaloacetate to malate, the two signal components are separated into computed source spectra that closely resemble the spectra of the individual compounds. An improvement in the signal-to-noise ratio of the computed source spectra is found compared to the original spectra, presumably resulting from the presence of each signal more than once in the time series. The reconstruction of the original spectra yields the time evolution of the contributions from the two sources, which also corresponds closely to the time evolution of integrated signal intensities from the original spectra. BSS may therefore be an approach for the efficient identification of components and estimation of kinetics in D-DNP experiments, which can be applied at a high level of automation.
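The abstract does not name the specific BSS algorithm, so the sketch below substitutes non-negative matrix factorization, which suits magnitude NMR spectra (nonnegative) and likewise needs no reference spectra; it is a stand-in, not the authors' method:

```python
import numpy as np
from sklearn.decomposition import NMF

def separate_sources(spectra, n_sources=2):
    """Factor a time series of nonnegative spectra (n_times, n_points)
    into source spectra and their time-dependent weights."""
    model = NMF(n_components=n_sources, init='nndsvd', max_iter=2000)
    weights = model.fit_transform(spectra)   # kinetics: contribution vs. time
    sources = model.components_              # computed source spectra
    return weights, sources
```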
Gu, Qun; David, Frank; Lynen, Frédéric; Rumpel, Klaus; Dugardeyn, Jasper; Van Der Straeten, Dominique; Xu, Guowang; Sandra, Pat
2011-05-27
In this paper, automated sample preparation, retention time locked gas chromatography-mass spectrometry (GC-MS) and data analysis methods for metabolomics studies were evaluated. A miniaturized and automated derivatisation method using sequential oximation and silylation was applied to a polar extract of 4 types (2 types × 2 ages) of Arabidopsis thaliana, a popular model organism often used in plant sciences and genetics. Automation of the derivatisation process offers excellent repeatability, and the time between sample preparation and analysis was short and constant, reducing artifact formation. Retention time locked (RTL) gas chromatography-mass spectrometry was used, resulting in reproducible retention times and GC-MS profiles. Two approaches were compared for data analysis: XCMS followed by principal component analysis (approach 1), and AMDIS deconvolution combined with a commercially available program (Mass Profiler Professional) followed by principal component analysis (approach 2). Several features that were up- or down-regulated in the different types were detected.