EXTRACTING PRINCIPLE COMPONENTS FOR DISCRIMINANT ANALYSIS OF FMRI IMAGES.
Liu, Jingyu; Xu, Lai; Caprihan, Arvind; Calhoun, Vince D
2008-05-12
This paper presents an approach for selecting optimal components for discriminant analysis. Such an approach is useful when further detailed analyses for discrimination or characterization require dimensionality reduction. Our approach can accommodate a categorical variable such as diagnosis (e.g. schizophrenic patient or healthy control), or a continuous variable such as severity of the disorder. This information is utilized as a reference for measuring a component's discriminant power after principal component decomposition. After sorting each component according to its discriminant power, we extract the best components for discriminant analysis. An application of our reference selection approach is shown using a functional magnetic resonance imaging data set in which the sample size is much less than the dimensionality. The results show that the reference selection approach provides an improved discriminant component set as compared to other approaches. Our approach is general and provides a solid foundation for further discrimination and classification studies. PMID:20582334
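A minimal sketch of the reference-based component ranking described above, using synthetic data; scikit-learn's PCA and the absolute-correlation discriminant score are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5000))          # 40 scans, 5000 voxels (sample size << dimensionality)
y = rng.integers(0, 2, size=40)          # reference variable: diagnosis (0/1) or severity score

pca = PCA(n_components=20)
scores = pca.fit_transform(X)            # subject-by-component score matrix

# Rank components by discriminant power: |correlation| between each
# component's subject scores and the reference variable.
power = [abs(np.corrcoef(scores[:, k], y)[0, 1]) for k in range(scores.shape[1])]
order = np.argsort(power)[::-1]
selected = scores[:, order[:5]]          # best components kept for discriminant analysis
print("component ranking:", order[:5])
```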
Jesse, Stephen; Kalinin, Sergei V
2009-02-25
An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
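A small illustration of using PCA to de-noise and compress a spectroscopic-imaging data set, as described above; the toy data cube and the choice of three retained components are assumptions for the sketch:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Toy spectroscopic-imaging cube: 32 x 32 pixels, 200 spectral points per pixel
clean = np.outer(np.sin(np.linspace(0, 3, 32 * 32)), np.hanning(200))
data = clean + 0.05 * rng.normal(size=clean.shape)

pca = PCA(n_components=3)                 # keep only the few high-variance components
scores = pca.fit_transform(data)
denoised = pca.inverse_transform(scores)  # low-rank reconstruction = de-noised, compressed cube
print("retained variance:", pca.explained_variance_ratio_.sum())
```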
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.
2018-01-01
Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis system approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon to have observations treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies into the overall system. Connected to these issues is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. The mere realization that the ensemble analysis makes substantially different use of observations than its hybrid counterpart should serve as sufficient evidence of the implausibility of such an expectation. This presentation assembles numerous pieces of anecdotal evidence to illustrate that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. More simply, this work suggests that hybrid systems can be constructed reliably without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost and maintenance perspective. More generally, single-analysis systems avoid contradictions such as having to choose one sub-component to generate performance diagnostics for another, possibly not fully consistent, component.
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
Order-crossing removal in Gabor order tracking by independent component analysis
NASA Astrophysics Data System (ADS)
Guo, Yu; Tan, Kok Kiong
2009-08-01
Order-crossing problems in Gabor order tracking (GOT) of rotating machinery often occur when noise due to power-frequency interference, local structural resonance, etc., is prominent in applications. They can render the analysis results and the waveform-reconstruction tasks in GOT inaccurate or even meaningless. An approach is proposed in this paper to address the order-crossing problem by independent component analysis (ICA). With this approach, accurate order analysis results can be obtained and the waveforms of the order components of interest can be reconstructed or extracted from the recorded noisy data series. In addition, the ambiguities (permutation and scaling) of ICA results are also resolved by the approach. The approach is amenable to applications in condition monitoring and fault diagnosis of rotating machinery. The evaluation of the approach is presented in detail based on simulations and an experiment on a rotor test rig. The results obtained using the proposed approach are compared with those obtained using standard GOT. The comparison shows that the presented approach is more effective at solving order-crossing problems in GOT.
Semi-blind Bayesian inference of CMB map and power spectrum
NASA Astrophysics Data System (ADS)
Vansyngel, Flavien; Wandelt, Benjamin D.; Cardoso, Jean-François; Benabed, Karim
2016-04-01
We present a new blind formulation of the cosmic microwave background (CMB) inference problem. The approach relies on a phenomenological model of the multifrequency microwave sky without the need for physical models of the individual components. For all-sky and high resolution data, it unifies parts of the analysis that had previously been treated separately such as component separation and power spectrum inference. We describe an efficient sampling scheme that fully explores the component separation uncertainties on the inferred CMB products such as maps and/or power spectra. External information about individual components can be incorporated as a prior giving a flexible way to progressively and continuously introduce physical component separation from a maximally blind approach. We connect our Bayesian formalism to existing approaches such as Commander, spectral mismatch independent component analysis (SMICA), and internal linear combination (ILC), and discuss possible future extensions.
Sun, Hui; Wang, Huiyu; Zhang, Aihua; Yan, Guangli; Han, Ying; Li, Yuan; Wu, Xiuhong; Meng, Xiangcai; Wang, Xijun
2016-01-01
As herbal medicines have an important position in health care systems worldwide, their assessment and quality control are currently a major bottleneck. Cortex Phellodendri chinensis (CPC) and Cortex Phellodendri amurensis (CPA) are widely used in China; however, discriminating between the two species has become an urgent issue. In this study, a multivariate analysis approach was applied to the chemical discrimination of CPA and CPC. Principal component analysis showed that the two herbs could be separated clearly. Chemical markers such as berberine, palmatine, phellodendrine, magnoflorine, obacunone, and obaculactone were identified through orthogonal partial least squares discriminant analysis, and were tentatively identified by the accurate masses from quadrupole time-of-flight mass spectrometry. A total of 29 components can be used as chemical markers for discrimination of CPA and CPC. Of them, phellodendrine is significantly higher in CPC than in CPA, whereas obacunone and obaculactone are significantly higher in CPA than in CPC. The present study proves that a multivariate analysis approach based on chemical analysis greatly contributes to the investigation of CPA and CPC, and shows that the identified chemical markers as a whole should be used to discriminate the two herbal medicines; the results also provide chemical information for their quality assessment. Summary: a multivariate analysis approach was applied to investigate the herbal medicines; the chemical markers were identified through the multivariate analysis approach; a total of 29 components can be used as chemical markers; a UPLC-Q/TOF-MS-based multivariate analysis method was established for the herbal medicine samples. Abbreviations used: CPC: Cortex Phellodendri chinensis, CPA: Cortex Phellodendri amurensis, PCA: Principal component analysis, OPLS-DA: Orthogonal partial least squares discriminant analysis, BPI: Base peak ion intensity.
Respiratory protective device design using control system techniques
NASA Technical Reports Server (NTRS)
Burgess, W. A.; Yankovich, D.
1972-01-01
The feasibility of a control system analysis approach to provide a design base for respiratory protective devices is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notations describe the operation of the components as closely as possible. The individual component mathematical descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical notation by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.
Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y
1992-01-01
An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each time, the analysis produced three factors that contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of such exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.
Mixture modelling for cluster analysis.
McLachlan, G J; Chang, S U
2004-10-01
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
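A brief sketch of the mixture-model clustering procedure described above, using scikit-learn's GaussianMixture on synthetic continuous data (the two-cluster example and parameter choices are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(4, 1, size=(100, 2))])   # two continuous clusters

g = 2                                              # specified number of clusters
gmm = GaussianMixture(n_components=g, random_state=0).fit(X)
post = gmm.predict_proba(X)        # estimated posterior probabilities of component membership
labels = post.argmax(axis=1)       # assign each observation to its most probable component
print(np.bincount(labels))
```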
Generalized Structured Component Analysis with Latent Interactions
ERIC Educational Resources Information Center
Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan
2010-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…
Regularized Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun
2009-01-01
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multicollinearity, i.e., high correlations among exogenous variables, for which it as yet has no remedy. Thus, a regularized extension of GSCA is proposed that integrates a ridge…
Lattice Independent Component Analysis for Mobile Robot Localization
NASA Astrophysics Data System (ADS)
Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz
This paper introduces an approach to appearance-based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be affine independent and are therefore candidates to be the endmembers of the data. Selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show in a sample-path experiment that our approach can recognise the robot's location, and we compare the results with those obtained using Independent Component Analysis (ICA).
A Comparison of Component and Factor Patterns: A Monte Carlo Approach.
ERIC Educational Resources Information Center
Velicer, Wayne F.; And Others
1982-01-01
Factor analysis, image analysis, and principal component analysis are compared with respect to the factor patterns they would produce under various conditions. The general conclusion is that the three methods produce equivalent results. (Author/JKS)
Fetal ECG extraction using independent component analysis by Jade approach
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian
2017-11-01
Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational cost. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method is fast and shows good performance.
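The sketch below illustrates the general ICA-based separation pipeline described above on synthetic signals. Scikit-learn provides FastICA rather than JADE, so FastICA is used here as a stand-in; the sampling rate, filter cutoff, and surrogate waveforms are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 500
t = np.arange(0, 10, 1 / fs)
maternal = np.sin(2 * np.pi * 1.2 * t)              # ~72 bpm surrogate maternal component
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)           # ~138 bpm surrogate fetal component
mixed = np.c_[maternal + 0.4 * fetal,
              0.6 * maternal + fetal] + 0.01 * np.random.randn(t.size, 2)

# High-pass filter to suppress baseline wander before unmixing
b, a = butter(4, 0.5 / (fs / 2), btype="high")
filtered = filtfilt(b, a, mixed, axis=0)

sources = FastICA(n_components=2, random_state=0).fit_transform(filtered)
# One of the separated sources should now be dominated by the fetal component.
```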
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. T. Clark; M. J. Russell; R. E. Spears
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
Yang, Guang; Zhao, Xin; Wen, Jun; Zhou, Tingting; Fan, Guorong
2017-04-01
An analytical approach including fingerprinting, quantitative analysis and rapid screening of anti-oxidative components was established and successfully applied to the comprehensive quality control of Rhizoma Smilacis Glabrae (RSG), a well-known Traditional Chinese Medicine with the homology of medicine and food. Thirteen components were tentatively identified based on their retention behavior, UV absorption and MS fragmentation patterns. Chemometric analysis based on coulometric array data was performed to evaluate the similarity and variation among fifteen batches. Eight discriminating components were quantified using single-compound calibration. The unit responses of those components in coulometric array detection were calculated and compared with those of several compounds reported to possess antioxidant activity, and four of them were tentatively identified as main contributors to the total anti-oxidative activity. The main advantage of the proposed approach is that it realizes simultaneous fingerprinting, quantitative analysis and screening of anti-oxidative components, providing comprehensive information for the quality assessment of RSG.
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1998-05-01
Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives as well as cost and manufacturability. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for the investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental solution methodologies in the form of computational tools, noninvasive optical techniques, and fringe prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.
Principal Component Clustering Approach to Teaching Quality Discriminant Analysis
ERIC Educational Resources Information Center
Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan
2016-01-01
Teaching quality is the lifeline of higher education. Many universities have made effective achievements in evaluating teaching quality. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…
Virtual directions in paleomagnetism: A global and rapid approach to evaluate the NRM components.
NASA Astrophysics Data System (ADS)
Ramón, Maria J.; Pueyo, Emilio L.; Oliva-Urcia, Belén; Larrasoaña, Juan C.
2017-02-01
We introduce a method and software to process demagnetization data for a rapid and integrative estimation of characteristic remanent magnetization (ChRM) components. The virtual directions (VIDI) of a paleomagnetic site are "all" possible directions that can be calculated from a given demagnetization routine of n steps (with m the number of specimens at the site). If the ChRM can be defined for a site, it will be represented in the VIDI set. Directions can be calculated for successive steps using principal component analysis, both anchored to the origin (resultant virtual directions, RVD; m·(n²+n)/2 of them) and not anchored (difference virtual directions, DVD; m·(n²−n)/2 of them). The number of directions per specimen (of order n²) is very large and will enhance all ChRM components, with noisy regions where two components were fitted together (mixing their unblocking intervals). In the same way, resultant and difference virtual circles (RVC, DVC) are calculated. Virtual directions and circles are a global and objective approach to unravel the different natural remanent magnetization (NRM) components of a paleomagnetic site without any assumption. To better constrain the stable components, some filters can be applied, such as establishing an upper boundary on the MAD, removing samples with anomalous intensities, stating a minimum number of demagnetization steps (objective filters), or selecting a given unblocking interval (subjective, but based on expertise). On the other hand, the VPD program also allows the application of standard approaches (classic PCA fitting of directions and circles) and other ancillary methods (stacking routine, linearity spectrum analysis), giving an objective, global and robust idea of the demagnetization structure with minimal assumptions. Application of the VIDI method to natural cases (outcrops in the Pyrenees and u-channel data from a Roman dam infill in northern Spain) and comparison with other approaches (classic end-point, demagnetization circle analysis, stacking routine and linearity spectrum analysis) allow validation of this technique. VIDI is a global approach and is especially useful for large data sets and rapid estimation of the NRM components.
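A rough sketch of how resultant and difference virtual directions might be enumerated for a single specimen, assuming each demagnetization step is a 3-D remanence vector; this is not the VPD program itself, and the SVD-based fits are a simplified stand-in for the full PCA treatment:

```python
import numpy as np

def virtual_directions(steps):
    """Enumerate virtual directions for one specimen from its n demagnetization steps (n x 3)."""
    n = len(steps)
    rvd, dvd = [], []
    for i in range(n):
        for j in range(i, n):
            seg = steps[i:j + 1]
            # Resultant virtual direction: principal axis of the segment anchored to the origin
            _, _, vt = np.linalg.svd(seg, full_matrices=False)
            rvd.append(vt[0])                       # (n^2 + n)/2 directions per specimen
            if j > i:
                # Difference virtual direction: principal axis after removing the segment mean
                _, _, vt2 = np.linalg.svd(seg - seg.mean(axis=0), full_matrices=False)
                dvd.append(vt2[0])                  # (n^2 - n)/2 directions per specimen
    return np.array(rvd), np.array(dvd)

demag = np.cumsum(np.random.randn(10, 3), axis=0)   # toy 10-step demagnetization path
rvd, dvd = virtual_directions(demag)
print(rvd.shape, dvd.shape)
```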
Kellogg, Joshua J; Todd, Daniel A; Egan, Joseph M; Raja, Huzefa A; Oberlies, Nicholas H; Kvalheim, Olav M; Cech, Nadja B
2016-02-26
A central challenge of natural products research is assigning bioactive compounds from complex mixtures. The gold standard approach to address this challenge, bioassay-guided fractionation, is often biased toward abundant, rather than bioactive, mixture components. This study evaluated the combination of bioassay-guided fractionation with untargeted metabolite profiling to improve active component identification early in the fractionation process. Key to this methodology was statistical modeling of the integrated biological and chemical data sets (biochemometric analysis). Three data analysis approaches for biochemometric analysis were compared, namely, partial least-squares loading vectors, S-plots, and the selectivity ratio. Extracts from the endophytic fungi Alternaria sp. and Pyrenochaeta sp. with antimicrobial activity against Staphylococcus aureus served as test cases. Biochemometric analysis incorporating the selectivity ratio performed best in identifying bioactive ions from these extracts early in the fractionation process, yielding altersetin (3, MIC 0.23 μg/mL) and macrosphelide A (4, MIC 75 μg/mL) as antibacterial constituents from Alternaria sp. and Pyrenochaeta sp., respectively. This study demonstrates the potential of biochemometrics coupled with bioassay-guided fractionation to identify bioactive mixture components. A benefit of this approach is the ability to integrate multiple stages of fractionation and bioassay data into a single analysis.
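A hedged sketch of one common formulation of the selectivity ratio (target projection after PLS regression), applied to synthetic fraction-by-feature data; the data, component count, and variable names are illustrative, not the study's pipeline:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 50))                  # 30 fractions x 50 metabolite features
y = X[:, 3] * 2 + X[:, 17] + 0.1 * rng.normal(size=30)   # bioactivity driven by two features

pls = PLSRegression(n_components=2).fit(X, y)
b = pls.coef_.ravel()                          # regression vector (single response)

# Target projection: project the centered data onto the normalized regression vector
Xc = X - X.mean(axis=0)
t_tp = Xc @ (b / np.linalg.norm(b))
p_tp = Xc.T @ t_tp / (t_tp @ t_tp)

explained = np.outer(t_tp, p_tp)
residual = Xc - explained
# Selectivity ratio: explained vs. residual variance per variable
sr = (explained ** 2).sum(axis=0) / ((residual ** 2).sum(axis=0) + 1e-12)
print("top candidate features:", np.argsort(sr)[::-1][:5])
```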
Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro
2016-01-01
The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and to analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and PubMed databases. The key words were: dietary pattern; food pattern; principal components analysis; factor analysis; cluster analysis; reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one a cohort study. Five studies used a food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The exploratory method used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies identified dietary patterns of children using the a posteriori approach, and principal components factor analysis was the technique most used.
Exploring Two Approaches for an End-to-End Scientific Analysis Workflow
NASA Astrophysics Data System (ADS)
Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; Paterno, Marc; Sehrish, Saba
2015-12-01
The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC); the first one is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second one is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.
Stirling engine - Approach for long-term durability assessment
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Bartolotta, Paul A.; Halford, Gary R.; Freed, Alan D.
1992-01-01
The approach employed by NASA Lewis for the long-term durability assessment of the Stirling engine hot-section components is summarized. The approach consists of: preliminary structural assessment; development of a viscoplastic constitutive model to accurately determine material behavior under high-temperature thermomechanical loads; an experimental program to characterize material constants for the viscoplastic constitutive model; finite-element thermal analysis and structural analysis using a viscoplastic constitutive model to obtain stress/strain/temperature at the critical location of the hot-section components for life assessment; and development of a life prediction model applicable for long-term durability assessment at high temperatures. The approach should aid in the provision of long-term structural durability and reliability of Stirling engines.
Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.
2009-01-01
Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings. PMID:19834575
Decoupled 1D/3D analysis of a hydraulic valve
NASA Astrophysics Data System (ADS)
Mehring, Carsten; Zopeya, Ashok; Latham, Matt; Ihde, Thomas; Massie, Dan
2014-10-01
Analysis approaches during product development of fluid valves and other aircraft fluid delivery components vary greatly depending on the development stage. Traditionally, empirical or simplistic one-dimensional tools are deployed during preliminary design, whereas detailed analysis tools such as CFD (Computational Fluid Dynamics) are used to refine a selected design during the detailed design stage. In recent years, combined 1D/3D co-simulation has been deployed specifically for system-level simulations requiring an increased level of analysis detail for one or more components. This paper presents a decoupled 1D/3D analysis approach where 3D CFD analysis results are utilized to enhance the fidelity of a dynamic 1D model in the context of an aircraft fuel valve.
Face recognition using an enhanced independent component analysis approach.
Kwak, Keun-Chang; Pedrycz, Witold
2007-03-01
This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of the generic ICA by augmenting this method by the Fisher linear discriminant analysis (LDA); hence, its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in low-dimension subspace and is endowed with a great deal of insensitivity to large variation in illumination and facial expression. The comprehensive experiments are completed for the facial-recognition technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and the ICA itself.
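A compact sketch of the general ICA-then-LDA idea (unsupervised ICA features followed by Fisher linear discriminant analysis), assuming random stand-in images and labels; this is not the authors' FICA architecture, only an illustration of combining the two steps with scikit-learn:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 1024))              # stand-in for 200 face images (32x32, flattened)
y = rng.integers(0, 10, size=200)             # stand-in subject identities

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ica = FastICA(n_components=40, max_iter=500, random_state=0).fit(Xtr)  # unsupervised feature step
lda = LinearDiscriminantAnalysis().fit(ica.transform(Xtr), ytr)        # supervised LDA on ICA features
print("accuracy on held-out faces:", lda.score(ica.transform(Xte), yte))
```

On real face data the ICA features would replace the random arrays, and a distance metric or SVM could be substituted for LDA in the final classification step.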
NASA Astrophysics Data System (ADS)
Luce, R.; Hildebrandt, P.; Kuhlmann, U.; Liesen, J.
2016-09-01
The key challenge of time-resolved Raman spectroscopy is the identification of the constituent species and the analysis of the kinetics of the underlying reaction network. In this work we present an integral approach that allows for determining both the component spectra and the rate constants simultaneously from a series of vibrational spectra. It is based on an algorithm for non-negative matrix factorization which is applied to the experimental data set following a few pre-processing steps. As a prerequisite for physically unambiguous solutions, each component spectrum must include one vibrational band that does not significantly interfere with vibrational bands of other species. The approach is applied to synthetic "experimental" spectra derived from model systems comprising a set of species with component spectra differing with respect to their degree of spectral interferences and signal-to-noise ratios. In each case, the species involved are connected via monomolecular reaction pathways. The potential and limitations of the approach for recovering the respective rate constants and component spectra are discussed.
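A minimal illustration of non-negative matrix factorization applied to a synthetic two-component spectral series with first-order kinetics, in the spirit of the approach above; the band positions, kinetics, and NMF settings are assumptions:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 300)
# Two component spectra, each with at least one band that does not overlap with the other
s1 = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.4 * np.exp(-((x - 0.5) / 0.03) ** 2)
s2 = np.exp(-((x - 0.7) / 0.02) ** 2) + 0.4 * np.exp(-((x - 0.5) / 0.03) ** 2)

t = np.linspace(0, 5, 40)
c1, c2 = np.exp(-t), 1 - np.exp(-t)            # monomolecular A -> B kinetics
D = np.outer(c1, s1) + np.outer(c2, s2) + 0.01 * rng.random((40, 300))

model = NMF(n_components=2, init="nndsvda", max_iter=2000)
C = model.fit_transform(D)                     # concentration-like time profiles
S = model.components_                          # recovered component spectra
# Rate constants could then be fitted to the columns of C.
```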
SCADA alarms processing for wind turbine component failure detection
NASA Astrophysics Data System (ADS)
Gonzalez, E.; Reder, M.; Melero, J. J.
2016-09-01
Wind turbine failure and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information to assess component status. Then, different alarm analysis techniques are applied for two purposes: the evaluation of the SCADA alarm system's capability to detect failures, and the investigation of whether faults in some components are followed by failures in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components and between failures and adverse environmental conditions.
Maneshi, Mona; Vahdat, Shahabeddin; Gotman, Jean; Grova, Christophe
2016-01-01
Independent component analysis (ICA) has been widely used to study functional magnetic resonance imaging (fMRI) connectivity. However, the application of ICA in multi-group designs is not straightforward. We have recently developed a new method named “shared and specific independent component analysis” (SSICA) to perform between-group comparisons in the ICA framework. SSICA is sensitive to extract those components which represent a significant difference in functional connectivity between groups or conditions, i.e., components that could be considered “specific” for a group or condition. Here, we investigated the performance of SSICA on realistic simulations, and task fMRI data and compared the results with one of the state-of-the-art group ICA approaches to infer between-group differences. We examined SSICA robustness with respect to the number of allowable extracted specific components and between-group orthogonality assumptions. Furthermore, we proposed a modified formulation of the back-reconstruction method to generate group-level t-statistics maps based on SSICA results. We also evaluated the consistency and specificity of the extracted specific components by SSICA. The results on realistic simulated and real fMRI data showed that SSICA outperforms the regular group ICA approach in terms of reconstruction and classification performance. We demonstrated that SSICA is a powerful data-driven approach to detect patterns of differences in functional connectivity across groups/conditions, particularly in model-free designs such as resting-state fMRI. Our findings in task fMRI show that SSICA confirms results of the general linear model (GLM) analysis and when combined with clustering analysis, it complements GLM findings by providing additional information regarding the reliability and specificity of networks. PMID:27729843
NASA Technical Reports Server (NTRS)
Sopher, R.; Hallock, D. W.
1985-01-01
A time history analysis for rotorcraft dynamics based on dynamical substructures, and nonstructural mathematical and aerodynamic components is described. The analysis is applied to predict helicopter ground resonance and response to rotor damage. Other applications illustrate the stability and steady vibratory response of stopped and gimballed rotors, representative of new technology. Desirable attributes expected from modern codes are realized, although the analysis does not employ a complete set of techniques identified for advanced software. The analysis is able to handle a comprehensive set of steady state and stability problems with a small library of components.
Ocké, Marga C
2013-05-01
This paper aims to describe different approaches for studying the overall diet, along with their advantages and limitations. Studies of the overall diet have emerged because the relationship between dietary intake and health is very complex, with all kinds of interactions. These cannot be captured well by studying single dietary components. Three main approaches to studying the overall diet can be distinguished. The first is researcher-defined scores or indices of diet quality; these are usually based on guidelines for a healthy diet or on diets known to be healthy. The second approach, using principal component or cluster analysis, is driven by the underlying dietary data. In principal component analysis, scales are derived based on the underlying relationships between food groups, whereas in cluster analysis, subgroups of the population are created whose members cluster together based on their dietary intake. A third approach includes methods that are driven by a combination of biological pathways and the underlying dietary data. Reduced rank regression defines linear combinations of food intakes that maximally explain nutrient intakes or intermediate markers of disease. Decision tree analysis identifies subgroups of a population whose members share dietary characteristics that influence (intermediate markers of) disease. It is concluded that all approaches have advantages and limitations and essentially answer different questions. The third approach is still in an exploratory phase but seems to have great potential and complementary value. More insight into the utility of conducting studies on the overall diet can be gained if more attention is given to methodological issues.
ERIC Educational Resources Information Center
Kucan, Linda; Palincsar, Annemarie Sullivan
2018-01-01
This investigation focuses on a tool used in a reading methods course to introduce reading specialist candidates to text analysis as a critical component of planning for text-based discussions. Unlike planning that focuses mainly on important text content or information, a text analysis approach focuses both on content and how that content is…
Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun
2015-11-04
There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk
2014-01-01
Independent component analysis (ICA) has recently been shown to be a promising new path in the data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach to calibrate ICA using sparse wavelet calibrators. The Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low S/N data. Such an approach gives us a unique and unprecedented insight into the underlying morphology of a data set, which makes this method a powerful tool for exoplanetary data de-trending and signal diagnostics.
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial features, and it resembles factor analysis in some sense, i.e. in the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and a large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. From the experimental results, it is envisaged that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
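A short sketch of the wavelet-plus-PCA feature pipeline described above, assuming random stand-in images and PyWavelets' 2-D DWT; the Haar wavelet and component count are illustrative choices:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
faces = rng.random((100, 64, 64))              # stand-in for 100 grayscale face images

# Keep only the low-frequency approximation sub-band of a 2-D DWT for each image
approx = np.array([pywt.dwt2(img, "haar")[0].ravel() for img in faces])

pca = PCA(n_components=20)
features = pca.fit_transform(approx)           # compact eigen-features for classification
print(features.shape)                          # (100, 20)
```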
A System for Sentiment Analysis of Colloquial Arabic Using Human Computation
Al-Subaihin, Afnan S.; Al-Khalifa, Hend S.
2014-01-01
We present the implementation and evaluation of a sentiment analysis system for Arabic text with evaluative content. Our system comprises two components. The first component is a game that enables users to annotate large corpora of text in an engaging manner. The game produces the linguistic resources that are used by the second component, the sentimental analyzer. Two different algorithms have been designed to employ these linguistic resources to analyze text and classify it according to its sentimental polarity. The first approach uses sentimental tag patterns, which reached a precision level of 56.14%. The second approach is the sentimental majority approach, which relies on counting the negative and positive phrases in the sentence and classifying the sentence according to the dominant polarity. Evaluation of the first variation of the sentimental majority approach yielded the highest accuracy reached by our system, 60.5%, while the second variation scored an accuracy of 60.32%. PMID:24892064
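A toy sketch of the sentimental-majority idea (classify by whichever polarity dominates the phrase counts); the English phrases and lexicons below are hypothetical stand-ins for the Arabic resources produced by the annotation game:

```python
def sentiment_majority(phrases, pos_lexicon, neg_lexicon):
    """Classify a sentence by the dominant polarity of its phrases."""
    pos = sum(1 for p in phrases if p in pos_lexicon)
    neg = sum(1 for p in phrases if p in neg_lexicon)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Hypothetical lexicons and phrases for illustration only
print(sentiment_majority(["great service", "slow delivery", "great price"],
                         pos_lexicon={"great service", "great price"},
                         neg_lexicon={"slow delivery"}))
```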
Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo
2016-07-12
Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
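A simplified, hedged sketch of the FPCA idea: each city's weekly curve is projected onto a small Fourier basis (a smoothing step) and PCA is then applied to the basis coefficients. This is an approximation for illustration only, not the study's implementation, which used dedicated FPCA machinery and smoothing-parameter selection:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
days = np.arange(7)                              # one week of daily measurements
cities = 42
loads = 10 + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 1, size=(cities, 7))

# Project each city's weekly curve onto a small Fourier basis (smoothing step)
k = np.arange(1, 3)
basis = np.column_stack([np.ones(7)] +
                        [f(2 * np.pi * kk * days / 7) for kk in k for f in (np.sin, np.cos)])
coefs, *_ = np.linalg.lstsq(basis, loads.T, rcond=None)

# PCA on the basis coefficients approximates the functional principal components
fpca = PCA(n_components=3).fit(coefs.T)
print("explained temporal variation:", fpca.explained_variance_ratio_)
```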
Likić, Vladimir A
2009-01-01
Gas chromatography-mass spectrometry (GC-MS) is a widely used analytical technique for the identification and quantification of trace chemicals in complex mixtures. When complex samples are analyzed by GC-MS it is common to observe co-elution of two or more components, resulting in an overlap of signal peaks observed in the total ion chromatogram. In such situations manual signal analysis is often the most reliable means for the extraction of pure component signals; however, a systematic manual analysis over a number of samples is both tedious and prone to error. In the past 30 years a number of computational approaches were proposed to assist in the process of the extraction of pure signals from co-eluting GC-MS components. This includes empirical methods, comparison with library spectra, eigenvalue analysis, regression and others. However, to date no approach has been recognized as best, nor accepted as standard. This situation hampers general GC-MS capabilities, and in particular has implications for the development of robust, high-throughput GC-MS analytical protocols required in metabolic profiling and biomarker discovery. Here we first discuss the nature of GC-MS data, and then review some of the approaches proposed for the extraction of pure signals from co-eluting components. We summarize and classify different approaches to this problem, and examine why so many approaches proposed in the past have failed to live up to their full promise. Finally, we give some thoughts on the future developments in this field, and suggest that the progress in general computing capabilities attained in the past two decades has opened new horizons for tackling this important problem. PMID:19818154
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, G. F.; Banerjee, P. K.; Shi, Y.
1990-01-01
A comprehensive boundary element method is presented for transient thermoelastic analysis of hot section Earth-to-Orbit engine components. This time-domain formulation requires discretization of only the surface of the component, and thus provides an attractive alternative to finite element analysis for this class of problems. In addition, steep thermal gradients, which often occur near the surface, can be captured more readily since with a boundary element approach there are no shape functions to constrain the solution in the direction normal to the surface. For example, the circular disc analysis indicates the high level of accuracy that can be obtained. In fact, on the basis of reduced modeling effort and improved accuracy, it appears that the present boundary element method should be the preferred approach for general problems of transient thermoelasticity.
Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...
NASA Astrophysics Data System (ADS)
Kim, Jonghoon; Cho, B. H.
2014-08-01
This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state of health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied as a powerful tool in the analysis of the discharging/charging voltage signal (DCVS), with its non-stationary and transient phenomena, for a Li-ion cell. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. Through the MRA, implemented via wavelet decomposition, information on the electrochemical characteristics of a Li-ion cell can be extracted from the DCVS over a wide frequency range. Wavelet decomposition is implemented based on the selection of the order-3 Daubechies wavelet (db3) and scale 5 as the best wavelet function and the optimal decomposition scale. In particular, the present approach develops these investigations one step further by showing the low- and high-frequency components (approximation component An and detail component Dn, respectively) extracted from Li-ion cells with different electrochemical characteristics caused by aging. Experimental results demonstrate the effectiveness of the DWT-based approach for reliable SOH diagnosis of a Li-ion cell.
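A minimal sketch of the DWT-based multi-resolution analysis described above, using PyWavelets with the db3 wavelet at decomposition level 5 on a synthetic discharging-voltage signal; the sampling rate and signal model are assumptions:

```python
import numpy as np
import pywt

fs = 10.0                                       # samples per second (assumed)
t = np.arange(0, 3600, 1 / fs)
voltage = (4.1 - 0.0002 * t + 0.01 * np.sin(2 * np.pi * 0.05 * t)
           + 0.005 * np.random.randn(t.size))   # toy discharging-voltage signal

# Multi-resolution analysis with the order-3 Daubechies wavelet at scale 5
coeffs = pywt.wavedec(voltage, "db3", level=5)
a5 = coeffs[0]                                  # low-frequency approximation A5 (trend, SOH-related)
d5_to_d1 = coeffs[1:]                           # detail components D5 ... D1 (transients, noise)
print([len(c) for c in coeffs])
```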
Space-time latent component modeling of geo-referenced health data.
Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun
2010-08-30
Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, the interaction of different components, and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that utilize failure rates derived from similar equipment or simply expert judgment.
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses the radiances into a much smaller dimension, making both the forward modeling and the inversion algorithm more efficient.
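The compression step can be pictured with a small sketch: fit a PCA on a set of spectra and work with a handful of scores instead of thousands of channels. The spectra below are synthetic stand-ins, not output of any real radiative transfer model.

```python
# Sketch: compress "channel radiances" into a few principal component scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_profiles, n_channels = 500, 2000
# Smooth synthetic spectra driven by a few latent variables
latent = rng.normal(size=(n_profiles, 10))
basis = np.cumsum(rng.normal(size=(10, n_channels)), axis=1)
radiances = latent @ basis + 0.01 * rng.normal(size=(n_profiles, n_channels))

pca = PCA(n_components=20).fit(radiances)
scores = pca.transform(radiances)                 # 2000 channels -> 20 scores
print(scores.shape, f"explained variance: {pca.explained_variance_ratio_.sum():.4f}")
```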
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised with large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
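To make the fault-tree idea concrete, here is a toy tree with invented structure and probabilities (none of them come from the paper): basic implementation-risk events combine through OR/AND gates into an ERP "usage failure" top event.

```python
# Toy fault tree: independent basic events combined through OR/AND gates.
def or_gate(*p):        # at least one input event occurs
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):       # all input events occur
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical basic-event probabilities (illustrative values only)
p_data_migration = 0.05
p_customization = 0.08
p_training = 0.10
p_vendor_support = 0.03

module_failure = or_gate(p_data_migration, p_customization)
organizational_failure = and_gate(p_training, p_vendor_support)
p_usage_failure = or_gate(module_failure, organizational_failure)
print(f"P(top event) = {p_usage_failure:.4f}")
```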
Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.
Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan
2016-02-01
This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, whereas the prevailing representative-type approaches in the literature only use centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images, which are rated by one person. The proposed approach allows one to extract features of the individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimation values equals 0.89. This suggests that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework and refer to it as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Genome-wide selection components analysis in a fish with male pregnancy.
Flanagan, Sarah P; Jones, Adam G
2017-04-01
A major goal of evolutionary biology is to identify the genome-level targets of natural and sexual selection. With the advent of next-generation sequencing, whole-genome selection components analysis provides a promising avenue in the search for loci affected by selection in nature. Here, we implement a genome-wide selection components analysis in the sex-role-reversed Gulf pipefish, Syngnathus scovelli. Our approach involves a double-digest restriction-site associated DNA sequencing (ddRAD-seq) technique, applied to adult females, nonpregnant males, pregnant males, and their offspring. An F_ST comparison of allele frequencies among these groups reveals 47 genomic regions putatively experiencing sexual selection, as well as 468 regions showing a signature of differential viability selection between males and females. A complementary likelihood ratio test identifies patterns in the data similar to those from the F_ST analysis. Sexual selection and viability selection both tend to favor the rare alleles in the population. Ultimately, we conclude that genome-wide selection components analysis can be a useful tool to complement other approaches in the effort to pinpoint genome-level targets of selection in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
A series of interdisciplinary modeling and analysis techniques specialized to address three specific hot section components is presented. These techniques will incorporate data as well as theoretical methods from many diverse areas, including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air-cooled turbine blades, and air-cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress, and strain histories throughout a complete flight mission.
NASA Technical Reports Server (NTRS)
Klein, L. R.
1974-01-01
The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.
New methodologies for multi-scale time-variant reliability analysis of complex lifeline networks
NASA Astrophysics Data System (ADS)
Kurtz, Nolan Scot
The cost of maintaining existing civil infrastructure is enormous. Since the livelihood of the public depends on such infrastructure, its state must be managed appropriately using quantitative approaches. Practitioners must consider not only which components are most fragile to hazards, e.g. seismicity, storm surge, hurricane winds, etc., but also how they participate on a network level using network analysis. Focusing on particularly damaged components does not necessarily increase network functionality, which is most important to the people that depend on such infrastructure. Several network analyses, e.g. S-RDA, LP-bounds, and crude-MCS, and performance metrics, e.g. disconnection bounds and component importance, are available for such purposes. Since these networks already exist, their state over time is also important. If networks are close to chloride sources, deterioration may be a major issue. Information from field inspections may also have large impacts on quantitative models. To address such issues, hazard risk analysis methodologies for deteriorating networks subjected to seismicity, i.e. earthquakes, have been created from analytics. A bridge component model has been constructed for these methodologies. The bridge fragilities, which were constructed from data, required a deeper level of analysis as these were relevant for specific structures. Furthermore, chloride-induced deterioration network effects were investigated. Depending on how mathematical models incorporate new information, many approaches are available, such as Bayesian model updating. To make such procedures more flexible, an adaptive importance sampling scheme was created for structural reliability problems. Additionally, such a method handles many kinds of system and component problems with single or multiple important regions of the limit state function. These and previously developed analysis methodologies were found to be strongly sensitive to the network size. Special network topologies may be more or less computationally difficult, while the resolution of the network also has large effects. To take advantage of some types of topologies, network hierarchical structures with super-link representation have been used in the literature to increase the computational efficiency by analyzing smaller, densely connected networks; however, such structures were based on user input and were at times subjective. To address this, algorithms must be automated and reliable. These hierarchical structures may indicate the structure of the network itself. This risk analysis methodology has been expanded to larger networks using such automated hierarchical structures. Component importance is the most important objective from such network analysis; however, this may only indicate which bridges to inspect or repair first, and little else. High correlations influence such component importance measures in a negative manner. Additionally, a regional approach is not appropriately modelled. To investigate a more regional view, group importance measures based on hierarchical structures have been created. Such structures may also be used to create regional inspection/repair approaches. Using these analytical, quantitative risk approaches, the next generation of decision makers may make both component- and regional-based optimal decisions using information from both network function and further effects of infrastructure deterioration.
Wang, Nani; Zhao, Guizhi; Zhang, Yang; Wang, Xuping; Zhao, Lisha; Xu, Pingcui; Shou, Dan
2017-10-27
BACKGROUND Osteoporosis is a complex bone disorder with a genetic predisposition, and is a cause of health problems worldwide. In China, Curculigo orchioides (CO) has been widely used as a herbal medicine in the prevention and treatment of osteoporosis. However, research on the mechanism of action of CO is still lacking. The aim of this study was to identify the absorbable components, potential targets, and associated treatment pathways of CO using a network pharmacology approach. MATERIAL AND METHODS We explored the chemical components of CO and used the five main principles of drug absorption to identify absorbable components. Targets for the therapeutic actions of CO were obtained from the PharmMapper server database. Pathway enrichment analysis was performed using the Comparative Toxicogenomics Database (CTD). Cytoscape was used to visualize the multiple components-multiple target-multiple pathways-multiple disease network for CO. RESULTS We identified 77 chemical components of CO, of which 32 components could be absorbed in the blood. These potential active components of CO regulated 83 targets and affected 58 pathways. Data analysis showed that the genes for estrogen receptor alpha (ESR1) and beta (ESR2), and the gene for 11 beta-hydroxysteroid dehydrogenase type 1, or cortisone reductase (HSD11B1) were the main targets of CO. Endocrine regulatory factors and factors regulating calcium reabsorption, steroid hormone biosynthesis, and metabolic pathways were related to these main targets and to ten corresponding compounds. CONCLUSIONS The network pharmacology approach used in our study has attempted to explain the mechanisms for the effects of CO in the prevention and treatment of osteoporosis, and provides an alternative approach to the investigation of the effects of this complex compound.
Error analysis and correction of discrete solutions from finite element codes
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.
1984-01-01
Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
Evaluating the efficacy of fully automated approaches for the selection of eye blink ICA components
Pontifex, Matthew B.; Miskovic, Vladimir; Laszlo, Sarah
2017-01-01
Independent component analysis (ICA) offers a powerful approach for the isolation and removal of eye blink artifacts from EEG signals. Manual identification of the eye blink ICA component by inspection of scalp map projections, however, is prone to error, particularly when non-artifactual components exhibit topographic distributions similar to the blink. The aim of the present investigation was to determine the extent to which automated approaches for selecting eye blink related ICA components could be utilized to replace manual selection. We evaluated popular blink selection methods relying on spatial features [EyeCatch()], combined stereotypical spatial and temporal features [ADJUST()], and a novel method relying on time-series features alone [icablinkmetrics()] using both simulated and real EEG data. The results of this investigation suggest that all three methods of automatic component selection are able to accurately identify eye blink related ICA components at or above the level of trained human observers. However, icablinkmetrics(), in particular, appears to provide an effective means of automating ICA artifact rejection while at the same time eliminating human errors inevitable during manual component selection and false positive component identifications common in other automated approaches. Based upon these findings, best practices for 1) identifying artifactual components via automated means and 2) reducing the accidental removal of signal-related ICA components are discussed. PMID:28191627
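The time-series strategy behind such automated selection can be sketched compactly: decompose the EEG with ICA, flag the component most correlated with a blink reference, and reconstruct without it. The example below assumes an EOG reference channel is available and uses simulated signals with scikit-learn's FastICA; it is not the icablinkmetrics() implementation.

```python
# Sketch: ICA blink rejection using correlation with an EOG reference channel.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_samples, n_channels = 5000, 8
blink = (rng.random(n_samples) < 0.002).astype(float)
blink = np.convolve(blink, np.hanning(150), mode="same")        # blink-like bursts
eeg = rng.normal(size=(n_samples, n_channels)) + np.outer(blink, rng.normal(size=n_channels))
eog = blink + 0.1 * rng.normal(size=n_samples)                  # reference channel

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg)                                # (n_samples, n_components)

corr = [abs(np.corrcoef(sources[:, k], eog)[0, 1]) for k in range(n_channels)]
blink_idx = int(np.argmax(corr))
sources[:, blink_idx] = 0.0                                     # remove blink component
eeg_clean = ica.inverse_transform(sources)
print(f"blink component: {blink_idx}, |r| = {max(corr):.2f}")
```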
Carbon-carbon primary structure for SSTO vehicles
NASA Astrophysics Data System (ADS)
Croop, Harold C.; Lowndes, Holland B.
1997-01-01
A hot structures development program is nearing completion to validate use of carbon-carbon composite structure for primary load carrying members in a single-stage-to-orbit, or SSTO, vehicle. A four phase program was pursued which involved design development and fabrication of a full-scale wing torque box demonstration component. The design development included vehicle and component selection, design criteria and approach, design data development, demonstration component design and analysis, test fixture design and analysis, demonstration component test planning, and high temperature test instrumentation development. The fabrication effort encompassed fabrication of structural elements for mechanical property verification as well as fabrication of the demonstration component itself and associated test fixturing. The demonstration component features 3D woven graphite preforms, integral spars, oxidation inhibited matrix, chemical vapor deposited (CVD) SiC oxidation protection coating, and ceramic matrix composite fasteners. The demonstration component has been delivered to the United States Air Force (USAF) for testing in the Wright Laboratory Structural Test Facility, WPAFB, OH. Multiple thermal-mechanical load cycles will be applied simulating two atmospheric cruise missions and one orbital mission. This paper discusses the overall approach to validation testing of the wing box component and presents some preliminary analytical test predictions.
Law, Emily F.; Beals-Erickson, Sarah E.; Fisher, Emma; Lang, Emily A.; Palermo, Tonya M.
2017-01-01
Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache. PMID:29503787
Luce, Robert; Hildebrandt, Peter; Kuhlmann, Uwe; Liesen, Jörg
2016-09-01
The key challenge of time-resolved Raman spectroscopy is the identification of the constituent species and the analysis of the kinetics of the underlying reaction network. In this work we present an integral approach that allows for determining both the component spectra and the rate constants simultaneously from a series of vibrational spectra. It is based on an algorithm for nonnegative matrix factorization that is applied to the experimental data set following a few pre-processing steps. As a prerequisite for physically unambiguous solutions, each component spectrum must include one vibrational band that does not significantly interfere with the vibrational bands of other species. The approach is applied to synthetic "experimental" spectra derived from model systems comprising a set of species with component spectra differing with respect to their degree of spectral interferences and signal-to-noise ratios. In each case, the species involved are connected via monomolecular reaction pathways. The potential and limitations of the approach for recovering the respective rate constants and component spectra are discussed. © The Author(s) 2016.
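A hedged sketch of the general idea (nonnegative matrix factorization of a series of spectra into component spectra and concentration profiles) is shown below; it does not reproduce the paper's pre-processing or its simultaneous kinetic fitting, and all spectra and rate constants are synthetic.

```python
# Sketch: NMF of a time series of synthetic "Raman" spectra into two components.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
n_times, n_wavenumbers = 60, 400
wn = np.linspace(0, 1, n_wavenumbers)

# Two synthetic component spectra, each with one non-overlapping marker band
S = np.vstack([np.exp(-((wn - 0.3) / 0.02) ** 2) + 0.5 * np.exp(-((wn - 0.6) / 0.05) ** 2),
               np.exp(-((wn - 0.8) / 0.02) ** 2) + 0.5 * np.exp(-((wn - 0.6) / 0.05) ** 2)])
# First-order A -> B kinetics for the concentration profiles
t = np.linspace(0, 5, n_times)
C = np.vstack([np.exp(-t), 1.0 - np.exp(-t)]).T

D = C @ S + 0.005 * rng.random((n_times, n_wavenumbers))        # data matrix

model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
C_hat = model.fit_transform(D)          # estimated concentration profiles
S_hat = model.components_               # estimated component spectra
print(C_hat.shape, S_hat.shape)
```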
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate for both variations in the TPS material and for uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormal severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
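Not the NDLI method itself, but the classical load-interference calculation it generalizes can be shown in a few lines: reliability is the probability that thermal capability exceeds the thermal load when both are treated as normal random variables. The means and standard deviations below are made-up numbers.

```python
# Classical normal-normal load-interference reliability (illustrative values).
from math import sqrt
from statistics import NormalDist

mu_capability, sigma_capability = 1900.0, 60.0   # e.g. allowable temperature, K
mu_load, sigma_load = 1650.0, 90.0               # e.g. predicted surface temperature, K

z = (mu_capability - mu_load) / sqrt(sigma_capability**2 + sigma_load**2)
reliability = NormalDist().cdf(z)
print(f"safety margin z = {z:.2f}, reliability = {reliability:.5f}")
```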
Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D
2013-07-03
Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series, and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in finite-element-based engineering practice.
Zadpoor, Amir A; Weinans, Harrie
2015-03-18
Patient-specific analysis of bones is considered an important tool for diagnosis and treatment of skeletal diseases and for clinical research aimed at understanding the etiology of skeletal diseases and the effects of different types of treatment on their progress. In this article, we discuss how integration of several important components enables accurate and cost-effective patient-specific bone analysis, focusing primarily on patient-specific finite element (FE) modeling of bones. First, the different components are briefly reviewed. Then, two important aspects of patient-specific FE modeling, namely integration of modeling components and automation of modeling approaches, are discussed. We conclude with a section on validation of patient-specific modeling results, possible applications of patient-specific modeling procedures, current limitations of the modeling approaches, and possible areas for future research. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gürgen, Fikret; Gürgen, Nurgül
2003-01-01
This study proposes an intelligent data analysis approach to investigate and interpret the distinctive factors of diabetes mellitus patients with and without ischemic (non-embolic type) stroke in a small population. The database consists of a total of 16 features collected from 44 diabetic patients. Features include age, gender, duration of diabetes, cholesterol, high density lipoprotein, triglyceride levels, neuropathy, nephropathy, retinopathy, peripheral vascular disease, myocardial infarction rate, glucose level, medication and blood pressure. Metric and non-metric features are distinguished. First, the mean and covariance of the data are estimated and the correlated components are observed. Second, major components are extracted by principal component analysis. Finally, as common examples of local and global classification approaches, a k-nearest neighbor classifier and a high-degree polynomial classifier such as a multilayer perceptron are employed for classification, both with all components and with only the major components. Macrovascular changes emerged as the principal distinctive factors of ischemic stroke in diabetes mellitus. Microvascular changes were generally ineffective discriminators. Recommendations were made according to the rules of evidence-based medicine. Briefly, this case study, based on a small population, supports theories of stroke in diabetes mellitus patients and also concludes that the use of intelligent data analysis improves personalized preventive intervention. PMID:12685939
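A schematic version of this pipeline (standardize, extract principal components, classify with k-nearest neighbors) is sketched below; the 44x16 matrix and the labels are random stand-ins, not the clinical data set.

```python
# Sketch: PCA feature extraction followed by a k-NN classifier, cross-validated.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(44, 16))                 # 16 clinical features (dummy values)
y = rng.integers(0, 2, size=44)               # stroke vs. no stroke (dummy labels)

clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(clf, X, y, cv=4)
print(f"mean CV accuracy: {scores.mean():.2f}")
```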
Principal Components Analysis of a JWST NIRSpec Detector Subsystem
NASA Technical Reports Server (NTRS)
Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting;
2013-01-01
We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ~ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that would enable the planned instrument to meet performance requirements.
NASA Astrophysics Data System (ADS)
Gungor, Ayse Gul; Nural, Ozan Ekin; Ertunc, Ozgur
2017-11-01
The purpose of this study is to analyze the direct numerical simulation data of a turbulent boundary layer subjected to a strong adverse pressure gradient through anisotropy invariant mapping. A RANS simulation using the "Elliptic Blending Model" of Manceau and Hanjalić (2002) is also conducted for the same flow case with the commercial software Star-CCM+, and the results are compared with the DNS data. The RANS simulation captures the general trends in the velocity field, but significant deviations are found when skin friction coefficients are compared. The anisotropy invariant map of Lumley and Newman (1977) and the barycentric map of Banerjee et al. (2007) are used for the analysis. Invariant mapping of the DNS data shows that, at locations away from the wall, the flow is close to the one-component turbulence state. In the vicinity of the wall, turbulence is at the two-component limit, which is one border of the barycentric map, and as the flow evolves along the streamwise direction it approaches the two-component turbulence state. Additionally, at locations away from the wall, turbulence approaches the two-component limit. Furthermore, analysis of the invariants of the RANS simulations shows dissimilar results: in the RANS simulations, the invariants do not approach any of the limit states, unlike the DNS.
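For reference, the barycentric-map weights of Banerjee et al. (2007) can be computed from a Reynolds-stress tensor as sketched below; the helper is a generic illustration and the example tensor is arbitrary, not taken from the DNS data.

```python
# Sketch: anisotropy tensor and barycentric-map weights from a Reynolds-stress tensor.
import numpy as np

def barycentric_weights(reynolds_stress):
    """Return (C_1c, C_2c, C_3c): one-, two- and three-component limit weights."""
    R = np.asarray(reynolds_stress, dtype=float)
    k = 0.5 * np.trace(R)                                   # turbulent kinetic energy
    b = R / (2.0 * k) - np.eye(3) / 3.0                     # anisotropy tensor
    lam = np.sort(np.linalg.eigvalsh(b))[::-1]              # eigenvalues, descending
    return lam[0] - lam[1], 2.0 * (lam[1] - lam[2]), 3.0 * lam[2] + 1.0

# Near-wall-like state: one velocity component strongly suppressed
R = np.diag([1.0, 0.6, 0.05])
print([round(c, 3) for c in barycentric_weights(R)])        # weights sum to 1
```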
Interdisciplinary education approach to the human science
NASA Astrophysics Data System (ADS)
Szu, Harold; Zheng, Yufeng; Zhang, Nian
2012-06-01
We introduced human sciences as components and integrated them as an interdisciplinary endeavor over decades. This year, we built a website to systematically maintain the educational research service. We captured the human sciences in various components in the SPIE proceedings over the last decades, which included: (i) ear- and eye-like adaptive wavelets, (ii) brain-like unsupervised learning independent component analysis (ICA), (iii) compressive sampling spatiotemporal sparse information processing, (iv) nanoengineering approaches to sensing components, (v) systems biology measurements, and (vi) biomedical wellness applications. In order to serve the interdisciplinary community better, our system approach is based on having former recipients invite the next recipients to deliver their review talks and panel discussions. Since only the former recipients of each component can lead the nomination committees and make the final selections, we also create a leadership award, for which nominations may be made by any conference attendee and approved by the conference organizing committee.
Yamamoto, Shinya; Bamba, Takeshi; Sano, Atsushi; Kodama, Yukako; Imamura, Miho; Obata, Akio; Fukusaki, Eiichiro
2012-08-01
Soy sauces, produced from different ingredients and brewing processes, have variations in components and quality. Therefore, it is extremely important to understand the relationship between the components and the sensory attributes of soy sauces. The current study sought to perform metabolite profiling in order to devise a method of assessing the attributes of soy sauces. Quantitative descriptive analysis (QDA) data for 24 soy sauce samples were obtained from well-selected sensory panelists. Metabolite profiles, primarily concerning low-molecular-weight hydrophilic components, were based on gas chromatography with time-of-flight mass spectrometry (GC/TOFMS). QDA data for soy sauces were accurately predicted by projection to latent structures (PLS), with metabolite profiles serving as explanatory variables and the QDA data set serving as the response variable. Moreover, analysis of the correlation between the matrices of metabolite profiles and QDA data indicated contributing compounds that were highly correlated with the QDA data. In particular, the results indicated that sugars are important components of the taste of soy sauces. This new approach, which combines metabolite profiling with QDA, is applicable to the analysis of sensory attributes of food resulting from the complex interaction between its components. This approach is effective for identifying important compounds that contribute to these attributes. Copyright © 2012 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
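A bare-bones analogue of that PLS step is sketched below: metabolite features predicting one sensory score, evaluated by cross-validation. The matrix sizes echo the 24-sample study, but the values are random placeholders rather than GC/TOFMS or QDA data.

```python
# Sketch: PLS regression of a sensory attribute on metabolite features.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(24, 150))                            # 150 metabolite features
y = X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=24)      # one sensory attribute

pls = PLSRegression(n_components=3)
y_hat = cross_val_predict(pls, X, y, cv=6)
print(f"CV correlation: {np.corrcoef(y, y_hat.ravel())[0, 1]:.2f}")
```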
Cool, Steve; Victor, Jan; De Baets, Thierry
2006-12-01
Fifty unicompartmental knee arthroplasties (UKAs) were performed through a minimally invasive approach and were reviewed with an average follow-up of 3.7 years. This technique leads to reduced access to surgical landmarks. The purpose of this study was to evaluate whether correct component positioning is possible through this less invasive approach. Component positioning, femorotibial alignment and early outcomes were evaluated. We observed perfect tibial component position, but femoral component position was less consistent, especially in the sagittal plane. Femorotibial alignment in the coronal plane was within 2.5 degrees of the desired axis for 80% of the cases. Femoral component position in the sagittal plane was within a 10 degrees range of the ideal for 70% of the cases. The mean IKS Knee Function Score and Knee Score were 89/100 and 91/100 respectively. We observed two polyethylene dislocations, and one revision was performed for progressive patellofemoral arthrosis. According to our data, minimally invasive UKA does not conflict with component positioning although a learning curve needs to be respected, with femoral component positioning as the major obstacle.
Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch
2014-03-01
Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2001-01-01
Independent Component Analysis (ICA) is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. an exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. Unlike other rotation techniques (RT), this rotation uses no localization criterion; only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution seems able to overcome the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.
ERIC Educational Resources Information Center
Kim, Minkyung; Crossley, Scott A.; Kyle, Kristopher
2018-01-01
This study conceptualizes lexical sophistication as a multidimensional phenomenon by reducing numerous lexical features of lexical sophistication into 12 aggregated components (i.e., dimensions) via a principal component analysis approach. These components were then used to predict second language (L2) writing proficiency levels, holistic lexical…
Fracture and Failure at and Near Interfaces Under Pressure
1998-06-18
... realistic data for comparison with improved analytical results, and to 2) initiate a new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding ... tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability ...
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
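A compact form of the parallel analysis procedure characterized above: retain components whose sample covariance eigenvalues exceed the corresponding quantile of eigenvalues from random data of the same size. The data below are simulated with two real components, and matching the random reference data to the observed column variances is one of several possible conventions.

```python
# Sketch: Horn's parallel analysis on a covariance matrix with simulated data.
import numpy as np

def parallel_analysis(X, n_rand=200, quantile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    sd = X.std(axis=0, ddof=1)
    rand = np.empty((n_rand, p))
    for r in range(n_rand):
        Z = rng.normal(size=(n, p)) * sd                 # random data, matched variances
        rand[r] = np.sort(np.linalg.eigvalsh(np.cov(Z, rowvar=False)))[::-1]
    thresh = np.quantile(rand, quantile, axis=0)
    return int(np.sum(obs > thresh)), obs, thresh

rng = np.random.default_rng(8)
scores = rng.normal(size=(300, 2)) * [3.0, 2.0]          # two real latent components
loadings = rng.normal(size=(2, 10))
X = scores @ loadings + rng.normal(size=(300, 10))
k, obs, thresh = parallel_analysis(X)
print("components retained:", k)
```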
The promise of the state space approach to time series analysis for nursing research.
Levy, Janet A; Elser, Heather E; Knobel, Robin B
2012-01-01
Nursing research, particularly related to physiological development, often depends on the collection of time series data. The state space approach to time series analysis has great potential to answer exploratory questions relevant to physiological development but has not been used extensively in nursing. The aim of the study was to introduce the state space approach to time series analysis and demonstrate its potential applicability to neonatal monitoring and physiology. We present a set of univariate state space models, each describing a process that generates a variable of interest over time. Each model is presented algebraically, and a realization of the process is presented graphically from simulated data. This is followed by a discussion of how the model has been or may be used in two nursing projects on neonatal physiological development. The defining feature of the state space approach is the decomposition of the series into components that are functions of time; specifically, a slowly varying level, a faster varying periodic component, and an irregular component. State space models can potentially represent developmental processes in which a phenomenon emerges and disappears before stabilizing, in which the periodic component becomes more regular with time, or in which the developmental trajectory of a phenomenon is irregular. The ultimate contribution of this approach to nursing science will require close collaboration and cross-disciplinary education between nurses and statisticians.
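One concrete instance of such a decomposition is a local level plus periodic component model, sketched here with statsmodels; the "heart-rate-like" series is simulated, and the period and number of harmonics are arbitrary illustrative choices.

```python
# Sketch: unobserved-components (state space) model with level + periodic terms.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 500
level = np.cumsum(0.05 * rng.normal(size=n)) + 120.0        # slowly drifting level
cycle = 2.0 * np.sin(2 * np.pi * np.arange(n) / 50)         # periodic component
y = level + cycle + rng.normal(size=n)                      # irregular component

model = sm.tsa.UnobservedComponents(y, level="local level",
                                    freq_seasonal=[{"period": 50, "harmonics": 2}])
res = model.fit(disp=False)
smoothed_level = res.level.smoothed                          # estimated level over time
print(smoothed_level[:5])
```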
Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G
2012-01-01
Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component/instructional design model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
Analysis of Turbulent Boundary-Layer over Rough Surfaces with Application to Projectile Aerodynamics
1988-12-01
II. CLASSIFICATION OF PREDICTION METHODS: Prediction methods can be classified into two main approaches: 1) correlation methodologies ... data are available. V. APPLICATION IN COMPONENT BUILD-UP METHODOLOGIES, 1. COMPONENT BUILD-UP IN DRAG: The new correlation can be used for an engineering ...
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different groups of dimension-reduction methods are first used to obtain the subspace of the hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and the watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data set is tested. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.
The method of trend analysis of parameters time series of gas-turbine engine state
NASA Astrophysics Data System (ADS)
Hvozdeva, I.; Myrhorod, V.; Derenh, Y.
2017-10-01
This research substantiates an approach to interval estimation of the trend component of a time series. Well-known methods of spectral and trend analysis are used for multidimensional data arrays. The interval estimation of the trend component is proposed for time series whose autocorrelation matrix possesses a prevailing eigenvalue. The properties of the time series autocorrelation matrix are identified.
Ranking and averaging independent component analysis by reproducibility (RAICAR).
Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping
2008-06-01
Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
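The reproducibility idea can be illustrated in miniature (this is not the full RAICAR algorithm): run FastICA several times with different random seeds on the same data and check how well components match across runs by absolute correlation.

```python
# Sketch: cross-run reproducibility of ICA components via absolute correlation.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
S = np.vstack([np.sign(np.sin(np.linspace(0, 40, 1000))),
               rng.laplace(size=1000)])                     # two sources
A = rng.normal(size=(5, 2))
X = (A @ S).T + 0.05 * rng.normal(size=(1000, 5))           # observed mixtures

runs = [FastICA(n_components=2, random_state=seed).fit_transform(X)
        for seed in range(4)]

# Match every later run to the first one by maximum absolute correlation
reference = runs[0]
for k in range(2):
    matches = [max(abs(np.corrcoef(reference[:, k], run[:, j])[0, 1]) for j in range(2))
               for run in runs[1:]]
    print(f"component {k}: cross-run |r| = {[round(m, 2) for m in matches]}")
```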
Dynamic analysis for shuttle design verification
NASA Technical Reports Server (NTRS)
Fralich, R. W.; Green, C. E.; Rheinfurth, M. H.
1972-01-01
Two approaches that are used for determining the modes and frequencies of space shuttle structures are discussed. The first method, direct numerical analysis, involves finite element mathematical modeling of the space shuttle structure in order to use computer programs for dynamic structural analysis. The second method utilizes modal-coupling techniques of experimental verification made by vibrating only spacecraft components and by deducing modes and frequencies of the complete vehicle from results obtained in the component tests.
Harman, Elena; Azzam, Tarek
2018-02-01
This exploratory study examines a novel tool for validating program theory through crowdsourced qualitative analysis. It combines a quantitative pattern matching framework traditionally used in theory-driven evaluation with crowdsourcing to analyze qualitative interview data. A sample of crowdsourced participants are asked to read an interview transcript and identify whether program theory components (Activities and Outcomes) are discussed and to highlight the most relevant passage about that component. The findings indicate that using crowdsourcing to analyze qualitative data can differentiate between program theory components that are supported by a participant's experience and those that are not. This approach expands the range of tools available to validate program theory using qualitative data, thus strengthening the theory-driven approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gu, Qun; David, Frank; Lynen, Frédéric; Rumpel, Klaus; Dugardeyn, Jasper; Van Der Straeten, Dominique; Xu, Guowang; Sandra, Pat
2011-05-27
In this paper, automated sample preparation, retention time locked gas chromatography-mass spectrometry (GC-MS) and data analysis methods for the metabolomics study were evaluated. A miniaturized and automated derivatisation method using sequential oximation and silylation was applied to a polar extract of 4 types (2 types×2 ages) of Arabidopsis thaliana, a popular model organism often used in plant sciences and genetics. Automation of the derivatisation process offers excellent repeatability, and the time between sample preparation and analysis was short and constant, reducing artifact formation. Retention time locked (RTL) gas chromatography-mass spectrometry was used, resulting in reproducible retention times and GC-MS profiles. Two approaches were used for data analysis. XCMS followed by principal component analysis (approach 1) and AMDIS deconvolution combined with a commercially available program (Mass Profiler Professional) followed by principal component analysis (approach 2) were compared. Several features that were up- or down-regulated in the different types were detected. Copyright © 2011 Elsevier B.V. All rights reserved.
Efficient computational nonlinear dynamic analysis using modal modification response technique
NASA Astrophysics Data System (ADS)
Marinone, Timothy; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2012-08-01
Structural systems generally contain nonlinear characteristics. These nonlinear systems require significant computational resources for solution of the equations of motion. Much of the model, however, is linear, with the nonlinearity resulting from discrete local elements connecting different components together. Using a component mode synthesis approach, a nonlinear model can be developed by interconnecting these linear components with highly nonlinear connection elements. The approach presented in this paper, the Modal Modification Response Technique (MMRT), is a very efficient technique that has been created to address this specific class of nonlinear problem. By utilizing a Structural Dynamics Modification (SDM) approach in conjunction with mode superposition, a significantly smaller set of matrices is required for use in the direct integration of the equations of motion. The approach will be compared to traditional analytical approaches to demonstrate the usefulness of the technique for a variety of test cases.
ERIC Educational Resources Information Center
Dahmani, Hassen-Reda; Schneeberger, Patricia; Kramer, IJsbrand M.
2009-01-01
The number of experimentally derived structures of cellular components is rapidly expanding, and this phenomenon is accompanied by the development of a new semiotic system for teaching. The infographic approach is shifting from a schematic toward a more realistic representation of cellular components. By realistic we mean artist-prepared or…
Wang, Chun-Hua; Zhong, Yi; Zhang, Yan; Liu, Jin-Ping; Wang, Yue-Fei; Jia, Wei-Na; Wang, Guo-Cai; Li, Zheng; Zhu, Yan; Gao, Xiu-Mei
2016-02-01
Chinese medicine is known to treat complex diseases with multiple components and multiple targets. However, the main effective components and their related key targets and functions remain to be identified. Herein, a network analysis method was developed to identify the main effective components and key targets of a Chinese medicine, the Lianhua-Qingwen Formula (LQF). The LQF is commonly used for the prevention and treatment of viral influenza in China. It is composed of 11 herbs, gypsum, and menthol, with 61 compounds having been identified in our previous work. In this paper, these 61 candidate compounds were used to find their related targets and construct the predicted-target (PT) network. An influenza-related protein-protein interaction (PPI) network was constructed and integrated with the PT network. Then the compound-effective target (CET) network and the compound-ineffective target (CIT) network were extracted, respectively. A novel approach was developed to identify effective components by comparing the CET and CIT networks. As a result, 15 main effective components were identified along with 61 corresponding targets. Seven of these main effective components were further experimentally validated to have antiviral efficacy in vitro. The main effective component-target (MECT) network was further constructed with the main effective components and their key targets. Gene Ontology (GO) analysis of the MECT network predicted key functions such as NO production being modulated by the LQF. Interestingly, five effective components were experimentally tested and exhibited inhibitory effects on NO production in LPS-induced RAW 264.7 cells. In summary, we have developed a novel approach to identify the main effective components in the Chinese medicine LQF and experimentally validated some of the predictions.
Sparse principal component analysis in medical shape modeling
NASA Astrophysics Data System (ADS)
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
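A minimal illustration of the contrast the abstract draws between holistic PCA loadings and sparse loadings, using scikit-learn's SparsePCA on synthetic "shape" vectors; the paper's own algorithm is distributed as Matlab code, so this is only an analogous sketch with invented data.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Synthetic shape data: 100 shapes x 60 coordinates, with three localized modes of variation
rng = np.random.default_rng(1)
scores = rng.normal(size=(100, 3))
loadings = np.zeros((3, 60))
loadings[0, :20] = 1.0      # mode 1 moves only the first 20 coordinates
loadings[1, 20:40] = 1.0    # mode 2 moves the middle 20
loadings[2, 40:] = 1.0      # mode 3 moves the last 20
X = scores @ loadings + 0.1 * rng.normal(size=(100, 60))
X -= X.mean(axis=0)

pca = PCA(n_components=3).fit(X)
spca = SparsePCA(n_components=3, alpha=1.0, random_state=1).fit(X)

# PCA loadings involve every variable; SPCA loadings are mostly zero,
# so each sparse mode touches only a subset of the original coordinates.
print("nonzero loadings per PCA mode:   ", np.count_nonzero(pca.components_, axis=1))
print("nonzero loadings per sparse mode:", np.count_nonzero(spca.components_, axis=1))
```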
Millán, F; Gracia, S; Sánchez-Martín, F M; Angerri, O; Rousaud, F; Villavicencio, H
2011-03-01
The aim was to evaluate a new approach to urinary stone analysis based on the combination of stone components. A total of 7949 stones were analysed and their main components and combinations of components were classified according to gender and age. Statistical analysis was performed using the chi-square test. Calcium oxalate monohydrate (COM) was the most frequent component in both males (39%) and females (37.4%), followed by calcium oxalate dihydrate (COD) (28%) and uric acid (URI) (14.6%) in males and by phosphate (PHO) (22.2%) and COD (19.6%) in females (p=0.0001). In young people, COD and PHO were the most frequent components in males and females respectively (p=0.0001). In older patients, COM and URI (in that order) were the most frequent components in both genders (p=0.0001). COM is oxalate dependent and is related to diets with a high oxalate content and low water intake. The progressive increase in URI with age is related mainly to overweight and metabolic syndrome. Regarding the combinations of components, the most frequent were COM (26.3%), COD+Apatite (APA) (15.5%), URI (10%) and COM+COD (7.5%) (p=0.0001). This study reports not only the composition of stones but also the main combinations of components according to age and gender. The results show that stone composition is related to the changes in dietary habits and life-style that occur over a lifetime, and the morphological structure of stones is indicative of the aetiopathogenic mechanisms. Copyright © 2010 AEU. Published by Elsevier Espana. All rights reserved.
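The group comparisons reported above rest on the chi-square test of independence; a minimal sketch with made-up counts (not the study's 7949 stones) might look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative (made-up) counts of main stone component by gender
# rows: male, female; columns: COM, COD, URI, PHO
table = np.array([[2100, 1500, 780, 500],
                  [ 900,  470, 210, 530]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```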
Deflection Analysis of the Space Shuttle External Tank Door Drive Mechanism
NASA Technical Reports Server (NTRS)
Tosto, Michael A.; Trieu, Bo C.; Evernden, Brent A.; Hope, Drew J.; Wong, Kenneth A.; Lindberg, Robert E.
2008-01-01
Upon observing an abnormal closure of the Space Shuttle's External Tank Doors (ETD), a dynamic model was created in MSC/ADAMS to conduct deflection analyses of the Door Drive Mechanism (DDM). For a similar analysis, the traditional approach would be to construct a full finite element model of the mechanism. The purpose of this paper is to describe an alternative approach that models the flexibility of the DDM using a lumped parameter approximation to capture the compliance of individual parts within the drive linkage. This approach allows for rapid construction of a dynamic model in a time-critical setting, while still retaining the appropriate equivalent stiffness of each linkage component. As a validation of these equivalent stiffnesses, finite element analysis (FEA) was used to iteratively update the model towards convergence. Following this analysis, deflections recovered from the dynamic model can be used to calculate stress and classify each component's deformation as either elastic or plastic. Based on the modeling assumptions used in this analysis and the maximum input forcing condition, two components in the DDM show a factor of safety less than or equal to 0.5. However, to accurately evaluate the induced stresses, additional mechanism rigging information would be necessary to characterize the input forcing conditions. This information would also allow for the classification of stresses as either elastic or plastic.
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity in regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PCs) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment, using a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
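The cited tutorial is written for R; an analogous principal-component-regression sketch in Python, on a simulated dataset in which two PCs carry most of the variance, could look like the following (all values illustrative).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Simulated data with two correlated blocks of predictors (two underlying PCs)
rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 2))
X = np.hstack([latent[:, [0]] + 0.1 * rng.normal(size=(200, 3)),
               latent[:, [1]] + 0.1 * rng.normal(size=(200, 3))])
y = 2.0 * latent[:, 0] - 1.0 * latent[:, 1] + rng.normal(scale=0.5, size=200)

Xs = StandardScaler().fit_transform(X)
pca = PCA().fit(Xs)
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))

# Keep only the first two PCs (uncorrelated by construction) and regress on them
Z = pca.transform(Xs)[:, :2]
model = LinearRegression().fit(Z, y)
print("coefficients on PC1, PC2:", model.coef_.round(2))
```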
Gundogdu, Aycan; Nalbantoglu, Ufuk
2017-04-01
A short while ago, the human genome and microbiome were analysed simultaneously for the first time as a multi-omic approach. The analyses of heterogeneous population cohorts showed that microbiome components were associated with human genome variations. In-depth analysis of these results reveals that the majority of those relationships are between immune pathways and autoimmune disease-associated microbiome components. Thus, it can be hypothesized that autoimmunity may be associated with homeostatic disequilibrium of the human-microbiome interactome. Further analysis of human genome-human microbiome relationships in disease contexts with tailored systems biology approaches may yield insights into disease pathogenesis and prognosis.
Nalbantoglu, Ufuk
2017-01-01
A short while ago, the human genome and microbiome were analysed simultaneously for the first time as a multi-omic approach. The analyses of heterogeneous population cohorts showed that microbiome components were associated with human genome variations. In-depth analysis of these results reveals that the majority of those relationships are between immune pathways and autoimmune disease-associated microbiome components. Thus, it can be hypothesized that autoimmunity may be associated with homeostatic disequilibrium of the human-microbiome interactome. Further analysis of human genome–human microbiome relationships in disease contexts with tailored systems biology approaches may yield insights into disease pathogenesis and prognosis. PMID:28785422
Vergara, Victor M; Ulloa, Alvaro; Calhoun, Vince D; Boutte, David; Chen, Jiayu; Liu, Jingyu
2014-09-01
Multi-modal data analysis techniques, such as the Parallel Independent Component Analysis (pICA), are essential in neuroscience, medical imaging and genetic studies. The pICA algorithm allows the simultaneous decomposition of up to two data modalities, achieving better performance than separate ICA decompositions and enabling the discovery of links between modalities. However, advances in data acquisition techniques facilitate the collection of more than two data modalities from each subject. Examples of commonly measured modalities include genetic information, structural magnetic resonance imaging (MRI) and functional MRI. In order to take full advantage of the available data, this work extends the pICA approach to incorporate three modalities in one comprehensive analysis. Simulations demonstrate the three-way pICA performance in identifying pairwise links between modalities and estimating independent components which more closely resemble the true sources than components found by pICA or separate ICA analyses. In addition, the three-way pICA algorithm is applied to real experimental data obtained from a study that investigates genetic effects on alcohol dependence. Considered data modalities include functional MRI (contrast images during an alcohol exposure paradigm), gray matter concentration images from structural MRI and genetic single nucleotide polymorphism (SNP) data. The three-way pICA approach identified links between a SNP component (pointing to brain function and mental disorder associated genes, including BDNF, GRIN2B and NRG1), a functional component related to increased activation in the precuneus area, and a gray matter component comprising part of the default mode network and the caudate. Although such findings need further verification, the simulation and in-vivo results validate the three-way pICA algorithm presented here as a useful tool in biomedical data fusion applications. Copyright © 2014 Elsevier Inc. All rights reserved.
The economics of project analysis: Optimal investment criteria and methods of study
NASA Technical Reports Server (NTRS)
Scriven, M. C.
1979-01-01
Insight is provided toward the development of an optimal program for investment analysis of project proposals offering commercial potential and its components. This involves a critique of economic investment criteria viewed in relation to requirements of engineering economy analysis. An outline for a systems approach to project analysis is given. Application of the Leontief input-output methodology to the analysis of projects involving multiple processes and products is investigated. Effective application of elements of neoclassical economic theory to investment analysis of project components is demonstrated. Patterns of both static and dynamic activity levels are incorporated.
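The Leontief input-output step mentioned above amounts to solving x = (I - A)^(-1) d for gross output x given final demand d; a small numerical sketch with a hypothetical three-sector coefficient matrix:

```python
import numpy as np

# Hypothetical 3-sector technical coefficients matrix A: A[i, j] is the input
# from sector i needed per unit of output of sector j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d = np.array([100.0, 50.0, 200.0])          # final demand by sector

x = np.linalg.solve(np.eye(3) - A, d)       # gross output: x = (I - A)^{-1} d
print(x.round(1))
```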
Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.
2005-01-01
A number of important questions in ecology involve the possibility of interactions or "coupling" among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as a linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.
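A rough illustration of the attractor-based idea, assuming a delay embedding and nearest-neighbor prediction of one series from the neighbor structure of the other; this is only a schematic stand-in for the continuity and mutual-prediction statistics used in the paper, and the coupled-map example is invented.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def embed(x, dim=3, tau=1):
    """Delay-embed a scalar series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def prediction_skill(x, y, dim=3, tau=1, k=5):
    """Predict y using the times of x's nearest neighbors; high skill suggests interdependence."""
    Ex, Ey = embed(x, dim, tau), embed(y, dim, tau)
    n = min(len(Ex), len(Ey))
    Ex, Ey = Ex[:n], Ey[:n]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Ex)
    _, idx = nn.kneighbors(Ex)
    y_hat = Ey[idx[:, 1:], -1].mean(axis=1)        # average y over x's neighbor times (skip self)
    return np.corrcoef(y_hat, Ey[:, -1])[0, 1]

# Two coupled logistic maps (x influences y); an uncoupled pair would score near zero
x = np.empty(1000)
y = np.empty(1000)
x[0], y[0] = 0.4, 0.2
for t in range(999):
    x[t + 1] = 3.8 * x[t] * (1 - x[t])
    y[t + 1] = 0.8 * (3.7 * y[t] * (1 - y[t])) + 0.2 * (3.8 * x[t] * (1 - x[t]))
print("prediction skill:", round(prediction_skill(x, y), 2))
```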
Vibration signature analysis of multistage gear transmission
NASA Technical Reports Server (NTRS)
Choy, F. K.; Tu, Y. K.; Savage, M.; Townsend, D. P.
1989-01-01
An analysis is presented for multistage multimesh gear transmission systems. The analysis predicts the overall system dynamics and the transmissibility to the gear box or the enclosed structure. The modal synthesis approach of the analysis treats the uncoupled lateral/torsional modal characteristics of each stage or component independently. The vibration signature analysis evaluates the global dynamic coupling in the system. The method synthesizes the interaction of each modal component or stage with the nonlinear gear mesh dynamics and the modal support geometry characteristics. The analysis simulates transient and steady state vibration events to determine the resulting torque variations, speed changes, rotor imbalances, and support gear box motion excitations. A vibration signature analysis examines the overall dynamic characteristics of the system and the individual modal component responses. The gear box vibration analysis also examines the spectral characteristics of the support system.
Dynamic analysis methods for detecting anomalies in asynchronously interacting systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Akshat; Solis, John Hector; Matschke, Benjamin
2014-01-01
Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We develop upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
Xu, J; Durand, L G; Pibarot, P
2001-03-01
The objective of this paper is to adapt and validate a nonlinear transient chirp signal modeling approach for the analysis and synthesis of overlapping aortic (A2) and pulmonary (P2) components of the second heart sound (S2). The approach is based on the time-frequency representation of multicomponent signals for estimating and reconstructing the instantaneous phase and amplitude functions of each component. To evaluate the accuracy of the approach, a simulated S2 with A2 and P2 components having different overlapping intervals (5-30 ms) was synthesized. The simulation results show that the technique is very effective for extracting the two components, even in the presence of noise (-15 dB). The normalized root-mean-squared error between the original A2 and P2 components and their reconstructed versions varied between 1% and 6%, in proportion to the duration of the overlapping interval, and it increased by less than 2% in the presence of noise. The validated technique was then applied to S2 components recorded in pigs under normal or high pulmonary artery pressures. The results show that this approach can successfully isolate and extract overlapping A2 and P2 components from successive S2 recordings obtained from different heartbeats of the same animal as well as from different animals.
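For orientation, the following sketch synthesizes two overlapping, decaying chirps as stand-ins for A2 and P2 and inspects their time-frequency content with a spectrogram; it does not reproduce the paper's phase and amplitude reconstruction, and the frequencies, decay rates, and 15 ms delay are assumed values.

```python
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 4000                                  # assumed sampling rate (Hz)
t = np.arange(0, 0.08, 1 / fs)             # an 80 ms window around S2

# Two decaying, frequency-falling chirps standing in for A2 and P2; P2 starts ~15 ms later
a2 = np.exp(-40 * t) * chirp(t, f0=60, f1=30, t1=0.08, method="linear")
p2 = np.zeros_like(t)
late = t >= 0.015
p2[late] = np.exp(-40 * (t[late] - 0.015)) * chirp(t[late] - 0.015, f0=50, f1=25, t1=0.065)
s2 = a2 + p2 + 0.01 * np.random.default_rng(4).normal(size=t.size)

# Time-frequency representation: the two component ridges overlap in time but differ in frequency
f, tt, Sxx = spectrogram(s2, fs=fs, nperseg=64, noverlap=56)
print(Sxx.shape)   # frequency bins x time frames
```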
Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš
2016-01-01
Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis-related components and tumor-suppressor genes, suggesting that this combinatorial perturbation may lead to a better target for decreasing cell proliferation and inducing apoptosis. Finally, our approach shows a potential to identify and prioritize therapeutic targets through systemic perturbation analysis of large-scale computational models of signal transduction. Although some components of the presented computational results have been validated against independent gene expression data sets, more laboratory experiments are warranted to more comprehensively validate the presented results. PMID:26904540
Planning for Cost Effectiveness.
ERIC Educational Resources Information Center
Schlaebitz, William D.
1984-01-01
A heat pump life-cycle cost analysis is used to explain the technique. Items suggested for the life-cycle analysis approach include lighting, longer-life batteries, site maintenance, and retaining experts to inspect specific building components. (MLF)
Advanced methods in NDE using machine learning approaches
NASA Astrophysics Data System (ADS)
Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank
2018-04-01
Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. The goal of ML, to build new algorithms or leverage existing ones that learn from training data and give accurate predictions or find patterns, particularly in new and unseen but similar data, fits perfectly with Non-Destructive Evaluation. The advantages of ML in NDE are obvious in such tasks as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis, and the approach has been applied to a wide variety of quality assessment tasks. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. The sensor signals from unknown samples can then be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). Algorithms are still trained on an ordinary PC; however, the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can then be done by the machine. By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
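A minimal sketch of the two-stage scheme described above (unsupervised compression followed by a supervised classifier), with synthetic stand-ins for acoustic features; the feature dimensions and class shift are assumptions, not IKTS data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for acoustic features (e.g., spectral bins) of good vs. defective parts
rng = np.random.default_rng(5)
good = rng.normal(0.0, 1.0, size=(150, 200))
bad = rng.normal(0.3, 1.2, size=(150, 200))
X = np.vstack([good, bad])
y = np.array([0] * 150 + [1] * 150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5, stratify=y)

# Unsupervised compression (PCA) followed by a supervised classifier, as in the two-stage scheme
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", round(clf.score(X_te, y_te), 2))
```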
Image edge detection based tool condition monitoring with morphological component analysis.
Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng
2017-07-01
The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
The chronnectome: time-varying connectivity networks as the next frontier in fMRI data discovery.
Calhoun, Vince D; Miller, Robyn; Pearlson, Godfrey; Adalı, Tulay
2014-10-22
Recent years have witnessed a rapid growth of interest in moving functional magnetic resonance imaging (fMRI) beyond simple scan-length averages and into approaches that capture time-varying properties of connectivity. In this Perspective we use the term "chronnectome" to describe metrics that allow a dynamic view of coupling. In the chronnectome, coupling refers to possibly time-varying levels of correlated or mutually informed activity between brain regions whose spatial properties may also be temporally evolving. We primarily focus on multivariate approaches developed in our group and review a number of approaches with an emphasis on matrix decompositions such as principal component analysis and independent component analysis. We also discuss the potential these approaches offer to improve characterization and understanding of brain function. There are a number of methodological directions that need to be developed further, but chronnectome approaches already show great promise for the study of both the healthy and the diseased brain.
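One simple entry point to time-varying connectivity, not specific to the decompositions reviewed above, is sliding-window correlation; a minimal sketch on synthetic data with a coupling change halfway through the run:

```python
import numpy as np

def sliding_window_connectivity(ts, win=30, step=5):
    """Correlation matrices over sliding windows; ts is time x regions."""
    mats = []
    for start in range(0, ts.shape[0] - win + 1, step):
        mats.append(np.corrcoef(ts[start:start + win].T))
    return np.array(mats)                       # windows x regions x regions

# Synthetic example: 3 regions, coupling between regions 0 and 1 switches on halfway through
rng = np.random.default_rng(6)
T = 300
ts = rng.normal(size=(T, 3))
ts[T // 2:, 1] += 0.8 * ts[T // 2:, 0]          # second half: region 1 follows region 0

dyn = sliding_window_connectivity(ts)
print("early 0-1 coupling:", round(dyn[:5, 0, 1].mean(), 2))
print("late  0-1 coupling:", round(dyn[-5:, 0, 1].mean(), 2))
```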
Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina
2010-07-02
This study was designed to demonstrate robust performance of the novel dependent component analysis (DCA)-based approach to demarcation of the basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. This similarity generates weak edges, which represent a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios where the intensity of the fluorescent image was varied over almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.
Li, Hong Zhi; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min
2011-01-01
We propose a generalized regression neural network (GRNN) approach based on grey relational analysis (GRA) and principal component analysis (PCA) (GP-GRNN) to improve the accuracy of density functional theory (DFT) calculation for homolysis bond dissociation energies (BDE) of the Y-NO bond. As a demonstration, this combined quantum chemistry calculation with the GP-GRNN approach has been applied to evaluate the homolysis BDE of 92 Y-NO organic molecules. The results show that the full-descriptor GRNN without GRA and PCA (F-GRNN) and with GRA (G-GRNN) approaches reduce the root-mean-square (RMS) error of the calculated homolysis BDE of 92 organic molecules from 5.31 to 0.49 and 0.39 kcal mol(-1) for the B3LYP/6-31G (d) calculation. Then the newly developed GP-GRNN approach further reduces the RMS error to 0.31 kcal mol(-1). Thus, the GP-GRNN correction on top of B3LYP/6-31G (d) can improve the accuracy of calculating the homolysis BDE in quantum chemistry and can predict homolysis BDE which cannot be obtained experimentally.
Spectral and Geological Characterization of Beach Components in Northern Puerto Rico
NASA Astrophysics Data System (ADS)
Caraballo Álvarez, I. O.; Torres-Perez, J. L.; Barreto, M.
2015-12-01
Understanding how changes in beach components may reflect beach processes is essential since variations along beach profiles can shed light on river and ocean processes influencing beach sedimentation and beachrock formation. It is likely these influences are related to beach proximity to the Río Grande de Manatí river mouth. Therefore, this study focuses on characterizing beach components at two sites in Manatí, Puerto Rico. Playa Machuca and Playa Tombolo, which are separated by eolianites, differ greatly in sediment size, mineralogy, and beachrock morphology. Several approaches were taken to geologically and spectrally characterize main beach components at each site. These approaches included field and microscopic laboratory identification, granulometry, and a comparison between remote sensing reflectance (Rrs) obtained with a field spectroradiometer and pre-existing spectral library signatures. Preliminary results indicate a positive correlation between each method. This study may help explore the possibility of using only Rrs to characterize beach and shallow submarine components for detailed image analysis and management of coastal features.
Joint independent component analysis for simultaneous EEG-fMRI: principle and simulation.
Moosmann, Matthias; Eichele, Tom; Nordby, Helge; Hugdahl, Kenneth; Calhoun, Vince D
2008-03-01
An optimized scheme for the fusion of electroencephalography and event related potentials with functional magnetic resonance imaging (BOLD-fMRI) data should simultaneously assess all available electrophysiologic and hemodynamic information in a common data space. In doing so, it should be possible to identify features of latent neural sources whose trial-to-trial dynamics are jointly reflected in both modalities. We present a joint independent component analysis (jICA) model for analysis of simultaneous single trial EEG-fMRI measurements from multiple subjects. We outline the general idea underlying the jICA approach and present results from simulated data under realistic noise conditions. Our results indicate that this approach is a feasible and physiologically plausible data-driven way to achieve spatiotemporal mapping of event related responses in the human brain.
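The jICA idea of assessing both modalities in a common data space can be caricatured by concatenating the per-trial EEG and fMRI feature vectors and running a single ICA, so that each estimated source carries a linked EEG part and fMRI part. The following sketch uses scikit-learn's FastICA on synthetic data and is only an approximation of the model described above; all dimensions and data are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic joint data: 50 trials, an EEG feature vector (100 points) and an fMRI feature vector (400 voxels)
rng = np.random.default_rng(7)
n_trials, n_eeg, n_fmri = 50, 100, 400
amplitudes = rng.normal(size=(n_trials, 3))               # shared trial-to-trial amplitudes
eeg_maps = rng.normal(size=(3, n_eeg))
fmri_maps = rng.normal(size=(3, n_fmri))
eeg = amplitudes @ eeg_maps + 0.1 * rng.normal(size=(n_trials, n_eeg))
fmri = amplitudes @ fmri_maps + 0.1 * rng.normal(size=(n_trials, n_fmri))

# Joint ICA: concatenate the two modalities feature-wise and decompose them together
joint = np.hstack([eeg, fmri])
ica = FastICA(n_components=3, random_state=7, max_iter=1000)
joint_maps = ica.fit_transform(joint.T)        # (features x components): linked EEG + fMRI maps
trial_loadings = ica.mixing_                   # (trials x components): shared trial-to-trial weights
eeg_part, fmri_part = joint_maps[:n_eeg], joint_maps[n_eeg:]
print(eeg_part.shape, fmri_part.shape, trial_loadings.shape)
```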
A distributed finite-element modeling and control approach for large flexible structures
NASA Technical Reports Server (NTRS)
Young, K. D.
1989-01-01
An unconventional framework is described for the design of decentralized controllers for large flexible structures. In contrast to conventional control system design practice, which begins with a model of the open-loop plant, the controlled plant is assembled from controlled components in which the modeling phase and the control design phase are integrated at the component level. The developed framework is called controlled component synthesis (CCS) to reflect that it is motivated by the well-developed Component Mode Synthesis (CMS) methods, which have been demonstrated to be effective for solving large, complex structural analysis problems for almost three decades. The design philosophy behind CCS is also closely related to that of the subsystem decomposition approach in decentralized control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Timbie, Peter T.; Bunn, Emory F.
In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
The Potential of Multivariate Analysis in Assessing Students' Attitude to Curriculum Subjects
ERIC Educational Resources Information Center
Gaotlhobogwe, Michael; Laugharne, Janet; Durance, Isabelle
2011-01-01
Background: Understanding student attitudes to curriculum subjects is central to providing evidence-based options to policy makers in education. Purpose: We illustrate how quantitative approaches used in the social sciences and based on multivariate analysis (categorical Principal Components Analysis, Clustering Analysis and General Linear…
Using qualitative comparative analysis in a systematic review of a complex intervention.
Kahwati, Leila; Jacobs, Sara; Kane, Heather; Lewis, Megan; Viswanathan, Meera; Golin, Carol E
2016-05-04
Systematic reviews evaluating complex interventions often encounter substantial clinical heterogeneity in intervention components and implementation features making synthesis challenging. Qualitative comparative analysis (QCA) is a non-probabilistic method that uses mathematical set theory to study complex phenomena; it has been proposed as a potential method to complement traditional evidence synthesis in reviews of complex interventions to identify key intervention components or implementation features that might explain effectiveness or ineffectiveness. The objective of this study was to describe our approach in detail and examine the suitability of using QCA within the context of a systematic review. We used data from a completed systematic review of behavioral interventions to improve medication adherence to conduct two substantive analyses using QCA. The first analysis sought to identify combinations of nine behavior change techniques/components (BCTs) found among effective interventions, and the second analysis sought to identify combinations of five implementation features (e.g., agent, target, mode, time span, exposure) found among effective interventions. For each substantive analysis, we reframed the review's research questions to be designed for use with QCA, calibrated sets (i.e., transformed raw data into data used in analysis), and identified the necessary and/or sufficient combinations of BCTs and implementation features found in effective interventions. Our application of QCA for each substantive analysis is described in detail. We extended the original review findings by identifying seven combinations of BCTs and four combinations of implementation features that were sufficient for improving adherence. We found reasonable alignment between several systematic review steps and processes used in QCA except that typical approaches to study abstraction for some intervention components and features did not support a robust calibration for QCA. QCA was suitable for use within a systematic review of medication adherence interventions and offered insights beyond the single dimension stratifications used in the original completed review. Future prospective use of QCA during a review is needed to determine the optimal way to efficiently integrate QCA into existing approaches to evidence synthesis of complex interventions.
Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A
2006-01-01
Background: Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results: The preliminary results described here are very promising and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion: We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of an underlying classification problem itself. PMID:17118151
Estimating the number of pure chemical components in a mixture by X-ray absorption spectroscopy.
Manceau, Alain; Marcus, Matthew; Lenoir, Thomas
2014-09-01
Principal component analysis (PCA) is a multivariate data analysis approach commonly used in X-ray absorption spectroscopy to estimate the number of pure compounds in multicomponent mixtures. This approach seeks to describe a large number of multicomponent spectra as weighted sums of a smaller number of component spectra. These component spectra are in turn considered to be linear combinations of the spectra from the actual species present in the system from which the experimental spectra were taken. The dimension of the experimental dataset is given by the number of meaningful abstract components, as estimated by the cascade or variance of the eigenvalues (EVs), the factor indicator function (IND), or the F-test on reduced EVs. It is shown on synthetic and real spectral mixtures that the performance of the IND and F-test critically depends on the amount of noise in the data, and may result in considerable underestimation or overestimation of the number of components even for a signal-to-noise (s/n) ratio of the order of 80 (σ = 20) in a XANES dataset. For a given s/n ratio, the accuracy of the component recovery from a random mixture depends on the size of the dataset and number of components, which is not known in advance, and deteriorates for larger datasets because the analysis picks up more noise components. The scree plot of the EVs for the components yields one or two values close to the significant number of components, but the result can be ambiguous and its uncertainty is unknown. A new estimator, NSS-stat, which includes the experimental error to XANES data analysis, is introduced and tested. It is shown that NSS-stat produces superior results compared with the three traditional forms of PCA-based component-number estimation. A graphical user-friendly interface for the calculation of EVs, IND, F-test and NSS-stat from a XANES dataset has been developed under LabVIEW for Windows and is supplied in the supporting information. Its possible application to EXAFS data is discussed, and several XANES and EXAFS datasets are also included for download.
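A hedged sketch of the eigenvalue-based estimators discussed above: the eigenvalue cascade and a Malinowski-style factor indicator function IND computed from a synthetic mixture dataset. The NSS-stat estimator introduced in the paper is not reproduced here, and the data matrix is invented.

```python
import numpy as np

def indicator_function(D):
    """Eigenvalue cascade and Malinowski-style factor indicator function IND
    for an r x c data matrix D (rows: spectra, columns: energy points), r >= c."""
    r, c = D.shape
    eig = np.linalg.eigvalsh(D.T @ D)[::-1]            # eigenvalues, largest first
    ind = []
    for n in range(1, c):                              # candidate numbers of components
        re = np.sqrt(eig[n:].sum() / (r * (c - n)))    # real error from discarded eigenvalues
        ind.append(re / (c - n) ** 2)
    return eig, np.array(ind)

# Synthetic mixture dataset: 40 spectra that are combinations of 3 pure components, plus noise.
# For clean data the IND minimum falls at the true number of components; with more noise it
# can over- or underestimate, as noted in the abstract above.
rng = np.random.default_rng(8)
pure = rng.normal(size=(3, 30))
weights = rng.random(size=(40, 3))
D = weights @ pure + 0.02 * rng.normal(size=(40, 30))

eig, ind = indicator_function(D)
print("estimated number of components (IND minimum):", int(np.argmin(ind)) + 1)
```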
Previdelli, Ágatha Nogueira; de Andrade, Samantha Caesar; Fisberg, Regina Mara; Marchioni, Dirce Maria
2016-09-23
The use of dietary patterns to assess dietary intake has become increasingly common in nutritional epidemiology studies due to the complexity and multidimensionality of the diet. Currently, two main approaches have been widely used to assess dietary patterns: data-driven and hypothesis-driven analysis. Since the methods explore different angles of dietary intake, using both approaches simultaneously might yield complementary and useful information; thus, we aimed to use both approaches to gain knowledge of adolescents' dietary patterns. Food intake from a cross-sectional survey with 295 adolescents was assessed by 24 h dietary recall (24HR). In the hypothesis-driven analysis, based on the American National Cancer Institute method, the usual intake of Brazilian Healthy Eating Index Revised components was estimated. In the data-driven approach, the usual intake of foods/food groups was estimated by the Multiple Source Method. In the results, the hypothesis-driven analysis showed low scores for Whole grains, Total vegetables, Total fruit and Whole fruits, while, in the data-driven analysis, fruits and whole grains were not present in any pattern. High intakes of sodium, fats and sugars were observed in the hypothesis-driven analysis, with low total scores for the Sodium, Saturated fat and SoFAA (calories from solid fat, alcohol and added sugar) components; in agreement, the data-driven approach showed the intake of several foods/food groups rich in these nutrients, such as butter/margarine, cookies, chocolate powder, whole milk, cheese, processed meat/cold cuts and candies. In this study, using both approaches at the same time provided consistent and complementary information for assessing overall dietary habits, which will be important for driving public health programs and improving their efficiency in monitoring and evaluating the dietary patterns of populations.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
78 FR 70398 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-25
... System Evaluation-Related Interview Data Collection. OMB Control Number: 2130-0574. Type of Request... approaches to improving safety, FRA has instituted the Confidential Close Call Reporting System (C\\3\\RS). The... reporting component, and a problem analysis/solution component. C\\3\\RS is expected to affect safety in two...
Enhancing the Impact of Quality Points in Interteaching
ERIC Educational Resources Information Center
Rosales, Rocío; Soldner, James L.; Crimando, William
2014-01-01
Interteaching is a classroom instruction approach based on behavioral principles that offers increased flexibility to instructors. There are several components of interteaching that may contribute to its demonstrated efficacy. In a prior analysis of one of these components, the quality points contingency, no significant difference was reported in…
Multivariate Statistical Analysis of MSL APXS Bulk Geochemical Data
NASA Astrophysics Data System (ADS)
Hamilton, V. E.; Edwards, C. S.; Thompson, L. M.; Schmidt, M. E.
2014-12-01
We apply cluster and factor analyses to bulk chemical data of 130 soil and rock samples measured by the Alpha Particle X-ray Spectrometer (APXS) on the Mars Science Laboratory (MSL) rover Curiosity through sol 650. Multivariate approaches such as principal components analysis (PCA), cluster analysis, and factor analysis complement more traditional approaches (e.g., Harker diagrams), with the advantage of simultaneously examining the relationships between multiple variables for large numbers of samples. Principal components analysis has been applied with success to APXS, Pancam, and Mössbauer data from the Mars Exploration Rovers. Factor analysis and cluster analysis have been applied with success to thermal infrared (TIR) spectral data of Mars. Cluster analyses group the input data by similarity, where there are a number of different methods for defining similarity (hierarchical, density, distribution, etc.). For example, without any assumptions about the chemical contributions of surface dust, preliminary hierarchical and K-means cluster analyses clearly distinguish the physically adjacent rock targets Windjana and Stephen as being distinctly different from lithologies observed prior to Curiosity's arrival at The Kimberley. In addition, they are separated from each other, consistent with chemical trends observed in variation diagrams but without requiring assumptions about chemical relationships. We will discuss the variation in cluster analysis results as a function of clustering method and pre-processing (e.g., log transformation, correction for dust cover) and implications for interpreting chemical data. Factor analysis shares some similarities with PCA, and examines the variability among observed components of a dataset so as to reveal variations attributable to unobserved components. Factor analysis has been used to extract the TIR spectra of components that are typically observed in mixtures and only rarely in isolation; there is the potential for similar results with data from APXS. These techniques offer new ways to understand the chemical relationships between the materials interrogated by Curiosity, and potentially their relation to materials observed by APXS instruments on other landed missions.
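A minimal k-means sketch of the kind of bulk-chemistry clustering described above; the oxide table is invented for illustration and does not contain actual APXS measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical bulk-chemistry table: rows are targets, columns are oxide wt% (illustrative values)
oxides = ["SiO2", "FeO", "MgO", "CaO", "K2O"]
data = np.array([
    [45.0, 19.0, 8.5, 6.0, 0.5],   # basaltic soil-like
    [46.0, 18.0, 8.0, 6.5, 0.6],
    [52.0, 12.0, 4.0, 5.0, 3.0],   # alkali-rich rock-like
    [51.0, 13.0, 4.5, 5.5, 2.8],
    [44.0, 25.0, 6.0, 4.0, 0.3],   # Fe-rich end member
])

X = StandardScaler().fit_transform(data)     # standardize so no single oxide dominates the distance
labels = KMeans(n_clusters=3, n_init=10, random_state=9).fit_predict(X)
print(dict(zip(range(len(data)), labels)))
```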
Feng, Xiao-Liang; He, Yun-biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei
2013-01-01
Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by water distillation, the volatile components of Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. 68 of 87 separated constituents in the total ion chromatogram of the volatile components were identified and quantified, accounting for about 87.03% of the total content. Then, the common peaks in leaf were extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This is the first time the orthogonal projection method has been applied to compare different plant parts of Agrimonia eupatoria, and it reduced both the burden of qualitative analysis and its subjectivity. The obtained results show that the combined approach is powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria. PMID:24286016
Feng, Xiao-Liang; He, Yun-Biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei
2013-01-01
Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by water distillation, the volatile components of Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. 68 of 87 separated constituents in the total ion chromatogram of the volatile components were identified and quantified, accounting for about 87.03% of the total content. Then, the common peaks in leaf were extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This is the first time the orthogonal projection method has been applied to compare different plant parts of Agrimonia eupatoria, and it reduced both the burden of qualitative analysis and its subjectivity. The obtained results show that the combined approach is powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria.
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N.; Bolduc, Sean; Harman, Rebecca
2017-01-01
A composite fuselage aircraft forward section was inspected with flash thermography. The fuselage section is 24 feet long and approximately 8 feet in diameter. The structure is primarily a composite sandwich of carbon fiber face sheets with a Nomex(Trademark) honeycomb core. The outer surface area was inspected. The thermal data consisted of 477 data sets totaling over 227 gigabytes. Principal component analysis (PCA) was used to process the data sets for substructure and defect detection. A fixed eigenvector approach using a global covariance matrix was used and compared to a varying eigenvector approach. The fixed eigenvector approach was demonstrated to be a practical analysis method for the detection and interpretation of various defects such as paint thickness variation, possible water intrusion damage, and delamination damage. In addition, inspection considerations are discussed, including coordinate system layout, manipulation of the fuselage section, and the manual scanning technique used for full coverage.
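A sketch of the fixed-eigenvector idea: estimate one global covariance from the pooled thermal sequences, then project every data set onto the same eigenvectors so the resulting component images are comparable across scans. The array sizes and data here are placeholders, not the inspection data described above.

```python
import numpy as np

# Hypothetical stack of flash-thermography scans: each is (frames x pixels), same frame count
rng = np.random.default_rng(10)
scans = [rng.normal(size=(60, 5000)) for _ in range(4)]     # 4 scans, 60 frames, 5000 pixels each

# Global covariance over time: pool the pixel time histories from every scan
pooled = np.hstack(scans)                                    # frames x (all pixels)
pooled = pooled - pooled.mean(axis=1, keepdims=True)
cov = pooled @ pooled.T / (pooled.shape[1] - 1)              # frames x frames covariance
evals, evecs = np.linalg.eigh(cov)
fixed_basis = evecs[:, ::-1][:, :5]                          # the same top-5 eigenvectors for every scan

# Project each scan onto the fixed basis: component images are directly comparable between scans
component_images = [fixed_basis.T @ (s - s.mean(axis=1, keepdims=True)) for s in scans]
print(component_images[0].shape)                             # components x pixels
```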
Review of fatigue and fracture research at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Everett, Richard A., Jr.
1988-01-01
Most dynamic components in helicopters are designed with a safe-life constant-amplitude testing approach that has not changed in many years. In contrast, the fatigue methodology in other industries has advanced significantly in the last two decades. Recent research at the NASA Langley Research Center and the U.S. Army Aerostructures Directorate at Langley is reviewed relative to fatigue and fracture design methodology for metallic components. Most of the Langley research was directed towards the damage tolerance design approach, but some work was done that is applicable to the safe-life approach. In the area of testing, damage tolerance concepts are concentrating on the small-crack effect in crack growth and measurement of crack opening stresses. Tests were conducted to determine the effects of a machining scratch on the fatigue life of a high strength steel. In the area of analysis, work was concentrated on developing a crack closure model that will predict fatigue life under spectrum loading for several different metal alloys, including a high strength steel that is often used in the dynamic components of helicopters. Work is also continuing in developing a three-dimensional, finite-element stress analysis for cracked and uncracked isotropic and anisotropic structures. A numerical technique for solving simultaneous equations called the multigrid method is being pursued to enhance the solution schemes in both the finite-element analysis and the boundary element analysis. Finally, a fracture mechanics project involving an elastic-plastic finite element analysis of the J-resistance curve is also being pursued.
Big-Data RHEED analysis for understanding epitaxial film growth processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasudevan, Rama K; Tselev, Alexander; Baddorf, Arthur P
Reflection high energy electron diffraction (RHEED) has by now become a standard tool for in-situ monitoring of film growth by pulsed laser deposition and molecular beam epitaxy. Yet despite the widespread adoption and wealth of information in RHEED images, most applications are limited to observing intensity oscillations of the specular spot, and much additional information on growth is discarded. With ease of data acquisition and increased computation speeds, statistical methods to rapidly mine the dataset are now feasible. Here, we develop such an approach to the analysis of the fundamental growth processes through multivariate statistical analysis of a RHEED image sequence. This approach is illustrated for growth of La(x)Ca(1-x)MnO(3) films grown on etched (001) SrTiO(3) substrates, but is universal. The multivariate methods including principal component analysis and k-means clustering provide insight into the relevant behaviors, the timing and nature of a disordered to ordered growth change, and highlight statistically significant patterns. Fourier analysis yields the harmonic components of the signal and allows separation of the relevant components and baselines, isolating the asymmetric nature of the step density function and the transmission spots from the imperfect layer-by-layer (LBL) growth. These studies show the promise of big data approaches to obtaining more insight into film properties during and after epitaxial film growth. Furthermore, these studies open the pathway to use forward prediction methods to potentially allow significantly more control over growth process and hence final film quality.
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
Kharroubi, Adel; Gargouri, Dorra; Baati, Houda; Azri, Chafai
2012-06-01
Concentrations of selected heavy metals (Cd, Pb, Zn, Cu, Mn, and Fe) in surface sediments from 66 sites in both northern and eastern Mediterranean Sea-Boughrara lagoon exchange areas (southeastern Tunisia) were studied in order to understand current metal contamination due to the urbanization and economic development of several nearby coastal regions of the Gulf of Gabès. Multiple approaches were applied for the sediment quality assessment. These approaches were based on GIS coupled with chemometric methods (enrichment factors, geoaccumulation index, principal component analysis, and cluster analysis). Enrichment factors and principal component analysis revealed two distinct groups of metals. The first group corresponded to Fe and Mn derived from natural sources, and the second group contained Cd, Pb, Zn, and Cu originating from man-made sources. For these latter metals, cluster analysis showed two distinct distributions in the selected areas. They were attributed to temporal and spatial variations of contaminant source inputs. The geoaccumulation index (I(geo)) values indicated that only Cd, Pb, and Cu can be considered as moderate to extreme pollutants in the studied sediments.
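For reference, the standard formulas behind two of the chemometric indices mentioned above are easy to state. The sketch below computes the enrichment factor (with Fe as the reference element) and the geoaccumulation index; the background concentrations are placeholders, not the reference values used for the Gulf of Gabès sediments.

```python
# Hedged sketch of the standard enrichment-factor (EF) and geoaccumulation-index
# (Igeo) formulas used in studies like this one; background values below are
# placeholders, not the Tunisian reference values used by the authors.
import numpy as np

background = {"Cd": 0.3, "Pb": 20.0, "Zn": 95.0, "Cu": 45.0, "Fe": 47200.0}  # hypothetical background values (mg/kg)

def enrichment_factor(c_metal, c_fe, metal):
    # EF = (C_metal / C_Fe)_sample / (C_metal / C_Fe)_background, with Fe as reference element
    return (c_metal / c_fe) / (background[metal] / background["Fe"])

def igeo(c_metal, metal):
    # Igeo = log2( C_n / (1.5 * B_n) ); the factor 1.5 absorbs lithogenic variability
    return np.log2(c_metal / (1.5 * background[metal]))

print(enrichment_factor(c_metal=1.2, c_fe=30000.0, metal="Cd"))
print(igeo(c_metal=1.2, metal="Cd"))
```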
Schaper, M M; Detwiler-Okabayashi, K A
1995-01-01
Recently, the sensory and pulmonary irritating properties of ten metalworking fluids (MWF) were assessed using a mouse bioassay. Relative potency of the MWFs was estimated, but it was not possible to identify the component(s) responsible for the respiratory irritation induced by each MWF. One of the ten fluids, MWF "E", produced sensory and pulmonary irritation in mice, and it was of moderate potency in comparison to the other nine MWFs. MWF "E" had three major components: tall oil fatty acids (TOFA), sodium sulfonate (SA), and paraffinic oil (PO). In the present study, the sensory and pulmonary irritating properties of these individual components of MWF "E" were evaluated. Mixtures of the three components were also prepared and similarly evaluated. This analysis revealed that the sensory irritation from MWF "E" was largely due to TOFA, whereas SA produced the pulmonary irritation observed with MWF "E". Both TOFA and SA were more potent irritants than was MWF "E", and the potency of TOFA and/or SA was diminished through combination with PO. There was no evidence of synergism of the components when combined to form MWF "E". This approach for identifying the biologically "active" component(s) in a mixture should be useful for other MWFs. Furthermore, the approach should be easily adapted for other applications involving concerns with mixtures.
Comparison of multi-subject ICA methods for analysis of fMRI data
Erhardt, Erik Barry; Rachakonda, Srinivas; Bedrick, Edward; Allen, Elena; Adali, Tülay; Calhoun, Vince D.
2010-01-01
Spatial independent component analysis (ICA) applied to functional magnetic resonance imaging (fMRI) data identifies functionally connected networks by estimating spatially independent patterns from their linearly mixed fMRI signals. Several multi-subject ICA approaches estimating subject-specific time courses (TCs) and spatial maps (SMs) have been developed; however, there has not yet been a full comparison of the implications of their use. Here, we provide extensive comparisons of four multi-subject ICA approaches in combination with data reduction methods for simulated and fMRI task data. For multi-subject ICA, the data first undergo reduction at the subject and group levels using principal component analysis (PCA). Comparisons of subject-specific, spatial concatenation, and group data mean subject-level reduction strategies using PCA and probabilistic PCA (PPCA) show that computationally intensive PPCA is equivalent to PCA, and that subject-specific and group data mean subject-level PCA are preferred because of well-estimated TCs and SMs. Second, aggregate independent components are estimated using either noise-free ICA or probabilistic ICA (PICA). Third, subject-specific SMs and TCs are estimated using back-reconstruction. We compare several direct group ICA (GICA) back-reconstruction approaches (GICA1-GICA3) and an indirect back-reconstruction approach, spatio-temporal regression (STR, or dual regression). Results show the earlier group ICA (GICA1) approximates STR; however, STR has contradictory assumptions and may show mixed-component artifacts in estimated SMs. Our evidence-based recommendation is to use GICA3, introduced here, with subject-specific PCA and noise-free ICA, providing the most robust and accurate estimated SMs and TCs in addition to offering an intuitive interpretation. PMID:21162045
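A minimal sketch of the spatio-temporal (dual) regression back-reconstruction compared above: aggregate spatial maps are regressed against a subject's data to obtain subject time courses, which are then regressed back to obtain subject spatial maps. This is the generic two-step idea with toy data, not the implementation evaluated in the paper.

```python
# Hedged sketch of spatio-temporal (dual) regression back-reconstruction: group
# spatial maps are regressed against each subject's data to obtain subject TCs,
# which are then regressed back to obtain subject SMs.
import numpy as np

def dual_regression(Y, group_SMs):
    """Y: subject data (n_timepoints, n_voxels); group_SMs: (n_components, n_voxels)."""
    # Step 1 (spatial regression): subject time courses from the group maps.
    TCs, *_ = np.linalg.lstsq(group_SMs.T, Y.T, rcond=None)   # (n_components, n_timepoints)
    TCs = TCs.T                                                # (n_timepoints, n_components)
    # Step 2 (temporal regression): subject spatial maps from the time courses.
    SMs, *_ = np.linalg.lstsq(TCs, Y, rcond=None)              # (n_components, n_voxels)
    return TCs, SMs

Y = np.random.randn(200, 5000)          # toy subject data
group_SMs = np.random.randn(10, 5000)   # toy aggregate maps
TCs, SMs = dual_regression(Y, group_SMs)
print(TCs.shape, SMs.shape)             # (200, 10) (10, 5000)
```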
Bottom boundary layer forced by finite amplitude long and short surface waves motions
NASA Astrophysics Data System (ADS)
Elsafty, H.; Lynett, P.
2018-04-01
A multiple-scale perturbation approach is implemented to solve the Navier-Stokes equations while including bottom boundary layer effects under a single wave and under two interacting waves. In this approach, fluid velocities and the pressure field are decomposed into two components: a potential component and a rotational component. In this study, the two components exist throughout the entire water column and each is scaled with appropriate length and time scales. A one-way coupling between the two components is implemented. The potential component is assumed to be known analytically or numerically a priori, and the rotational component is forced by the potential component. Through order of magnitude analysis, it is found that the leading-order coupling between the two components occurs through the vertical convective acceleration. It is shown that this coupling plays an important role in the bottom boundary layer behavior. Its effect on the results is discussed for different wave-forcing conditions: purely harmonic forcing and impurely harmonic forcing. The approach is then applied to derive the governing equations for the bottom boundary layer developed under two interacting wave motions. Both motions, the shorter and the longer wave, are decomposed into potential and rotational components, as is done for the single wave. Test cases are presented wherein two different wave forcings are simulated: (1) two periodic oscillatory motions and (2) short waves interacting with a solitary wave. The analysis of the two periodic motions indicates that nonlinear effects in the rotational solution may be significant even though nonlinear effects are negligible in the potential forcing. The local differences in the rotational velocity due to the nonlinear vertical convection coupling term are found to be on the order of 30% of the maximum boundary layer velocity for the cases simulated in this paper. This difference is expected to increase with the increase in wave nonlinearity.
The Influence Function of Principal Component Analysis by Self-Organizing Rule.
Higuchi; Eguchi
1998-07-28
This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule has been proposed and its robustness observed through the simulation study by Xu and Yuille (1995). In this article, the robustness of the algorithm against outliers is investigated by using the theory of influence function. The influence function of the principal component vector is given in an explicit form. Through this expression, the method is shown to be robust against any directions orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.
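For concreteness, the following is a minimal sketch of an Oja-type self-organizing update for the first principal component vector, the kind of rule whose influence function is analysed above. The data, learning rate, and single-pass training schedule are illustrative assumptions rather than the exact algorithm of Xu and Yuille (1995).

```python
# Hedged sketch of a self-organizing (Oja-type) rule for extracting the first
# principal component vector from streaming data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5)) @ np.diag([3.0, 1.0, 0.5, 0.3, 0.1])  # toy data
X -= X.mean(axis=0)

w = rng.normal(size=5)
w /= np.linalg.norm(w)
eta = 1e-3                                  # learning rate
for x in X:
    y = w @ x                               # component output
    w += eta * y * (x - y * w)              # Oja's self-organizing update
    # (the update keeps ||w|| near 1 without an explicit normalization step)

# Compare with the leading eigenvector of the sample covariance:
pc1 = np.linalg.eigh(np.cov(X.T))[1][:, -1]
print(abs(w @ pc1))                         # close to 1 if the rule has converged
```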
Butler, Rebecca A.
2014-01-01
Stroke aphasia is a multidimensional disorder in which patient profiles reflect variation along multiple behavioural continua. We present a novel approach to separating the principal aspects of chronic aphasic performance and isolating their neural bases. Principal components analysis was used to extract core factors underlying performance of 31 participants with chronic stroke aphasia on a large, detailed battery of behavioural assessments. The rotated principal components analysis revealed three key factors, which we labelled as phonology, semantic and executive/cognition on the basis of the common elements in the tests that loaded most strongly on each component. The phonology factor explained the most variance, followed by the semantic factor and then the executive-cognition factor. The use of principal components analysis rendered participants’ scores on these three factors orthogonal and therefore ideal for use as simultaneous continuous predictors in a voxel-based correlational methodology analysis of high resolution structural scans. Phonological processing ability was uniquely related to left posterior perisylvian regions including Heschl’s gyrus, posterior middle and superior temporal gyri and superior temporal sulcus, as well as the white matter underlying the posterior superior temporal gyrus. The semantic factor was uniquely related to left anterior middle temporal gyrus and the underlying temporal stem. The executive-cognition factor was not correlated selectively with the structural integrity of any particular region, as might be expected in light of the widely-distributed and multi-functional nature of the regions that support executive functions. The identified phonological and semantic areas align well with those highlighted by other methodologies such as functional neuroimaging and neurostimulation. The use of principal components analysis allowed us to characterize the neural bases of participants’ behavioural performance more robustly and selectively than the use of raw assessment scores or diagnostic classifications because principal components analysis extracts statistically unique, orthogonal behavioural components of interest. As such, in addition to improving our understanding of lesion–symptom mapping in stroke aphasia, the same approach could be used to clarify brain–behaviour relationships in other neurological disorders. PMID:25348632
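A simplified sketch of the dimension-reduction step described above: standardized battery scores are reduced to a few orthogonal component scores that can serve as simultaneous continuous predictors. The toy data and component count are assumptions, and the varimax rotation used in the paper is only noted in a comment.

```python
# Hedged illustration (not the authors' pipeline): extracting a small number of
# orthogonal principal-component scores from a behavioural test battery so the
# scores can be used as simultaneous continuous predictors in a later analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
battery = rng.normal(size=(31, 20))          # 31 participants x 20 assessment scores (toy data)

z = StandardScaler().fit_transform(battery)  # standardize each test
pca = PCA(n_components=3).fit(z)
scores = pca.transform(z)                    # orthogonal component scores, one column per factor

print(pca.explained_variance_ratio_)         # variance explained by each factor
print(np.round(np.corrcoef(scores.T), 3))    # off-diagonals ~0: scores are uncorrelated
# A varimax rotation (as in the paper) would be applied to the loadings before
# computing scores; sklearn's FactorAnalysis(rotation="varimax") is one option.
```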
UMAMI: A Recipe for Generating Meaningful Metrics through Holistic I/O Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lockwood, Glenn K.; Yoo, Wucherl; Byna, Suren
I/O efficiency is essential to productivity in scientific computing, especially as many scientific domains become more data-intensive. Many characterization tools have been used to elucidate specific aspects of parallel I/O performance, but analyzing components of complex I/O subsystems in isolation fails to provide insight into critical questions: how do the I/O components interact, what are reasonable expectations for application performance, and what are the underlying causes of I/O performance problems? To address these questions while capitalizing on existing component-level characterization tools, we propose an approach that combines on-demand, modular synthesis of I/O characterization data into a unified monitoring and metrics interface (UMAMI) to provide a normalized, holistic view of I/O behavior. We evaluate the feasibility of this approach by applying it to a month-long benchmarking study on two distinct large-scale computing platforms. We present three case studies that highlight the importance of analyzing application I/O performance in context with both contemporaneous and historical component metrics, and we provide new insights into the factors affecting I/O performance. By demonstrating the generality of our approach, we lay the groundwork for a production-grade framework for holistic I/O analysis.
NASA Astrophysics Data System (ADS)
Freeman, Mary Pyott
An Analysis of Tree Mortality Using High Resolution Remotely-Sensed Data for Mixed-Conifer Forests in San Diego County, by Mary Pyott Freeman. The montane mixed-conifer forests of San Diego County are currently experiencing extensive tree mortality, which is defined as dieback where whole stands are affected. This mortality is likely the result of the complex interaction of many variables, such as altered fire regimes, climatic conditions such as drought, as well as forest pathogens and past management strategies. Conifer tree mortality and its spatial pattern and change over time were examined in three components. In component 1, two remote sensing approaches were compared for their effectiveness in delineating dead trees, a spatial contextual approach and an OBIA (object based image analysis) approach, utilizing various dates and spatial resolutions of airborne image data. For each approach transforms and masking techniques were explored, which were found to improve classifications, and an object-based assessment approach was tested. In component 2, dead tree maps produced by the most effective techniques derived from component 1 were utilized for point pattern and vector analyses to further understand spatio-temporal changes in tree mortality for the years 1997, 2000, 2002, and 2005 for three study areas: Palomar, Volcan and Laguna mountains. Plot-based fieldwork was conducted to further assess mortality patterns. Results indicate that conifer mortality was significantly clustered, increased substantially between 2002 and 2005, and was non-random with respect to tree species and diameter class sizes. In component 3, multiple environmental variables were used in Generalized Linear Model (GLM-logistic regression) and decision tree classifier model development, revealing the importance of climate and topographic factors such as precipitation and elevation, in being able to predict areas of high risk for tree mortality. The results from this study highlight the importance of multi-scale spatial as well as temporal analyses, in order to understand mixed-conifer forest structure, dynamics, and processes of decline, which can lead to more sustainable management of forests with continued natural and anthropogenic disturbance.
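As a hedged illustration of the GLM step mentioned in component 3, the sketch below fits a logistic regression of tree mortality on precipitation and elevation; the predictor ranges and response are synthetic stand-ins, not the San Diego County data.

```python
# Hedged sketch of a GLM (logistic regression) for mortality risk from
# environmental predictors; variable names and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
precip = rng.uniform(300, 900, n)            # mm/yr (toy values)
elev = rng.uniform(1200, 1900, n)            # m
# Toy response: drier, lower sites die back more often.
p = 1 / (1 + np.exp(0.01 * (precip - 600) + 0.004 * (elev - 1500)))
dead = rng.binomial(1, p)

X = np.column_stack([precip, elev])
model = LogisticRegression().fit(X, dead)
print(model.coef_, model.intercept_)
print(model.predict_proba([[400.0, 1300.0]])[0, 1])   # predicted mortality risk
```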
A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run.
Armeanu, Daniel; Andrei, Jean Vasile; Lache, Leonard; Panait, Mirela
2017-01-01
The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets.
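The sketch below illustrates the simpler diffusion-index (Stock-Watson-style) benchmark referred to above: extract a few principal components from a standardized panel and regress next-quarter GDP growth on them. The data are synthetic, and the frequency-domain estimation that distinguishes the GDFM is deliberately omitted.

```python
# Hedged sketch of a Stock-Watson-style factor forecast: a few principal
# components ("common factors") of a large standardized panel predict GDP growth.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
T, N = 80, 86                          # 80 quarters, 86 indicator series (toy data)
panel = rng.normal(size=(T, N))
gdp_growth = rng.normal(size=T)

z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
factors = PCA(n_components=3).fit_transform(z)     # three common components

# One-step-ahead forecast: regress next-quarter growth on current factors.
reg = LinearRegression().fit(factors[:-1], gdp_growth[1:])
forecast = reg.predict(factors[-1:])
print(forecast)
```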
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at parts and components level such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) to assess risks, perform design tradeoffs, and therefore, to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages when the system design is still in development and hard failure data is not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
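A minimal sketch of the standards-based arithmetic such assessments rest on: with handbook constant failure rates, component reliabilities are exponential in time and a series system's failure rate is the sum of its components' rates. The component names and rates below are invented for illustration.

```python
# Hedged sketch: series-system reliability from constant (handbook-style)
# failure rates; the rates are made up for illustration.
import numpy as np

failure_rates = {            # failures per million hours (hypothetical)
    "power_supply": 2.5,
    "processor_board": 4.1,
    "memory_module": 1.2,
    "io_controller": 3.3,
}

lam_system = sum(failure_rates.values()) * 1e-6    # per hour
mission_hours = 10_000.0

r_system = np.exp(-lam_system * mission_hours)     # series-system reliability
mtbf = 1.0 / lam_system
print(f"R(10,000 h) = {r_system:.4f}, MTBF = {mtbf:,.0f} h")
```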
Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: (I) the collection of an environmental sample, (II) component gas extraction from the sample through the application of gas separation chemistry, and (III) the estimation of radioactivity of component gases.
NASA Astrophysics Data System (ADS)
Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.
The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method which have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem solving, specification, and system approaches to EMC control are summarized, and the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components are summarized. Examples of EMC design are mentioned, and the EMC design process and selection method for EMC critical points are shown along with sample EMC test results.
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, the extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction of high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, the geometrical features were extracted using the EAPs method. In this study, three morphological attributes were calculated and extracted for each independent component, including area, standard deviation, and moment of inertia. The extracted geometrical features were then classified using both the RLS approach and the commonly used LIB-SVM library of support vector machines. The Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, 2% higher than the EAPs and principal component analysis (PCA) method, and 6% higher than APs applied to the original high-resolution multispectral data. Moreover, it is also suggested that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use with automatic parameter selection, but its computation time may be larger than that of the LIB-SVM library. This study would be helpful for the classification application of high-resolution multispectral satellite remote sensing images.
Gu, Haiwei; Pan, Zhengzheng; Xi, Bowei; Asiago, Vincent; Musselman, Brian; Raftery, Daniel
2011-02-07
Nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS) are the two most commonly used analytical tools in metabolomics, and their complementary nature makes the combination particularly attractive. A combined analytical approach can improve the potential for providing reliable methods to detect metabolic profile alterations in biofluids or tissues caused by disease, toxicity, etc. In this paper, (1)H NMR spectroscopy and direct analysis in real time (DART)-MS were used for the metabolomics analysis of serum samples from breast cancer patients and healthy controls. Principal component analysis (PCA) of the NMR data showed that the first principal component (PC1) scores could be used to separate cancer from normal samples. However, no such obvious clustering could be observed in the PCA score plot of DART-MS data, even though DART-MS can provide a rich and informative metabolic profile. Using a modified multivariate statistical approach, the DART-MS data were then reevaluated by orthogonal signal correction (OSC) pretreated partial least squares (PLS), in which the Y matrix in the regression was set to the PC1 score values from the NMR data analysis. This approach, and a similar one using the first latent variable from PLS-DA of the NMR data resulted in a significant improvement of the separation between the disease samples and normals, and a metabolic profile related to breast cancer could be extracted from DART-MS. The new approach allows the disease classification to be expressed on a continuum as opposed to a binary scale and thus better represents the disease and healthy classifications. An improved metabolic profile obtained by combining MS and NMR by this approach may be useful to achieve more accurate disease detection and gain more insight regarding disease mechanisms and biology. Copyright © 2010 Elsevier B.V. All rights reserved.
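A hedged sketch of the core idea, with toy data: PCA scores from the NMR block supply a continuous response for a PLS model of the MS block. The orthogonal signal correction pretreatment used in the paper is omitted, so this is only the skeleton of the approach.

```python
# Hedged sketch of the combined NMR/MS idea: NMR PC1 scores serve as the Y
# vector that a PLS model of the MS data is regressed against (OSC omitted).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_samples = 60
nmr = rng.normal(size=(n_samples, 500))    # toy NMR spectra
ms = rng.normal(size=(n_samples, 800))     # toy DART-MS profiles

pc1_scores = PCA(n_components=1).fit_transform(nmr)   # NMR PC1 as the response

pls = PLSRegression(n_components=2).fit(ms, pc1_scores)
ms_scores = pls.transform(ms)              # latent variables of the MS data
print(np.corrcoef(ms_scores[:, 0], pc1_scores[:, 0])[0, 1])
```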
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separate Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for the future work that could be done in this area of research are presented.
Evaluation of Parallel Analysis Methods for Determining the Number of Factors
ERIC Educational Resources Information Center
Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.
2010-01-01
Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
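For reference, a minimal sketch of parallel analysis with PCA (PA-PCA): retain the components whose observed eigenvalues exceed the mean or 95th percentile eigenvalues of random data of the same dimensions. The simulation count and toy data are assumptions.

```python
# Hedged sketch of parallel analysis using PCA (PA-PCA): compare observed
# correlation-matrix eigenvalues with those from random data of the same size.
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data.T)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.normal(size=(n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r.T)))[::-1]
    mean_crit = sims.mean(axis=0)
    pct_crit = np.percentile(sims, percentile, axis=0)
    return {"observed": obs,
            "n_factors_mean": int(np.sum(obs > mean_crit)),
            "n_factors_95th": int(np.sum(obs > pct_crit))}

x = np.random.default_rng(1).normal(size=(300, 10))
x[:, :3] += np.outer(np.random.default_rng(2).normal(size=300), np.ones(3))  # one common factor
print(parallel_analysis(x))
```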
NASA Astrophysics Data System (ADS)
Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid
2017-05-01
Continuous monitoring for damage detection in structural assessment comprises implementation of low cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility to be used in continuous damage assessment. Specifically, an algorithm based on a data-driven approach by using principal component analysis and pre-processing acquired signals by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damages occur. Two types of damage cases are studied: crack and leak for each structure, respectively. Experimental results show that the methodology is promising in the continuous monitoring of real structures.
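A hedged sketch of a damage indicator in the spirit of the methodology described above: cross-correlation functions of sensor signals are used as features, a PCA baseline is fitted to healthy records, and new records are scored by their squared prediction error (Q statistic). The feature length, number of retained components, and signals are illustrative.

```python
# Hedged sketch: cross-correlation features + PCA baseline + Q-statistic scoring.
import numpy as np

def features(sig_a, sig_b, n_lags=50):
    """Truncated cross-correlation function between two sensor signals."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    full = np.correlate(a, b, mode="full") / (a.std() * b.std() * len(a))
    mid = len(full) // 2
    return full[mid:mid + n_lags]

def q_statistic(x, mean, components):
    # Squared prediction error of x against the PCA baseline model.
    resid = (x - mean) - components.T @ (components @ (x - mean))
    return float(resid @ resid)

rng = np.random.default_rng(5)
baseline = np.array([features(rng.normal(size=1000), rng.normal(size=1000)) for _ in range(40)])
mean = baseline.mean(axis=0)
u, s, vt = np.linalg.svd(baseline - mean, full_matrices=False)
components = vt[:3]                      # retained principal directions

new_record = features(rng.normal(size=1000), rng.normal(size=1000))
print(q_statistic(new_record, mean, components))   # large values suggest damage
```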
Domain adaptation via transfer component analysis.
Pan, Sinno Jialin; Tsang, Ivor W; Kwok, James T; Yang, Qiang
2011-02-01
Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
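A simplified sketch of TCA with a linear kernel, under assumed regularization and toy data: build the MMD matrix between source and target samples and take the leading eigenvectors of a regularized eigenproblem, so that projected source and target distributions move closer together. The paper's full kernelized and semisupervised formulations are not reproduced here.

```python
# Hedged sketch of transfer component analysis (TCA) with a linear kernel;
# regularization and kernel choice are illustrative, not the paper's defaults.
import numpy as np
import scipy.linalg

def tca(Xs, Xt, n_components=2, mu=1.0):
    X = np.vstack([Xs, Xt])
    ns, nt, n = len(Xs), len(Xt), len(Xs) + len(Xt)
    K = X @ X.T                                   # linear kernel
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T                                   # MMD matrix
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    # Leading eigenvectors of (K L K + mu I)^-1 K H K
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = scipy.linalg.eig(A)
    order = np.argsort(-vals.real)[:n_components]
    W = vecs[:, order].real
    Z = K @ W                                     # embedded source + target data
    return Z[:ns], Z[ns:]

rng = np.random.default_rng(6)
Xs = rng.normal(loc=0.0, size=(50, 10))
Xt = rng.normal(loc=1.0, size=(40, 10))           # shifted target domain
Zs, Zt = tca(Xs, Xt)
print(Zs.shape, Zt.shape)                         # (50, 2) (40, 2)
```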
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
A New Approach for Quantitative Evaluation of Ultrasonic Wave Attenuation in Composites
NASA Astrophysics Data System (ADS)
Ni, Qing-Qing; Li, Ran; Xia, Hong
2017-02-01
When ultrasonic waves propagate in composite materials, the propagation behaviors result from the combined effects of various factors, such as material anisotropy and viscoelastic property, internal microstructure and defects, incident wave characteristics and interface condition between composite components. It is essential to clarify how these factors affect the ultrasonic wave propagation and attenuation characteristics, and how they mutually interact. In the present paper, based on a newly developed time-domain finite element analysis code, PZflex, a unique approach for clarifying the detailed influence mechanism of the aforementioned factors is proposed, in which each attenuation component can be extracted from the overall attenuation and analyzed respectively. By taking into consideration the interrelation between each individual attenuation component, the variation behaviors of each component and the internal dynamic stress distribution against material anisotropy and matrix viscosity are separately and quantitatively evaluated. From the detailed analysis results of each attenuation component, the energy dissipation at the interface is a major component of the ultrasonic wave attenuation, providing a maximum contribution rate of 68.2% to the overall attenuation, and each attenuation component is closely related to the material anisotropy and viscoelasticity. The results clarify the correlation between ultrasonic wave propagation characteristics and material viscoelastic properties, which will be useful in the further development of ultrasonic technology in defect detection.
A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components
NASA Astrophysics Data System (ADS)
Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun
2017-10-01
This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As the requirements of high reproducibility of complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee an accurate control of spray parameters defined by users (e.g., scanning speed, spray distance, scanning step, etc.) to achieve coating thickness homogeneity but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Then, two types of meander trajectories are generated by performing a different connection method. Additionally, this paper presents a research approach for introducing the heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating), which helps form the uniform temperature field by optimizing the time sequence of path curves. The influence of two different robot trajectories on the process of heat transfer is estimated by coupled FEM models which demonstrates the effectiveness of the presented optimization approach.
Students' Perceptions of Teaching and Learning Practices: A Principal Component Approach
ERIC Educational Resources Information Center
Mukorera, Sophia; Nyatanga, Phocenah
2017-01-01
Students' attendance and engagement with teaching and learning practices is perceived as a critical element for academic performance. Even with stipulated attendance policies, students still choose not to engage. The study employed a principal component analysis to analyze first- and second-year students' perceptions of the importance of the 12…
An Analysis of Viewpoints on Education for Responsible Consumption in Higher Education
ERIC Educational Resources Information Center
Gombert-Courvoisier, Sandrine; Sennès, Vincent; Ribeyre, Francis
2014-01-01
Purpose: This paper aims to present the authors' views on education for responsible consumption (ERC) in higher education, and deals with major components to be considered to educate students for responsible consumption. Design/methodology/approach: There are three components that seem relevant for ERC in higher education: taught courses should be…
Callings in Career: A Typological Approach to Essential and Optional Components
ERIC Educational Resources Information Center
Hirschi, Andreas
2011-01-01
A sense of calling in career is supposed to have positive implications for individuals and organizations but current theoretical development is plagued with incongruent conceptualizations of what does or does not constitute a calling. The present study used cluster analysis to identify essential and optional components of a presence of calling…
USDA-ARS?s Scientific Manuscript database
We studied the ability of Tenebrio molitor L. (Coleoptera: Tenebrionidae) to self-select optimal ratios of two dietary components to approach nutritional balance and maximum fitness. Life table analysis was used to determine the fitness of T. molitor developing in diet mixtures comprised of four dif...
ERIC Educational Resources Information Center
Tighe, Elizabeth L.; Spencer, Mercedes; Schatschneider, Christopher
2015-01-01
This study rank ordered the contributive importance of several predictors of listening comprehension for third, seventh, and tenth graders. Principal components analyses revealed that a three-factor solution with fluency, reasoning, and working memory components provided the best fit across grade levels. Dominance analyses indicated that fluency…
A new multicriteria risk mapping approach based on a multiattribute frontier concept
Denys Yemshanov; Frank H. Koch; Yakov Ben-Haim; Marla Downing; Frank Sapio; Marty Siltanen
2013-01-01
Invasive species risk maps provide broad guidance on where to allocate resources for pest monitoring and regulation, but they often present individual risk components (such as climatic suitability, host abundance, or introduction potential) as independent entities. These independent risk components are integrated using various multicriteria analysis techniques that...
Repressing the effects of variable speed harmonic orders in operational modal analysis
NASA Astrophysics Data System (ADS)
Randall, R. B.; Coats, M. D.; Smith, W. A.
2016-10-01
Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying speed machines. In one method, signals are transformed into the order domain and, after the removal of shaft speed related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach, an exponential shortpass lifter is applied directly to the time domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using a stationary random excitation.
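A hedged sketch of the simpler of the two approaches: compute the real cepstrum of a response signal, apply an exponential shortpass lifter to suppress discrete-frequency (harmonic) content at higher quefrencies, and return to the time domain with the original phase. The lifter time constant and signal are illustrative assumptions.

```python
# Hedged sketch of an exponential shortpass lifter applied in the real cepstrum
# to attenuate harmonic/sideband content while preserving the smooth (modal)
# part of the log spectrum. Parameter values are illustrative.
import numpy as np

def exponential_lifter(signal, fs, tau=0.05):
    """tau: lifter time constant in seconds (decay of the exponential window)."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.ifft(log_mag).real              # real cepstrum
    q = np.minimum(np.arange(len(signal)), len(signal) - np.arange(len(signal))) / fs
    cepstrum *= np.exp(-q / tau)                      # exponential shortpass lifter
    lifted_log_mag = np.fft.fft(cepstrum).real
    # Recombine the edited magnitude with the original phase.
    edited = np.exp(lifted_log_mag) * np.exp(1j * np.angle(spectrum))
    return np.fft.ifft(edited).real

fs = 2048
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 37 * t) + 0.1 * np.random.default_rng(7).normal(size=t.size)
y = exponential_lifter(x, fs)
print(x.shape, y.shape)
```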
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach of intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here, a weakest t-norm based intuitionistic fuzzy fault tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience in terms of providing the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
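For illustration, the sketch below shows the α-cut machinery such analyses build on: triangular fuzzy failure possibilities of basic events are cut at a chosen α level and the resulting intervals are propagated through AND/OR gates. For simplicity it uses ordinary interval arithmetic rather than the weakest t-norm T(ω) operations of the paper, and the event values and tree are invented.

```python
# Hedged sketch: alpha-cut intervals of triangular fuzzy failure probabilities
# propagated through AND/OR fault-tree gates with ordinary interval arithmetic.
import numpy as np

def alpha_cut(tri, alpha):
    """tri = (low, mode, high) triangular fuzzy probability -> interval at alpha."""
    low, mode, high = tri
    return np.array([low + alpha * (mode - low), high - alpha * (high - mode)])

def and_gate(intervals):
    ivs = np.array(intervals)
    return np.prod(ivs, axis=0)                      # [product of lows, product of highs]

def or_gate(intervals):
    ivs = np.array(intervals)
    return 1.0 - np.prod(1.0 - ivs, axis=0)

alpha = 0.5
e1 = alpha_cut((0.01, 0.02, 0.04), alpha)            # basic-event fault intervals (invented)
e2 = alpha_cut((0.005, 0.01, 0.02), alpha)
e3 = alpha_cut((0.02, 0.03, 0.05), alpha)

top = or_gate([and_gate([e1, e2]), e3])              # toy fault tree: (e1 AND e2) OR e3
print("top-event fault interval:", top, "-> reliability interval:", 1.0 - top[::-1])
```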
Fabrication and Testing of Ceramic Matrix Composite Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Effinger, M. R.; Clinton, R. C., Jr.; Dennis, J.; Elam, S.; Genge, G.; Eckel, A.; Jaskowiak, M. H.; Kiser, J. D.; Lang, J.
2001-01-01
NASA has established goals for Second and Third Generation Reusable Launch Vehicles. Emphasis has been placed on significantly improving safety and decreasing the cost of transporting payloads to orbit. Ceramic matrix composite (CMC) components are being developed by NASA to enable significant increases in safety and engine performance, while reducing costs. The development of the following CMC components is being pursued by NASA: (1) Simplex CMC Blisk; (2) Cooled CMC Nozzle Ramps; (3) Cooled CMC Thrust Chambers; and (4) CMC Gas Generator. These development efforts are application oriented, but have a strong underpinning of fundamental understanding of processing-microstructure-property relationships relative to structural analyses, nondestructive characterization, and material behavior analysis at the coupon, component, and system operation levels. As each effort matures, emphasis will be placed on optimizing and demonstrating material/component durability, ideally using a combined Building Block Approach and Build and Bust Approach.
Modeling energy/economy interactions for conservation and renewable energy-policy analysis
NASA Astrophysics Data System (ADS)
Groncki, P. J.
Energy policy and the implications for policy analysis and the methodological tools are discussed. The evolution of one methodological approach and the combined modeling system of the component models, their evolution in response to changing analytic needs, and the development of the integrated framework are reported. The analyses performed over the past several years are summarized. The current philosophy behind energy policy is discussed and compared to recent history. Implications for current policy analysis and methodological approaches are drawn.
Wang, Zhangjie; Li, Daoyuan; Sun, Xiaojun; Bai, Xue; Jin, Lan; Chi, Lianli
2014-04-15
Low molecular weight heparins (LMWHs) are important artificial preparations from heparin polysaccharide and are widely used as anticoagulant drugs. To analyze the structure and composition of LMWHs, identification and quantitation of their natural and modified building blocks are indispensable. We have established a novel reversed-phase high-performance liquid chromatography-diode array detection-electrospray ionization-mass spectrometry approach for compositional analysis of LMWHs. After being exhaustively digested and labeled with 2-aminoacridone, the structural motifs constructing LMWHs, including 17 components from dalteparin and 15 components from enoxaparin, were well separated, identified, and quantified. Besides the eight natural heparin disaccharides, many characteristic structures from dalteparin and enoxaparin, such as modified structures from the reducing end and nonreducing end, 3-O-sulfated tetrasaccharides, and trisaccharides, have been unambiguously identified based on their retention time and mass spectra. Compared with the traditional heparin compositional analysis methods, the approach described here is not only robust but also comprehensive because it is capable of identifying and quantifying nearly all components from lyase digests of LMWHs. Copyright © 2014 Elsevier Inc. All rights reserved.
Linear unmixing of multidate hyperspectral imagery for crop yield estimation
USDA-ARS?s Scientific Manuscript database
In this paper, we have evaluated an unsupervised unmixing approach, vertex component analysis (VCA), for the application of crop yield estimation. The results show that abundance maps of the vegetation extracted by the approach are strongly correlated to the yield data (the correlation coefficients ...
High Fidelity System Simulation of Multiple Components in Support of the UEET Program
NASA Technical Reports Server (NTRS)
Plybon, Ronald C.; VanDeWall, Allan; Sampath, Rajiv; Balasubramaniam, Mahadevan; Mallina, Ramakrishna; Irani, Rohinton
2006-01-01
The High Fidelity System Simulation effort has addressed various important objectives to enable additional capability within the NPSS framework. The scope emphasized the High Pressure Turbine and High Pressure Compressor components. Initial effort was directed at developing and validating an intermediate-fidelity NPSS model using PD geometry, and was extended to a high-fidelity NPSS model by overlaying detailed geometry to validate CFD against rig data. Both "feed-forward" and "feedback" approaches to analysis zooming were employed to enable system simulation capability in NPSS. These approaches have certain benefits and applicability for specific applications. "Feedback" zooming allows the flow-up of information from high-fidelity analysis to be used to update the NPSS model results by forcing the NPSS solver to converge to high-fidelity analysis predictions. This approach is effective in improving the accuracy of the NPSS model; however, it can only be used in circumstances where there is a clear physics-based strategy to flow up the high-fidelity analysis results to update the NPSS system model. The "feed-forward" zooming approach is more broadly useful in terms of enabling detailed analysis at early stages of design for a specified set of critical operating points and using these analysis results to drive design decisions early in the development process.
Zhang, Zhiming; Ouyang, Zhiyun; Xiao, Yi; Xiao, Yang; Xu, Weihua
2017-06-01
Increasing exploitation of karst resources is causing severe environmental degradation because of the fragility and vulnerability of karst areas. By integrating principal component analysis (PCA) with annual seasonal trend analysis (ASTA), this study assessed karst rocky desertification (KRD) within a spatial context. We first produced fractional vegetation cover (FVC) data from a moderate-resolution imaging spectroradiometer normalized difference vegetation index using a dimidiate pixel model. Then, we generated three main components of the annual FVC data using PCA. Subsequently, we generated the slope image of the annual seasonal trends of FVC using median trend analysis. Finally, we combined the three PCA components and annual seasonal trends of FVC with the incidence of KRD for each type of carbonate rock to classify KRD into one of four categories based on K-means cluster analysis: high, moderate, low, and none. The results of accuracy assessments indicated that this combination approach produced greater accuracy and more reasonable KRD mapping than the average FVC based on the vegetation coverage standard. The KRD map for 2010 indicated that the total area of KRD was 78.76 × 10³ km², which constitutes about 4.06% of the eight southwest provinces of China. The largest KRD areas were found in Yunnan province. The combined PCA and ASTA approach was demonstrated to be an easily implemented, robust, and flexible method for the mapping and assessment of KRD, which can be used to enhance regional KRD management schemes or to address assessment of other environmental issues.
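The dimidiate pixel model mentioned above has a simple closed form; the sketch below applies it to NDVI values, with placeholder soil and full-vegetation NDVI endpoints rather than the calibrated values used in the study.

```python
# Hedged sketch of the dimidiate pixel model for fractional vegetation cover
# (FVC); the soil and full-vegetation NDVI endpoints below are placeholders.
import numpy as np

def fvc_dimidiate(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)                  # pure soil -> 0, full canopy -> 1

ndvi_scene = np.array([[0.02, 0.31], [0.58, 0.91]])   # toy NDVI pixels
print(fvc_dimidiate(ndvi_scene))
```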
Sustainable Design Approach: A case study of BIM use
NASA Astrophysics Data System (ADS)
Abdelhameed, Wael
2017-11-01
Achieving sustainable design in areas such as energy-efficient design depends largely on the accuracy of the analysis performed after the design is completed with all its components and material details. There are different analysis approaches and methods that predict relevant values and metrics such as U-value, energy use and energy savings. Although certain differences in the accuracy of these approaches and methods have been recorded, this research paper does not focus on such matters, since determining the reason for discrepancies between those approaches and methods is difficult because all error sources act simultaneously. The research paper rather introduces an approach through which BIM, building information modelling, can be utilised during the initial phases of the designing process, by analysing the values and metrics of sustainable design before going into the design details of a building. Managing all of the project drawings in a single file, BIM is well known as a digital platform that offers a multidisciplinary detailed design, the AEC model (Barison and Santos, 2010; Welle et al., 2011). The paper presents in general BIM use in the early phases of the design process, in order to achieve certain required areas of sustainable design. The paper proceeds to introduce BIM use in specific areas such as site selection, wind velocity and building orientation, in terms of reaching the most sustainable solution possible. In the initial phases of designing, material details and building components are not fully specified or selected yet. The designer usually focuses on zoning, topology, circulations, and other design requirements. The proposed approach employs the strategies and analysis of BIM use during those initial design phases in order to have the analysis and results of each solution or alternative design. The stakeholders and designers would have a more effective decision making process with full clarity of each alternative's consequences. The architect would then proceed with the alternative design that shows the best sustainability analysis. In later design stages, using sustainable types of materials such as insulation, cladding, etc., and applying sustainable building components such as doors, windows, etc. would add further improvements in reaching better values and metrics. The paper describes the methodology of this design approach through BIM strategies adopted in design creation. Case studies of architectural designs are used to highlight the details and benefits of this proposed approach.
Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.
Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G
2018-01-01
Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach on component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.
The Psycho-Lexical Approach in Exploring the Field of Values: A Reply to Schwartz.
Raad, Boele De; Timmerman, Marieke E; Morales-Vives, Fabia; Renner, Walter; Barelds, Dick P H; Pieter Van Oudenhoven, Jan
2017-04-01
We reply to each of the issues raised by Schwartz in a commentary on our article on a comparison of value taxonomies. We discuss two approaches, mentioned in that commentary, the lexical approach and the theory-driven approach, especially with respect to their capacities in covering the domain of values and with respect to the representation of important values in a useful structure. We refute the critique by Schwartz that the lexical approach is superfluous, because his theory "toward universals in values" would already cover all values, and that their mutual relationships are relevant to individuals around the globe. We explain the necessity and strength of the lexical approach in taxonomizing the value domain, both within and across languages. Furthermore, we argue that principal components analysis (PCA) and simultaneous component analysis (SCA) are most adequate in arriving at a satisfactory structuring of the great many values in terms of both underlying constructs and their facets. We point to a misrepresentation in Schwartz's circular model, and we review some misunderstandings on the side of Schwartz with respect to our results in comparison with those proceeding from his circular model.
Vasudevan, Rama K; Tselev, Alexander; Baddorf, Arthur P; Kalinin, Sergei V
2014-10-28
Reflection high energy electron diffraction (RHEED) has by now become a standard tool for in situ monitoring of film growth by pulsed laser deposition and molecular beam epitaxy. Yet despite the widespread adoption and wealth of information in RHEED images, most applications are limited to observing intensity oscillations of the specular spot, and much additional information on growth is discarded. With ease of data acquisition and increased computation speeds, statistical methods to rapidly mine the data set are now feasible. Here, we develop such an approach to the analysis of the fundamental growth processes through multivariate statistical analysis of a RHEED image sequence. This approach is illustrated for growth of La(x)Ca(1-x)MnO(3) films grown on etched (001) SrTiO(3) substrates, but is universal. The multivariate methods including principal component analysis and k-means clustering provide insight into the relevant behaviors, the timing and nature of a disordered to ordered growth change, and highlight statistically significant patterns. Fourier analysis yields the harmonic components of the signal and allows separation of the relevant components and baselines, isolating the asymmetric nature of the step density function and the transmission spots from the imperfect layer-by-layer (LBL) growth. These studies show the promise of big data approaches to obtaining more insight into film properties during and after epitaxial film growth. Furthermore, these studies open the pathway to use forward prediction methods to potentially allow significantly more control over growth process and hence final film quality.
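A minimal sketch of the kind of multivariate pipeline described (PCA followed by k-means clustering of an image sequence) is shown below; the frame size, component count, and cluster count are arbitrary choices, and the data are synthetic placeholders rather than RHEED measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Assume a RHEED-like image sequence: 500 frames of 64x48 pixels (synthetic here).
rng = np.random.default_rng(1)
frames = rng.random((500, 64, 48))
X = frames.reshape(len(frames), -1)        # one row per frame

pca = PCA(n_components=5)
scores = pca.fit_transform(X)              # temporal loadings of the leading components
eigenimages = pca.components_.reshape(5, 64, 48)

# Cluster frames in the reduced space to flag qualitatively different growth regimes,
# e.g. a disordered-to-ordered transition showing up as a change of cluster label.
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(scores)
```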
Compliance analysis of a 3-DOF spindle head by considering gravitational effects
NASA Astrophysics Data System (ADS)
Li, Qi; Wang, Manxin; Huang, Tian; Chetwynd, Derek G.
2015-01-01
Compliance modeling is one of the most significant issues in the preliminary design stage of a parallel kinematic machine (PKM). Gravity, which is ignored in traditional compliance analysis, has a significant effect on the pose accuracy of the tool center point (TCP) when a PKM is horizontally placed. By taking gravity into account, this paper presents a semi-analytical approach for compliance analysis of a 3-DOF spindle head named the A3 head. The architecture behind the A3 head is a 3-RPS parallel mechanism having one translational and two rotational movement capabilities, which can be employed to form the main body of a 5-DOF hybrid kinematic machine especially designed for high-speed machining of large aircraft components. The force analysis considers both the externally applied wrench imposed upon the platform and the gravity of all moving components. The deflection analysis then establishes the relationship between the deflection twist and the compliances of all joints and links using a semi-analytical method. The merit of this approach is that the platform deflection twist throughout the entire task workspace can be evaluated in a very efficient manner. The effectiveness of the proposed approach is verified by FEA and by experiments at different configurations; the results show that the discrepancy of the compliances is less than 0.04 μm/N and that of the deformations is less than 10 μm. The computational and experimental results show that the deflection twist induced by the gravity forces of the moving components has a significant bearing on the pose accuracy of the platform, providing informative guidance for the improvement of the current design. The proposed approach can easily be applied to the compliance analysis of PKMs with gravitational effects included and to evaluate the deformation caused by gravity throughout the entire workspace.
A reduced order, test verified component mode synthesis approach for system modeling applications
NASA Astrophysics Data System (ADS)
Butland, Adam; Avitabile, Peter
2010-05-01
Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data is then used in the proposed approach. Due to typical measurement data contaminants that are always included in any test, the measured data is further processed to remove contaminants and is then used in the proposed approach. The final case using improved data with the reduced order, test verified components is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. The use of the technique, along with its strengths and weaknesses, is discussed.
Artifact removal in the context of group ICA: a comparison of single-subject and group approaches
Du, Yuhui; Allen, Elena A.; He, Hao; Sui, Jing; Wu, Lei; Calhoun, Vince D.
2018-01-01
Independent component analysis (ICA) has been widely applied to identify intrinsic brain networks from fMRI data. Group ICA computes group-level components from all data and subsequently estimates individual-level components to recapture inter-subject variability. However, the best approach to handle artifacts, which may vary widely among subjects, is not yet clear. In this work, we study and compare two ICA approaches for artifact removal. One approach, recommended in recent work by the Human Connectome Project, first performs ICA on individual subject data to remove artifacts, and then applies a group ICA on the cleaned data from all subjects. We refer to this approach as Individual ICA based artifacts Removal Plus Group ICA (IRPG). A second proposed approach, called Group Information Guided ICA (GIG-ICA), performs ICA on group data, then removes the group-level artifact components, and finally performs subject-specific ICAs using the group-level non-artifact components as spatial references. We used simulations to evaluate the two approaches with respect to the effects of data quality, data quantity, variable number of sources among subjects, and spatially unique artifacts. Resting-state test-retest datasets were also employed to investigate the reliability of functional networks. Results from simulations demonstrate that GIG-ICA outperforms IRPG, even when single-subject artifact removal is perfect and when individual subjects have spatially unique artifacts. Experiments using test-retest data suggest that GIG-ICA provides more reliable functional networks. Based on high estimation accuracy, ease of implementation, and high reliability of functional networks, we find GIG-ICA to be a promising approach. PMID:26859308
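The sketch below shows only the generic flow of a concatenation-style group ICA with removal of group-level artifact components and back-projection; it is not an implementation of GIG-ICA or IRPG, and the dimensions and the artifact indices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_subjects, n_time, n_voxels, n_comp = 5, 120, 300, 10

# Temporally concatenate subjects (rows: time points of all subjects, columns: voxels).
group = np.vstack([rng.standard_normal((n_time, n_voxels)) for _ in range(n_subjects)])

ica = FastICA(n_components=n_comp, random_state=2)
sources = ica.fit_transform(group)           # component time courses, (subjects*time, comp)
maps = ica.mixing_.T                         # one spatial pattern per component, (comp, voxels)

artifact_components = [3, 7]                 # assumed to have been flagged as artifacts
keep = [k for k in range(n_comp) if k not in artifact_components]

# Reconstruct the concatenated data from the retained components only.
cleaned = sources[:, keep] @ ica.mixing_[:, keep].T + ica.mean_

# Subject-specific time courses and maps would then be re-estimated per subject,
# e.g. by regressing each subject's block of `cleaned` on the retained group maps.
```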
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality and the small sample size problem. In addition, the lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
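A rough sketch of the two ingredients named above follows: a stationary (undecimated) wavelet transform to build a time-invariant multi-scale matrix per epoch, and a matrix-based PCA step that works on that matrix rather than on a flattened vector. The wavelet, decomposition level, and dimensions are illustrative assumptions, and this is a simplified one-directional stand-in rather than the authors' SW2D2PCA code.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
n_channels, n_samples = 4, 1024
emg = rng.standard_normal((n_channels, n_samples))    # placeholder for a 4-channel EMG epoch

# Stationary wavelet transform: stack detail bands (plus one approximation band) per channel
# into a single multi-scale matrix for the epoch (rows: bands x channels, columns: time).
def multiscale_matrix(epoch, wavelet="db4", level=3):
    rows = []
    for ch in epoch:
        coeffs = pywt.swt(ch, wavelet, level=level)    # list of (cA, cD) pairs, one per level
        for cA, cD in coeffs:
            rows.append(cD)
        rows.append(coeffs[0][0])                      # approximation band at the deepest level
    return np.asarray(rows)

A = multiscale_matrix(emg)                             # shape: (bands, n_samples)

# Matrix PCA (sketch): eigenvectors of the column-wise covariance give a projection that
# reduces the sample dimension without vectorizing the matrix first.
G = A.T @ A / A.shape[0]
eigvals, eigvecs = np.linalg.eigh(G)
proj = eigvecs[:, -8:]                                 # keep the 8 leading directions
features = A @ proj                                    # reduced feature matrix for a classifier
```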
Fan, Feiyi; Yan, Yuepeng; Tang, Yongzhong; Zhang, Hao
2017-12-01
Monitoring pulse oxygen saturation (SpO2) and heart rate (HR) using a photoplethysmography (PPG) signal contaminated by a motion artifact (MA) remains a difficult problem, especially when the oximeter is not equipped with a 3-axis accelerometer for adaptive noise cancellation. In this paper, we report a pioneering investigation on the impact of altering the frame length of Molgedey and Schuster independent component analysis (ICAMS) on performance, design a multi-classifier fusion strategy for selecting the PPG correlated signal component, and propose a novel approach to extract SpO2 and HR readings from a PPG signal contaminated by strong MA interference. The algorithm comprises multiple stages, including dual frame length ICAMS, a multi-classifier-based PPG correlated component selector, line spectral analysis, tree-based HR monitoring, and post-processing. Our approach is evaluated by multi-subject tests. The root mean square error (RMSE) is calculated for each trial. Three statistical metrics are selected as performance evaluation criteria: mean RMSE, median RMSE and the standard deviation (SD) of RMSE. The experimental results demonstrate that a shorter ICAMS analysis window probably results in better performance in SpO2 estimation. Notably, the designed multi-classifier signal component selector achieved satisfactory performance. The subject tests indicate that our algorithm outperforms other baseline methods regarding accuracy under most criteria. The proposed work can contribute to improving the performance of current pulse oximetry and personal wearable monitoring devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
RECENT APPLICATIONS OF SOURCE APPORTIONMENT METHODS AND RELATED NEEDS
Traditional receptor modeling studies have utilized factor analysis (like principal component analysis, PCA) and/or Chemical Mass Balance (CMB) to assess source influences. The limitation of these approaches is that PCA is qualitative and CMB requires the input of source pr...
NASA Astrophysics Data System (ADS)
Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir
2017-06-01
This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. Anomaly detection is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. The simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
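Since the method hinges on multi-scale sample entropy, here is a compact, self-contained sample-entropy routine following the standard definition (not the authors' code); coarse-graining the signal at several scales and calling it per scale gives a basic multi-scale entropy profile. The parameter choices (m, r, scales) are the usual defaults and purely illustrative.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # count pairs (i != j) whose Chebyshev distance is within the tolerance r
        return (d <= r).sum() - len(templates)
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Multi-scale entropy: coarse-grain by non-overlapping averaging, then SampEn per scale.
def mse_profile(x, scales=(1, 2, 3, 4, 5)):
    out = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n]).reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

signal = np.random.default_rng(0).standard_normal(1000)   # placeholder for a phase-current segment
print(mse_profile(signal))
```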
Marini, Federico; de Beer, Dalene; Walters, Nico A; de Villiers, André; Joubert, Elizabeth; Walczak, Beata
2017-03-17
An ultimate goal of investigations of rooibos plant material subjected to different stages of fermentation is to identify the chemical changes taking place in the phenolic composition, using an untargeted approach and chromatographic fingerprints. Realization of this goal requires, among other things, identification of the main components of the plant material involved in chemical reactions during the fermentation process. Quantitative chromatographic data for the compounds in extracts of green, semi-fermented and fermented rooibos form the basis of a preliminary study following a targeted approach. The aim is to estimate whether treatment has a significant effect based on all quantified compounds and to identify the compounds which contribute significantly to it. Analysis of variance is performed using modern multivariate methods such as ANOVA-Simultaneous Component Analysis, ANOVA - Target Projection and regularized MANOVA. This study is the first one in which all three approaches are compared and evaluated. For the data studied, all three methods reveal the same significance of the fermentation effect on the extract compositions, but they lead to different interpretations of it. Copyright © 2017 Elsevier B.V. All rights reserved.
Probabilistic Aeroelastic Analysis of Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.
2004-01-01
A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDF) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte-Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
NASA Astrophysics Data System (ADS)
Bunget, Gheorghe; Tilmon, Brevin; Yee, Andrew; Stewart, Dylan; Rogers, James; Webster, Matthew; Farinholt, Kevin; Friedersdorf, Fritz; Pepi, Marc; Ghoshal, Anindya
2018-04-01
Widespread damage in aging aircraft is becoming an increasing concern as both civil and military fleet operators are extending the service lifetime of their aircraft. Metallic components undergoing variable cyclic loadings eventually fatigue and form dislocations as precursors to ultimate failure. In order to characterize the progression of fatigue damage precursors (DP), the acoustic nonlinearity parameter is measured as the primary indicator. However, using proven standard ultrasonic technology for nonlinear measurements presents limitations for settings outside of the laboratory environment. This paper presents an approach for ultrasonic inspection through automated immersion scanning of hot section engine components where mature ultrasonic technology is used during periodic inspections. Nonlinear ultrasonic measurements were analyzed using wavelet analysis to extract multiple harmonics from the received signals. Measurements indicated strong correlations of nonlinearity coefficients and levels of fatigue in aluminum and Ni-based superalloys. This novel wavelet cross-correlation (WCC) algorithm is a potential technique to scan for fatigue damage precursors and identify critical locations for remaining life prediction.
Model-free fMRI group analysis using FENICA.
Schöpf, V; Windischberger, C; Robinson, S; Kasess, C H; Fischmeister, F PhS; Lanzenberger, R; Albrecht, J; Kleemann, A M; Kopietz, R; Wiesmann, M; Moser, E
2011-03-01
Exploratory analysis of functional MRI data allows activation to be detected even if the time course differs from that which is expected. Independent Component Analysis (ICA) has emerged as a powerful approach, but current extensions to the analysis of group studies suffer from a number of drawbacks: they can be computationally demanding, results are dominated by technical and motion artefacts, and some methods require that time courses be the same for all subjects or that templates be defined to identify common components. We have developed a group ICA (gICA) method which is based on single-subject ICA decompositions and the assumption that the spatial distribution of signal changes in components which reflect activation is similar between subjects. This approach, which we have called Fully Exploratory Network Independent Component Analysis (FENICA), identifies group activation in two stages. ICA is performed on the single-subject level, then consistent components are identified via spatial correlation. Group activation maps are generated in a second-level GLM analysis. FENICA is applied to data from three studies employing a wide range of stimulus and presentation designs. These are an event-related motor task, a block-design cognition task and an event-related chemosensory experiment. In all cases, the group maps identified by FENICA as being the most consistent over subjects correspond to task activation. There is good agreement between FENICA results and regions identified in prior GLM-based studies. In the chemosensory task, additional regions are identified by FENICA and temporal concatenation ICA that we show are related to the stimulus but exhibit a delayed response. FENICA is a fully exploratory method that allows activation to be identified without assumptions about temporal evolution, and isolates activation from other sources of signal fluctuation in fMRI. It has the advantage over other gICA methods that it is computationally undemanding, spotlights components relating to activation rather than artefacts, allows the use of familiar statistical thresholding through deployment of a higher level GLM analysis and can be applied to studies where the paradigm is different for all subjects. Copyright © 2010 Elsevier Inc. All rights reserved.
RF control at SSCL — an object oriented design approach
NASA Astrophysics Data System (ADS)
Dohan, D. A.; Osberg, E.; Biggs, R.; Bossom, J.; Chillara, K.; Richter, R.; Wade, D.
1994-12-01
The Superconducting Super Collider (SSC) in Texas, the construction of which was stopped in 1994, would have represented a major challenge in accelerator research and development. This paper addresses the issues encountered in the parallel design and construction of the control systems for the RF equipment for the five accelerators comprising the SSC. An extensive analysis of the components of the RF control systems has been undertaken, based upon the Shlaer-Mellor object-oriented analysis and design (OOA/OOD) methodology. The RF subsystem components such as amplifiers, tubes, power supplies, PID loops, etc. were analyzed to produce OOA information, behavior and process models. Using these models, OOD was iteratively applied to develop a generic RF control system design. This paper describes the results of this analysis and the development of 'bridges' between the analysis objects and the EPICS-based software and underlying VME-based hardware architectures. The application of this approach to several of the SSCL RF control systems is discussed.
Kuo, Ching-Chang; Ha, Thao; Ebbert, Ashley M.; Tucker, Don M.; Dishion, Thomas J.
2017-01-01
Adolescence is a sensitive period for the development of romantic relationships. During this period the maturation of frontolimbic networks is particularly important for the capacity to regulate emotional experiences. In previous research, both functional magnetic resonance imaging (fMRI) and dense array electroencephalography (dEEG) measures have suggested that responses in limbic regions are enhanced in adolescents experiencing social rejection. In the present research, we examined social acceptance and rejection from romantic partners as they engaged in a Chatroom Interact Task. Dual 128-channel dEEG systems were used to record neural responses to acceptance and rejection from both adolescent romantic partners and unfamiliar peers (N = 75). We employed a two-step temporal principal component analysis (PCA) and spatial independent component analysis (ICA) approach to statistically identify the neural components related to social feedback. Results revealed that the early (288 ms) discrimination between acceptance and rejection reflected by the P3a component was significant for the romantic partner but not the unfamiliar peer. In contrast, the later (364 ms) P3b component discriminated between acceptance and rejection for both partners and peers. The two-step approach (PCA then ICA) was better able than either PCA or ICA alone to separate these components of the brain's electrical activity, which reflected both the temporal and spatial phases of the brain's processing of social feedback. PMID:28620292
Phenomenological analysis of medical time series with regular and stochastic components
NASA Astrophysics Data System (ADS)
Timashev, Serge F.; Polyakov, Yuriy S.
2007-06-01
Flicker-Noise Spectroscopy (FNS), a general approach to the extraction and parameterization of resonant and stochastic components contained in medical time series, is presented. The basic idea of FNS is to treat the correlation links present in sequences of different irregularities, such as spikes, "jumps", and discontinuities in derivatives of different orders, on all levels of the spatiotemporal hierarchy of the system under study as main information carriers. The tools to extract and analyze the information are power spectra and difference moments (structural functions), which complement the information of each other. The structural function stochastic component is formed exclusively by "jumps" of the dynamic variable while the power spectrum stochastic component is formed by both spikes and "jumps" on every level of the hierarchy. The information "passport" characteristics that are determined by fitting the derived expressions to the experimental variations for the stochastic components of power spectra and structural functions are interpreted as the correlation times and parameters that describe the rate of "memory loss" on these correlation time intervals for different irregularities. The number of the extracted parameters is determined by the requirements of the problem under study. Application of this approach to the analysis of tremor velocity signals for a Parkinsonian patient is discussed.
Shiokawa, Yuka; Date, Yasuhiro; Kikuchi, Jun
2018-02-21
Computer-based technological innovation provides advancements in sophisticated and diverse analytical instruments, enabling massive amounts of data collection with relative ease. This is accompanied by a fast-growing demand for technological progress in data mining methods for analysis of big data derived from chemical and biological systems. From this perspective, use of a general "linear" multivariate analysis alone limits interpretations due to "non-linear" variations in metabolic data from living organisms. Here we describe a kernel principal component analysis (KPCA)-incorporated analytical approach for extracting useful information from metabolic profiling data. To overcome the limitation of important variable (metabolite) determinations, we incorporated a random forest conditional variable importance measure into our KPCA-based analytical approach to demonstrate the relative importance of metabolites. Using a market basket analysis, hippurate, the most important variable detected in the importance measure, was associated with high levels of some vitamins and minerals present in foods eaten the previous day, suggesting a relationship between increased hippurate and intake of a wide variety of vegetables and fruits. Therefore, the KPCA-incorporated analytical approach described herein enabled us to capture input-output responses, and should be useful not only for metabolic profiling but also for profiling in other areas of biological and environmental systems.
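A hedged sketch of the two pieces described follows: kernel PCA for a non-linear summary of metabolite profiles, and a random-forest importance measure to rank the original metabolites against a kernel-PCA score. scikit-learn's impurity-based importances stand in for the conditional variable importance used in the paper, and the data shapes and kernel parameters are invented.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_samples, n_metabolites = 60, 40
X = rng.standard_normal((n_samples, n_metabolites))      # placeholder metabolite table

# Non-linear (RBF-kernel) principal components of the metabolic profiles.
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.05)
scores = kpca.fit_transform(X)

# Rank metabolites by how well they predict the first kernel-PC score.
# (Impurity-based importance here; the paper uses a random-forest *conditional* importance.)
rf = RandomForestRegressor(n_estimators=500, random_state=4).fit(X, scores[:, 0])
ranking = np.argsort(rf.feature_importances_)[::-1]
print("most influential metabolite indices:", ranking[:5])
```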
Rinaldi, Maurizio; Gindro, Roberto; Barbeni, Massimo; Allegrone, Gianna
2009-01-01
Orange (Citrus sinensis L.) juice comprises a complex mixture of volatile components that are difficult to identify and quantify. Classification and discrimination of the varieties on the basis of the volatile composition could help to guarantee the quality of a juice and to detect possible adulteration of the product. The objectives were to provide information on the amounts of volatile constituents in fresh-squeezed juices from four orange cultivars and to establish suitable discrimination rules to differentiate orange juices using new chemometric approaches. Fresh juices of four orange cultivars were analysed by headspace solid-phase microextraction (HS-SPME) coupled with GC-MS. Principal component analysis, linear discriminant analysis and heuristic methods, such as neural networks, allowed clustering of the data from the HS-SPME analysis, while genetic algorithms addressed the problem of data reduction. To check the quality of the results, the chemometric techniques were also evaluated on a sample. Thirty volatile compounds were identified by HS-SPME and GC-MS analyses and their relative amounts calculated. Differences in the composition of orange juice volatile components were observed. The chosen orange cultivars could be discriminated using neural networks, genetic relocation algorithms and linear discriminant analysis. Genetic algorithms applied to the data were also able to detect the most significant compounds. SPME is a useful technique for investigating orange juice volatile composition, and a flexible chemometric approach is able to correctly separate the juices.
Robust-mode analysis of hydrodynamic flows
NASA Astrophysics Data System (ADS)
Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.
2017-04-01
The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
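Robust-mode analysis is built on Koopman decomposition; the standard exact-DMD computation below conveys the basic idea (modes and eigenvalues estimated from snapshot pairs) without reproducing the robustness criterion of the paper. The snapshot matrix and truncation rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
snapshots = rng.standard_normal((200, 50))     # 50 snapshots of a 200-dimensional field (placeholder)

X, Y = snapshots[:, :-1], snapshots[:, 1:]     # snapshot pairs (x_k, x_{k+1})
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 10                                         # truncation rank (assumed)
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r, :].conj().T

# Reduced Koopman/DMD operator and its spectrum.
Atilde = Ur.conj().T @ Y @ Vr / sr
eigvals, W = np.linalg.eig(Atilde)
modes = Y @ Vr / sr @ W                        # exact DMD modes
rates = np.log(eigvals.astype(complex))        # growth rates / frequencies, up to the sampling step
```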
McCarty, James; Parrinello, Michele
2017-11-28
In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
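For readers unfamiliar with time-lagged independent component analysis, the sketch below shows its core computation: a generalized eigenvalue problem between the time-lagged and instantaneous covariance matrices of mean-free collective-variable trajectories. It is a bare-bones illustration rather than the metadynamics workflow of the paper, and the lag time and trajectory are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
traj = rng.standard_normal((5000, 6))            # trajectory of 6 candidate collective variables
lag = 50                                         # lag time in frames (assumed)

X = traj - traj.mean(axis=0)
C0 = X.T @ X / len(X)                            # instantaneous covariance
Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)       # time-lagged covariance
Ct = 0.5 * (Ct + Ct.T)                           # symmetrize

# Generalized eigenproblem Ct v = lambda C0 v; the slowest modes have eigenvalues near 1.
eigvals, eigvecs = eigh(Ct, C0)
order = np.argsort(eigvals)[::-1]
slow_modes = eigvecs[:, order[:2]]               # two slowest collective coordinates
projected = X @ slow_modes                       # trajectory projected onto the TICA modes
```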
The impact of seasonal signals on spatio-temporal filtering
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2016-04-01
Existence of Common Mode Errors (CMEs) in permanent GNSS networks contributes to spatial and temporal correlation in residual time series. Time series from permanently observing GNSS stations separated by less than 2 000 km are similarly influenced by such CME sources as: mismodelling (Earth Orientation Parameters - EOP, satellite orbits or antenna phase center variations) during the process of the reference frame realization, large-scale atmospheric and hydrospheric effects, as well as small-scale crust deformations. Residuals obtained by detrending and deseasonalising topocentric GNSS time series and arranged epoch-by-epoch form an observation matrix independently for each component (North, East, Up). The CME is treated as the internal structure of the data. Assuming a uniform temporal function across the network, it is possible to filter the CME out using a Principal Component Analysis (PCA) approach. Some of the CME sources described above may be reflected as a wide range of frequencies in GPS residual time series. In order to determine the impact of seasonal signal modeling on the spatial correlation in the network, and consequently on the results of CME filtering, we chose two ways of modeling. The first approach, commonly presented by previous authors, models only annual and semi-annual oscillations with Least-Squares Estimation (LSE). In the second, the residuals result from modeling a deterministic part that includes fortnightly periods plus up to the 9th harmonics of the Chandlerian, tropical and draconitic oscillations. Correlation coefficients for the residuals were determined, together with the KMO (Kaiser-Meyer-Olkin) statistic and Bartlett's test of sphericity. For this research we used time series expressed in ITRF2008 provided by JPL (Jet Propulsion Laboratory). GPS processing was performed using the GIPSY-OASIS software in PPP (Precise Point Positioning) mode. In order to form a GPS station network that meets the demand of a uniform spatial response to the CME, we chose 18 stations located in Central Europe. The created network extends up to 1500 kilometers. The KMO statistic indicates whether a component analysis may be useful for a chosen data set. We obtained KMO statistic values of 0.87 and 0.62 for the residuals of the Up component after the first and second approaches were applied, respectively, which means that both sets of residuals share common errors. Bartlett's test of sphericity confirmed that in both cases there are correlations in the residuals. Other important results are the eigenvalues expressed as the percentage of the total variance explained by the first few components in the PCA. For the North, East and Up components we obtain 68%, 75%, 65% and 47%, 54%, 52%, respectively, after the first and second approaches were applied. The results of CME filtering using the PCA approach performed on both residual time series directly influence the velocity uncertainty of permanent stations. In our case, spatial filtering reduces the uncertainty of the velocity from 0.5 to 0.8 mm for horizontal components and from 0.6 to 0.9 mm on average for the Up component when annual and semi-annual signals were assumed. Nevertheless, when the second approach to deterministic part modelling was used, a deterioration of the velocity uncertainty was noticed only for the Up component, probably due to the much higher autocorrelation in the time series compared to the horizontal components.
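As a schematic companion to the CME filtering described above, the following sketch stacks residuals epoch-by-epoch, extracts the first principal component as a network-wide common mode, and subtracts its reconstruction from every station. The station count, number of epochs, and number of removed components are illustrative, and the actual processing (GIPSY-OASIS solutions, KMO and Bartlett checks) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n_epochs, n_stations = 3650, 18                           # ~10 years of daily solutions, 18 stations
residuals = rng.standard_normal((n_epochs, n_stations))   # Up-component residuals (placeholder)

# Principal components of the epoch-by-epoch residual matrix.
X = residuals - residuals.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Treat the first component as the Common Mode Error and remove it at every station.
k = 1
cme = U[:, :k] * s[:k] @ Vt[:k, :]
filtered = X - cme

explained = (s**2) / (s**2).sum()
print(f"variance explained by PC1: {explained[0]:.1%}")
```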
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Astronomical component estimation (ACE v.1) by time-variant sinusoidal modeling
NASA Astrophysics Data System (ADS)
Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan
2016-09-01
Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on (fast) Fourier transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic can make it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. This drawback is circumvented by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach was proven useful to characterize audio signals (music and speech), which are non-stationary in nature. Paleoclimate proxy signals and audio signals share similar dynamics; the only difference is the frequency relationship between the different components. A harmonic-frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, this difference is irrelevant for the problem of separating simultaneous changes in amplitude and frequency. Using an approach with overlapping analysis frames, the model (Astronomical Component Estimation, version 1: ACE v.1) captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretations, whereas the latter are estimated by means of linear least-squares. As output, the model provides the orbital component waveform, either in the depth or time domain. Uncertainty analyses of the model estimates are performed using Monte Carlo simulations. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns reconstruct changes in accumulation rate, whereas amplitude modulation identifies eccentricity-modulated precession. The functioning of the time-variant sinusoidal model is illustrated and validated using a synthetic insolation signal. The new modeling approach is tested on two case studies: (1) a Pliocene-Pleistocene benthic δ18O record from Ocean Drilling Program (ODP) Site 846 and (2) a Danian magnetic susceptibility record from the Contessa Highway section, Gubbio, Italy.
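The core of the model is linear least squares on a stationary sinusoid at the component's mean frequency, amplitude-modulated by polynomials. The sketch below fits such a model inside a single analysis frame and recovers an instantaneous amplitude; the mean frequency, polynomial order, and synthetic signal are assumptions, and the overlapping-frame machinery and Monte Carlo uncertainty analysis of ACE v.1 are omitted.

```python
import numpy as np

# Synthetic proxy frame: a slowly amplitude- and phase-drifting component near 0.025 cycles/unit.
t = np.arange(0.0, 400.0, 1.0)
f0 = 0.025                                             # assumed mean frequency of the orbital component
signal = (1 + 0.3 * t / t[-1]) * np.cos(2 * np.pi * f0 * t + 0.5 * np.sin(2 * np.pi * t / 400))

# Design matrix: polynomial-modulated quadrature pair  t^k cos(2*pi*f0*t), t^k sin(2*pi*f0*t).
order = 3
tau = (t - t.mean()) / (t.max() - t.min())             # scaled time for numerical conditioning
cols = [tau**k * np.cos(2 * np.pi * f0 * t) for k in range(order + 1)]
cols += [tau**k * np.sin(2 * np.pi * f0 * t) for k in range(order + 1)]
G = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(G, signal, rcond=None)
a = sum(coef[k] * tau**k for k in range(order + 1))                # in-phase envelope
b = sum(coef[order + 1 + k] * tau**k for k in range(order + 1))    # quadrature envelope
component = G @ coef                                   # reconstructed orbital component waveform
inst_amplitude = np.hypot(a, b)                        # instantaneous amplitude of the component
```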
Age Analysis of Public Library Collections. Final Report.
ERIC Educational Resources Information Center
Wallace, Danny P.; And Others
The use of information regarding the ages of library items is a standard component of many approaches to weeding library collections, and has a long history in the literature of collection management. Current and past approaches to using aging information to make weeding decisions make use of very arbitrary decision criteria. This study examined…
ERIC Educational Resources Information Center
Shen, Hao-Yu; Shen, Bo; Hardacre, Christopher
2013-01-01
A systematic approach to develop the teaching of instrumental analytical chemistry is discussed, as well as a conceptual framework for organizing and executing lectures and a laboratory course. Three main components are used in this course: theoretical knowledge developed in the classroom, simulations via a virtual laboratory, and practical…
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
A Cognitive Component Analysis Approach for Developing Game-Based Spatial Learning Tools
ERIC Educational Resources Information Center
Hung, Pi-Hsia; Hwang, Gwo-Jen; Lee, Yueh-Hsun; Su, I-Hsiang
2012-01-01
Spatial ability has been recognized as one of the most important factors affecting the mathematical performance of students. Previous studies on spatial learning have mainly focused on developing strategies to shorten the problem-solving time of learners for very specific learning tasks. Such an approach usually has limited effects on improving…
A Framework for Integrated Component and System Analyses of Instabilities
NASA Technical Reports Server (NTRS)
Ahuja, Vineet; Erwin, James; Arunajatesan, Srinivasan; Cattafesta, Lou; Liu, Fei
2010-01-01
Instabilities associated with fluid handling and operation in liquid rocket propulsion systems and test facilities usually manifest themselves as structural vibrations or some form of structural damage. While the source of the instability is directly related to the performance of a component such as a turbopump, valve or a flow control element, the associated pressure fluctuations as they propagate through the system have the potential to amplify and resonate with natural modes of the structural elements and components of the system. In this paper, the authors have developed an innovative multi-level approach that involves analysis at the component and systems level. The primary source of the unsteadiness is modeled with a high-fidelity hybrid RANS/LES based CFD methodology that has been previously used to study instabilities in feed systems. This high fidelity approach is used to quantify the instability and understand the physics associated with the instability. System response to the driving instability is determined through a transfer matrix approach wherein the incoming and outgoing pressure and velocity fluctuations are related through a transfer (or transmission) matrix. The coefficients of the transfer matrix for each component (i.e. valve, pipe, orifice etc.) are individually derived from the flow physics associated with the component. A demonstration case representing a test loop/test facility comprised of a network of elements is constructed with the transfer matrix approach and the amplification of modes analyzed as the instability propagates through the test loop.
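The system-level part of the framework relates incoming and outgoing pressure and velocity fluctuations through transfer matrices that are multiplied along the flow path. The sketch below chains a few hypothetical 2x2 element matrices at a single frequency; the element models (lossless pipe, lumped orifice) and every parameter value are placeholders, not the matrices derived in the paper.

```python
import numpy as np

def pipe_matrix(length, c=340.0, rho=1.2, area=0.01, omega=2 * np.pi * 50.0):
    """Lossless duct transfer matrix relating (p', u') at the outlet to the inlet (acoustic approximation)."""
    k = omega / c
    z = rho * c / area
    return np.array([[np.cos(k * length), 1j * z * np.sin(k * length)],
                     [1j * np.sin(k * length) / z, np.cos(k * length)]])

def orifice_matrix(resistance):
    """Lumped resistive element: pressure drop proportional to the velocity fluctuation."""
    return np.array([[1.0, resistance], [0.0, 1.0]])

# Chain: pipe -> orifice (valve-like loss) -> pipe; the overall matrix is the ordered product.
system = pipe_matrix(2.0) @ orifice_matrix(500.0) @ pipe_matrix(1.5)

inlet = np.array([1.0 + 0j, 0.002 + 0j])    # assumed incoming (pressure, velocity) fluctuation
outlet = system @ inlet
print("transmitted pressure fluctuation magnitude:", abs(outlet[0]))
```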
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1995-01-01
A major difficulty in designing aeropropulsion systems is that of identifying and understanding the interactions between the separate engine components and disciplines (e.g., fluid mechanics, structural mechanics, heat transfer, material properties, etc.). The traditional analysis approach is to decompose the system into separate components with the interaction between components being evaluated by the application of each of the single disciplines in a sequential manner. Here, one discipline uses information from the calculation of another discipline to determine the effects of component coupling. This approach, however, may not properly identify the consequences of these effects during the design phase, leaving the interactions to be discovered and evaluated during engine testing. This contributes to the time and cost of developing new propulsion systems as, typically, several design-build-test cycles are needed to fully identify multidisciplinary effects and reach the desired system performance. The alternative to sequential isolated component analysis is to use multidisciplinary coupling at a more fundamental level. This approach has been made more plausible due to recent advancements in computation simulation along with application of concurrent engineering concepts. Computer simulation systems designed to provide an environment which is capable of integrating the various disciplines into a single simulation system have been proposed and are currently being developed. One such system is being developed by the Numerical Propulsion System Simulation (NPSS) project. The NPSS project, being developed at the Interdisciplinary Technology Office at the NASA Lewis Research Center is a 'numerical test cell' designed to provide for comprehensive computational design and analysis of aerospace propulsion systems. It will provide multi-disciplinary analyses on a variety of computational platforms, and a user-interface consisting of expert systems, data base management and visualization tools, to allow the designer to investigate the complex interactions inherent in these systems. An interactive programming software system, known as the Application Visualization System (AVS), was utilized for the development of the propulsion system simulation. The modularity of this system provides the ability to couple propulsion system components, as well as disciplines, and provides for the ability to integrate existing, well established analysis codes into the overall system simulation. This feature allows the user to customize the simulation model by inserting desired analysis codes. The prototypical simulation environment for multidisciplinary analysis, called Turbofan Engine System Simulation (TESS), which incorporates many of the characteristics of the simulation environment proposed herein, is detailed.
NASA Astrophysics Data System (ADS)
Pelizardi, Flavia; Bea, Sergio A.; Carrera, Jesús; Vives, Luis
2017-07-01
Mixing calculations (i.e., the calculation of the proportions in which end-members are mixed in a sample) are essential for hydrological research and water management. However, they typically require the use of conservative species, a condition that may be difficult to meet due to chemical reactions. Mixing calculations also require identifying end-member waters, which is usually achieved through End Member Mixing Analysis (EMMA). We present a methodology to help in the identification of both end-members and such reactions, so as to improve mixing ratio calculations. The proposed approach consists of: (1) identifying the potential chemical reactions with the help of EMMA; (2) defining decoupled conservative chemical components consistent with those reactions; (3) repeating EMMA with the decoupled (i.e., conservative) components, so as to identify end-member waters; and (4) computing mixing ratios using the new set of components and end-members. The approach is illustrated by application to two synthetic mixing examples involving mineral dissolution and cation exchange reactions. Results confirm that the methodology can be successfully used to identify the geochemical processes affecting the mixtures, thus improving the accuracy of mixing ratio calculations and relaxing the need for conservative species.
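Step (4) of the methodology reduces to a constrained least-squares problem: find non-negative mixing fractions, summing to one, that reproduce the sample's conservative component concentrations from the end-member compositions. A generic sketch with invented concentrations is shown below; it does not include the reaction-based component decoupling of steps (1)-(3).

```python
import numpy as np
from scipy.optimize import lsq_linear

# Columns: end-member waters; rows: conservative (decoupled) chemical components.
endmembers = np.array([[1.0, 6.0, 2.5],     # e.g. chloride
                       [0.2, 3.0, 1.0],     # e.g. a decoupled cation component
                       [4.0, 1.0, 9.0]])    # invented values
sample = np.array([3.1, 1.3, 5.2])

# Augment with a heavily weighted sum-to-one constraint and bound the fractions to [0, 1].
A = np.vstack([endmembers, 1e3 * np.ones((1, 3))])
b = np.concatenate([sample, [1e3]])
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print("mixing ratios:", np.round(res.x, 3), "sum:", res.x.sum().round(3))
```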
Study on fast discrimination of varieties of yogurt using Vis/NIR-spectroscopy
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Deng, Xunfei; Li, Xiaoli
2006-09-01
A new approach for the discrimination of yogurt varieties by means of Vis/NIR spectroscopy is presented in this paper. First, the yogurt varieties were clustered through principal component analysis (PCA) of the spectra of 5 typical kinds of yogurt. The analysis showed that the cumulative reliability of PC1 and PC2 (the first two principal components) was more than 98.956%, and the cumulative reliability from PC1 to PC7 (the first seven principal components) was 99.97%. Second, an artificial neural network (ANN-BP) discrimination model was set up. The first seven principal components of the samples were used as the ANN-BP inputs and the yogurt type values were used as the outputs, and a three-layer ANN-BP model was built. In this model, each yogurt variety included 27 samples, the total number of samples was 135, and the remaining 25 samples were used as the prediction set. The results showed that the discrimination rate for the five yogurt varieties was 100%, indicating that this model is reliable and practicable. A new approach for the rapid and non-destructive discrimination of yogurt varieties is thus put forward.
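A schematic of the PCA-then-neural-network classifier described above, written with scikit-learn rather than the original software; the spectra are synthetic placeholders, and the network size and hold-out split are assumptions loosely mirroring the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_per_class, n_wavelengths, n_classes = 27, 256, 5
X = rng.standard_normal((n_per_class * n_classes, n_wavelengths))   # placeholder Vis/NIR spectra
y = np.repeat(np.arange(n_classes), n_per_class)                    # yogurt variety labels

# Keep the first seven principal components as network inputs, as in the abstract.
scores = PCA(n_components=7).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(scores, y, test_size=25, stratify=y, random_state=8)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=8).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```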
Directional connectivity of resting state human fMRI data using cascaded ICA-PDC analysis.
Silfverhuth, Minna J; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Veijola, Juha; Tervonen, Osmo; Kiviniemi, Vesa
2011-11-01
Directional connectivity measures, such as partial directed coherence (PDC), give us means to explore effective connectivity in the human brain. By utilizing independent component analysis (ICA), the original data-set was reduced for further PDC analysis. The aim was to test this cascaded ICA-PDC approach in causality studies of human functional magnetic resonance imaging (fMRI) data. Resting state group data were imaged from 55 subjects using a 1.5 T scanner (TR 1800 ms, 250 volumes). Temporal concatenation group ICA in a probabilistic ICA framework and further repeatability runs (n = 200) were undertaken. The reduced data-set included the time series of the following nine ICA components: secondary somatosensory cortex, inferior temporal gyrus, intracalcarine cortex, primary auditory cortex, amygdala, putamen, and the frontal medial cortex, posterior cingulate cortex and precuneus, comprising the default mode network components. Re-normalized PDC (rPDC) values were computed to determine directional connectivity at the group level at each frequency. An integrative role is suggested for the precuneus, while the primary auditory cortex and amygdala may be proposed as the major divergence regions. This study demonstrates the potential of the cascaded ICA-PDC approach in directional connectivity studies of human fMRI.
NASA Astrophysics Data System (ADS)
Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément
2015-07-01
3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...
2017-06-01
Tracking graphs are a well established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.
Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
NASA Astrophysics Data System (ADS)
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between both methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (then a sum of variation components does not give 100 %) and the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part to part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods among the professional society, future use, and acceptance by manufacturing industry were also discussed. Nowadays, the AIAG is the leading method in the industry.
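The difference between the two ways of expressing variation components can be seen with a few lines of arithmetic: percentage shares based on standard deviations do not add up to 100 %, whereas shares based on variances do. The numbers below are invented purely to illustrate that point and are not taken from the study.

```python
import numpy as np

# Assumed variance components from a gauge study (invented values).
var_repeatability = 0.04    # EV
var_reproducibility = 0.01  # AV
var_part_to_part = 0.20     # PV
total_var = var_repeatability + var_reproducibility + var_part_to_part

# Variance-based shares (honest GRR style): these sum to 100 %.
shares_var = np.array([var_repeatability, var_reproducibility, var_part_to_part]) / total_var

# Standard-deviation-based shares (AIAG %StudyVar style): these do not sum to 100 %.
shares_sd = np.sqrt([var_repeatability, var_reproducibility, var_part_to_part]) / np.sqrt(total_var)

print("variance-based shares sum:", round(shares_var.sum() * 100, 1), "%")   # 100.0 %
print("sd-based shares sum:", round(shares_sd.sum() * 100, 1), "%")          # about 149 %
```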
Hardisty, Frank; Robinson, Anthony C.
2010-01-01
In this paper we present the GeoViz Toolkit, an open-source, internet-delivered program for geographic visualization and analysis that features a diverse set of software components which can be flexibly combined by users who do not have programming expertise. The design and architecture of the GeoViz Toolkit allows us to address three key research challenges in geovisualization: allowing end users to create their own geovisualization and analysis component set on-the-fly, integrating geovisualization methods with spatial analysis methods, and making geovisualization applications sharable between users. Each of these tasks necessitates a robust yet flexible approach to inter-tool coordination. The coordination strategy we developed for the GeoViz Toolkit, called Introspective Observer Coordination, leverages and combines key advances in software engineering from the last decade: automatic introspection of objects, software design patterns, and reflective invocation of methods. PMID:21731423
Study of advanced techniques for determining the long-term performance of components
NASA Technical Reports Server (NTRS)
1972-01-01
A study was conducted of techniques capable of determining the performance and reliability of components for spacecraft liquid propulsion applications on long-term missions. The study utilized two major approaches: improvement of the existing technology and the evolution of new technology. The criteria established and the methods evolved are applicable to valve components. Primary emphasis was placed on the oxygen difluoride and diborane propellant combination. The investigation included analysis, fabrication, and tests of experimental equipment to provide data and performance criteria.
Dynamics of Rotating Multi-component Turbomachinery Systems
NASA Technical Reports Server (NTRS)
Lawrence, Charles
1993-01-01
The ultimate objective of turbomachinery vibration analysis is to predict both the overall, as well as component dynamic response. To accomplish this objective requires complete engine structural models, including multistages of bladed disk assemblies, flexible rotor shafts and bearings, and engine support structures and casings. In the present approach each component is analyzed as a separate structure and boundary information is exchanged at the inter-component connections. The advantage of this tactic is that even though readily available detailed component models are utilized, accurate and comprehensive system response information may be obtained. Sample problems, which include a fixed base rotating blade and a blade on a flexible rotor, are presented.
Time to contact and the control of manual prehension.
Watson, M K; Jakobson, L S
1997-11-01
In the present study, a kinematic analysis was made of unconstrained, natural prehension movements directed toward an object approaching the observer on a conveyor belt at one of three constant velocities, from one of three different directions (head-on or along the fronto-parallel plane coming either from the subject's left or right). Subjects were required to grasp the object when it reached a target located 20 cm directly in front of the hand's start position. The kinematic analysis revealed that both the transport and grasp components of the movement changed in response to the experimental manipulations, but did so in a manner that guaranteed that, for objects approaching from a given direction, hand closure would begin at a constant time prior to object contact (regardless of the object's approach speed). The kinematic analysis also revealed, however, that the onset of hand closure began earlier with objects approaching from the right than from other directions -- an effect which would not be predicted if time to contact was the key variable controlling the onset of hand closure. These results, then, lend only partial support to the theory that temporal coordination between the transport and grasp components of prehension is ensured through their common dependence on time to contact information.
Aerodynamic Analysis of the Truss-Braced Wing Aircraft Using Vortex-Lattice Superposition Approach
NASA Technical Reports Server (NTRS)
Ting, Eric Bi-Wen; Reynolds, Kevin Wayne; Nguyen, Nhan T.; Totah, Joseph J.
2014-01-01
The SUGAR Truss-Braced Wing (TBW) aircraft concept is a Boeing-developed N+3 aircraft configuration funded by the NASA ARMD Fixed Wing Project. This future generation transport aircraft concept is designed to be aerodynamically efficient by employing a high aspect ratio wing design. The aspect ratio of the TBW is on the order of 14, which is significantly greater than those of current generation transport aircraft. This paper presents a recent aerodynamic analysis of the TBW aircraft using the conceptual vortex-lattice aerodynamic tool VORLAX and an aerodynamic superposition approach. Based on the underlying linear potential flow theory, the principle of aerodynamic superposition is leveraged to deal with the complex aerodynamic configuration of the TBW. By decomposing the full configuration of the TBW into individual aerodynamic lifting components, the total aerodynamic characteristics of the full configuration can be estimated from the contributions of the individual components. The aerodynamic superposition approach shows excellent agreement with CFD results computed by FUN3D, USM3D, and STAR-CCM+.
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
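A minimal numerical sketch of a weighted least-squares fit in which each calibration point's weight shrinks as more load components are intentionally loaded, in the spirit of the approach described; the weighting rule, regression model, and data are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(9)
n_points, n_components = 200, 6
loads = rng.uniform(-1.0, 1.0, (n_points, n_components))

# Emulate single- and multi-component loadings: randomly zero out some load components.
loads *= rng.random((n_points, n_components)) < 0.5
outputs = loads @ rng.standard_normal(n_components) + 0.01 * rng.standard_normal(n_points)

# Weight rule (assumed): weight = 1 / (number of intentionally loaded components), at least 1 component.
n_loaded = np.maximum((np.abs(loads) > 0).sum(axis=1), 1)
w = 1.0 / n_loaded

# Weighted least squares: solve (X^T W X) beta = X^T W y.
X = loads
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * outputs))
print("estimated sensitivities:", np.round(beta, 3))
```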
Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty
NASA Astrophysics Data System (ADS)
Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang
2016-12-01
Centrifugal compressors often suffer various defects, such as impeller cracking, that result in forced outages of the entire plant. Damage diagnostics and condition monitoring of such a turbomachinery system have become increasingly important and powerful tools to prevent potential failures in components and to reduce unplanned forced outages and maintenance costs, while improving the reliability, availability and maintainability of a turbomachinery system. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three advanced state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficients to remove possible imperfections. The ratio of posterior odds Bayesian approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. Power spectral density estimated by the Welch method is utilized to evaluate the effectiveness of the Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce the dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics. The proposed methodology and generalized framework are demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time series and contour analyses of vibration signals and principal components.
[Relevance and validity of a new French composite index to measure poverty on a geographical level].
Challier, B; Viel, J F
2001-02-01
A number of disease conditions are influenced by deprivation. Geographical measurement of deprivation can provide an independent contribution to individual measures by accounting for the social context. Such a geographical approach, based on deprivation indices, is classical in Great Britain but scarcely used in France. The objective of this work was to build and validate an index readily usable in French municipalities and cantons. Socioeconomic data (unemployment, occupations, housing specifications, income, etc.) were derived from the 1990 census of municipalities and cantons in the Doubs departement. A new index was built by principal components analysis on the municipality data. The validity of the new index was checked and tested for correlations with British deprivation indices. Principal components analysis on municipality data identified four components (explaining 76% of the variance). Only the first component (CP1 explaining 42% of the variance) was retained. Content validity (wide choice of potential deprivation items, correlation between items and CP1: 0.52 to 0.96) and construct validity (CP1 socially relevant; Cronbach's alpha=0.91; correlation between CP1 and three out of four British indices ranging from 0.73 to 0.88) were sufficient. Analysis on canton data supported that on municipality data. The validation of the new index being satisfactory, the user will have to make a choice. The new index, CP1, is closer to the local background and was derived from data from a French departement. It is therefore better adapted to more descriptive approaches such as health care planning. To examine the relationship between deprivation and health with a more etiological approach, the British indices (anteriority, international comparisons) would be more appropriate, but CP1, once validated in various health problem situations, should be most useful for French studies.
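The sketch below shows, with entirely hypothetical municipality-level indicators, how a composite index such as CP1 can be taken as the first principal component of standardized deprivation items; the variable names, data, and number of indicators are illustrative only.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)

    # Hypothetical municipality-level indicators (rows = municipalities).
    n = 300
    unemployment = rng.normal(10, 3, n)
    low_income   = 0.8 * unemployment + rng.normal(0, 2, n)
    no_car       = 0.6 * unemployment + rng.normal(0, 2, n)
    overcrowding = 0.5 * unemployment + rng.normal(0, 2, n)
    X = np.column_stack([unemployment, low_income, no_car, overcrowding])

    # Standardize the items, then retain only the first principal component as the index.
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=1).fit(Xs)
    cp1 = pca.transform(Xs).ravel()          # one deprivation score per municipality

    print("variance explained by the first component:", round(pca.explained_variance_ratio_[0], 2))
    print("item loadings:", np.round(pca.components_[0], 2))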
SLS Navigation Model-Based Design Approach
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable versus a requirement which is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The format of the models and the requirements are described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned associated with the implementation of the Model Based Design approach and process from infancy to verification and certification are discussed
A Two-Step Approach to Analyze Satisfaction Data
ERIC Educational Resources Information Center
Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.
2011-01-01
In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…
Alternative Instructional Strategies in an IS Curriculum
ERIC Educational Resources Information Center
Parker, Kevin R.; LeRouge, Cynthia; Trimmer, Ken
2005-01-01
Systems Analysis and Design is a core component of an education in information systems. To appeal to a wider range of constituents and facilitate the learning process, the content of a traditional Systems Analysis and Design course has been supplemented with an alternative modeling approach. This paper presents an instructional design that…
Opportunities for Applied Behavior Analysis in the Total Quality Movement.
ERIC Educational Resources Information Center
Redmon, William K.
1992-01-01
This paper identifies critical components of recent organizational quality improvement programs and specifies how applied behavior analysis can contribute to quality technology. Statistical Process Control and Total Quality Management approaches are compared, and behavior analysts are urged to build their research base and market behavior change…
Approaches to the Analysis of School Costs, an Introduction.
ERIC Educational Resources Information Center
Payzant, Thomas
A review and general discussion of quantitative and qualitative techniques for the analysis of economic problems outside of education is presented to help educators discover new tools for planning, allocating, and evaluating educational resources. The pamphlet covers some major components of cost accounting, cost effectiveness, cost-benefit…
Vargas-Bello-Pérez, Einar; Toro-Mujica, Paula; Enriquez-Hidalgo, Daniel; Fellenberg, María Angélica; Gómez-Cortés, Pilar
2017-06-01
We used a multivariate chemometric approach to differentiate or associate retail bovine milks with different fat contents and non-dairy beverages, using fatty acid profiles and statistical analysis. We collected samples of bovine milk (whole, semi-skim, and skim; n = 62) and non-dairy beverages (n = 27), and we analyzed them using gas-liquid chromatography. Principal component analysis of the fatty acid data yielded 3 significant principal components, which accounted for 72% of the total variance in the data set. Principal component 1 was related to saturated fatty acids (C4:0, C6:0, C8:0, C12:0, C14:0, C17:0, and C18:0) and monounsaturated fatty acids (C14:1 cis-9, C16:1 cis-9, C17:1 cis-9, and C18:1 trans-11); whole milk samples were clearly differentiated from the rest using this principal component. Principal component 2 differentiated semi-skim milk samples by n-3 fatty acid content (C20:3n-3, C20:5n-3, and C22:6n-3). Principal component 3 was related to C18:2 trans-9,trans-12 and C20:4n-6, and its lower scores were observed in skim milk and non-dairy beverages. A cluster analysis yielded 3 groups: group 1 consisted of only whole milk samples, group 2 was represented mainly by semi-skim milks, and group 3 included skim milk and non-dairy beverages. Overall, the present study showed that a multivariate chemometric approach is a useful tool for differentiating or associating retail bovine milks and non-dairy beverages using their fatty acid profile. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Gu, Chaochao; Gao, Pin; Yang, Fan; An, Dongxuan; Munir, Mariya; Jia, Hanzhong; Xue, Gang; Ma, Chunyan
2017-05-01
The presence of antibiotic residues in the environment has been regarded as an emerging concern due to their potential adverse environmental consequences such as antibiotic resistance. However, the interaction between antibiotics and extracellular polymeric substances (EPSs) of biofilms in wastewater treatment systems is not entirely clear. In this study, the effect of ciprofloxacin (CIP) antibiotic on the biofilm EPS matrix was investigated and characterized using fluorescence excitation-emission matrix (EEM) and parallel factor (PARAFAC) analysis. Physicochemical analysis showed that proteins were the major EPS fraction, and their contents increased gradually with an increase in CIP concentration (0-300 μg/L). Based on the characterization of biofilm tightly bound EPS (TB-EPS) by EEM, three fluorescent components were identified by PARAFAC analysis. Component C1 was associated with protein-like substances, and components C2 and C3 belonged to humic-like substances. Component C1 exhibited an increasing trend as the CIP addition increased. Pearson's correlation results showed that CIP correlated significantly with the protein contents and component C1, while strong correlations were also found among UV254, dissolved organic carbon, humic acids, and component C3. A combined use of EEM-PARAFAC analysis and chemical measurements was demonstrated as a favorable approach for the characterization of variations in biofilm EPS in the presence of CIP antibiotic.
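A minimal sketch of a three-component PARAFAC decomposition of an excitation-emission tensor is given below, using the tensorly library (an assumption; the paper does not name its software) on a synthetic tensor that stands in for measured EEM data; the rank of three mirrors the number of components reported in the abstract.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(2)

    # Build a synthetic sample x excitation x emission tensor from three "fluorophores".
    n_samples, n_ex, n_em = 20, 30, 40
    ex = np.stack([np.exp(-0.5 * ((np.arange(n_ex) - c) / 4.0) ** 2) for c in (8, 15, 22)])
    em = np.stack([np.exp(-0.5 * ((np.arange(n_em) - c) / 6.0) ** 2) for c in (12, 20, 30)])
    conc = rng.uniform(0.2, 1.0, size=(n_samples, 3))
    eem = np.einsum("ik,kj,kl->ijl", conc, ex, em) + rng.normal(0, 0.01, (n_samples, n_ex, n_em))

    # Three-component PARAFAC decomposition of the trilinear EEM tensor.
    weights, factors = parafac(tl.tensor(eem), rank=3, n_iter_max=200, tol=1e-8)
    scores, ex_loadings, em_loadings = factors
    print("recovered score matrix shape:", scores.shape)      # (20, 3)
    print("excitation loading shape:", ex_loadings.shape)     # (30, 3)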
Sublaminate analysis of interlaminar fracture in composites
NASA Technical Reports Server (NTRS)
Armanios, E. A.; Rehfield, L. W.
1986-01-01
A simple analysis method based upon a transverse shear deformation theory and a sublaminate approach is utilized to analyze a mixed-mode edge delamination specimen. The analysis provides closed form expressions for the interlaminar shear stresses ahead of the crack, the total energy release rate, and the energy release rate components. The parameters controlling the behavior are identified. The effect of specimen stacking sequence and delamination interface on the strain energy release rate components is investigated. Results are compared with a finite element simulation for reference. The simple nature of the method makes it suitable for preliminary design analyses which require a large number of configurations to be evaluated quickly and economically.
Tailored multivariate analysis for modulated enhanced diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni
2015-10-21
Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, OCCR was able to supply only the latter information, as the former was hindered by changes in abundances of different crystal phases, which occurred besides structural variations in the specific case considered. To develop a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
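As a sketch of one building block of this workflow, the snippet below fits an ARIMA model to a synthetic load series with statsmodels and produces a short-term forecast with uncertainty bounds; the series, the model order, and the forecast horizon are illustrative assumptions, and the correlated multi-area and simulation steps are not reproduced.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)

    # Synthetic hourly load with a daily cycle plus autocorrelated noise.
    t = np.arange(24 * 60)
    load = 100 + 15 * np.sin(2 * np.pi * t / 24) + np.cumsum(rng.normal(0, 0.3, t.size))
    series = pd.Series(load, index=pd.date_range("2016-01-01", periods=t.size, freq="H"))

    # Fit a simple ARIMA model and forecast the next day with confidence intervals.
    model = ARIMA(series, order=(2, 1, 1)).fit()
    forecast = model.get_forecast(steps=24)
    print(forecast.predicted_mean.head())
    print(forecast.conf_int().head())      # forecast uncertainty interval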
ERIC Educational Resources Information Center
Mena, Juanjo; García, Marisa; Clarke, Anthony; Barkatsas, Anastasios
2016-01-01
Mentoring in Teacher Education is a key component in the professional development of student teachers. However, little research focuses on the knowledge shared and generated in mentoring conversations. In this paper, we explore the knowledge student teachers articulate in mentoring conversations under three different post-lesson approaches to…
ERIC Educational Resources Information Center
Howard, James E.
After identifying the components of a community college financial system as enrollment, costs, revenues and tuition, this paper addresses the need for a system dynamics analysis of a California community college district. This systems approach would assess the possible effects of alternative policies on the characteristic behavior modes of the…
Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A
2000-06-01
After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.
A case study in nonconformance and performance trend analysis
NASA Technical Reports Server (NTRS)
Maloy, Joseph E.; Newton, Coy P.
1990-01-01
As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.
NASA Astrophysics Data System (ADS)
Mahmoudishadi, S.; Malian, A.; Hosseinali, F.
2017-09-01
The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Selected PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones and to distinguish them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from the bedrock, the corresponding ICA independent components IC2, IC3 and IC6 separate these fractions of the data more accurately than the noisier PCA bands. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
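The sketch below illustrates the general workflow on synthetic stand-in data: PCA and FastICA are applied to the pixel spectra of a multiband image and three selected components are recomposed into an RGB composite. The number of bands, the component indices, and the data are assumptions, not ASTER values.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(4)

    # Synthetic "image": 9 spectral bands over a 100 x 100 scene, flattened to pixels x bands.
    rows, cols, bands = 100, 100, 9
    cube = rng.normal(size=(rows, cols, bands)).cumsum(axis=2)   # spectrally correlated bands
    pixels = cube.reshape(-1, bands)

    # Principal components ranked by variance, and independent components that
    # instead maximize the statistical independence of the recovered signals.
    pcs = PCA(n_components=6).fit_transform(pixels)
    ics = FastICA(n_components=6, random_state=0, max_iter=500).fit_transform(pixels)

    # Recompose three selected components (indices are illustrative) as an RGB composite.
    def to_rgb(scores, idx):
        img = scores[:, idx].reshape(rows, cols, 3)
        return (img - img.min()) / (img.max() - img.min())

    rgb_pca = to_rgb(pcs, [1, 2, 5])   # e.g. PC2, PC3, PC6 as in the abstract
    rgb_ica = to_rgb(ics, [1, 2, 5])
    print(rgb_pca.shape, rgb_ica.shape)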
Zeng, Yi; Land, Kenneth C.; Wang, Zhenglian; Gu, Danan
2012-01-01
This article presents the core methodological ideas, empirical assessments, and applications of an extended cohort-component approach (known as the “ProFamy model”) to simultaneously project household composition, living arrangements, and population sizes at the subnational level in the United States. Comparisons of projections from 1990 to 2000 using this approach with census counts in 2000 for each of the 50 states and Washington, DC show that 68.0 %, 17.0 %, 11.2 %, and 3.8 % of the absolute percentage errors are <3.0 %, 3.0 % to 4.99 %, 5.0 % to 9.99 %, and ≥10.0 %, respectively. Another analysis compares average forecast errors between the extended cohort-component approach and the still widely used classic headship-rate method, by projecting number-of-bedrooms–specific housing demands from 1990 to 2000 and then comparing those projections with census counts in 2000 for each of the 50 states and Washington, DC. The results demonstrate that, compared with the extended cohort-component approach, the headship-rate method produces substantially more serious forecast errors because it cannot project households by size while the extended cohort-component approach projects detailed household sizes. We also present illustrative household and living arrangement projections for the five decades from 2000 to 2050, with medium-, small-, and large-family scenarios for each of the 50 states; Washington, DC; six counties of southern California, and the Minneapolis–St. Paul metropolitan area. Among many interesting numerical outcomes of household and living arrangement projections with medium, low, and high bounds, the aging of American households over the next few decades across all states/areas is particularly striking. Finally, the limitations of the present study and potential future lines of research are discussed. PMID:23208782
Pandžić, Elvis; Abu-Arish, Asmahan; Whan, Renee M; Hanrahan, John W; Wiseman, Paul W
2018-02-16
Molecular, vesicular and organellar flows are of fundamental importance for the delivery of nutrients and essential components used in cellular functions such as motility and division. With recent advances in fluorescence/super-resolution microscopy modalities we can resolve the movements of these objects at higher spatio-temporal resolutions and with better sensitivity. Previously, spatio-temporal image correlation spectroscopy has been applied to map molecular flows by correlation analysis of fluorescence fluctuations in image series. However, an underlying assumption of this approach is that the sampled time windows contain one dominant flowing component. Although this was true for most of the cases analyzed earlier, in some situations two or more different flowing populations can be present in the same spatio-temporal window. We introduce an approach, termed velocity landscape correlation (VLC), which detects and extracts multiple flow components present in a sampled image region via an extension of the correlation analysis of fluorescence intensity fluctuations. We first demonstrate theoretically how this approach works and then test the performance of the method with a range of computer-simulated image series with varying flow dynamics. Finally we apply VLC to study variable fluxing of STIM1 proteins on microtubules connected to the plasma membrane of Cystic Fibrosis Bronchial Epithelial (CFBE) cells. Copyright © 2018 Elsevier Inc. All rights reserved.
Xu, J; Durand, L G; Pibarot, P
2000-10-01
This paper describes a new approach based on the time-frequency representation of transient nonlinear chirp signals for modeling the aortic (A2) and the pulmonary (P2) components of the second heart sound (S2). It is demonstrated that each component is a narrow-band signal with decreasing instantaneous frequency defined by its instantaneous amplitude and its instantaneous phase. Each component is also a polynomial phase signal, the instantaneous phase of which can be accurately represented by a polynomial having an order of thirty. A dechirping approach is used to obtain the instantaneous amplitude of each component while reducing the effect of the background noise. The analysis-synthesis procedure is applied to 32 isolated A2 and 32 isolated P2 components recorded in four pigs with pulmonary hypertension. The mean +/- standard deviation of the normalized root-mean-squared error (NRMSE) and the correlation coefficient (rho) between the original and the synthesized signal components were: NRMSE = 2.1 +/- 0.3% and rho = 0.97 +/- 0.02 for A2 and NRMSE = 2.52 +/- 0.5% and rho = 0.96 +/- 0.02 for P2. These results confirm that each component can be modeled as mono-component nonlinear chirp signals of short duration with energy distributions concentrated along its decreasing instantaneous frequency.
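A minimal sketch of the analysis-synthesis idea follows, assuming a synthetic decaying chirp in place of a recorded A2 component: the instantaneous amplitude and phase are taken from the Hilbert analytic signal, the phase is fit with a polynomial (a lower order than the thirty reported in the paper), and the component is resynthesized and scored with NRMSE and correlation.

    import numpy as np
    from scipy.signal import hilbert

    fs = 4000.0
    t = np.arange(0, 0.06, 1 / fs)                 # ~60 ms component

    # Synthetic A2-like component: decaying amplitude, decreasing instantaneous frequency.
    inst_freq = 60.0 - 300.0 * t                   # Hz, decreasing over the component
    phase_true = 2 * np.pi * np.cumsum(inst_freq) / fs
    signal = np.exp(-30 * t) * np.cos(phase_true)

    # Instantaneous amplitude and phase from the analytic signal.
    analytic = hilbert(signal)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))

    # Polynomial phase model (order chosen for this toy example, not the paper's value).
    coeffs = np.polyfit(t, phase, deg=9)
    synthesized = amplitude * np.cos(np.polyval(coeffs, t))

    nrmse = np.sqrt(np.mean((signal - synthesized) ** 2)) / (signal.max() - signal.min())
    rho = np.corrcoef(signal, synthesized)[0, 1]
    print(f"NRMSE = {100 * nrmse:.2f}%  correlation = {rho:.3f}")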
Turbopump Design and Analysis Approach for Nuclear Thermal Rockets
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng S.; Veres, Joseph P.; Fittje, James E.
2006-01-01
A rocket propulsion system, whether it is a chemical rocket or a nuclear thermal rocket, is fairly complex in detail but rather simple in principle. Among all the interacting parts, three components stand out: they are pumps and turbines (turbopumps), and the thrust chamber. To obtain an understanding of the overall rocket propulsion system characteristics, one starts from analyzing the interactions among these three components. It is therefore of utmost importance to be able to satisfactorily characterize the turbopump, level by level, at all phases of a vehicle design cycle. Here at NASA Glenn Research Center, as the starting phase of a rocket engine design, specifically a Nuclear Thermal Rocket Engine design, we adopted the approach of using a high level system cycle analysis code (NESS) to obtain an initial analysis of the operational characteristics of a turbopump required in the propulsion system. A set of turbopump design codes (PumpDes and TurbDes) were then executed to obtain sizing and performance characteristics of the turbopump that were consistent with the mission requirements. A set of turbopump analyses codes (PUMPA and TURBA) were applied to obtain the full performance map for each of the turbopump components; a two dimensional layout of the turbopump based on these mean line analyses was also generated. Adequacy of the turbopump conceptual design will later be determined by further analyses and evaluation. In this paper, descriptions and discussions of the aforementioned approach are provided and future outlooks are discussed.
NASA Astrophysics Data System (ADS)
Molina-Aguilera, A.; Mancilla, F. D. L.; Julià, J.; Morales, J.
2017-12-01
Joint inversion techniques for P receiver functions and surface-wave dispersion data implicitly assume an isotropic, radially stratified earth. The conventional approach inverts stacked radial-component receiver functions from different back-azimuths to obtain a laterally homogeneous single-velocity model. However, in the presence of strong lateral heterogeneities such as anisotropic layers and/or dipping interfaces, receiver functions are considerably perturbed and both the radial and transverse components exhibit back-azimuthal dependences. Harmonic analysis methods exploit these azimuthal periodicities to separate the effects due to the isotropic flat-layered structure from those caused by lateral heterogeneities. We implement a harmonic analysis method based on the radial and transverse receiver function components and carry out a synthetic study to illuminate the capabilities of the method in isolating the isotropic flat-layered part of the receiver functions and constraining the geometry and strength of lateral heterogeneities. The back-azimuth-independent P receiver functions are then jointly inverted with phase- and group-velocity dispersion curves using a linearized inversion procedure. We apply this approach to densely spaced seismic profiles (about 2 km inter-station distance) located in the central Betics (western Mediterranean region), a region which has experienced complex geodynamic processes and exhibits strong variations in Moho topography. The technique presented here is robust and can be applied systematically to construct a 3-D model of the crust and uppermost mantle across large networks.
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
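The snippet below sketches the common-mode filtering idea on synthetic residuals: the leading principal component of the stacked station time series is treated as the spatially coherent error and its reconstruction is subtracted. It uses plain PCA only and does not reproduce the paper's full PCA/KLE formulation; all data are invented.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)

    n_days, n_stations = 1500, 40
    common_mode = np.cumsum(rng.normal(0, 0.2, n_days))          # shared, temporally correlated error
    residuals = (common_mode[:, None]                             # same signal at every station
                 + rng.normal(0, 1.0, (n_days, n_stations)))      # plus station-specific noise

    # The leading principal component captures the spatially coherent (common mode) signal.
    pca = PCA(n_components=1).fit(residuals)
    common_estimate = pca.inverse_transform(pca.transform(residuals))
    filtered = residuals - common_estimate

    print("RMS before filtering:", round(float(residuals.std()), 2))
    print("RMS after filtering :", round(float(filtered.std()), 2))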
NASA Technical Reports Server (NTRS)
Barber, Peter W.; Demerdash, Nabeel A. O.; Wang, R.; Hurysz, B.; Luo, Z.
1991-01-01
The goal is to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) develop analytical tools (models and computer programs); (2) conduct parameterization studies; (3) predict the global space station EMI environment; and (4) provide a basis for modification of EMI standards.
Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.
de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard
2018-02-01
Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., sparsity-inducing (ℓ1) and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as, e.g., N-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or a structured approach (such as GraphNet PCA) are significant, since SPCA-TV reveals the variability within a data set in the form of intelligible brain patterns that are easier to interpret and more stable across different samples.
Longitudinal analysis of bioaccumulative contaminants in freshwater fishes
Sun, Jielun; Kim, Y.; Schmitt, C.J.
2003-01-01
The National Contaminant Biomonitoring Program (NCBP) was initiated in 1967 as a component of the National Pesticide Monitoring program. It consists of periodic collection of freshwater fish and other samples and the analysis of the concentrations of persistent environmental contaminants in these samples. For the analysis, the common approach has been to apply the mixed two-way ANOVA model to combined data. A main disadvantage of this method is that it cannot give a detailed temporal trend of the concentrations since the data are grouped. In this paper, we present an alternative approach that performs a longitudinal analysis of the information using random effects models. In the new approach, no grouping is needed and the data are treated as samples from continuous stochastic processes, which seems more appropriate than ANOVA for the problem.
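A minimal sketch of the random effects idea follows, assuming synthetic log-concentrations and a station-level random intercept fit with statsmodels; the model form, variable names, and data are illustrative and are not the NCBP analysis itself.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)

    # Synthetic contaminant concentrations: a declining time trend plus station-specific offsets.
    stations = np.repeat(np.arange(20), 8)
    year = np.tile(np.arange(1970, 2002, 4), 20)
    station_effect = rng.normal(0, 0.3, 20)[stations]
    log_conc = 2.0 - 0.02 * (year - 1970) + station_effect + rng.normal(0, 0.1, stations.size)
    df = pd.DataFrame({"log_conc": log_conc, "year": year, "station": stations})

    # Random-intercept model: stations are random effects, the temporal trend stays continuous
    # instead of being grouped into collection periods.
    model = smf.mixedlm("log_conc ~ year", df, groups=df["station"]).fit()
    print(model.summary())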
Reliability analysis of component-level redundant topologies for solid-state fault current limiter
NASA Astrophysics Data System (ADS)
Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam
2018-04-01
Experience shows that semiconductor switches in power electronics systems are the most vulnerable components. One of the most common ways to address this reliability challenge is component-level redundant design. There are four possible configurations for redundant design at the component level. This article presents a comparative reliability analysis of different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the junction temperature of the semiconductor switches in the steady state. That junction temperature is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated. The results show that, under different conditions, different configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
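The sketch below compares MTTFs of a single switch, a series pair, and a parallel pair under constant (exponential) failure rates, assuming the simplified rule that a series pair fails when either device fails and a parallel pair fails only when both do; the failure rate is invented, and the paper's separate treatment of open- and short-circuit modes is only noted in the comments.

    # Minimal MTTF sketch for two identical switches with constant failure rate lam.
    lam = 3e-6                       # assumed total failure rate per hour for one switch

    mttf_single = 1.0 / lam          # one switch
    mttf_series = 1.0 / (2.0 * lam)  # series pair fails if either device fails
                                     # (a series pair tolerates one short-circuit fault,
                                     #  but an open circuit of either device is critical)
    mttf_parallel = 1.5 / lam        # parallel pair fails only when both devices fail
                                     # (a parallel pair tolerates one open-circuit fault,
                                     #  but a short circuit of either device is critical)

    for name, val in [("single", mttf_single), ("series pair", mttf_series),
                      ("parallel pair", mttf_parallel)]:
        print(f"{name:13s} MTTF = {val:,.0f} h")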
Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping
2016-02-11
Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential application of the SMART approach across different instrumental platforms. This approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
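A minimal sketch of the CE-selection step is shown below, assuming a synthetic array of extracted-ion-chromatogram peak areas over stepped collision energies: the predicted optimal CE for each transition is simply the step with the largest response. The CE grid, the number of transitions, and the response model are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    ce_values = np.arange(10, 55, 5)                  # stepped collision energies (eV)
    n_transitions = 31                                # as many transitions as components quantified

    # Synthetic peak areas of each transition's extracted ion chromatogram at every CE step:
    # a smooth optimum per transition plus measurement noise.
    optimal_true = rng.uniform(15, 45, n_transitions)
    areas = np.exp(-((ce_values[None, :] - optimal_true[:, None]) / 8.0) ** 2)
    areas += rng.normal(0, 0.02, areas.shape)

    # SMART-style selection: the predicted optimal CE is the step with the largest response.
    best_idx = np.argmax(areas, axis=1)
    predicted_ce = ce_values[best_idx]
    print("predicted optimal CE for the first 5 transitions:", predicted_ce[:5])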
Demonstration of a Safety Analysis on a Complex System
NASA Technical Reports Server (NTRS)
Leveson, Nancy; Alfaro, Liliana; Alvarado, Christine; Brown, Molly; Hunt, Earl B.; Jaffe, Matt; Joslyn, Susan; Pinnell, Denise; Reese, Jon; Samarziya, Jeffrey;
1997-01-01
For the past 17 years, Professor Leveson and her graduate students have been developing a theoretical foundation for safety in complex systems and building a methodology upon that foundation. The methodology includes special management structures and procedures, system hazard analyses, software hazard analysis, requirements modeling and analysis for completeness and safety, special software design techniques including the design of human-machine interaction, verification, operational feedback, and change analysis. The Safeware methodology is based on system safety techniques that are extended to deal with software and human error. Automation is used to enhance our ability to cope with complex systems. Identification, classification, and evaluation of hazards is done using modeling and analysis. To be effective, the models and analysis tools must consider the hardware, software, and human components in these systems. They also need to include a variety of analysis techniques and orthogonal approaches: There exists no single safety analysis or evaluation technique that can handle all aspects of complex systems. Applying only one or two may make us feel satisfied, but will produce limited results. We report here on a demonstration, performed as part of a contract with NASA Langley Research Center, of the Safeware methodology on the Center-TRACON Automation System (CTAS) portion of the air traffic control (ATC) system and procedures currently employed at the Dallas/Fort Worth (DFW) TRACON (Terminal Radar Approach CONtrol). CTAS is an automated system to assist controllers in handling arrival traffic in the DFW area. Safety is a system property, not a component property, so our safety analysis considers the entire system and not simply the automated components. Because safety analysis of a complex system is an interdisciplinary effort, our team included system engineers, software engineers, human factors experts, and cognitive psychologists.
Evaluation of RCAS Inflow Models for Wind Turbine Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tangler, J.; Bir, G.
The finite element structural modeling in the Rotorcraft Comprehensive Analysis System (RCAS) provides a state-of-the-art approach to aeroelastic analysis. This, coupled with its ability to model all turbine components, results in a methodology that can simulate the complex system interactions characteristic of large wind turbines. In addition, RCAS is uniquely capable of modeling advanced control algorithms and the resulting dynamic responses.
Computer-Aided Design of Low-Noise Microwave Circuits
NASA Astrophysics Data System (ADS)
Wedge, Scott William
1991-02-01
Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
Shear-wave velocity profiling according to three alternative approaches: A comparative case study
NASA Astrophysics Data System (ADS)
Dal Moro, G.; Keller, L.; Al-Arifi, N. S.; Moustafa, S. S. R.
2016-11-01
The paper intends to compare three different methodologies which can be used to analyze surface-wave propagation, thus eventually obtaining the vertical shear-wave velocity (VS) profile. The three presented methods (currently still quite unconventional) are characterized by different field procedures and data processing. The first methodology is a sort of evolution of the classical Multi-channel Analysis of Surface Waves (MASW) here accomplished by jointly considering Rayleigh and Love waves (analyzed according to the Full Velocity Spectrum approach) and the Horizontal-to-Vertical Spectral Ratio (HVSR). The second method is based on the joint analysis of the HVSR curve together with the Rayleigh-wave dispersion determined via Miniature Array Analysis of Microtremors (MAAM), a passive methodology that relies on a small number (4 to 6) of vertical geophones deployed along a small circle (for the common near-surface application the radius usually ranges from 0.6 to 5 m). Finally, the third considered approach is based on the active data acquired by a single 3-component geophone and relies on the joint inversion of the group-velocity spectra of the radial and vertical components of the Rayleigh waves, together with the Radial-to-Vertical Spectral Ratio (RVSR). The results of the analyses performed while considering these approaches (completely different both in terms of field procedures and data analysis) appear extremely consistent thus mutually validating their performances. Pros and cons of each approach are summarized both in terms of computational aspects as well as with respect to practical considerations regarding the specific character of the pertinent field procedures.
Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles
NASA Astrophysics Data System (ADS)
Zekavat, Behrooz; Solouki, Touradj
2012-11-01
We present the details of a data analysis approach for deconvolution of the ion mobility (IM) overlapped or unresolved species. This approach takes advantage of the ion fragmentation variations as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a Matlab compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and CID mass spectrum for each component of the IM overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results to evaluate the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution.
Kamstra, Rhiannon L; Floriano, Wely B
2014-11-01
Carbonic anhydrase IX (CAIX) is a biomarker for tumor hypoxia. Fluorescent inhibitors of CAIX have been used to study hypoxic tumor cell lines. However, these inhibitor-based fluorescent probes may have a therapeutic effect that is not appropriate for monitoring treatment efficacy. In the search for novel fluorescent probes that are not based on known inhibitors, a database of 20,860 fluorescent compounds was virtually screened against CAIX using hierarchical virtual ligand screening (HierVLS). The screening database contained 14,862 compounds tagged with the ATTO680 fluorophore plus an additional 5998 intrinsically fluorescent compounds. Overall ranking of compounds to identify hit molecular probe candidates utilized a principal component analysis (PCA) approach. Four potential binding sites, including the catalytic site, were identified within the structure of the protein and targeted for virtual screening. Available sequence information for 23 carbonic anhydrase isoforms was used to prioritize the four sites based on the estimated "uniqueness" of each site in CAIX relative to the other isoforms. A database of 32 known inhibitors and 478 decoy compounds was used to validate the methodology. A receiver-operating characteristic (ROC) analysis using the first principal component (PC1) as predictive score for the validation database yielded an area under the curve (AUC) of 0.92. AUC is interpreted as the probability that a binder will have a better score than a non-binder. The use of first component analysis of binding energies for multiple sites is a novel approach for hit selection. The very high prediction power for this approach increases confidence in the outcome from the fluorescent library screening. Ten of the top scoring candidates for isoform-selective putative binding sites are suggested for future testing as fluorescent molecular probe candidates. Copyright © 2014 Elsevier Inc. All rights reserved.
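The snippet below sketches the validation idea with synthetic binding energies for 32 binders and 478 decoys (the counts quoted in the abstract): the first principal component of the multi-site energies is used as a single predictive score and its ROC AUC is computed. The energy values and their distributions are invented for illustration.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(8)

    # Synthetic binding energies (kcal/mol, more negative = stronger) at four putative sites.
    binders = rng.normal(-9.0, 1.0, size=(32, 4))
    decoys = rng.normal(-6.0, 1.5, size=(478, 4))
    energies = np.vstack([binders, decoys])
    labels = np.r_[np.ones(32), np.zeros(478)]

    # Use the first principal component of the multi-site energies as the ranking score.
    pc1 = PCA(n_components=1).fit_transform(energies).ravel()

    # Orient the score so that larger values mean "more likely binder" before computing AUC.
    if np.corrcoef(pc1, labels)[0, 1] < 0:
        pc1 = -pc1
    print("ROC AUC using PC1 as the predictive score:", round(roc_auc_score(labels, pc1), 2))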
Rank estimation and the multivariate analysis of in vivo fast-scan cyclic voltammetric data
Keithley, Richard B.; Carelli, Regina M.; Wightman, R. Mark
2010-01-01
Principal component regression has been used in the past to separate current contributions from different neuromodulators measured with in vivo fast-scan cyclic voltammetry. Traditionally, a percent cumulative variance approach has been used to determine the rank of the training set voltammetric matrix during model development, however this approach suffers from several disadvantages including the use of arbitrary percentages and the requirement of extreme precision of training sets. Here we propose that Malinowski’s F-test, a method based on a statistical analysis of the variance contained within the training set, can be used to improve factor selection for the analysis of in vivo fast-scan cyclic voltammetric data. These two methods of rank estimation were compared at all steps in the calibration protocol including the number of principal components retained, overall noise levels, model validation as determined using a residual analysis procedure, and predicted concentration information. By analyzing 119 training sets from two different laboratories amassed over several years, we were able to gain insight into the heterogeneity of in vivo fast-scan cyclic voltammetric data and study how differences in factor selection propagate throughout the entire principal component regression analysis procedure. Visualizing cyclic voltammetric representations of the data contained in the retained and discarded principal components showed that using Malinowski’s F-test for rank estimation of in vivo training sets allowed for noise to be more accurately removed. Malinowski’s F-test also improved the robustness of our criterion for judging multivariate model validity, even though signal-to-noise ratios of the data varied. In addition, pH change was the majority noise carrier of in vivo training sets while dopamine prediction was more sensitive to noise. PMID:20527815
Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tang, Xiaoying
2017-03-01
Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with the use of a whole brain structural segmentation approach which greatly reduces the dimension of features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded pretty good classification results with 96.08% overall accuracy being achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
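A minimal sketch of the described pipeline follows, assuming synthetic regional volumes in place of the multi-atlas segmentations: standardization, three retained principal components, a linear SVM, and leave-one-out cross-validation. The simulated atrophy pattern and group sizes are illustrative (the sizes mirror the 28/23 split in the abstract).

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(9)

    # Synthetic regional volumes for 28 "patients" and 23 "controls" (54 regions each),
    # with mild atrophy simulated in a subset of regions for the patient group.
    controls = rng.normal(1.0, 0.05, size=(23, 54))
    patients = rng.normal(1.0, 0.05, size=(28, 54))
    patients[:, :8] -= 0.06                      # assumed atrophic regions
    X = np.vstack([patients, controls])
    y = np.r_[np.ones(28), np.zeros(23)]

    # Volumes -> standardization -> 3 principal components -> linear SVM.
    clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {acc:.2%}")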
Coping with Trial-to-Trial Variability of Event Related Signals: A Bayesian Inference Approach
NASA Technical Reports Server (NTRS)
Ding, Mingzhou; Chen, Youghong; Knuth, Kevin H.; Bressler, Steven L.; Schroeder, Charles E.
2005-01-01
In electro-neurophysiology, single-trial brain responses to a sensory stimulus or a motor act are commonly assumed to result from the linear superposition of a stereotypic event-related signal (e.g. the event-related potential or ERP) that is invariant across trials and some ongoing brain activity often referred to as noise. To extract the signal, one performs an ensemble average of the brain responses over many identical trials to attenuate the noise. To date, this simple signal-plus-noise (SPN) model has been the dominant approach in cognitive neuroscience. Mounting empirical evidence has shown that the assumptions underlying this model may be overly simplistic. More realistic models have been proposed that account for the trial-to-trial variability of the event-related signal as well as the possibility of multiple differentially varying components within a given ERP waveform. The variable-signal-plus-noise (VSPN) model, which has been demonstrated to provide the foundation for separation and characterization of multiple differentially varying components, has the potential to provide a rich source of information for questions related to neural functions that complement the SPN model. Thus, being able to estimate the amplitude and latency of each ERP component on a trial-by-trial basis provides a critical link between the perceived benefits of the VSPN model and its many concrete applications. In this paper we describe a Bayesian approach to deal with this issue and the resulting strategy is referred to as the differentially Variable Component Analysis (dVCA). We compare the performance of dVCA on simulated data with Independent Component Analysis (ICA) and analyze neurobiological recordings from monkeys performing cognitive tasks.
Deep overbite malocclusion: analysis of the underlying components.
El-Dawlatly, Mostafa M; Fayed, Mona M Salah; Mostafa, Yehya A
2012-10-01
A deepbite malocclusion should not be approached as a disease entity; instead, it should be viewed as a clinical manifestation of underlying discrepancies. The aim of this study was to investigate the various skeletal and dental components of deep bite malocclusion, the significance of the contribution of each, and whether there are certain correlations between them. Dental and skeletal measurements were made on lateral cephalometric radiographs and study models of 124 patients with deepbite. These measurements were statistically analyzed. An exaggerated curve of Spee was the greatest shared dental component (78%), significantly higher than any other component (P = 0.0335). A decreased gonial angle was the greatest shared skeletal component (37.1%), highly significant compared with the other components (P = 0.0019). A strong positive correlation was found between the ramus/Frankfort horizontal angle and the gonial angle; weaker correlations were found between various components. An exaggerated curve of Spee and a decreased gonial angle were the greatest contributing components. This analysis of deepbite components could help clinicians design individualized mechanotherapies based on the underlying cause, rather than being biased toward predetermined mechanics when treating patients with a deepbite malocclusion. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Pöyhönen, Antti; Häkkinen, Jukka T; Koskimäki, Juha; Hakama, Matti; Tammela, Teuvo L J; Auvinen, Anssi
2013-03-01
WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The ICS has divided LUTS into three groups: storage, voiding and post-micturition symptoms. The classification is based on anatomical, physiological and urodynamic considerations of a theoretical nature. We used principal component analysis (PCA) to determine the inter-correlations of various LUTS, which is a novel approach to research and can strengthen existing knowledge of the phenomenology of LUTS. After we had completed our analyses, another study was published that used a similar approach and results were very similar to those of the present study. We evaluated the constellation of LUTS using PCA of the data from a population-based study that included >4000 men. In our analysis, three components emerged from the 12 LUTS: voiding, storage and incontinence components. Our results indicated that incontinence may be separate from the other storage symptoms and post-micturition symptoms should perhaps be regarded as voiding symptoms. To determine how lower urinary tract symptoms (LUTS) relate to each other and assess if the classification proposed by the International Continence Society (ICS) is consistent with empirical findings. The information on urinary symptoms for this population-based study was collected using a self-administered postal questionnaire in 2004. The questionnaire was sent to 7470 men, aged 30-80 years, from Pirkanmaa County (Finland), of whom 4384 (58.7%) returned the questionnaire. The Danish Prostatic Symptom Score-1 questionnaire was used to evaluate urinary symptoms. Principal component analysis (PCA) was used to evaluate the inter-correlations among various urinary symptoms. The PCA produced a grouping of 12 LUTS into three categories consisting of voiding, storage and incontinence symptoms. Post-micturition symptoms were related to voiding symptoms, but incontinence symptoms were separate from storage symptoms. In the analyses by age group, similar categorization was found at ages 40, 50, 60 and 80 years, but only two groups of symptoms emerged among men aged 70 years. The prevalence among men aged 30 was too low for meaningful analysis. This population-based study suggests that LUTS can be divided into three subgroups consisting of voiding, storage and incontinence symptoms based on their inter-correlations. Our empirical findings suggest an alternative grouping of LUTS. The potential utility of such an approach requires careful consideration. © 2012 BJU International.
NASA Astrophysics Data System (ADS)
Raju, B. S.; Sekhar, U. Chandra; Drakshayani, D. N.
2017-08-01
The paper investigates optimization of the stereolithography process for SL5530 epoxy resin material to enhance part quality. The performance characteristics selected to evaluate the process are tensile strength, flexural strength, impact strength, and density, and the corresponding process parameters are layer thickness, orientation, and hatch spacing. Because the process intrinsically involves tuning multiple parameters, grey relational analysis, which uses the grey relational grade as a performance index, is adopted to determine the optimal combination of process parameters. Moreover, principal component analysis is applied to evaluate the weighting values corresponding to the various performance characteristics so that their relative importance can be assessed properly and objectively. The results of confirmation experiments reveal that grey relational analysis coupled with principal component analysis can effectively identify the optimal combination of process parameters. This confirms that the proposed approach can be a useful tool for improving the process parameters in stereolithography, which is valuable information for machine designers as well as RP machine users.
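A hedged sketch of grey relational analysis with PCA-derived weights follows; the response values, the larger-the-better normalization and the use of squared first-component loadings as weights are illustrative conventions, not the paper's actual data or settings.

import numpy as np

# rows = experimental runs, columns = responses (e.g. tensile, flexural, impact strength, density)
y = np.array([[52., 80., 4.1, 1.18],
              [55., 83., 4.4, 1.20],
              [49., 78., 3.9, 1.17],
              [57., 85., 4.6, 1.21]])

# larger-the-better normalization to [0, 1]
norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))

# grey relational coefficients with distinguishing coefficient zeta = 0.5
delta = 1.0 - norm
zeta = 0.5
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# PCA on the normalized responses; squared loadings of the first component
# serve as objective weights for the responses (one common convention)
u, s, vt = np.linalg.svd(norm - norm.mean(axis=0), full_matrices=False)
weights = vt[0] ** 2 / np.sum(vt[0] ** 2)

grade = xi @ weights                      # grey relational grade per run
print("weights:", np.round(weights, 3))
print("best run:", int(np.argmax(grade)), "grades:", np.round(grade, 3))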
NASA Astrophysics Data System (ADS)
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2016-01-01
We study the dynamics of networks of inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.
Learning Assumptions for Compositional Verification
NASA Technical Reports Server (NTRS)
Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)
2002-01-01
Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.
Peiris, Ramila H; Ignagni, Nicholas; Budman, Hector; Moresoli, Christine; Legge, Raymond L
2012-09-15
Characterization of the interactions between natural colloidal/particulate- and protein-like matter is important for understanding their contribution to different physiochemical phenomena like membrane fouling, adsorption of bacteria onto surfaces and various applications of nanoparticles in nanomedicine and nanotoxicology. Precise interpretation of the extent of such interactions is however hindered due to the limitations of most characterization methods to allow rapid, sensitive and accurate measurements. Here we report on a fluorescence-based excitation-emission matrix (EEM) approach in combination with principal component analysis (PCA) to extract information related to the interaction between natural colloidal/particulate- and protein-like matter. Surface plasmon resonance (SPR) analysis and fiber-optic probe based surface fluorescence measurements were used to confirm that the proposed approach can be used to characterize colloidal/particulate-protein interactions at the physical level. This method has potential to be a fundamental measurement of these interactions with the advantage that it can be performed rapidly and with high sensitivity. Copyright © 2012 Elsevier B.V. All rights reserved.
Delfino, Ines; Perna, Giuseppe; Lasalvia, Maria; Capozzi, Vito; Manti, Lorenzo; Camerlingo, Carlo; Lepore, Maria
2015-03-01
A micro-Raman spectroscopy investigation has been performed in vitro on single human mammary epithelial cells after irradiation by graded x-ray doses. The analysis by principal component analysis (PCA) and interval-PCA (i-PCA) methods has allowed us to point out the small differences in the Raman spectra induced by irradiation. This experimental approach has enabled us to delineate radiation-induced changes in protein, nucleic acid, lipid, and carbohydrate content. In particular, the dose dependence of PCA and i-PCA components has been analyzed. Our results have confirmed that micro-Raman spectroscopy coupled to properly chosen data analysis methods is a very sensitive technique to detect early molecular changes at the single-cell level following exposure to ionizing radiation. This would help in developing innovative approaches to monitor radiation cancer radiotherapy outcome so as to reduce the overall radiation dose and minimize damage to the surrounding healthy cells, both aspects being of great importance in the field of radiation therapy.
Six Sigma Approach to Improve Stripping Quality of Automotive Electronics Component – a case study
NASA Astrophysics Data System (ADS)
Razali, Noraini Mohd; Murni Mohamad Kadri, Siti; Con Ee, Toh
2018-03-01
A lack of problem-solving techniques and poor cooperation between support groups are two obstacles commonly faced on actual production lines. Inadequate detailed analysis and inappropriate problem-solving techniques can cause recurring issues that affect organizational performance. This study uses a well-structured Six Sigma DMAIC framework in combination with other problem-solving tools to solve a product quality problem in the manufacturing of an automotive electronics component. The study concentrates on the stripping process, a critical process step with the highest rejection rate, which contributes to scrap and rework performance. A detailed analysis is conducted in the analyze phase to identify the actual root cause of the problem. Several improvement activities are then implemented, and the results show that the rejection rate due to stripping defects decreased tremendously and the process capability index improved from 0.75 to 1.67. These results demonstrate that the Six Sigma approach used to tackle the quality problem is substantially effective.
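A minimal illustration of the process capability index (Cpk) cited above; the specification limits and sample values here are hypothetical and are not the study's measurements, so the printed numbers only mimic the before/after improvement pattern.

import numpy as np

def cpk(samples, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

rng = np.random.default_rng(1)
before = rng.normal(loc=0.52, scale=0.020, size=200)   # off-centre process with high spread
after = rng.normal(loc=0.50, scale=0.009, size=200)    # centred process with reduced spread
print("Cpk before improvement:", round(cpk(before, 0.45, 0.55), 2))
print("Cpk after improvement :", round(cpk(after, 0.45, 0.55), 2))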
Li, Leyuan; Zhang, Xu; Ning, Zhibin; Mayne, Janice; Moore, Jasmine I; Butcher, James; Chiang, Cheng-Kang; Mack, David; Stintzi, Alain; Figeys, Daniel
2018-01-05
In vitro culture-based approaches are time- and cost-effective solutions for rapidly evaluating the effects of drugs or natural compounds against microbiomes. The nutritional composition of the culture medium is an important determinant for effectively maintaining the gut microbiome in vitro. This study combines orthogonal experimental design and a metaproteomics approach to obtain functional insights into the effects of different medium components on the microbiome. Our results show that the metaproteomic profile responds differently to medium components, including inorganic salts, bile salts, mucin, and short-chain fatty acids. Multifactor analysis of variance further revealed significant main and interaction effects of inorganic salts, bile salts, and mucin on the different functional groups of gut microbial proteins. While a broad regulating effect was observed on basic metabolic pathways, different medium components also showed significant modulation of cell wall, membrane, and envelope biogenesis and cell motility related functions. In particular, flagellar assembly related proteins were significantly responsive to the presence of mucin. This study provides information on the functional influences of medium components on the in vitro growth of microbiome communities and gives insight into the key components that must be considered when selecting and optimizing media for culturing ex vivo microbiotas.
Finger crease pattern recognition using Legendre moments and principal component analysis
NASA Astrophysics Data System (ADS)
Luo, Rongfang; Lin, Tusheng
2007-03-01
The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent the principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% was obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
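A simplified sketch of the classification stage only: PCA applied to precomputed feature vectors followed by a nearest-neighbour classifier. The Legendre moment / Radon transform feature extraction is not reproduced; random features stand in for it, and only the class and sample counts follow the abstract.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_classes, n_train_per_class, n_feat = 103, 4, 64      # 103 individuals, 4 training samples each; 64 synthetic features
centers = rng.normal(size=(n_classes, n_feat))
X_train = (np.repeat(centers, n_train_per_class, axis=0)
           + 0.3 * rng.normal(size=(n_classes * n_train_per_class, n_feat)))
y_train = np.repeat(np.arange(n_classes), n_train_per_class)
X_test = centers + 0.3 * rng.normal(size=(n_classes, n_feat))
y_test = np.arange(n_classes)

pca = PCA(n_components=40).fit(X_train)                 # K-L transform of the feature matrix
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
print("nearest-neighbour accuracy:", clf.score(pca.transform(X_test), y_test))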
Gopal, Shruti; Miller, Robyn L; Baum, Stefi A; Calhoun, Vince D
2016-01-01
Identification of functionally connected regions while at rest has been at the forefront of research focusing on understanding interactions between different brain regions. Studies have utilized a variety of approaches, including seed-based as well as data-driven approaches, to identifying such networks. Most such techniques involve differentiating groups based on group mean measures. There has been little work focused on differences in the spatial characteristics of resting fMRI data. We present a method to identify between-group differences in the variability in the cluster characteristics of network regions within components estimated via independent vector analysis (IVA). IVA is a blind source separation approach shown to perform well in capturing individual subject variability within a group model. We evaluate performance of the approach using simulations and then apply it to a relatively large schizophrenia data set (82 schizophrenia patients and 89 healthy controls). We postulate that group differences in the intra-network distributional characteristics of resting state network voxel intensities might indirectly capture important distinctions between the brain function of healthy and clinical populations. Results demonstrate that specific areas of the brain, the superior and middle temporal gyrus, which are involved in language and recognition of emotions, show greater component-level variance in amplitude weights for schizophrenia patients than healthy controls. A statistically significant correlation between component-level spatial variance and component volume was observed in 19 of the 27 non-artifactual components, implying an evident relationship between the two parameters. Additionally, a greater spread in the distance of the cluster peak of a component from the centroid was observed in schizophrenia patients compared to healthy controls for seven components. These results indicate that there is hidden potential in exploring variance and possibly higher-order measures in resting state networks to better understand diseases such as schizophrenia. It furthers comprehension of how spatial characteristics can highlight previously unexplored differences between populations such as schizophrenia patients and healthy controls.
Cell module and fuel conditioner development
NASA Technical Reports Server (NTRS)
Feret, J. M.
1982-01-01
The efforts performed to develop a phosphoric acid fuel cell (PAFC) stack design having a 10 kW power rating for operation at higher than atmospheric pressure based on the existing Mark II design configuration are described. The work involves: (1) performance of pertinent functional analysis, trade studies and thermodynamic cycle analysis for requirements definition and system operating parameter selection purposes, (2) characterization of fuel cell materials and components, and performance testing and evaluation of the repeating electrode components, (3) establishment of the state-of-the-art manufacturing technology for all fuel cell components at Westinghouse and the fabrication of short stacks of various sizes, and (4) development of a 10 kW PAFC stack design for higher pressure operation utilizing the top down systems engineering approach.
Chromatographic and electrophoretic approaches in ink analysis.
Zlotnick, J A; Smith, F P
1999-10-15
Inks are manufactured from a wide variety of substances that exhibit very different chemical behaviors. Inks designed for use in different writing instruments or printing methods have quite dissimilar components. Since the 1950s chromatographic and electrophoretic methods have played important roles in the analysis of inks, where compositional information may have bearing on the investigation of counterfeiting, fraud, forgery, and other crimes. Techniques such as paper chromatography and electrophoresis, thin-layer chromatography, high-performance liquid chromatography, gas chromatography, gel electrophoresis, and the relatively new technique of capillary electrophoresis have all been explored as possible avenues for the separation of components of inks. This paper reviews the components of different types of inks and the applications of the above separation methods.
Visual unit analysis: a descriptive approach to landscape assessment
R. J. Tetlow; S. R. J. Sheppard
1979-01-01
Analysis of the visible attributes of landscapes is an important component of the planning process. When landscapes are at regional scale, economical and effective methodologies are critical. The Visual Unit concept appears to offer a logical and useful framework for description and evaluation. The concept subdivides landscape into coherent, spatially-defined units....
Vairavan, Srinivasan; Eswaran, Hari; Preissl, Hubert; Wilson, James D; Haddad, Naim; Lowery, Curtis L; Govindan, Rathinaswamy B
2010-01-01
The fetal magnetoencephalogram (fMEG) is measured in the presence of large interference from the maternal and fetal magnetocardiograms (mMCG and fMCG). These cardiac interferences can be attenuated by orthogonal projection (OP) of the corresponding spatial vectors. However, the OP technique redistributes the fMEG signal among the channels and also leaves some cardiac residuals (partially attenuated mMCG and fMCG) due to loss of stationarity in the signal. In this paper, we propose a novel way to extract and localize the neonatal and fetal spontaneous brain activity by using the independent component analysis (ICA) technique. In this approach, we perform ICA on a small subset of sensors for 1-min duration. The independent components obtained are further investigated for the presence of discontinuous patterns, as identified by Hilbert phase analysis, which are used as the decision criterion for localizing the spontaneous brain activity. In order to locate the region with the highest spontaneous brain activity content, this analysis is performed on sensor subsets traversed across the entire sensor space. The region of spontaneous brain activity identified by the proposed approach correlated well with the neonatal and fetal head location. In addition, the burst duration and the inter-burst interval computed for the identified discontinuous brain patterns are in agreement with reported values.
Mottram, Hazel R; Woodbury, Simon E; Rossell, J Barry; Evershed, Richard P
2003-01-01
Maize oil commands a premium price and is thus a target for adulteration with cheaper vegetable oils. Detection of this activity presents a particular challenge to the analyst because of the natural variability in the fatty acid composition of maize oils and because of their high sterol and tocopherol contents. This paper describes a method that allows detection of adulteration at concentrations of just 5% (m/m), based on the Mahalanobis distances of the principal component scores of the delta(13)C values of major and minor vegetable oil components. The method makes use of a database consisting of delta(13)C values and relative abundances of the major fatty acyl components of over 150 vegetable oils. The sterols and tocopherols of 16 maize oils and 6 potential adulterant oils were found to be depleted in (13)C by a constant amount relative to the bulk oil. Moreover, since maize oil contains particularly high levels of sterols and tocopherols, their delta(13)C values were not significantly altered when groundnut oil was added up to 20% (m/m) and it is possible to use the values for the minor components to predict the values that would be expected in a pure oil; therefore, comparison of the predicted values with those obtained experimentally allows adulteration to be detected. A refinement involved performing a discriminant analysis on the delta(13)C values of the bulk oil and the major fatty acids (16:0, 18:1 and 18:2) and using the Mahalanobis distances to determine the percentage of adulterant oil present. This approach may be refined further by including the delta(13)C values of the minor components in the discriminant analysis thereby increasing the sensitivity of the approach to concentrations at which adulteration would not be attractive economically. Copyright 2003 John Wiley & Sons, Ltd.
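Below is a hedged sketch of the general idea of flagging a suspect oil by the Mahalanobis distance of its principal component scores from a reference group of pure oils. The delta(13)C values, the choice of two components and the three measured variables are invented for illustration and do not reproduce the paper's database or discriminant model.

import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# hypothetical delta13C values for bulk oil, 18:1 and 18:2 in pure maize oils
pure = rng.normal(loc=[-31.0, -30.5, -29.8], scale=0.3, size=(150, 3))
suspect = np.array([-29.2, -28.9, -28.1])        # values shifted as if diluted with another oil

pca = PCA(n_components=2).fit(pure)
scores = pca.transform(pure)
center = scores.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))

d = mahalanobis(pca.transform(suspect.reshape(1, -1))[0], center, cov_inv)
print("Mahalanobis distance of suspect oil from the pure-oil group:", round(d, 2))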
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beppler, Christina L
2015-12-01
A new approach was created for studying energetic material degradation. This approach involved detecting and tentatively identifying, by liquid chromatography-mass spectrometry (LC-MS) with multivariate statistical data analysis, the non-volatile chemical species that form as the CL-20 energetic material thermally degrades. Multivariate data analysis showed clear separation and clustering of samples based on sample group: either pristine or aged material. Further analysis showed counter-clockwise trends in the Scores plots of principal components analysis (PCA), a type of multivariate data analysis. These trends may indicate that there was a discrete shift in the chemical markers as the material went from pristine to aged, and then again when the aged CL-20 mixed with a potentially incompatible material was thermally aged for 4, 6, or 9 months. This new approach to studying energetic material degradation should provide greater knowledge of potential degradation markers in these materials.
Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach
Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.
2007-01-01
Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408
Quantile regression in the presence of monotone missingness with sensitivity analysis
Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.
2016-01-01
In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008
Wayne Swank; Jennifer Knoepp; James Vose; Stephanie Laseter; Jackson Webster
2014-01-01
Watershed ecosystem analysis provides a scientific approach to quantify and integrate resource responses to management (Hornbeck and Swank 1992) and also to address issues of resource sustainability (Christensen et al. 1996). Philosophical components of the research approach at Coweeta are 1) the quantity, timing, and quality of streamflow provides an integrated...
Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach
Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy
2014-01-01
We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901
Engineering large-scale agent-based systems with consensus
NASA Technical Reports Server (NTRS)
Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.
1994-01-01
The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.
Genetic surgery - a right strategy to attack cancer.
Sverdlov, Eugene D
2011-12-01
The approaches now united under the term "gene therapy" can be divided into two broad strategies: (i) a strategy using the ideology of molecular targeted therapy, but with genes in the role of agents targeted at certain molecular component(s) or pathways presumably crucial for cancer maintenance; (ii) a strategy aimed at the destruction of tumors as a whole, exploiting the features shared by all cancers, for example relatively fast mitotic cell division. While the first strategy is "true" gene therapy, the second one, e.g. suicide gene therapy, is more like genetic surgery, when a surgeon just cuts off a tumor without being interested in the subtle genetic mechanisms of cancer emergence and progression. This approach inherits the ideology of chemotherapy but escapes its severe toxic effects due to intracellular formation of toxic agents. Genetic surgery seems to be the most appropriate approach to combat cancer, and its simplicity is paradoxically adequate to the super-complexity of tumors. The review consists of three parts: (i) analysis of the reasons for tumor supercomplexity and the fatally inevitable failure of molecular targeted therapy, (ii) general principles of the genetic surgery strategy, and (iii) examples of genetic surgery approaches with analysis of their drawbacks and the ways for their improvement.
Huang, Tao; Ning, Ziwan; Hu, Dongdong; Zhang, Man; Zhao, Ling; Lin, Chengyuan; Zhong, Linda L D; Yang, Zhijun; Xu, Hongxi; Bian, Zhaoxiang
2018-01-01
MaZiRenWan (MZRW, also known as Hemp Seed Pill) is a Chinese Herbal Medicine which has been demonstrated to safely and effectively alleviate functional constipation (FC) in a randomized, placebo-controlled clinical study with 120 subjects. However, the underlying pharmacological actions of MZRW for FC are still largely unknown. We systematically analyzed the bioactive compounds of MZRW and mechanism-of-action biological targets through a novel approach called "focused network pharmacology." Among the 97 compounds identified by UPLC-QTOF-MS/MS in MZRW extract, 34 were found in rat plasma, while 10 were found in rat feces. Hierarchical clustering analysis suggests that these compounds can be classified into component groups, in which compounds are highly similar to each other and most of them are from the same herb. Emodin, amygdalin, albiflorin, honokiol, and naringin were selected as representative compounds of corresponding component groups. All of them were shown to induce spontaneous contractions of rat colonic smooth muscle in vitro. Network analysis revealed that biological targets in acetylcholine-, estrogen-, prostaglandin-, cannabinoid-, and purine signaling pathways are able to explain the prokinetic effects of representative compounds and corresponding component groups. In conclusion, MZRW active components enhance colonic motility, possibly by acting on multiple targets and pathways.
Huang, Tao; Ning, Ziwan; Hu, Dongdong; Zhang, Man; Zhao, Ling; Lin, Chengyuan; Zhong, Linda L. D.; Yang, Zhijun; Xu, Hongxi; Bian, Zhaoxiang
2018-01-01
MaZiRenWan (MZRW, also known as Hemp Seed Pill) is a Chinese Herbal Medicine which has been demonstrated to safely and effectively alleviate functional constipation (FC) in a randomized, placebo-controlled clinical study with 120 subjects. However, the underlying pharmacological actions of MZRW for FC are still largely unknown. We systematically analyzed the bioactive compounds of MZRW and mechanism-of-action biological targets through a novel approach called “focused network pharmacology.” Among the 97 compounds identified by UPLC-QTOF-MS/MS in MZRW extract, 34 were found in rat plasma, while 10 were found in rat feces. Hierarchical clustering analysis suggests that these compounds can be classified into component groups, in which compounds are highly similar to each other and most of them are from the same herb. Emodin, amygdalin, albiflorin, honokiol, and naringin were selected as representative compounds of corresponding component groups. All of them were shown to induce spontaneous contractions of rat colonic smooth muscle in vitro. Network analysis revealed that biological targets in acetylcholine-, estrogen-, prostaglandin-, cannabinoid-, and purine signaling pathways are able to explain the prokinetic effects of representative compounds and corresponding component groups. In conclusion, MZRW active components enhance colonic motility, possibly by acting on multiple targets and pathways. PMID:29632490
EFICAz2: enzyme function inference by a combined approach enhanced by machine learning.
Arakaki, Adrian K; Huang, Ying; Skolnick, Jeffrey
2009-04-13
We previously developed EFICAz, an enzyme function inference approach that combines predictions from non-completely overlapping component methods. Two of the four components in the original EFICAz are based on the detection of functionally discriminating residues (FDRs). FDRs distinguish between members of an enzyme family that are homofunctional (classified under the EC number of interest) or heterofunctional (annotated with another EC number or lacking enzymatic activity). Each of the two FDR-based components is associated with one of two specific kinds of enzyme families. EFICAz exhibits high precision performance, except when the maximal test to training sequence identity (MTTSI) is lower than 30%. To improve EFICAz's performance in this regime, we: i) increased the number of predictive components and ii) took advantage of consensual information from the different components to make the final EC number assignment. We have developed two new EFICAz components, analogous to the two FDR-based components, where the discrimination between homo- and heterofunctional members is based on the evaluation, via Support Vector Machine models, of all the aligned positions between the query sequence and the multiple sequence alignments associated with the enzyme families. Benchmark results indicate that: i) the new SVM-based components outperform their FDR-based counterparts, and ii) both SVM-based and FDR-based components generate unique predictions. We developed classification tree models to optimally combine the results from the six EFICAz components into a final EC number prediction. The new implementation of our approach, EFICAz2, exhibits highly improved prediction precision at MTTSI < 30% compared to the original EFICAz, with only a slight decrease in prediction recall. A comparative analysis of enzyme function annotation of the human proteome by EFICAz2 and KEGG shows that: i) when both sources make EC number assignments for the same protein sequence, the assignments tend to be consistent and ii) EFICAz2 generates considerably more unique assignments than KEGG. Performance benchmarks and the comparison with KEGG demonstrate that EFICAz2 is a powerful and precise tool for enzyme function annotation, with multiple applications in genome analysis and metabolic pathway reconstruction. The EFICAz2 web service is available at: http://cssb.biology.gatech.edu/skolnick/webservice/EFICAz2/index.html.
Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N
2015-08-15
Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principal component analysis (PCA) is used as the first step to generate orthogonal eigenvectors, and the corresponding sample scores are utilized to generate discriminant features for the discrimination. Independent components analysis (ICA) based on the minimization of mutual information can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed with regard to grape variety, year of vintage and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure to determine the number of extracted features (PCs, ICs) for the optimum DA models is discussed. The use of independent components (ICs) instead of principal components (PCs) resulted in improved classification performance of the DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements. Copyright © 2015 Elsevier B.V. All rights reserved.
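A minimal sketch of the ICA/DA idea on synthetic "spectra", not the authors' NMR data or settings: independent (or principal) components are extracted first and their scores are then used as features for linear discriminant analysis, so the two preprocessing choices can be compared side by side.

import numpy as np
from sklearn.decomposition import FastICA, PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_vars = 60, 200
class_means = rng.normal(scale=0.5, size=(3, n_vars))        # three hypothetical wine classes
X = np.vstack([m + rng.normal(scale=1.0, size=(n_per_class, n_vars)) for m in class_means])
y = np.repeat([0, 1, 2], n_per_class)

for name, reducer in [("PCA/LDA", PCA(n_components=10)),
                      ("ICA/LDA", FastICA(n_components=10, random_state=0))]:
    feats = reducer.fit_transform(X)                          # component scores as discriminant features
    acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
    print(name, "cross-validated accuracy:", round(acc, 3))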
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally expensive simulations in programs such as FAST, a parameterized fatigue load spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
Kalegowda, Yogesh; Harmer, Sarah L
2013-01-08
Artificial neural network (ANN) and hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for the classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance capability, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
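A hedged sketch of the hybrid PCA-ANN pipeline on synthetic stand-ins for ToF-SIMS spectra: PC scores covering roughly 95% of the variance feed a small feed-forward neural network. Class count, spectrum length and network size are assumptions, not the paper's configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_per_class, n_channels = 80, 300
means = rng.normal(scale=0.8, size=(4, n_channels))          # four mineral classes (hypothetical spectra)
X = np.vstack([m + rng.normal(size=(n_per_class, n_channels)) for m in means])
y = np.repeat(np.arange(4), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
pca = PCA(n_components=0.95).fit(X_tr)                       # keep enough PCs for 95% of the variance
ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(pca.transform(X_tr), y_tr)
print("PCs kept:", pca.n_components_,
      " test accuracy:", round(ann.score(pca.transform(X_te), y_te), 3))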
Ground resonance analysis using a substructure modeling approach
NASA Technical Reports Server (NTRS)
Chen, S.-Y.; Berman, A.; Austin, E. E.
1984-01-01
A convenient and versatile procedure for modeling and analyzing ground resonance phenomena is described and illustrated. A computer program is used which dynamically couples differential equations with nonlinear and time dependent coefficients. Each set of differential equations may represent a component such as a rotor, fuselage, landing gear, or a failed damper. Arbitrary combinations of such components may be formulated into a model of a system. When the coupled equations are formed, a procedure is executed which uses a Floquet analysis to determine the stability of the system. Illustrations of the use of the procedures along with the numerical examples are presented.
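The following is a minimal, generic illustration of a Floquet stability check, not the described computer program: integrate a linear system with period-T coefficients over one period to build the monodromy matrix, then inspect the magnitudes of its eigenvalues (the Floquet multipliers). A Mathieu-type oscillator stands in for the coupled rotor/fuselage equations, and all parameter values are hypothetical.

import numpy as np
from scipy.integrate import solve_ivp

OMEGA = 1.0                        # forcing (rotor) frequency, hypothetical
T = 2.0 * np.pi / OMEGA

def rhs(t, x, delta=1.5, eps=0.3):
    """Mathieu-type oscillator x'' + (delta + eps*cos(OMEGA*t)) x = 0, written as a first-order system."""
    return [x[1], -(delta + eps * np.cos(OMEGA * t)) * x[0]]

# Columns of the monodromy matrix: states after one period starting from unit initial conditions.
cols = []
for x0 in np.eye(2):
    sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-9, atol=1e-12)
    cols.append(sol.y[:, -1])
monodromy = np.column_stack(cols)

multipliers = np.linalg.eigvals(monodromy)
print("Floquet multiplier magnitudes:", np.round(np.abs(multipliers), 4))
print("stable (all |mu| <= 1):", bool(np.all(np.abs(multipliers) <= 1.0 + 1e-6)))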
Ground resonance analysis using a substructure modeling approach
NASA Technical Reports Server (NTRS)
Chen, S. Y.; Austin, E. E.; Berman, A.
1985-01-01
A convenient and versatile procedure for modeling and analyzing ground resonance phenomena is described and illustrated. A computer program is used which dynamically couples differential equations with nonlinear and time dependent coefficients. Each set of differential equations may represent a component such as a rotor, fuselage, landing gear, or a failed damper. Arbitrary combinations of such components may be formulated into a model of a system. When the coupled equations are formed, a procedure is executed which uses a Floquet analysis to determine the stability of the system. Illustrations of the use of the procedures along with the numerical examples are presented.
Conducting financial due diligence of medical practices.
Louiselle, P
1995-12-01
Many healthcare organizations are acquiring medical practices in an effort to build more integrated systems of healthcare products and services. This acquisition activity must be approached cautiously to ensure that medical practices being acquired do not have deficiencies that would jeopardize integration efforts. Conducting a thorough due diligence analysis of medical practices before finalizing the transaction can limit the acquiring organizations' legal and financial exposure and is a necessary component to the acquisition process. The author discusses the components of a successful financial due diligence analysis and addresses some of the risk factors in a practice acquisition.
Three-dimensional Stress Analysis Using the Boundary Element Method
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Banerjee, P. K.
1984-01-01
The boundary element method is to be extended (as part of the NASA Inelastic Analysis Methods program) to the three-dimensional stress analysis of gas turbine engine hot section components. The analytical basis of the method (as developed in elasticity) is outlined, its numerical implementation is summarized, and the approaches to be followed in extending the method to include inelastic material response indicated.
NASA Technical Reports Server (NTRS)
Basili, V. R.; Zelkowitz, M. V.
1978-01-01
In a brief evaluation of software-related considerations, it is found that suitable approaches for software development depend to a large degree on the characteristics of the particular project involved. An analysis is conducted of development problems in an environment in which ground support software is produced for spacecraft control. The amount of work involved is in the range from 6 to 10 man-years. Attention is given to a general project summary, a programmer/analyst survey, a component summary, a component status report, a resource summary, a change report, a computer program run analysis, aspects of data collection on a smaller scale, progress forecasting, problems of overhead, and error analysis.
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
NASA Astrophysics Data System (ADS)
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oxymeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
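A hedged sketch of the general ICA-based pipeline (standard FastICA, not the authors' constrained-ICA algorithm): separate the mean R/G/B traces of a face region into independent sources and read the pulse rate from the dominant spectral peak in a plausible band. Synthetic traces replace real webcam data, and the frame rate, pulse frequency and mixing are invented.

import numpy as np
from sklearn.decomposition import FastICA

fs, duration, pulse_hz = 30.0, 60.0, 1.2            # 30 fps webcam, 72 bpm pulse (hypothetical)
t = np.arange(0, duration, 1.0 / fs)
rng = np.random.default_rng(0)
ppg = 0.05 * np.sin(2 * np.pi * pulse_hz * t)        # photoplethysmographic component
motion = 0.2 * np.sin(2 * np.pi * 0.3 * t)           # slow motion/illumination artefact
mixing = rng.uniform(0.5, 1.5, size=(3, 2))
rgb = np.stack([ppg, motion], axis=1) @ mixing.T + 0.02 * rng.normal(size=(t.size, 3))

sources = FastICA(n_components=2, random_state=0).fit_transform(rgb)

# Pick the source whose spectral peak lies in a plausible pulse band (0.7-3 Hz).
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
band = (freqs > 0.7) & (freqs < 3.0)
best = (0.0, 0.0)                                    # (bpm, peak power)
for s in sources.T:
    spec = np.abs(np.fft.rfft(s - s.mean()))
    k = np.argmax(spec[band])
    if spec[band][k] > best[1]:
        best = (60.0 * freqs[band][k], spec[band][k])
print("estimated pulse rate:", round(best[0], 1), "bpm")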
Constrained independent component analysis approach to nonobtrusive pulse rate measurements.
Tsouri, Gill R; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oxymeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
Niu, Xiaoyi; He, Bosai; Du, Yiyang; Sui, Zhenyu; Rong, Weiwei; Wang, Xiaotong; Li, Qing; Bi, Kaishun
2018-06-01
Suanzaoren decoction, one of the traditional Chinese medicine prescriptions, has been most commonly used in Asian countries and reported to inhibit the process of immunodeficiency insomnia. Polysaccharide is an important component which also contributes to the immunoprotective and sedative hypnotic effects. This study aimed to explore the immunoprotective and sedative hypnotic mechanisms of polysaccharide from Suanzaoren decoction by a serum metabonomics approach. For this purpose, complex physical and chemical immunodeficiency insomnia models were first established according to its multi-target property. Serum samples were analyzed using a UHPLC/Q-TOF-MS spectrometry approach to determine endogenous metabolites. Then, principal component analysis was used to distinguish the groups, and partial least squares discriminant analysis was carried out to confirm the important variables. The serum metabolic profile was identified and pathway analysis was performed after total polysaccharide administration. Twenty-one potential biomarkers were screened, and their levels were all reversed to different degrees in the total polysaccharide treated groups. These potential biomarkers were mainly related to vitamin, sphingolipid, bile acid, phospholipid and acylcarnitine metabolism. The results indicate that total polysaccharide could inhibit insomnia triggered by immunodeficiency stimulation through regulating those metabolic pathways. This study provides a useful approach for exploring the mechanism and evaluating the efficacy of total polysaccharide from Suanzaoren decoction. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
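A hedged sketch of the two ideas mentioned above for a single component: the closed-form two-parameter Weibull failure probability alongside a Monte Carlo estimate of the same quantity. The Weibull modulus, characteristic strength and applied stress are invented for illustration only.

import numpy as np

m, sigma_0 = 10.0, 400.0          # Weibull modulus and characteristic strength (MPa), hypothetical
applied_stress = 250.0            # deterministic applied stress (MPa), hypothetical

# Closed-form (two-parameter Weibull) probability of failure at this stress
pf_closed = 1.0 - np.exp(-(applied_stress / sigma_0) ** m)

# Monte Carlo: sample component strengths and count the samples weaker than the applied stress
rng = np.random.default_rng(0)
strengths = sigma_0 * rng.weibull(m, size=1_000_000)
pf_mc = np.mean(strengths < applied_stress)

print(f"closed-form Pf = {pf_closed:.4e},  Monte Carlo Pf = {pf_mc:.4e}")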
Watershed Modeling Recommendation Report for Lake Champlain TMDL
This report describes the recommended modeling approach for the watershed modeling component of the Lake Champlain TMDL project. The report was prepared by Tetra Tech, with input from the Lake Champlain watershed analysis workgroup. (TetraTech, 2012a)
Analyzing coastal environments by means of functional data analysis
NASA Astrophysics Data System (ADS)
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transform to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
Efficiency analysis for 3D filtering of multichannel images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Modern remote sensing systems typically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency if there is substantial correlation between multichannel image components, as often happens for multichannel remote sensing data of different origin. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
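A simplified sketch of 3D DCT-domain denoising of a multichannel image block, not the paper's particular filter: transform the stack of channels jointly, hard-threshold small coefficients, and invert. The image, noise level and threshold rule are illustrative assumptions.

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
h, w, n_channels, sigma = 64, 64, 8, 0.1
yy, xx = np.mgrid[0:h, 0:w]
# correlated synthetic channels standing in for a multichannel remote sensing patch
clean = np.stack([np.sin(xx / 6.0 + k) * np.cos(yy / 9.0) for k in range(n_channels)], axis=2)
noisy = clean + sigma * rng.normal(size=clean.shape)

coeffs = dctn(noisy, norm="ortho")                 # joint 3D transform over x, y and the channel axis
threshold = 2.7 * sigma                            # hard threshold (a common rule of thumb)
coeffs[np.abs(coeffs) < threshold] = 0.0
denoised = idctn(coeffs, norm="ortho")

print("noisy MSE   :", round(float(np.mean((noisy - clean) ** 2)), 5))
print("denoised MSE:", round(float(np.mean((denoised - clean) ** 2)), 5))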
The structure of cross-cultural musical diversity.
Rzeszutek, Tom; Savage, Patrick E; Brown, Steven
2012-04-22
Human cultural traits, such as languages, musics, rituals and material objects, vary widely across cultures. However, the majority of comparative analyses of human cultural diversity focus on between-culture variation without consideration for within-culture variation. In contrast, biological approaches to genetic diversity, such as the analysis of molecular variance (AMOVA) framework, partition genetic diversity into both within- and between-population components. We attempt here for the first time to quantify both components of cultural diversity by applying the AMOVA model to music. By employing this approach with 421 traditional songs from 16 Austronesian-speaking populations, we show that the vast majority of musical variability is due to differences within populations rather than differences between. This demonstrates a striking parallel to the structure of genetic diversity in humans. A neighbour-net analysis of pairwise population musical divergence shows a large amount of reticulation, indicating the pervasive occurrence of borrowing and/or convergent evolution of musical features across populations.
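Below is a toy sketch, in the spirit of the variance partition described above, of splitting trait variance into within- and between-population components (a one-way sum-of-squares decomposition rather than a full AMOVA on distance matrices). The "song feature" values, population count and songs per population are synthetic.

import numpy as np

rng = np.random.default_rng(0)
n_pops, songs_per_pop, n_features = 16, 25, 10
pop_means = rng.normal(scale=0.3, size=(n_pops, n_features))      # small between-population shifts
data = np.vstack([m + rng.normal(scale=1.0, size=(songs_per_pop, n_features)) for m in pop_means])
labels = np.repeat(np.arange(n_pops), songs_per_pop)

grand = data.mean(axis=0)
ss_within = sum(((data[labels == p] - data[labels == p].mean(axis=0)) ** 2).sum()
                for p in range(n_pops))
ss_between = sum(songs_per_pop * ((data[labels == p].mean(axis=0) - grand) ** 2).sum()
                 for p in range(n_pops))

total = ss_within + ss_between
print(f"within-population share of variability : {ss_within / total:.2%}")
print(f"between-population share of variability: {ss_between / total:.2%}")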
The structure of cross-cultural musical diversity
Rzeszutek, Tom; Savage, Patrick E.; Brown, Steven
2012-01-01
Human cultural traits, such as languages, musics, rituals and material objects, vary widely across cultures. However, the majority of comparative analyses of human cultural diversity focus on between-culture variation without consideration for within-culture variation. In contrast, biological approaches to genetic diversity, such as the analysis of molecular variance (AMOVA) framework, partition genetic diversity into both within- and between-population components. We attempt here for the first time to quantify both components of cultural diversity by applying the AMOVA model to music. By employing this approach with 421 traditional songs from 16 Austronesian-speaking populations, we show that the vast majority of musical variability is due to differences within populations rather than differences between. This demonstrates a striking parallel to the structure of genetic diversity in humans. A neighbour-net analysis of pairwise population musical divergence shows a large amount of reticulation, indicating the pervasive occurrence of borrowing and/or convergent evolution of musical features across populations. PMID:22072606
Wigman, J T W; van Os, J; Borsboom, D; Wardenaar, K J; Epskamp, S; Klippel, A; Viechtbauer, W; Myin-Germeys, I; Wichers, M
2015-08-01
It has been suggested that the structure of psychopathology is best described as a complex network of components that interact in dynamic ways. The goal of the present paper was to examine the concept of psychopathology from a network perspective, combining complementary top-down and bottom-up approaches using momentary assessment techniques. A pooled Experience Sampling Method (ESM) dataset of three groups (individuals with a diagnosis of depression, psychotic disorder or no diagnosis) was used (pooled N = 599). The top-down approach explored the network structure of mental states across different diagnostic categories. For this purpose, networks of five momentary mental states ('cheerful', 'content', 'down', 'insecure' and 'suspicious') were compared between the three groups. The complementary bottom-up approach used principal component analysis to explore whether empirically derived network structures yield meaningful higher order clusters. Individuals with a clinical diagnosis had more strongly connected moment-to-moment network structures, especially the depressed group. This group also showed more interconnections specifically between positive and negative mental states than the psychotic group. In the bottom-up approach, all possible connections between mental states were clustered into seven main components that together captured the main characteristics of the network dynamics. Our combination of (i) comparing network structure of mental states across three diagnostically different groups and (ii) searching for trans-diagnostic network components across all pooled individuals showed that these two approaches yield different, complementary perspectives in the field of psychopathology. The network paradigm therefore may be useful to map transdiagnostic processes.
Tailored multivariate analysis for modulated enhanced diffraction
Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni; ...
2015-10-21
Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on the scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, however, OCCR was able to supply only the latter information, as the former was hindered by changes in the abundances of different crystal phases, which occurred alongside structural variations in the specific case considered. Developing a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.
An approach for quantitative image quality analysis for CT
NASA Astrophysics Data System (ADS)
Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe
2016-03-01
An objective and standardized approach to assess the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates a modified set of components with sparse loadings, compared to standard principal component analysis (PCA), and use it in conjunction with Hotelling T2 statistical analysis to compare, qualify, and detect faults in the tested systems.
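A hedged Python sketch of the general idea only, not the MATLAB tool kit or the modified SPCA method described above: sparse PCA components summarize a table of image quality metrics, and a Hotelling T-squared statistic flags scans whose metric profile departs from the reference group. The metric groupings, drift magnitude and control limit are all assumptions.

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
n_scans, n_metrics = 60, 20
latent = rng.normal(size=(n_scans, 3))
loadings = np.zeros((3, n_metrics))
loadings[0, 0:5] = 1.0        # e.g. resolution-related metrics co-vary (hypothetical grouping)
loadings[1, 5:10] = 1.0       # e.g. noise-related metrics
loadings[2, 10:15] = 1.0      # e.g. geometry-related metrics
reference = latent @ loadings + 0.3 * rng.normal(size=(n_scans, n_metrics))

faulty = 0.3 * rng.normal(size=(5, n_metrics))
faulty[:, 0:5] += 5.0         # pronounced drift in the first metric group

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(reference)
ref_scores = spca.transform(reference)
center = ref_scores.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(ref_scores, rowvar=False))

def hotelling_t2(x):
    d = spca.transform(x.reshape(1, -1))[0] - center
    return float(d @ cov_inv @ d)

ref_t2 = [hotelling_t2(x) for x in reference]
print("99th percentile of reference T2:", round(np.percentile(ref_t2, 99), 1))
print("T2 of drifted scans:", [round(hotelling_t2(x), 1) for x in faulty])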
Inference of Ancestry in Forensic Analysis II: Analysis of Genetic Data.
Santos, Carla; Phillips, Chris; Gomez-Tato, A; Alvarez-Dios, J; Carracedo, Ángel; Lareu, Maria Victoria
2016-01-01
Three approaches applicable to the analysis of forensic ancestry-informative marker data (STRUCTURE, principal component analysis, and the Snipper Bayesian classification system) are reviewed. Detailed step-by-step guidance is provided for adjusting parameter settings in STRUCTURE with particular regard to their effect when differentiating populations. Several enhancements to the Snipper online forensic classification portal are described, highlighting the added functionality they bring to particular aspects of ancestry-informative SNP analysis in a forensic context.
In acceptance we trust? Conceptualising acceptance as a viable approach to NGO security management.
Fast, Larissa A; Freeman, C Faith; O'Neill, Michael; Rowley, Elizabeth
2013-04-01
This paper documents current understanding of acceptance as a security management approach and explores issues and challenges non-governmental organisations (NGOs) confront when implementing an acceptance approach to security management. It argues that the failure of organisations to systematise and clearly articulate acceptance as a distinct security management approach and a lack of organisational policies and procedures concerning acceptance hinder its efficacy as a security management approach. The paper identifies key and cross-cutting components of acceptance that are critical to its effective implementation in order to advance a comprehensive and systematic concept of acceptance. The key components of acceptance illustrate how organisational and staff functions positively or negatively affect an organisation's acceptance, and include: an organisation's principles and mission, communications, negotiation, programming, relationships and networks, stakeholder and context analysis, staffing, and image. The paper contends that acceptance is linked not only to good programming, but also to overall organisational management and structures. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
A model for the progressive failure of laminated composite structural components
NASA Technical Reports Server (NTRS)
Allen, D. H.; Lo, D. C.
1991-01-01
Laminated continuous fiber polymeric composites are capable of sustaining substantial load induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model will be utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.
Study on nondestructive discrimination of genuine and counterfeit wild ginsengs using NIRS
NASA Astrophysics Data System (ADS)
Lu, Q.; Fan, Y.; Peng, Z.; Ding, H.; Gao, H.
2012-07-01
A new approach for the nondestructive discrimination between genuine wild ginsengs and the counterfeit ones by near infrared spectroscopy (NIRS) was developed. Both discriminant analysis and back propagation artificial neural network (BP-ANN) were applied to the model establishment for discrimination. Optimal modeling wavelengths were determined based on the anomalous spectral information of counterfeit samples. Through principal component analysis (PCA) of various wild ginseng samples, genuine and counterfeit, the cumulative percentages of variance of the principal components were obtained, serving as a reference for principal component (PC) factor determination. Discriminant analysis achieved an identification ratio of 88.46%. With samples' truth values as its outputs, a three-layer BP-ANN model was built, which yielded a higher discrimination accuracy of 100%. The overall results sufficiently demonstrate that NIRS combined with the BP-ANN classification algorithm performs better on ginseng discrimination than discriminant analysis, and can be used as a rapid and nondestructive method for the detection of counterfeit wild ginsengs in the food and pharmaceutical industries.
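A minimal sketch of the PCA-plus-back-propagation pipeline described above, assuming spectra and genuine/counterfeit labels are available as arrays; the number of PC factors and the hidden-layer size are illustrative placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def ginseng_discriminator(spectra, labels, n_pcs=10):
    """PCA scores feeding a small back-propagation network (one hidden layer),
    mirroring the PCA + three-layer BP-ANN pipeline in the abstract."""
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_pcs),                 # PC factors chosen from cumulative variance
        MLPClassifier(hidden_layer_sizes=(8,),   # input-hidden-output = three layers
                      max_iter=2000, random_state=0),
    )
    acc = cross_val_score(model, spectra, labels, cv=5).mean()
    return model.fit(spectra, labels), acc
```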
A decoupled recursive approach for constrained flexible multibody system dynamics
NASA Technical Reports Server (NTRS)
Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung
1989-01-01
A variational-vector calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned to all body and joint reference frames to explicitly formulate the deformation kinematics under small-deformation assumptions, and an efficient recursive flexible dynamics algorithm is developed. Dynamic analysis of a closed loop robot is performed to illustrate the efficiency of the algorithm.
Remote Sensing Technologies and Geospatial Modelling Hierarchy for Smart City Support
NASA Astrophysics Data System (ADS)
Popov, M.; Fedorovsky, O.; Stankevich, S.; Filipovich, V.; Khyzhniak, A.; Piestova, I.; Lubskyi, M.; Svideniuk, M.
2017-12-01
The approach to implementing the remote sensing technologies and geospatial modelling for smart city support is presented. The hierarchical structure and basic components of the smart city information support subsystem are considered. Some of the already available useful practical developments are described. These include city land use planning, urban vegetation analysis, thermal condition forecasting, geohazard detection, flooding risk assessment. Remote sensing data fusion approach for comprehensive geospatial analysis is discussed. Long-term city development forecasting by Forrester - Graham system dynamics model is provided over Kiev urban area.
Llorach, Rafael; Medina, Sonia; García-Viguera, Cristina; Zafrilla, Pilar; Abellán, José; Jauregui, Olga; Tomás-Barberán, Francisco A; Gil-Izquierdo, Angel; Andrés-Lacueva, Cristina
2014-06-01
Metabolomics has emerged in the field of food and nutrition sciences as a powerful tool for profiling approaches. In this context, an HPLC-q-TOF-based metabolomics approach was applied to unveil changes in the urinary metabolome in human subjects (n = 51, 23 men and 28 women) after regular aronia-citrus juice (AC-juice) intake (250 mL/day) over 16 weeks, compared to individuals given a placebo beverage. Samples were analyzed by HPLC-q-TOF followed by multivariate data analysis (orthogonal signal filtering-partial least squares discriminant analysis) that discriminated relevant mass features related to AC-juice intake. The results showed that biomarkers of AC-juice intake, including metabolites arising from the metabolism of food components such as proline betaine, ferulic acid, and two unknown mercapturate derivatives, were identified. Discovery of new biomarkers of food intake will help in the building up of the food metabolome and facilitate future insights into the mechanisms of action of dietary components in population health. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A proposed biophysical approach to Visual absorption capability (VAC)
W. C. Yeomans
1979-01-01
In British Columbia, visual analysis is in its formative stages and has only recently been accepted by Government as a resource component, notably within the Resource Analysis Branch, Ministry of Environment. Visual absorption capability (VAC) is an integral factor in visual resource assessment. VAC is examined by the author in the degree to which it relates to...
ERIC Educational Resources Information Center
Meulman, Jacqueline J.; Verboon, Peter
1993-01-01
Points of view analysis, as a way to deal with individual differences in multidimensional scaling, was largely supplanted by the weighted Euclidean model. It is argued that the approach deserves new attention, especially as a technique to analyze group differences. A streamlined and integrated process is proposed. (SLD)
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Gifford, Elizabeth V; Tavakoli, Sara; Weingardt, Kenneth R; Finney, John W; Pierson, Heather M; Rosen, Craig S; Hagedorn, Hildi J; Cook, Joan M; Curran, Geoff M
2012-01-01
Evidence-based psychological treatments (EBPTs) are clusters of interventions, but it is unclear how providers actually implement these clusters in practice. A disaggregated measure of EBPTs was developed to characterize clinicians' component-level evidence-based practices and to examine relationships among these practices. Survey items captured components of evidence-based treatments based on treatment integrity measures. The Web-based survey was conducted with 75 U.S. Department of Veterans Affairs (VA) substance use disorder (SUD) practitioners and 149 non-VA community-based SUD practitioners. Clinicians' self-designated treatment orientations were positively related to their endorsement of those EBPT components; however, clinicians used components from a variety of EBPTs. Hierarchical cluster analysis indicated that clinicians combined and organized interventions from cognitive-behavioral therapy, the community reinforcement approach, motivational interviewing, structured family and couples therapy, 12-step facilitation, and contingency management into clusters including empathy and support, treatment engagement and activation, abstinence initiation, and recovery maintenance. Understanding how clinicians use EBPT components may lead to improved evidence-based practice dissemination and implementation. Published by Elsevier Inc.
A nonlinear circuit architecture for magnetoencephalographic signal analysis.
Bucolo, M; Fortuna, L; Frasca, M; La Rosa, M; Virzì, M C; Shannahoff-Khalsa, D
2004-01-01
The objective of this paper was to address the complex spatio-temporal dynamics shown by Magnetoencephalography (MEG) data by applying a nonlinear distributed approach to Blind Source Separation (BSS). The effort was to characterize and differentiate the phases of a yogic respiratory exercise used in the treatment of obsessive compulsive disorders. The patient performed a precise respiratory protocol, at one breath per minute for 31 minutes, with a 10 minute resting phase before and after. The two steps of the classical Independent Component Analysis (ICA) approach have been performed using a Cellular Neural Network (CNN) with two sets of templates. The choice of the pair of suitable templates has been carried out using genetic algorithm optimization techniques. Performing BSS with a nonlinear distributed approach, the outputs of the CNN have been compared with the ICA ones. In all the protocol phases, the main components found with the CNN show trends similar to those obtained with ICA. Moreover, using this distributed approach, a spatial location has been associated with each component. To capture the spatio-temporal character and the nonlinearity of the neural process, a distributed nonlinear architecture has been proposed. This strategy has been designed to overcome the hypothesis of a linear combination of the source signals, which is characteristic of the ICA approach, by taking advantage of the spatial information.
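The CNN-based separation itself is not reproduced here, but the classical linear ICA step that it is compared against can be sketched as follows (assuming a channels-by-samples MEG array; the number of sources is a placeholder):

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_meg_sources(meg, n_sources=10):
    """Classical linear ICA baseline for MEG blind source separation.
    meg: array of shape (n_channels, n_samples)."""
    ica = FastICA(n_components=n_sources, random_state=0, max_iter=1000)
    sources = ica.fit_transform(meg.T)   # (n_samples, n_sources) time courses
    mixing = ica.mixing_                 # (n_channels, n_sources) spatial maps
    return sources.T, mixing
```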
Kuzmin, Michael G; Soboleva, Irina V
2014-05-01
Representation of the experimental reaction kinetics in the form of a rate distribution is shown to be an effective method for the analysis of the mechanisms of such reactions and for comparisons of the kinetics with QC calculations, as well as with the experimental data on the medium mobility. The rate constant distribution function P(k) can be obtained directly from the experimental kinetics N(t) by an inverse Laplace transform. The application of this approach to kinetic data for several excited-state electron transfer reactions reveals the transformations of their rate control factors in the time domain of 1-1000 ps. In neat electron donating solvents two components are observed. The fastest component (k > 1 ps⁻¹) was found to be controlled by the fluctuations of the overall electronic coupling matrix element, involving all the reactant molecules located inside the interior of the solvent shell rather than specific pairs of reactant molecules. The slower component (1 > k > 0.1 ps⁻¹) is controlled by the medium reorganization (longitudinal relaxation times, τL). A substantial contribution from the non-stationary diffusion controlled reaction is observed in diluted solutions ([Q] < 1 M). No contribution from the long-distance electron transfer (electron tunneling) proposed earlier for the excited-state electron transfer between perylene and tetracyanoethylene in acetonitrile is observed. The rate distribution approach provides a simple and efficient method for the quantitative analysis of the reaction mechanism and transformation of the rate control factors in the course of the reactions.
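One common numerical route to P(k), not necessarily the authors' exact algorithm, is to discretize k on a logarithmic grid and solve a regularized non-negative least-squares problem for the amplitudes, as in the sketch below (grid limits and regularization strength are assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def rate_distribution(t, decay, k_min=1e-3, k_max=10.0, n_k=100, reg=1e-2):
    """Estimate P(k) such that decay(t) ~ sum_i P(k_i) exp(-k_i t), i.e. a
    discretized inverse Laplace transform with non-negativity and a simple
    Tikhonov (ridge) regularization; k is in ps^-1 if t is in ps."""
    k = np.logspace(np.log10(k_min), np.log10(k_max), n_k)
    A = np.exp(-np.outer(t, k))                      # exponential kernel matrix
    # augment the system with a ridge penalty to stabilize the inversion
    A_aug = np.vstack([A, reg * np.eye(n_k)])
    b_aug = np.concatenate([decay, np.zeros(n_k)])
    p, _ = nnls(A_aug, b_aug)
    return k, p
```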
NASA Astrophysics Data System (ADS)
Donders, S.; Pluymers, B.; Ragnarsson, P.; Hadjit, R.; Desmet, W.
2010-04-01
In the vehicle design process, design decisions are more and more based on virtual prototypes. Due to competitive and regulatory pressure, vehicle manufacturers are forced to improve product quality, to reduce time-to-market and to launch an increasing number of design variants on the global market. To speed up the design iteration process, substructuring and component mode synthesis (CMS) methods are commonly used, involving the analysis of substructure models and the synthesis of the substructure analysis results. Substructuring and CMS enable efficient decentralized collaboration across departments and make it possible to benefit from the availability of parallel computing environments. However, traditional CMS methods become prohibitively inefficient when substructures are coupled along large interfaces, i.e. with a large number of degrees of freedom (DOFs) at the interface between substructures. The reason is that the analysis of substructures involves the calculation of a number of enrichment vectors, one for each interface degree of freedom (DOF). Since large interfaces are common in vehicles (e.g. the continuous line connections to connect the body with the windshield, roof or floor), this interface bottleneck poses a clear limitation in the vehicle noise, vibration and harshness (NVH) design process. Therefore, there is a need to describe the interface dynamics more efficiently. This paper presents a wave-based substructuring (WBS) approach, which reduces the interface representation between substructures in an assembly by expressing the interface DOFs in terms of a limited set of basis functions ("waves"). As the number of basis functions can be much lower than the number of interface DOFs, this greatly facilitates the substructure analysis procedure and results in faster design predictions. The waves are calculated once from a full nominal assembly analysis, but these nominal waves can be re-used for the assembly of modified components. The WBS approach thus enables efficient structural modification predictions of the global modes, so that efficient vibro-acoustic design modification, optimization and robust design become possible. The results show that wave-based substructuring offers a clear benefit for vehicle design modifications, by improving both the speed of component reduction processes and the efficiency and accuracy of design iteration predictions, as compared to conventional substructuring approaches.
Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Schell, Spencer J.
2012-01-01
Sagebrush ecosystems in North America have experienced extensive degradation since European settlement. Further degradation continues from exotic invasive plants, altered fire frequency, intensive grazing practices, oil and gas development, and climate change – adding urgency to the need for ecosystem-wide understanding. Remote sensing is often identified as a key information source to facilitate ecosystem-wide characterization, monitoring, and analysis; however, approaches that characterize sagebrush with sufficient and accurate local detail across large enough areas to support this paradigm are unavailable. We describe the development of a new remote sensing sagebrush characterization approach for the state of Wyoming, U.S.A. This approach integrates 2.4 m QuickBird, 30 m Landsat TM, and 56 m AWiFS imagery into the characterization of four primary continuous field components including percent bare ground, percent herbaceous cover, percent litter, and percent shrub, and four secondary components including percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata Wyomingensis), and shrub height using a regression tree. According to an independent accuracy assessment, primary component root mean square error (RMSE) values ranged from 4.90 to 10.16 for 2.4 m QuickBird, 6.01 to 15.54 for 30 m Landsat, and 6.97 to 16.14 for 56 m AWiFS. Shrub and herbaceous components outperformed the current data standard called LANDFIRE, with a shrub RMSE value of 6.04 versus 12.64 and a herbaceous component RMSE value of 12.89 versus 14.63. This approach offers new advancements in sagebrush characterization from remote sensing and provides a foundation to quantitatively monitor these components into the future.
Advances in risk assessment and communication.
Goldstein, Bernard D
2005-01-01
Risk analysis continues to evolve. There is increasing depth and breadth to each component of the four-step risk-assessment paradigm of hazard identification, dose-response analysis, exposure assessment, and risk characterization. Basic conceptual approaches to understanding how people perceive risk are being tested against a growing body of empirical observations, many involving stakeholders. Emerging ideas such as the precautionary principle have provided challenges that have led to a rethinking of the role of risk assessment in environmental health. Newer problems, such as intergenerational issues posed by long-lasting radiation pollution, environmental justice, and the assessment and communication of risks related to terrorism, have spurred innovative approaches to risk analysis.
NASA Astrophysics Data System (ADS)
Carello, M.; Amirth, N.; Airale, A. G.; Monti, M.; Romeo, A.
2017-12-01
Advanced thermoplastic prepreg composite materials stand out with regard to their ability to allow complex designs with high specific strength and stiffness. This makes them an excellent choice for lightweight automotive components to reduce mass and increase fuel efficiency, while maintaining the functionality and mechanical characteristics of traditional thermosetting prepreg, with a production cycle time and recyclability suited to mass production manufacturing. Currently, the aerospace and automotive sectors struggle to carry out accurate Finite Element (FE) component analyses and in some cases are unable to validate the obtained results. In this study, structural Finite Element Analysis (FEA) has been performed on a thermoplastic fiber reinforced component designed and manufactured through an integrated injection molding process, which consists of thermoforming the prepreg laminate and overmolding the other parts. This process is usually referred to as hybrid molding, and allows the zones subjected to additional stresses to be reinforced with thermoformed thermoplastic prepreg as required and overmolded with a short-fiber thermoplastic resin in a single process. This paper aims to establish an accurate predictive model on a rational basis and an innovative methodology for the structural analysis of thermoplastic composite components by comparison with the experimental test results.
Widjaja, Effendi; Garland, Marc
2008-02-01
Raman microscopy was used in mapping mode to collect more than 1000 spectra in a 100 μm x 100 μm area from a commercial stamp. Band-target entropy minimization (BTEM) was then employed to unmix the mixture spectra in order to extract the pure component spectra of the samples. Three pure component spectral patterns with good signal-to-noise ratios were recovered, and their spatial distributions were determined. The three pure component spectral patterns were then identified as copper phthalocyanine blue, calcite-like material, and yellow organic dye material by comparison to known spectral libraries. The present investigation, consisting of (1) advanced curve resolution (blind-source separation) followed by (2) spectral database matching, readily suggests extensions to authenticity and counterfeit studies of other types of commercial objects. The presence or absence of specific observable components forms the basis for assessment. The present spectral analysis (BTEM) is applicable to highly overlapping spectral information. Since a priori information such as the number of components present and spectral libraries are not needed in BTEM, and since minor signals arising from trace components can be reconstructed, this analysis offers a robust approach to a wide variety of material problems involving authenticity and counterfeit issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.; Li, G., E-mail: gli@clemson.edu
2014-08-28
An accelerated Finite Element Contact Block Reduction (FECBR) approach is presented for computational analysis of ballistic transport in nanoscale electronic devices with arbitrary geometry and unstructured mesh. Finite element formulation is developed for the theoretical CBR/Poisson model. The FECBR approach is accelerated through eigen-pair reduction, lead mode space projection, and component mode synthesis techniques. The accelerated FECBR is applied to perform quantum mechanical ballistic transport analysis of a DG-MOSFET with taper-shaped extensions and a DG-MOSFET with Si/SiO2 interface roughness. The computed electrical transport properties of the devices obtained from the accelerated FECBR approach and associated computational cost as a function of system degrees of freedom are compared with those obtained from the original CBR and direct inversion methods. The performance of the accelerated FECBR in both its accuracy and efficiency is demonstrated.
Madeo, Andrea; Piras, Paolo; Re, Federica; Gabriele, Stefano; Nardinocchi, Paola; Teresi, Luciano; Torromeo, Concetta; Chialastri, Claudia; Schiariti, Michele; Giura, Geltrude; Evangelista, Antonietta; Dominici, Tania; Varano, Valerio; Zachara, Elisabetta; Puddu, Paolo Emilio
2015-01-01
The assessment of left ventricular shape changes during cardiac revolution may be a new step in clinical cardiology to ease early diagnosis and treatment. To quantify these changes, only point registration was adopted and neither Generalized Procrustes Analysis nor Principal Component Analysis was applied as we did previously to study a group of healthy subjects. Here, we extend to patients affected by hypertrophic cardiomyopathy the original approach and preliminarily include genotype positive/phenotype negative individuals to explore the potential that incumbent pathology might also be detected. Using 3D Speckle Tracking Echocardiography, we recorded the left ventricular shape of 48 healthy subjects, 24 patients affected by hypertrophic cardiomyopathy and 3 genotype positive/phenotype negative individuals. We then applied Generalized Procrustes Analysis and Principal Component Analysis and inter-individual differences were cleaned by Parallel Transport performed on the tangent space, along the horizontal geodesic, between the per-subject consensuses and the grand mean. Endocardial and epicardial layers were evaluated separately, different from many echocardiographic applications. Under a common Principal Component Analysis, we then evaluated left ventricle morphological changes (at both layers) explained by first Principal Component scores. Trajectories' shape and orientation were investigated and contrasted. Logistic regression and Receiver Operating Characteristic curves were used to compare these morphometric indicators with traditional 3D Speckle Tracking Echocardiography global parameters. Geometric morphometrics indicators performed better than 3D Speckle Tracking Echocardiography global parameters in recognizing pathology both in systole and diastole. Genotype positive/phenotype negative individuals clustered with patients affected by hypertrophic cardiomyopathy during diastole, suggesting that incumbent pathology may indeed be foreseen by these methods. Left ventricle deformation in patients affected by hypertrophic cardiomyopathy compared to healthy subjects may be assessed by modern shape analysis better than by traditional 3D Speckle Tracking Echocardiography global parameters. Hypertrophic cardiomyopathy pathophysiology was unveiled in a new manner whereby diastolic-phase abnormalities, which are more difficult to investigate by traditional echocardiographic techniques, also become evident. PMID:25875818
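A bare-bones sketch of the Generalized Procrustes Analysis and PCA steps (omitting the Parallel Transport stage and assuming landmark configurations stored as a subjects-by-landmarks-by-3 array; all names are illustrative):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def gpa(shapes, n_iter=10):
    """Generalized Procrustes Analysis: center, scale and rotate each
    landmark configuration to a common mean shape."""
    X = shapes.astype(float).copy()
    X -= X.mean(axis=1, keepdims=True)                   # remove translation
    X /= np.linalg.norm(X, axis=(1, 2), keepdims=True)   # remove scale
    mean = X[0]
    for _ in range(n_iter):
        for i in range(len(X)):
            R, _ = orthogonal_procrustes(X[i], mean)     # best rotation to mean
            X[i] = X[i] @ R
        mean = X.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return X, mean

def shape_pca(aligned):
    """PCA of the aligned configurations (rows = subjects)."""
    flat = aligned.reshape(len(aligned), -1)
    flat = flat - flat.mean(axis=0)
    U, s, Vt = np.linalg.svd(flat, full_matrices=False)
    scores = U * s                         # per-subject PC scores
    explained = s**2 / np.sum(s**2)
    return scores, Vt, explained
```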
High bit rate convolutional channel encoder/decoder
NASA Technical Reports Server (NTRS)
1977-01-01
A detailed description of the design approach and tradeoffs encountered during the development of the 50 MBPS decoder system is presented. A functional analysis of each of the major logical functions is given, and the system's major components are listed.
Pont, Laura; Sanz-Nebot, Victoria; Vilaseca, Marta; Jaumot, Joaquim; Tauler, Roma; Benavente, Fernando
2018-05-01
In this study, we describe a chemometric data analysis approach to assist in the interpretation of the complex datasets from the analysis of high-molecular mass oligomeric proteins by ion mobility mass spectrometry (IM-MS). The homotetrameric protein transthyretin (TTR) is involved in familial amyloidotic polyneuropathy type I (FAP-I). FAP-I is associated with a specific TTR mutant variant (TTR(Met30)) that can be easily detected by analyzing the monomeric forms of the mutant protein. However, the mechanism of protein misfolding and aggregation onset, which could be triggered by structural changes in the native tetrameric protein, remains under investigation. Serum TTR from healthy controls and FAP-I patients was purified under non-denaturing conditions by conventional immunoprecipitation in solution and analyzed by IM-MS. IM-MS allowed separation and characterization of several tetrameric, trimeric and dimeric TTR gas ions due to their differential drift time. After appropriate data pre-processing, multivariate curve resolution alternating least squares (MCR-ALS) was applied to the complex datasets. A group of seven independent components, characterized by their ion mobility profiles and mass spectra, was resolved to explain the observed data variance in control and patient samples. Then, principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were considered for exploration and classification. Only four out of the seven resolved components were enough for an accurate differentiation. Furthermore, the specific TTR ions identified in the mass spectra of these components and the resolved ion mobility profiles provided a straightforward insight into the most relevant oligomeric TTR proteoforms for the disease. Copyright © 2018 Elsevier B.V. All rights reserved.
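A minimal MCR-ALS skeleton of the kind used here, assuming the IM-MS data are arranged as a drift-time-by-m/z intensity matrix; real applications add initial estimates, closure or selectivity constraints, and convergence checks:

```python
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, n_components, n_iter=50, seed=0):
    """Bare-bones MCR-ALS: factor D (rows = drift-time points, columns = m/z
    channels) into non-negative concentration profiles C and spectra S,
    so that D is approximated by C @ S.T."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = D.shape
    C = rng.random((n_rows, n_components))
    S = np.zeros((n_cols, n_components))
    for _ in range(n_iter):
        # solve for S with C fixed (non-negative least squares per m/z channel)
        for j in range(n_cols):
            S[j], _ = nnls(C, D[:, j])
        # solve for C with S fixed (per drift-time point)
        for i in range(n_rows):
            C[i], _ = nnls(S, D[i, :])
    return C, S
```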
Intelligent Control in Automation Based on Wireless Traffic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
2007-09-01
Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical in both maintaining the integrity of computer systems and increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, a significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neurofuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with the comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making the use of it more secure.
Christopher B. Dow; Brandon M. Collins; Scott L. Stephens
2016-01-01
Finding novel ways to plan and implement landscape-level forest treatments that protect sensitive wildlife and other key ecosystem components, while also reducing the risk of large-scale, high-severity fires, can prove to be difficult. We examined alternative approaches to landscape-scale fuel-treatment design for the same landscape. These approaches included two...
NASA Technical Reports Server (NTRS)
Barber, Peter W.; Demerdash, Nabeel A. O.; Hurysz, B.; Luo, Z.; Denny, Hugh W.; Millard, David P.; Herkert, R.; Wang, R.
1992-01-01
The goal of this research project was to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) developing analytical tools (models and computer programs); (2) conducting parameterization (what if?) studies; (3) predicting the global space station EMI environment; and (4) providing a basis for modification of EMI standards.
One-step methods for the prediction of orbital motion, taking its periodic components into account
NASA Astrophysics Data System (ADS)
Lavrov, K. N.
1988-03-01
The paper examines the design and analysis of the properties of implicit one-step integration methods which use the trigonometric approximation of ordinary differential equations containing periodic components. With reference to an orbital-motion prediction example, it is shown that the proposed schemes are more efficient in terms of computer memory than Everhart's (1974) approach. The results obtained make it possible to improve Everhart's method.
Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...
2017-01-24
Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
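The key property, a cost independent of the number of parameters, can be illustrated with a toy discrete adjoint for explicit Euler applied to dx/dt = -p*x; this scalar example is an illustration only and omits the switching (jump-condition) treatment discussed in the paper:

```python
import numpy as np

def forward_euler(x0, p, h, n_steps):
    """Explicit Euler for dx/dt = -p*x; stores the trajectory for the adjoint."""
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for n in range(n_steps):
        xs[n + 1] = xs[n] + h * (-p * xs[n])
    return xs

def adjoint_gradient(xs, p, h):
    """Discrete adjoint of the Euler scheme for J = x_N: one backward sweep
    yields dJ/dp for the discrete trajectory, independent of how many
    parameters would be differentiated."""
    n_steps = len(xs) - 1
    lam = 1.0                              # dJ/dx_N
    dJdp = 0.0
    for n in reversed(range(n_steps)):
        dJdp += lam * h * (-xs[n])         # lam_{n+1} * d(step)/dp
        lam *= (1.0 - h * p)               # lam_n = lam_{n+1} * d(step)/dx
    return dJdp

# consistency check against a central finite difference
x0, p, h, N = 1.0, 0.8, 0.01, 500
g_adj = adjoint_gradient(forward_euler(x0, p, h, N), p, h)
eps = 1e-6
g_fd = (forward_euler(x0, p + eps, h, N)[-1]
        - forward_euler(x0, p - eps, h, N)[-1]) / (2 * eps)
print(g_adj, g_fd)   # should agree closely
```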
NASA Astrophysics Data System (ADS)
Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas
2014-05-01
Experimental data on the detection and identification of a variety of biochemical agents, such as proteins, microelements, and antibiotics of different generations, in both single- and multi-component solutions over a wide concentration range, analyzed from the light-scattering parameters of a whispering gallery mode optical resonance based sensor, are presented. Multiplexing over parameters and components has been realized using the developed fluidic sensor cell with dielectric microspheres fixed in an adhesive layer, combined with data processing. Biochemical component identification has been performed by the developed network analysis techniques. The developed approach is demonstrated to be applicable both to single-agent and to multi-component biochemical analysis. A novel technique based on optical resonance on microring structures, plasmon resonance and identification tools has been developed. To improve the sensitivity of the microring structures, microspheres fixed by adhesive were pretreated with a gold nanoparticle solution. Another technique used thin-film gold layers deposited on the substrate below the adhesive. Both biomolecule and nanoparticle injections caused considerable changes of the optical resonance spectra. Plasmonic gold layers of optimized thickness also improve the parameters of the optical resonance spectra. Biochemical component identification has also been performed by the developed network analysis techniques, both for single and for multi-component solutions. The advantages of plasmon-enhanced optical microcavity resonance combined with multiparameter identification tools are thus used to develop a new platform for an ultrasensitive label-free biomedical sensor.
Hybrid 3D reconstruction and image-based rendering techniques for reality modeling
NASA Astrophysics Data System (ADS)
Sequeira, Vitor; Wolfart, Erik; Bovisio, Emanuele; Biotti, Ester; Goncalves, Joao G. M.
2000-12-01
This paper presents a component approach that combines in a seamless way the strong features of laser range acquisition with the visual quality of purely photographic approaches. The relevant components of the system are: (i) Panoramic images for distant background scenery where parallax is insignificant; (ii) Photogrammetry for background buildings and (iii) High detailed laser based models for the primary environment, structure of exteriors of buildings and interiors of rooms. These techniques have a wide range of applications in visualization, virtual reality, cost effective as-built analysis of architectural and industrial environments, building facilities management, real-estate, E-commerce, remote inspection of hazardous environments, TV production and many others.
Engine Data Interpretation System (EDIS)
NASA Technical Reports Server (NTRS)
Cost, Thomas L.; Hofmann, Martin O.
1990-01-01
A prototype of an expert system was developed which applies qualitative or model-based reasoning to the task of post-test analysis and diagnosis of data resulting from a rocket engine firing. A combined component-based and process theory approach is adopted as the basis for system modeling. Such an approach provides a framework for explaining both normal and deviant system behavior in terms of individual component functionality. The diagnosis function is applied to digitized sensor time-histories generated during engine firings. The generic system is applicable to any liquid rocket engine but was adapted specifically in this work to the Space Shuttle Main Engine (SSME). The system is applied to idealized data resulting from turbomachinery malfunction in the SSME.
NASA Astrophysics Data System (ADS)
Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.
2000-08-01
Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
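A simple two-end-member mixing calculation of the type referred to above can be sketched as follows, assuming property vectors (for example low-frequency susceptibility and its frequency dependence) for the lithogenic and pedogenic end members; the end-member values themselves are inputs, not outputs, of this sketch:

```python
import numpy as np

def pedogenic_fraction(samples, e_litho, e_pedo):
    """Two end-member mixing analysis: project each sample onto the line
    between an assumed lithogenic end member and a pedogenic end member.
    samples: (n_samples, n_props); e_litho, e_pedo: (n_props,) assumed
    end-member property vectors. Returns the pedogenic fraction f in [0, 1];
    the pedogenic and lithogenic contributions then follow as f * e_pedo
    and (1 - f) * e_litho."""
    d = np.asarray(e_pedo, float) - np.asarray(e_litho, float)
    f = (samples - e_litho) @ d / (d @ d)
    return np.clip(f, 0.0, 1.0)
```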
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
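The Bayesian update at the core of such a tool can be illustrated with the standard Gamma-Poisson conjugate pair for a constant failure rate (a minimal sketch, not DATMAN's actual implementation; the prior parameters below are placeholders):

```python
from dataclasses import dataclass

@dataclass
class GammaFailureRate:
    """Gamma(alpha, beta) prior/posterior for a constant failure rate lambda
    (failures per unit operating time); conjugate to Poisson failure counts."""
    alpha: float   # shape (pseudo-failures)
    beta: float    # rate (pseudo operating time)

    def update(self, n_failures: int, operating_time: float) -> "GammaFailureRate":
        # posterior: add observed failures to alpha, observed exposure to beta
        return GammaFailureRate(self.alpha + n_failures, self.beta + operating_time)

    @property
    def mean(self) -> float:
        return self.alpha / self.beta

# example: vague prior updated with 2 failures observed over 10,000 hours
posterior = GammaFailureRate(alpha=0.5, beta=1.0).update(2, 10_000.0)
print(posterior.mean)   # posterior mean failure rate per hour
```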
Machine learning of frustrated classical spin models. I. Principal component analysis
NASA Astrophysics Data System (ADS)
Wang, Ce; Zhai, Hui
2017-10-01
This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
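A compact sketch of the workflow, with random angles standing in for actual Monte Carlo configurations of the XY model (the (cos, sin) encoding of the spins and the component count are ordinary choices, not necessarily those of the paper):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_of_spin_configs(configs, n_components=4):
    """configs: (n_samples, n_sites) array of XY spin angles from Monte Carlo.
    PCA is applied to the (cos, sin) representation so that angular data are
    handled consistently; the explained variances and per-sample scores can
    then be inspected as a function of temperature to locate transitions."""
    features = np.hstack([np.cos(configs), np.sin(configs)])
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(features)
    return scores, pca.explained_variance_ratio_, pca.components_

# placeholder data standing in for Monte Carlo samples at one temperature
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=(200, 24 * 24))
scores, evr, comps = pca_of_spin_configs(angles)
print(evr[:2])
```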
Neural network for photoplethysmographic respiratory rate monitoring
NASA Astrophysics Data System (ADS)
Johansson, Anders
2001-10-01
The photoplethysmographic signal (PPG) includes respiratory components seen as frequency modulation of the heart rate (respiratory sinus arrhythmia, RSA), amplitude modulation of the cardiac pulse, and respiratory induced intensity variations (RIIV) in the PPG baseline. The aim of this study was to evaluate the accuracy of these components in determining respiratory rate, and to combine the components in a neural network for improved accuracy. The primary goal is to design a PPG ventilation monitoring system. PPG signals were recorded from 15 healthy subjects. From these signals, the systolic waveform, diastolic waveform, respiratory sinus arrhythmia, pulse amplitude and RIIV were extracted. By using simple algorithms, the rates of false positive and false negative detection of breaths were calculated for each of the five components in a separate analysis. Furthermore, a simple neural network (NN) was tried out in a combined pattern recognition approach. In the separate analysis, the error rates (sum of false positives and false negatives) ranged from 9.7% (pulse amplitude) to 14.5% (systolic waveform). The corresponding value of the NN analysis was 9.5-9.6%.
Lorenz, Kevin S.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2013-01-01
Digital image analysis is a fundamental component of quantitative microscopy. However, intravital microscopy presents many challenges for digital image analysis. In general, microscopy volumes are inherently anisotropic, suffer from decreasing contrast with tissue depth, lack object edge detail, and characteristically have low signal levels. Intravital microscopy introduces the additional problem of motion artifacts, resulting from respiratory motion and heartbeat from specimens imaged in vivo. This paper describes an image registration technique for use with sequences of intravital microscopy images collected in time-series or in 3D volumes. Our registration method involves both rigid and non-rigid components. The rigid registration component corrects global image translations, while the non-rigid component manipulates a uniform grid of control points defined by B-splines. Each control point is optimized by minimizing a cost function consisting of two parts: a term to define image similarity, and a term to ensure deformation grid smoothness. Experimental results indicate that this approach is promising based on the analysis of several image volumes collected from the kidney, lung, and salivary gland of living rodents. PMID:22092443
Ritter, Alison; Lancaster, Kari
2013-01-01
Assessing the extent to which drug research influences and impacts upon policy decision-making needs to go beyond bibliometric analysis of academic citations. Policy makers do not necessarily access the academic literature, and policy processes are largely iterative and rely on interactions and relationships. Furthermore, media representation of research contributes to public opinion and can influence policy uptake. In this context, assessing research influence involves examining the extent to which a research project is taken up in policy documents, used within policy processes, and disseminated via the media. This three component approach is demonstrated using a case example of two ongoing illicit drug monitoring systems: the Illicit Drug Reporting System (IDRS) and the Ecstasy and related Drugs Reporting System (EDRS). Systematic searches for reference to the IDRS and/or EDRS within policy documents, across multiple policy processes (such as parliamentary inquiries) and in the media, in conjunction with analysis of the types of mentions in these three sources, enables an analysis of policy influence. The context for the research is also described as the foundation for the approach. The application of the three component approach to the case study demonstrates a practical and systematic retrospective approach to measure drug research influence. For example, the ways in which the IDRS and EDRS were mentioned in policy documents demonstrated research utilisation. Policy processes were inclusive of IDRS and EDRS findings, while the media analysis revealed only a small contribution in the context of wider media reporting. Consistent with theories of policy processes, assessing the extent of research influence requires a systematic analysis of policy documents and processes. Development of such analyses and associated methods will better equip researchers to evaluate the impact of research. Copyright © 2012 Elsevier B.V. All rights reserved.
Halai, Ajay D; Woollams, Anna M; Lambon Ralph, Matthew A
2017-01-01
Individual differences in the performance profiles of neuropsychologically-impaired patients are pervasive yet there is still no resolution on the best way to model and account for the variation in their behavioural impairments and the associated neural correlates. To date, researchers have generally taken one of three different approaches: a single-case study methodology in which each case is considered separately; a case-series design in which all individual patients from a small coherent group are examined and directly compared; or, group studies, in which a sample of cases are investigated as one group with the assumption that they are drawn from a homogenous category and that performance differences are of no interest. In recent research, we have developed a complementary alternative through the use of principal component analysis (PCA) of individual data from large patient cohorts. This data-driven approach not only generates a single unified model for the group as a whole (expressed in terms of the emergent principal components) but is also able to capture the individual differences between patients (in terms of their relative positions along the principal behavioural axes). We demonstrate the use of this approach by considering speech fluency, phonology and semantics in aphasia diagnosis and classification, as well as their unique neural correlates. PCA of the behavioural data from 31 patients with chronic post-stroke aphasia resulted in four statistically-independent behavioural components reflecting phonological, semantic, executive-cognitive and fluency abilities. Even after accounting for lesion volume, entering the four behavioural components simultaneously into a voxel-based correlational methodology (VBCM) analysis revealed that speech fluency (speech quanta) was uniquely correlated with left motor cortex and underlying white matter (including the anterior section of the arcuate fasciculus and the frontal aslant tract), phonological skills with regions in the superior temporal gyrus and pars opercularis, and semantics with the anterior temporal stem. Copyright © 2016 The Author(s). Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.
2017-10-01
Information systems (ISs) and information technologies (ITs) play a critical role in large complex gas corporations. Many factors, such as human, organisational and environmental factors, affect ISs in an organisation. Therefore, investigating IS success is considered to be a complex problem. Also, because of the competitive business environment and the high amount of information flow in organisations, new issues like resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The enhancement of the performance can help ISs to perform business tasks efficiently. The data are collected from standard questionnaires. They are then analysed by data envelopment analysis (DEA), selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study for performance assessment and optimisation of a large IS by combining RE and CRM.
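For reference, the DEA step can be sketched as an input-oriented CCR model in multiplier form, solved as one linear program per decision-making unit (the questionnaire-derived input/output matrices are assumptions; the PCA validation and the RE/CRM factor analysis are not reproduced):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiencies(X, Y):
    """Input-oriented CCR DEA (multiplier form).
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns efficiency scores in (0, 1]."""
    n_dmu, n_in = X.shape
    n_out = Y.shape[1]
    scores = np.empty(n_dmu)
    for o in range(n_dmu):
        # decision variables z = [u (output weights), v (input weights)]
        c = np.concatenate([-Y[o], np.zeros(n_in)])            # maximize u . y_o
        A_eq = np.concatenate([np.zeros(n_out), X[o]])[None]   # v . x_o = 1
        b_eq = [1.0]
        A_ub = np.hstack([Y, -X])                               # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n_dmu)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n_out + n_in), method="highs")
        scores[o] = -res.fun
    return scores
```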
Discriminative Ocular Artifact Correction for Feature Learning in EEG Analysis.
Xinyang Li; Cuntai Guan; Haihong Zhang; Kai Keng Ang
2017-08-01
Electrooculogram (EOG) artifact contamination is a common critical issue in general electroencephalogram (EEG) studies as well as in brain-computer interface (BCI) research. It is especially challenging when dedicated EOG channels are unavailable or when there are very few EEG channels available for independent component analysis based ocular artifact removal. It is even more challenging to avoid loss of the signal of interest during the artifact correction process, where the signal of interest can be multiple magnitudes weaker than the artifact. To address these issues, we propose a novel discriminative ocular artifact correction approach for feature learning in EEG analysis. Without extra ocular movement measurements, the artifact is extracted from raw EEG data, which is totally automatic and requires no visual inspection of artifacts. Then, artifact correction is optimized jointly with feature extraction by maximizing oscillatory correlations between trials from the same class and minimizing them between trials from different classes. We evaluate this approach on a real-world EEG dataset comprising 68 subjects performing cognitive tasks. The results showed that the approach is capable of not only suppressing the artifact components but also improving the discriminative power of a classifier with statistical significance. We also demonstrate that the proposed method addresses the confounding issues induced by ocular movements in cognitive EEG study.
Enhanced definition and required examples of common datum imposed by ISO standard
NASA Astrophysics Data System (ADS)
Yan, Yiqing; Bohn, Martin
2017-12-01
In the ISO Geometrical Product Specifications (GPS), the establishment and definition of common datums for geometrical components are not fully specified. There are two main limitations of this standard: firstly, the explanations of the ISO examples of common datums do not match their corresponding definitions; secondly, a full definition of common datum is missing. This paper suggests a new approach for an enhanced definition and concrete examples of common datum and proposes a holistic methodology for the establishment of a common datum for each geometrical component. This research is based on the analysis of the physical behaviour of geometrical components, orientation constraints and invariance classes of datums. This approach fills the definition gaps of common datum in ISO GPS, thereby eliminating those deficits.
Objective determination of image end-members in spectral mixture analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.
1993-01-01
Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
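Once a candidate set of image end-members is chosen, the constrained linear mixing inversion can be sketched as below (a minimal example assuming a pixels-by-bands reflectance array and an end-members-by-bands matrix; the soft sum-to-one weighting is one common implementation choice, and the objective end-member selection step itself is not shown):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixels(cube, endmembers, sum_weight=10.0):
    """Constrained linear spectral unmixing.
    cube: (n_pixels, n_bands) reflectances; endmembers: (n_end, n_bands).
    Returns (n_pixels, n_end) abundance estimates and per-pixel RMS residuals."""
    E = endmembers.T                                    # (n_bands, n_end)
    A = np.vstack([E, sum_weight * np.ones(E.shape[1])])
    abund = np.empty((len(cube), E.shape[1]))
    rms = np.empty(len(cube))
    for i, pixel in enumerate(cube):
        b = np.concatenate([pixel, [sum_weight]])       # soft sum-to-one row
        abund[i], _ = nnls(A, b)                        # non-negative abundances
        rms[i] = np.sqrt(np.mean((E @ abund[i] - pixel) ** 2))
    return abund, rms
```

The per-pixel residual gives a direct check of how well a trial end-member set accounts for the spectral variance, which is the criterion used to refine the set.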
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...
Liu, Shuqiang; Tan, Zhibin; Li, Pingting; Gao, Xiaoling; Zeng, Yuaner; Wang, Shuling
2016-03-20
A HepG2 cell biospecific extraction method combined with high-performance liquid chromatography-electrospray ionization-mass spectrometry (HPLC-ESI-MS) analysis was proposed for screening potential antiatherosclerotic active components in Bupleuri Radix, a well-known Traditional Chinese Medicine (TCM). The hypothesis was that when cells are incubated together with extracts of a TCM, the potential bioactive components in the TCM selectively bind to receptors or channels of the HepG2 cells; the eluate containing the components bound specifically to HepG2 cells was then identified using HPLC-ESI-MS analysis. The potential bioactive components of Bupleuri Radix were investigated using the proposed approach. Five compounds among the saikosaponins of Bupleuri Radix were detected as components that selectively bound to HepG2 cells; among them, two potentially bioactive compounds, saikosaponin b1 and saikosaponin b2 (SSb2), were identified by comparison with chromatograms of standard samples and analysis of their structural characterization by MS. SSb2 was then used to assess the uptake of DiI-labelled high-density lipoprotein (HDL) in HepG2 cells as a measure of antiatherosclerotic activity. The results showed that SSb2, at the indicated concentrations (5, 15, 25, and 40 μM), could remarkably increase the uptake of dioctadecylindocarbocyanine-labelled (DiI) HDL in HepG2 cells (vs. the control group, P<0.01). In conclusion, HepG2 biospecific extraction coupled with HPLC-ESI-MS analysis is a rapid, convenient, and reliable method for screening potential bioactive components in TCM, and SSb2 may be a valuable novel drug agent for the treatment of atherosclerosis. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moulin, D.; Chapuliot, S.; Drubay, B.
For structures like vessels or pipes containing a fluid, the Leak-Before-Break (LBB) assessment requires demonstrating that it is possible, during the lifetime of the component, to detect a rate of leakage due to a possible defect whose growth would result in a leak before break of the component. This LBB assessment can be an important contribution to the overall structural integrity argument for many components. The aim of this paper is to review some practices used for LBB assessment and to describe how some new R&D results have been used to provide a simplified approach to fracture mechanics analysis, and especially the evaluation of crack shape and size during the lifetime of the component.
NASA Astrophysics Data System (ADS)
Cretcher, C. K.
1980-11-01
The various types of solar domestic hot water systems are discussed, including their advantages and disadvantages. The problems that occur in hydronic solar heating systems are reviewed with emphasis on domestic hot water applications. System problems in retrofitting residential buildings are also discussed, including structural and space constraints for various components and subsystems. System design parameters include various collector sizing methods, collector orientation, storage capacity, and heat loss from pipes and tanks. The installation costs are broken down by components and subsystems. The approach used for utility economic impact analysis is reviewed. The simulation is described, and the results of the economic impact analysis are given. A summary assessment is included.
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.
Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J
2017-12-01
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. Here we approach the third component, inferring functional forms based on the information encoded in the functions. We present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis: the inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
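As a minimal, hypothetical illustration of how much information a discrete function of three variables carries (not the authors' specific classification measure), one can tabulate the function over a binary alphabet and compute the entropy of its output:

```python
import numpy as np
from itertools import product
from collections import Counter

def entropy_bits(counts):
    """Shannon entropy (in bits) of a discrete distribution given by counts."""
    p = np.array(list(counts), dtype=float) / sum(counts)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def output_entropy(f, alphabet=(0, 1)):
    """Entropy of a three-variable discrete function's output, assuming
    uniformly sampled inputs."""
    outputs = [f(a, b, c) for a, b, c in product(alphabet, repeat=3)]
    return entropy_bits(Counter(outputs).values())

# Hypothetical three-variable functions over a binary alphabet
xor3 = lambda a, b, c: a ^ b ^ c      # balanced output: 1 bit of entropy
const = lambda a, b, c: 0             # degenerate output: 0 bits
print(output_entropy(xor3), output_entropy(const))
```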
Kumar, Raj; Kumar, Vinay; Sharma, Vishal
2015-06-01
Diffuse reflectance ultraviolet-visible-near-infrared (UV-Vis-NIR) spectroscopy is applied as a means of differentiating various types of writing, office, and photocopy papers (collected from stationery shops in India) on the basis of reflectance and absorbance spectra that otherwise appear almost alike under different illumination conditions. In order to minimize bias, spectra from both sides of the paper were obtained. In addition, three spectra from three different locations (on one side) were recorded, covering the upper, middle, and bottom portions of the paper sample, and the mean average reflectivity of both sides was calculated. A significant difference was observed in the mean average reflectivity of Side A and Side B of the paper using Student's paired t-test. Three different approaches were used for discrimination: (1) qualitative features of the whole set of samples, (2) principal component analysis, and (3) a combination of both approaches. On the basis of the first approach, i.e., qualitative features, a discriminating power (DP) of 96.49% was observed, which shows highly significant results with the UV-Vis-NIR technique. In the second approach the discriminating power is further enhanced by incorporating the principal component analysis (PCA) statistical method, which describes each UV-Vis spectrum in a group through numerical loading values connected to the first few principal components. Together, all components describe 100% of the variance of the samples, but the first three PCs alone are sufficient to explain most of it (PC1 = 51.64%, PC2 = 47.52%, and PC3 = 0.54%); i.e., the first three PCs describe 99.70% of the data. In the third approach, the four samples (C, G, K, and N) out of a total of 19 samples that were not differentiated using qualitative features (approach 1) were subjected to PCA. The first two PCs described 99.37% of the spectral features, and discrimination was achieved by using a loading plot between PC1 and PC2. It is therefore concluded that maximum discrimination of writing, office, and photocopy paper could be achieved on the basis of the second approach. Hence, the present inexpensive analytical method can be appropriate for routine questioned document examination work in forensic laboratories because it provides nondestructive, quantitative, reliable, and repeatable results.
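The PCA step of the second approach can be sketched as follows; the spectra, sample count and wavelength count are synthetic assumptions, but the workflow (decompose, inspect explained variance, compare samples in the PC1-PC2 plane) mirrors the one described above.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix of reflectance spectra: one row per paper sample,
# one column per wavelength (values here are synthetic placeholders).
rng = np.random.default_rng(1)
spectra = rng.random((19, 600))

pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Samples that qualitative inspection could not separate can be compared by
# their coordinates on the first two principal components, as in a PC1-PC2 plot.
print(scores[:, :2])
```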
Doble, Brett; Wordsworth, Sarah; Rogers, Chris A; Welbourn, Richard; Byrne, James; Blazeby, Jane M
2017-08-01
This review aims to evaluate the current literature on the procedural costs of bariatric surgery for the treatment of severe obesity. Using a published framework for the conduct of micro-costing studies for surgical interventions, existing cost estimates from the literature are assessed for their accuracy, reliability and comprehensiveness based on their consideration of seven 'important' cost components. MEDLINE, PubMed, key journals and reference lists of included studies were searched up to January 2017. Eligible studies had to report per-case, total procedural costs for any type of bariatric surgery broken down into two or more individual cost components. A total of 998 citations were screened, of which 13 studies were included for analysis. Included studies were mainly conducted from a US hospital perspective, assessed either gastric bypass or adjustable gastric banding procedures and considered a range of different cost components. The mean total procedural cost across all included studies was US$14,389 (range, US$7423 to US$33,541). No study considered all of the recommended 'important' cost components, and estimation methods were poorly reported. The accuracy, reliability and comprehensiveness of the existing cost estimates are, therefore, questionable. There is a need for a comparative cost analysis of the different approaches to bariatric surgery, with micro-costing identified as the most appropriate costing method. Such an analysis will not only be useful in estimating the relative cost-effectiveness of different surgeries but will also ensure appropriate reimbursement and budgeting by healthcare payers, so that barriers to access to this effective treatment by severely obese patients are minimised.
A cost analysis: processing maple syrup products
Neil K. Huyler; Lawrence D. Garrett
1979-01-01
A cost analysis of processing maple sap to syrup for three fuel types, oil-, wood-, and LP gas-fired evaporators, indicates that: (1) fuel, capital, and labor are the major cost components of processing sap to syrup; (2) wood-fired evaporators show a slight cost advantage over oil- and LP gas-fired evaporators; however, as the cost of wood approaches $50 per cord, wood...
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure lead to complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of such problems, in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
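A minimal Monte Carlo sketch of this kind of simulation, assuming exponential failure and repair times and a hypothetical 1-out-of-2 redundancy failure definition (not the GRASP code itself), is shown below.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_component(mission_time, mtbf, mttr):
    """Return the time intervals during which one component is down."""
    t, down = 0.0, []
    while t < mission_time:
        t += rng.exponential(mtbf)          # time to next failure
        if t >= mission_time:
            break
        repair = rng.exponential(mttr)      # repair duration
        down.append((t, min(t + repair, mission_time)))
        t += repair
    return down

def system_failed(mission_time, mtbfs, mttrs):
    """Hypothetical failure definition: the system fails if both components
    are ever down at the same time (1-out-of-2 redundancy)."""
    downs = [simulate_component(mission_time, f, r) for f, r in zip(mtbfs, mttrs)]
    for a0, a1 in downs[0]:
        for b0, b1 in downs[1]:
            if max(a0, b0) < min(a1, b1):   # overlapping outage intervals
                return True
    return False

n = 5000
fails = sum(system_failed(1000.0, (400.0, 600.0), (20.0, 30.0)) for _ in range(n))
print("estimated mission failure probability:", fails / n)
```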
Multivariate proteomic profiling identifies novel accessory proteins of coated vesicles
Antrobus, Robin; Hirst, Jennifer; Bhumbra, Gary S.; Kozik, Patrycja; Jackson, Lauren P.; Sahlender, Daniela A.
2012-01-01
Despite recent advances in mass spectrometry, proteomic characterization of transport vesicles remains challenging. Here, we describe a multivariate proteomics approach to analyzing clathrin-coated vesicles (CCVs) from HeLa cells. siRNA knockdown of coat components and different fractionation protocols were used to obtain modified coated vesicle-enriched fractions, which were compared by stable isotope labeling of amino acids in cell culture (SILAC)-based quantitative mass spectrometry. 10 datasets were combined through principal component analysis into a “profiling” cluster analysis. Overall, 136 CCV-associated proteins were predicted, including 36 new proteins. The method identified >93% of established CCV coat proteins and assigned >91% correctly to intracellular or endocytic CCVs. Furthermore, the profiling analysis extends to less well characterized types of coated vesicles, and we identify and characterize the first AP-4 accessory protein, which we have named tepsin. Finally, our data explain how sequestration of TACC3 in cytosolic clathrin cages causes the severe mitotic defects observed in auxilin-depleted cells. The profiling approach can be adapted to address related cell and systems biological questions. PMID:22472443
Ortiz-Villanueva, Elena; Tauler, Romà
2017-01-01
Metabolomics is a powerful and widely used approach that aims to screen endogenous small molecules (metabolites) of different families present in biological samples. The large variety of compounds to be determined and their wide diversity of physical and chemical properties have promoted the development of different types of hydrophilic interaction liquid chromatography (HILIC) stationary phases. However, the selection of the most suitable HILIC stationary phase is not straightforward. In this work, four different HILIC stationary phases have been compared to evaluate their potential application for the analysis of a complex mixture of metabolites, a situation similar to that found in non-targeted metabolomics studies. The obtained chromatographic data were analyzed by different chemometric methods to explore the behavior of the considered stationary phases. ANOVA-simultaneous component analysis (ASCA), principal component analysis (PCA) and partial least squares regression (PLS) were used to explore the experimental factors affecting the stationary phase performance, the main similarities and differences among chromatographic conditions used (stationary phase and pH) and the molecular descriptors most useful to understand the behavior of each stationary phase. PMID:29064436
Scampicchio, Matteo; Mimmo, Tanja; Capici, Calogero; Huck, Christian; Innocente, Nadia; Drusch, Stephan; Cesco, Stefano
2012-11-14
Stable isotope values were used to develop a new analytical approach enabling the simultaneous identification of milk samples either processed with different heating regimens or from different geographical origins. The samples consisted of raw, pasteurized (HTST), and ultrapasteurized (UHT) milk from different Italian origins. The approach consisted of the analysis of the isotope ratios δ¹³C and δ¹⁵N for the milk samples and their fractions (fat, casein, and whey). The main finding of this work is that, as the heat processing affects the composition of the milk fractions, changes in δ¹³C and δ¹⁵N were also observed. These changes were used as markers to develop pattern recognition maps based on principal component analysis and supervised classification models, such as linear discriminant analysis (LDA), multivariate regression (MLR), principal component regression (PCR), and partial least squares (PLS). The results provide proof of concept that isotope ratio mass spectrometry can discriminate simultaneously between milk samples according to their geographical origin and type of processing.
E-learning platform for automated testing of electronic circuits using signature analysis method
NASA Astrophysics Data System (ADS)
Gherghina, Cǎtǎlina; Bacivarov, Angelica; Bacivarov, Ioan C.; Petricǎ, Gabriel
2016-12-01
Dependability of electronic circuits can be ensured only through testing of circuit modules. This is done by generating test vectors and applying them to the circuit. Testability should be viewed as a concerted effort to ensure maximum efficiency throughout the product life cycle, from the conception and design stage, through production, to repairs during the product's operation. This paper presents the platform developed by the authors for training in testability in electronics in general, and in using the signature analysis method in particular. The platform highlights the two approaches in the field, namely the analog and the digital signature of circuits. As part of this e-learning platform, a database of signatures of different electronic components has been developed to spotlight different fault detection techniques and, building on these, self-repairing techniques for systems containing such components. An approach for realizing self-testing circuits based on the MATLAB environment and the signature analysis method is proposed. The paper analyses the benefits of the signature analysis method and also simulates signature analyzer performance based on the use of pseudo-random sequences.
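A digital signature can be illustrated with a serial-input linear feedback shift register (LFSR) that compresses a node's response bit stream into a short word; the tap positions and bit stream below are illustrative assumptions, not those of the platform described above.

```python
def signature(bitstream, taps=(16, 12, 9, 7), width=16):
    """Compress a bit stream into a signature with a serial-input LFSR.

    The tap positions here are illustrative, not those of any specific
    commercial signature analyzer.
    """
    reg = 0
    for bit in bitstream:
        feedback = bit
        for t in taps:
            feedback ^= (reg >> (t - 1)) & 1   # XOR input with tapped bits
        reg = ((reg << 1) | feedback) & ((1 << width) - 1)
    return reg

good = [1, 0, 1, 1, 0, 0, 1, 0] * 32           # response of a fault-free node
faulty = good.copy()
faulty[37] ^= 1                                 # a single stuck bit flips the signature
print(hex(signature(good)), hex(signature(faulty)))
```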
Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.
Kiran Kumar, G R; Reddy, M Ramasubba
2018-06-08
Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require the estimation of the background electroencephalogram (EEG) noise components. Even though this leads to improved performance in low signal-to-noise ratio (SNR) conditions, it makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA) because of the additional computational cost. In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach to extract the SSVEP component effectively without extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing the temporal information, and it does not rely on rigid templates to characterize the SSVEP. Data from ten test subjects were used to evaluate the proposed method, and the results demonstrate that periodic component analysis acts as a reliable spatial filter for SSVEP extraction. Statistical tests were performed to validate the results. The experimental results show that πCA provides a significant improvement in accuracy compared to standard CCA and MEC in low SNR conditions, and that it provides better detection accuracy than CCA and accuracy on par with MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.
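A minimal πCA-style sketch, assuming the spatial filter is obtained from a generalized eigenvalue problem between the lag-T (stimulus-period) covariance and the zero-lag covariance; the channel count, sampling rate and synthetic data are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def pica_filter(X, period_samples):
    """One periodic-component spatial filter (illustrative sketch).

    X: (n_channels, n_samples) EEG segment.
    period_samples: SSVEP period in samples (fs / f_stim).
    Maximizes the ratio of the symmetrized lag-T covariance to the zero-lag
    covariance via a generalized eigenvalue problem.
    """
    X = X - X.mean(axis=1, keepdims=True)
    T = int(round(period_samples))
    A, B = X[:, :-T], X[:, T:]
    C_lag = (A @ B.T + B @ A.T) / (2 * A.shape[1])   # symmetrized lag covariance
    C0 = X @ X.T / X.shape[1]
    vals, vecs = eigh(C_lag, C0)                     # generalized eigenproblem
    return vecs[:, -1]                               # most periodic direction

# Hypothetical test: a 10 Hz sinusoid mixed into 8 channels of noise, fs = 250 Hz
rng = np.random.default_rng(0)
fs, f = 250, 10.0
t = np.arange(0, 4, 1 / fs)
source = np.sin(2 * np.pi * f * t)
X = np.outer(rng.standard_normal(8), source) + 0.5 * rng.standard_normal((8, t.size))
w = pica_filter(X, fs / f)
print(abs(np.corrcoef(w @ X, source)[0, 1]))
```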
Moisture and Structural Analysis for High Performance Hybrid Wall Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grin, A.; Lstiburek, J.
2012-09-01
This report describes the work conducted by the Building Science Corporation (BSC) Building America Research Team's 'Energy Efficient Housing Research Partnerships' project. Based on past experience in the Building America program, they have found that combinations of materials and approaches (in other words, systems) usually provide optimum performance. No single manufacturer typically provides all of the components for an assembly, nor has the specific understanding of all the individual components necessary for optimum performance.
Helicopter Drive System on-Condition Maintenance Capability (UH-1/AH-1)
1976-07-01
Only a fragment of the report's table of contents and appendix list survives: factors limiting the performance of the study; assumptions used in the analysis (assembly operability); drive system components and their effectivity; aircraft technical manuals used during the study; and drive system component DA 2410 records, the federal stock number, and current effectivity of these assemblies. The general approach used was to examine the overhaul and accident records.
Mantini, D.; Marzetti, L.; Corbetta, M.; Romani, G.L.; Del Gratta, C.
2017-01-01
Two major non-invasive brain mapping techniques, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have complementary advantages with regard to their spatial and temporal resolution. We propose an approach based on the integration of EEG and fMRI, enabling the EEG temporal dynamics of information processing to be characterized within spatially well-defined fMRI large-scale networks. First, the fMRI data are decomposed into networks by means of spatial independent component analysis (sICA), and those associated with intrinsic activity and/or responding to task performance are selected using information from the related time courses. Next, the EEG data over all sensors are averaged with respect to event timing, thus calculating event-related potentials (ERPs). The ERPs are subjected to temporal ICA (tICA), and the resulting components are localized with the weighted minimum-norm least-squares (WMNLS) algorithm using the task-related fMRI networks as priors. Finally, the temporal contribution of each ERP component in the areas belonging to the fMRI large-scale networks is estimated. The proposed approach has been evaluated on visual target detection data. Our results confirm that two different components, commonly observed in EEG when presenting novel and salient stimuli respectively, are related to neuronal activation in large-scale networks operating at different latencies and associated with different functional processes. PMID:20052528
Dynamic analysis of flexible mechanical systems using LATDYN
NASA Technical Reports Server (NTRS)
Wu, Shih-Chin; Chang, Che-Wei; Housner, Jerrold M.
1989-01-01
A 3-D, finite element based simulation tool for flexible multibody systems is presented. Hinge degrees of freedom are built into the equations of motion to reduce geometric constraints. The approach avoids the difficulty of selecting deformation modes for flexible components that arises with the assumed-mode method. The tool is applied to simulate a practical space structure deployment problem. The results of the examples demonstrate the capability of the code and approach.
A novel approach to analyzing fMRI and SNP data via parallel independent component analysis
NASA Astrophysics Data System (ADS)
Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas
2007-03-01
There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in the cingulate gyrus, multiple frontal gyri, and the superior temporal gyrus. The related SNP component is contributed to significantly by 9 SNPs located in sets of genes, including those coding for apolipoprotein A-I and C-III, malate dehydrogenase 1, and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presence of this SNP component is found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify the interactions between brain functional and genetic information; our findings provide new insight into understanding genetic influences on brain function in a common mental disorder.
A Component-Centered Meta-Analysis of Family-Based Prevention Programs for Adolescent Substance Use
Roseth, Cary J.; Fosco, Gregory M.; Lee, You-kyung; Chen, I-Chien
2016-01-01
Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. PMID:27064553
Assessing value-for-money in maternal and newborn health.
Banke-Thomas, Aduragbemi; Madaj, Barbara; Kumar, Shubha; Ameh, Charles; van den Broek, Nynke
2017-01-01
Responding to increasing demands to demonstrate value-for-money (VfM) for maternal and newborn health interventions, and in the absence of VfM analysis in peer-reviewed literature, this paper reviews VfM components and methods, critiques their applicability, strengths and weaknesses, and proposes how VfM assessments can be improved. VfM comprises four components: economy, efficiency, effectiveness and cost-effectiveness. Both 'economy' and 'efficiency' can be assessed with detailed cost analysis utilising costs obtained from programme accounting data or generic cost databases. Before-and-after studies, case-control studies or randomised controlled trials can be used to assess 'effectiveness'. To assess 'cost-effectiveness', cost-effectiveness analysis (CEA), cost-utility analysis (CUA), cost-benefit analysis (CBA) or social return on investment (SROI) analysis are applicable. Generally, costs can be obtained from programme accounting data or existing generic cost databases. As such, 'economy' and 'efficiency' are relatively easy to assess. However, 'effectiveness' and 'cost-effectiveness', which require establishment of the counterfactual, are more difficult to ascertain. Either a combination of CEA or CUA with tools for assessing other VfM components, or the independent use of CBA or SROI, are alternative approaches proposed to strengthen VfM assessments. Cross-cutting themes such as equity, sustainability, scalability and cultural acceptability should also be assessed, as they provide critical contextual information for interpreting VfM assessments. To select an assessment approach, consideration should be given to the purpose, data availability, stakeholders requiring the findings and the perspectives of programme beneficiaries. Implementers and researchers should work together to improve the quality of assessments. Standardisation around definitions, methodology and the effectiveness measures to be assessed would help.
ICA-based artefact and accelerated fMRI acquisition for improved Resting State Network imaging
Griffanti, Ludovica; Salimi-Khorshidi, Gholamreza; Beckmann, Christian F.; Auerbach, Edward J.; Douaud, Gwenaëlle; Sexton, Claire E.; Zsoldos, Enikő; Ebmeier, Klaus P; Filippini, Nicola; Mackay, Clare E.; Moeller, Steen; Xu, Junqian; Yacoub, Essa; Baselli, Giuseppe; Ugurbil, Kamil; Miller, Karla L.; Smith, Stephen M.
2014-01-01
The identification of resting state networks (RSNs) and the quantification of their functional connectivity in resting-state fMRI (rfMRI) are seriously hindered by the presence of artefacts, many of which overlap spatially or spectrally with RSNs. Moreover, recent developments in fMRI acquisition yield data with higher spatial and temporal resolutions, but may increase artefacts both spatially and/or temporally. Hence the correct identification and removal of non-neural fluctuations is crucial, especially in accelerated acquisitions. In this paper we investigate the effectiveness of three data-driven cleaning procedures, compare standard against higher (spatial and temporal) resolution accelerated fMRI acquisitions, and investigate the combined effect of different acquisitions and different cleanup approaches. We applied single-subject independent component analysis (ICA), followed by automatic component classification with FMRIB’s ICA-based X-noiseifier (FIX) to identify artefactual components. We then compared two first-level (within-subject) cleaning approaches for removing those artefacts and motion-related fluctuations from the data. The effectiveness of the cleaning procedures were assessed using timeseries (amplitude and spectra), network matrix and spatial map analyses. For timeseries and network analyses we also tested the effect of a second-level cleaning (informed by group-level analysis). Comparing these approaches, the preferable balance between noise removal and signal loss was achieved by regressing out of the data the full space of motion-related fluctuations and only the unique variance of the artefactual ICA components. Using similar analyses, we also investigated the effects of different cleaning approaches on data from different acquisition sequences. With the optimal cleaning procedures, functional connectivity results from accelerated data were statistically comparable or significantly better than the standard (unaccelerated) acquisition, and, crucially, with higher spatial and temporal resolution. Moreover, we were able to perform higher dimensionality ICA decompositions with the accelerated data, which is very valuable for detailed network analyses. PMID:24657355
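The preferred "soft" cleanup described above can be sketched as a pair of regressions, assuming the motion confounds and ICA component time courses are already available; this is an illustrative sketch, not the FIX implementation.

```python
import numpy as np

def clean_timeseries(Y, motion, ic_timecourses, noise_idx):
    """Remove motion fully and only the unique variance of artefactual ICs.

    Y:               (n_timepoints, n_voxels) data.
    motion:          (n_timepoints, n_motion) motion-related confounds.
    ic_timecourses:  (n_timepoints, n_ics) all ICA component time courses.
    noise_idx:       indices of components classified as artefactual.
    """
    # 1. Regress out the full space of motion-related fluctuations.
    beta_m, *_ = np.linalg.lstsq(motion, Y, rcond=None)
    Y = Y - motion @ beta_m

    # 2. Fit *all* IC time courses together, then subtract only the part
    #    explained by the artefactual components (their unique variance).
    beta_ic, *_ = np.linalg.lstsq(ic_timecourses, Y, rcond=None)
    return Y - ic_timecourses[:, noise_idx] @ beta_ic[noise_idx, :]

# Hypothetical shapes: 400 volumes, 10000 voxels, 24 motion regressors, 60 ICs
rng = np.random.default_rng(0)
Y = rng.standard_normal((400, 10000))
cleaned = clean_timeseries(Y, rng.standard_normal((400, 24)),
                           rng.standard_normal((400, 60)), noise_idx=[0, 3, 7])
print(cleaned.shape)
```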
Kim, D; Burge, J; Lane, T; Pearlson, G D; Kiehl, K A; Calhoun, V D
2008-10-01
We utilized a discrete dynamic Bayesian network (dDBN) approach (Burge, J., Lane, T., Link, H., Qiu, S., Clark, V.P., 2007. Discrete dynamic Bayesian network analysis of fMRI data. Hum Brain Mapp.) to determine differences in brain regions between patients with schizophrenia and healthy controls on a measure of effective connectivity, termed the approximate conditional likelihood score (ACL) (Burge, J., Lane, T., 2005. Learning Class-Discriminative Dynamic Bayesian Networks. Proceedings of the International Conference on Machine Learning, Bonn, Germany, pp. 97-104.). The ACL score represents a class-discriminative measure of effective connectivity by measuring the relative likelihood of the correlation between brain regions in one group versus another. The algorithm is capable of finding non-linear relationships between brain regions because it uses discrete rather than continuous values and attempts to model temporal relationships with a first-order Markov and stationary assumption constraint (Papoulis, A., 1991. Probability, random variables, and stochastic processes. McGraw-Hill, New York.). Since Bayesian networks are overly sensitive to noisy data, we introduced an independent component analysis (ICA) filtering approach that attempted to reduce the noise found in fMRI data by unmixing the raw datasets into a set of independent spatial component maps. Components that represented noise were removed and the remaining components reconstructed into the dimensions of the original fMRI datasets. We applied the dDBN algorithm to a group of 35 patients with schizophrenia and 35 matched healthy controls using an ICA filtered and unfiltered approach. We determined that filtering the data significantly improved the magnitude of the ACL score. Patients showed the greatest ACL scores in several regions, most markedly the cerebellar vermis and hemispheres. Our findings suggest that schizophrenia patients exhibit weaker connectivity than healthy controls in multiple regions, including bilateral temporal, frontal, and cerebellar regions during an auditory paradigm.
PRA and Risk Informed Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernsen, Sidney A.; Simonen, Fredric A.; Balkey, Kenneth R.
2006-01-01
The Boiler and Pressure Vessel Code (BPVC) of the American Society of Mechanical Engineers (ASME) has introduced a risk-based approach into Section XI, which covers Rules for Inservice Inspection of Nuclear Power Plant Components. The risk-based approach requires the application of probabilistic risk assessments (PRA). Because no industry consensus standard existed for PRAs, ASME has developed a standard to evaluate the quality level of an available PRA needed to support a given risk-based application. The paper describes the PRA standard, Section XI applications of PRAs, and plans for broader applications of PRAs to other ASME nuclear codes and standards. The paper addresses several specific topics of interest to Section XI. An important consideration is the special methods (surrogate components) used to overcome the lack of treatment of passive components in PRAs. The approach allows calculation of conditional core damage probabilities both for component failures that cause initiating events and for failures in standby systems that decrease the availability of these systems. The paper relates the explicit risk-based methods of the new Section XI code cases to the implicit consideration of risk used in the development of Section XI. Other topics include the needed interactions of ISI engineers, plant operating staff, PRA specialists, and members of expert panels that review the risk-based programs.
Judgmental Standard Setting Using a Cognitive Components Model.
ERIC Educational Resources Information Center
McGinty, Dixie; Neel, John H.
A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…
A precipitation regionalization and regime for Iran based on multivariate analysis
NASA Astrophysics Data System (ADS)
Raziei, Tayeb
2018-02-01
Monthly precipitation time series of 155 synoptic stations distributed over Iran, covering the 1990-2014 time period, were used to identify areas with different precipitation time variability and regimes, utilizing S-mode principal component analysis (PCA) and cluster analysis (CA) preceded by T-mode PCA, respectively. Taking into account the maximum loading values of the rotated components, the first approach revealed five sub-regions characterized by different precipitation time variability, while the second method delineated eight sub-regions featuring different precipitation regimes. The sub-regions identified by the two methods, although partly overlapping, differ in their areal extent and complement each other, as they are useful for different purposes and applications. Northwestern Iran and the Caspian Sea area were found to be the two most distinctive Iranian precipitation sub-regions considering both time variability and precipitation regime, since they were well captured, with relatively identical areas, by the two approaches. However, the areal extents of the other three sub-regions identified by the first approach did not coincide with the coverage of their counterpart sub-regions defined by the second approach. The results suggest that the precipitation sub-regions identified by the two methods need not be the same, as the first method, which accounts for the variance of the data, groups stations with similar temporal variability, while the second, which considers a fixed climatology defined by the average over the period 1990-2014, clusters stations having a similar march of monthly precipitation.
Xu, M; Alrubaiee, M; Gayen, S K; Alfano, R R
2005-04-01
A new approach for optical imaging and localization of objects in turbid media that makes use of the independent component analysis (ICA) from information theory is demonstrated. Experimental arrangement realizes a multisource illumination of a turbid medium with embedded objects and a multidetector acquisition of transmitted light on the medium boundary. The resulting spatial diversity and multiple angular observations provide robust data for three-dimensional localization and characterization of absorbing and scattering inhomogeneities embedded in a turbid medium. ICA of the perturbations in the spatial intensity distribution on the medium boundary sorts out the embedded objects, and their locations are obtained from Green's function analysis based on any appropriate light propagation model. Imaging experiments were carried out on two highly scattering samples of thickness approximately 50 times the transport mean-free path of the respective medium. One turbid medium had two embedded absorptive objects, and the other had four scattering objects. An independent component separation of the signal, in conjunction with diffusive photon migration theory, was used to locate the embedded inhomogeneities. In both cases, improved lateral and axial localizations of the objects over the result obtained by use of common photon migration reconstruction algorithms were achieved. The approach is applicable to different medium geometries, can be used with any suitable photon propagation model, and is amenable to near-real-time imaging applications.
Progress Toward Efficient Laminar Flow Analysis and Design
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Campbell, Matthew L.; Streit, Thomas
2011-01-01
A multi-fidelity system of computer codes for the analysis and design of vehicles having extensive areas of laminar flow is under development at the NASA Langley Research Center. The overall approach consists of the loose coupling of a flow solver, a transition prediction method and a design module using shell scripts, along with interface modules to prepare the input for each method. This approach allows the user to select the flow solver and transition prediction module, as well as run mode for each code, based on the fidelity most compatible with the problem and available resources. The design module can be any method that designs to a specified target pressure distribution. In addition to the interface modules, two new components have been developed: 1) an efficient, empirical transition prediction module (MATTC) that provides n-factor growth distributions without requiring boundary layer information; and 2) an automated target pressure generation code (ATPG) that develops a target pressure distribution that meets a variety of flow and geometry constraints. The ATPG code also includes empirical estimates of several drag components to allow the optimization of the target pressure distribution. The current system has been developed for the design of subsonic and transonic airfoils and wings, but may be extendable to other speed ranges and components. Several analysis and design examples are included to demonstrate the current capabilities of the system.
Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani
2014-02-15
Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, due to its outstanding advantages. For fMRI data acquired during free-listening experiences, only a few exploratory studies have applied ICA. For processing the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. The spatial maps extracted by the new ICA approach that were common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and they were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to share the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA. Pre-processing the fMRI data with a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA of fMRI data acquired under naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps. Copyright © 2013 Elsevier B.V. All rights reserved.
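The FFT-based band-pass pre-processing step can be sketched as follows; the pass band, repetition time and synthetic time series are assumptions for illustration, not the study's actual settings.

```python
import numpy as np

def fft_bandpass(x, fs, low, high):
    """Zero out spectral components outside [low, high] Hz (a simple sketch,
    not the exact filter used in the study)."""
    X = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(X * mask, n=x.shape[-1], axis=-1)

# Hypothetical voxel time series: TR = 2 s (fs = 0.5 Hz), keep 0.01-0.1 Hz
rng = np.random.default_rng(3)
ts = rng.standard_normal((10, 256))          # 10 voxels, 256 volumes
filtered = fft_bandpass(ts, fs=0.5, low=0.01, high=0.1)
print(filtered.shape)
```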
Robust 2DPCA with non-greedy l1-norm maximization for image analysis.
Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli
2015-05-01
2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied because of the difficulty of directly solving the l1-norm maximization problem; such a strategy, however, easily gets stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
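A minimal sketch of a non-greedy l1-norm 2DPCA iteration, following the general recipe of updating all projection directions at once via an SVD-based orthogonalization; this is an illustrative sketch under that assumption, not necessarily the paper's exact algorithm.

```python
import numpy as np

def l1_2dpca_nongreedy(images, k, n_iter=50, seed=0):
    """Find W (w x k, orthonormal columns) that (locally) maximizes
    sum_i || X_i W ||_1, updating all k directions simultaneously.

    images: iterable of (h, w) image matrices.
    """
    rng = np.random.default_rng(seed)
    w_dim = images[0].shape[1]
    W, _ = np.linalg.qr(rng.standard_normal((w_dim, k)))
    for _ in range(n_iter):
        M = sum(X.T @ np.sign(X @ W) for X in images)
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W = U @ Vt                      # closest orthonormal matrix to M
    return W

# Hypothetical usage on synthetic 20x15 "images"
rng = np.random.default_rng(1)
imgs = [rng.standard_normal((20, 15)) for _ in range(30)]
W = l1_2dpca_nongreedy(imgs, k=3)
print(W.shape, np.allclose(W.T @ W, np.eye(3)))
```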
Differential principal component analysis of ChIP-seq.
Ji, Hongkai; Li, Xia; Wang, Qian-fei; Ning, Yang
2013-04-23
We propose differential principal component analysis (dPCA) for analyzing multiple ChIP-sequencing datasets to identify differential protein-DNA interactions between two biological conditions. dPCA integrates unsupervised pattern discovery, dimension reduction, and statistical inference into a single framework. It uses a small number of principal components to summarize concisely the major multiprotein synergistic differential patterns between the two conditions. For each pattern, it detects and prioritizes differential genomic loci by comparing the between-condition differences with the within-condition variation among replicate samples. dPCA provides a unique tool for efficiently analyzing large amounts of ChIP-sequencing data to study dynamic changes of gene regulation across different biological conditions. We demonstrate this approach through analyses of differential chromatin patterns at transcription factor binding sites and promoters as well as allele-specific protein-DNA interactions.
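A loose sketch of the core idea, PCA applied to between-condition differences across datasets, is given below with synthetic counts; the published dPCA additionally models replicate variation and performs statistical inference, which this sketch omits.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: read counts at n_loci genomic sites for d ChIP-seq
# datasets, in two conditions with two replicates each (synthetic values).
rng = np.random.default_rng(0)
n_loci, d = 5000, 6
cond1 = rng.poisson(20, size=(2, n_loci, d))     # replicates x loci x datasets
cond2 = rng.poisson(20, size=(2, n_loci, d))

diff = cond2.mean(axis=0) - cond1.mean(axis=0)   # loci x datasets difference matrix
pca = PCA(n_components=2)
scores = pca.fit_transform(diff)                 # per-locus scores on each pattern
print(pca.components_)                           # multi-dataset differential patterns

# Loci with the largest |score| on a component are candidates for that
# differential pattern; the full method also compares these between-condition
# differences with the within-condition variation among replicates.
```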
NASA Technical Reports Server (NTRS)
Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.
1986-01-01
An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code which is based upon SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code which is based upon SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and, progress towards future hybrid and 3D versions are discussed in this paper.
Common relationships among proximate composition components in fishes
Hartman, K.J.; Margraf, F.J.
2008-01-01
Relationships between the various body proximate components and dry matter content were examined for five species of fishes, representing anadromous, marine and freshwater species: chum salmon Oncorhynchus keta, Chinook salmon Oncorhynchus tshawytscha, brook trout Salvelinus fontinalis, bluefish Pomatomus saltatrix and striped bass Morone saxatilis. The dry matter content or per cent dry mass of these fishes can be used to reliably predict the per cent composition of the other components. Therefore, with validation it is possible to estimate fat, protein and ash content of fishes from per cent dry mass information, reducing the need for costly and time-consuming laboratory proximate analysis. This approach coupled with new methods of non-lethal estimation of per cent dry mass, such as from bioelectrical impedance analysis, can provide non-destructive measurements of proximate composition of fishes. © 2008 The Authors.
Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics
Belo, David; Gamboa, Hugo
2017-01-01
The paper presents the results of an accuracy analysis of machine learning approaches applied to cardiac activity data. The study evaluates the possibility of diagnosing arterial hypertension by means of short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from grade II-III arterial hypertension. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with a radial basis kernel, decision trees, and the naive Bayes classifier. In addition, different methods of feature extraction were analyzed: statistical, spectral, wavelet, and multifractal. In all, 53 features were investigated. The results show that discriminant analysis achieves the highest classification accuracy. The suggested approach of searching for an uncorrelated feature set achieved higher accuracy than a feature set based on the principal components. PMID:28831239
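A comparison of this kind can be sketched with scikit-learn as follows; the feature matrix is synthetic and uses only a hypothetical subset of the 53 features, so the numbers it prints are placeholders rather than the study's results.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 70 subjects (30 healthy, 40 hypertensive) described by a
# hypothetical subset of 12 HRV-derived features; values are placeholders.
rng = np.random.default_rng(0)
X = rng.standard_normal((70, 12))
y = np.r_[np.zeros(30, dtype=int), np.ones(40, dtype=int)]

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```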
Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W
2018-01-01
Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks but which, we argue, make inappropriate assumptions regarding data noise (i.e. uniform and Gaussian). Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29091994
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
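A textbook graph-theory construction of a network admittance matrix from a branch incidence matrix is sketched below; the 3-node network and per-unit admittances are hypothetical, and the paper's generalized line and transformer component models are not reproduced.

```python
import numpy as np

def generalized_admittance(incidence, branch_admittances):
    """Build a network admittance matrix as Y = A^T diag(y_branch) A.

    incidence: (n_branches, n_nodes) array with +1 at the sending node and
               -1 at the receiving node of each branch; note that no node is
               removed as a voltage reference.
    """
    Yb = np.diag(branch_admittances)
    return incidence.T @ Yb @ incidence

# Hypothetical 3-node, 3-branch single-phase network (per-unit admittances)
A = np.array([[ 1, -1,  0],
              [ 0,  1, -1],
              [ 1,  0, -1]], dtype=float)
y = np.array([5.0 - 15.0j, 4.0 - 12.0j, 2.0 - 6.0j])
print(generalized_admittance(A, y))
```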
CONFIG: Integrated engineering of systems and their operation
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
This article discusses CONFIG 3, a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operations of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. CONFIG supports integration among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. CONFIG is designed to support integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems.
NASA Technical Reports Server (NTRS)
Price J. M.; Ortega, R.
1998-01-01
Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure-mode level.
Use of Multiscale Entropy to Facilitate Artifact Detection in Electroencephalographic Signals
Mariani, Sara; Borges, Ana F. T.; Henriques, Teresa; Goldberger, Ary L.; Costa, Madalena D.
2016-01-01
Electroencephalographic (EEG) signals present a myriad of challenges to analysis, beginning with the detection of artifacts. Prior approaches to noise detection have utilized multiple techniques, including visual methods, independent component analysis and wavelets. However, no single method is broadly accepted, inviting alternative ways to address this problem. Here, we introduce a novel approach based on a statistical physics method, multiscale entropy (MSE) analysis, which quantifies the complexity of a signal. We postulate that noise corrupted EEG signals have lower information content, and, therefore, reduced complexity compared with their noise free counterparts. We test the new method on an open-access database of EEG signals with and without added artifacts due to electrode motion. PMID:26738116
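A minimal sketch of the MSE computation (coarse-graining followed by sample entropy), assuming the standard non-overlapping-average coarse-graining and a tolerance proportional to the signal's standard deviation; this is an illustration, not the authors' implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy SampEn(m, r) of a 1-D signal (simple O(N^2) sketch)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(length):
        # ordered template pairs (i != j) whose Chebyshev distance is <= r
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(dists <= r) - len(templates)

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    """MSE curve: sample entropy of non-overlapping coarse-grained averages."""
    mse = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)
        mse.append(sample_entropy(coarse))
    return np.array(mse)

# Illustrative check on synthetic data: for white noise, SampEn typically
# decreases with scale, whereas correlated signals stay flatter.
rng = np.random.default_rng(0)
print(multiscale_entropy(rng.standard_normal(1000), max_scale=5))
```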
[The Construction of a Mental Health Component for the National Survey: NMHS 2015, Colombia].
de Santacruz, Cecilia; Torres, Nubia; Gómez-Restrepo, Carlos; Matallana, Diana; Borda, Juan Pablo
2016-12-01
Usually, mental health has been defined as the absence of mental disorders. In order to approach this concept from a health (sanitas) rather than an illness perspective, it was considered necessary to construct a component for the National Mental Health Survey 2015 (NMHS; ENSM for its acronym in Spanish) that would respond to the specific orientation and particularities of the country. To describe the structure and contents of the mental health component of the NMHS 2015 for the Colombian population over 7 years of age. Review, documentary analysis and discussion of the concepts and tools with the team in charge of the NMHS and other groups. In total, 353 documents were reviewed, and 180 were analysed and discussed. The component model is presented, considering the ethical dimension of relationship care as a main element; it merges two inquiry dimensions or categories: subjective-relational and social-collective. The structured mental health component provides information regarding the entire population. It also allows the concept of mental health to be understood and approached as a personal and collective "good life". Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Using Neural Networks for Sensor Validation
NASA Technical Reports Server (NTRS)
Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William
1998-01-01
This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
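The auto-associative network in the second approach can be pictured as an autoencoder trained to reproduce the full sensor vector, so that the reconstruction of a faulty channel serves as its estimate. The following is a minimal sketch of that idea using scikit-learn's MLPRegressor as a stand-in; the synthetic engine states, mixing matrix, and failed-sensor index are illustrative assumptions, not the paper's setup.

```python
# Auto-associative (autoencoder-style) network for sensor estimation: the
# bottleneck hidden layer plays the role of nonlinear principal components.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
# Five redundant sensors driven by two underlying engine states (hypothetical).
states = np.column_stack([np.sin(t), np.cos(0.5 * t)])
mixing = rng.normal(size=(2, 5))
X = states @ mixing + 0.01 * rng.normal(size=(500, 5))

Xs = StandardScaler().fit_transform(X)

# Train the network to reproduce its own inputs through a narrow bottleneck.
net = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=5000, random_state=0)
net.fit(Xs, Xs)

# Simulate a failed sensor (index 3) by zeroing its standardized reading,
# then use the network's reconstruction as the estimate for that sensor.
X_fail = Xs.copy()
X_fail[:, 3] = 0.0
estimate = net.predict(X_fail)[:, 3]
print("reconstruction error on failed channel:",
      float(np.mean((estimate - Xs[:, 3]) ** 2)))
```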
Direct process estimation from tomographic data using artificial neural systems
NASA Astrophysics Data System (ADS)
Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.
2001-07-01
The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, due to its low cost, non-intrusiveness, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used with their corresponding component fractions to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned simulated and plant ECT measurements. Results included in the paper have a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well-known direct ECT method.
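The processing chain described above (reduce the capacitance measurements with PCA, then regress component fractions with a feed-forward network) can be sketched compactly. The sensitivity matrix, noise level, and network size below are synthetic assumptions for illustration only.

```python
# Hedged sketch of a PCA + multilayer feed-forward network regressor for
# component fractions from simulated ECT-like measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_samples, n_electrode_pairs = 400, 66     # 12-electrode sensor -> 66 pair readings
fractions = rng.dirichlet(np.ones(3), size=n_samples)    # gas/oil/water fractions
sensitivity = rng.normal(size=(3, n_electrode_pairs))     # hypothetical forward model
measurements = fractions @ sensitivity + 0.02 * rng.normal(size=(n_samples, n_electrode_pairs))

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                  # mutually uncorrelated reduced inputs
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=1),
)
model.fit(measurements[:300], fractions[:300])
pred = model.predict(measurements[300:])
print("mean absolute error:", float(np.abs(pred - fractions[300:]).mean()))
```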
Developing a GIS for CO2 analysis using lightweight, open source components
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Kulawik, S. S.; Law, E.; Osterman, G. B.; Braverman, A.; Nguyen, H. M.; Mattmann, C. A.; Crichton, D. J.; Eldering, A.; Castano, R.; Gunson, M. R.
2012-12-01
There are advantages to approaching the realm of geographic information systems (GIS) using lightweight, open source components in place of a more traditional web map service (WMS) solution. Rapid prototyping, schema-less data storage, the flexible interchange of components, and open source community support are just some of the benefits. In our effort to develop an application supporting the geospatial and temporal rendering of remote sensing carbon-dioxide (CO2) data for the CO2 Virtual Science Data Environment project, we have connected heterogeneous open source components together to form a GIS. Utilizing widely popular open source components including the schema-less database MongoDB, Leaflet interactive maps, the HighCharts JavaScript graphing library, and Python Bottle web-services, we have constructed a system for rapidly visualizing CO2 data with reduced up-front development costs. These components can be aggregated together, resulting in a configurable stack capable of replicating features provided by more standard GIS technologies. The approach we have taken is not meant to replace the more established GIS solutions, but to instead offer a rapid way to provide GIS features early in the development of an application and to offer a path towards utilizing more capable GIS technology in the future.
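To make the component stack concrete, the sketch below shows how a small Python Bottle service backed by MongoDB could hand CO2 soundings to a Leaflet/HighCharts front end. This is an illustrative example, not the project's actual code; the collection and field names ("soundings", "lat", "lon", "xco2") and the URL are hypothetical.

```python
# Minimal Bottle + pymongo web service returning soundings inside a bounding box.
from bottle import Bottle, request, response
from pymongo import MongoClient
import json

app = Bottle()
db = MongoClient("mongodb://localhost:27017")["co2_vsde"]   # hypothetical database name

@app.get("/co2")
def co2_in_bbox():
    """Return CO2 soundings inside a bounding box as a JSON feature list."""
    west, south, east, north = (float(request.query[k])
                                for k in ("west", "south", "east", "north"))
    cursor = db["soundings"].find(
        {"lat": {"$gte": south, "$lte": north},
         "lon": {"$gte": west, "$lte": east}},
        {"_id": 0})
    response.content_type = "application/json"
    return json.dumps({"features": list(cursor)})

if __name__ == "__main__":
    app.run(host="localhost", port=8080)
```

Because the store is schema-less, new fields (quality flags, averaging-kernel summaries) can be added to documents without migrations, which is part of the rapid-prototyping appeal described above.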
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem in the calibration of hydrological models is the equifinality of different parameter sets derived from calibrating models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation and tackling equifinality. We developed a model-dependent approach (MDA) calibrating the model runoff components against the FSD components, and a model-independent approach (MIA) comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest when applying MDA and shows only minor reductions for MIA. Besides further validation of FSD, the next steps include an extension of the study to different catchments and other hydrological models with a similar structure.
Comparative study of human blood Raman spectra and biochemical analysis of patients with cancer
NASA Astrophysics Data System (ADS)
Shamina, Lyudmila A.; Bratchenko, Ivan A.; Artemyev, Dmitry N.; Myakinin, Oleg O.; Moryatov, Alexander A.; Orlov, Andrey E.; Kozlov, Sergey V.; Zakharov, Valery P.
2018-04-01
In this study we measured spectral features of blood by Raman spectroscopy. The correlation between the obtained spectral data and the results of biochemical studies is investigated. Analysis of specific spectra allows for identification of informative spectral bands proportional to components whose content is associated with changes in body-fluid homeostasis under various pathological conditions. Regression analysis of the obtained spectral data allows for discriminating lung cancer from other tumors with a posteriori probability of 88.3%. The potential of applying surface-enhanced Raman spectroscopy with the utilized experimental setup for further studies of body-fluid component composition was estimated. The greatest signal amplification was achieved for the gold substrate with a surface roughness of 1 μm. In general, the developed approach to body-fluid analysis provides the basis for a useful and minimally invasive method of pathology screening.
Shape optimization of three-dimensional stamped and solid automotive components
NASA Technical Reports Server (NTRS)
Botkin, M. E.; Yang, R.-J.; Bennett, J. A.
1987-01-01
The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process: modeling, mesh generation, finite element and sensitivity analysis, and optimization are stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Matasci, Naim
2011-03-01
The explosion of online scientific data from experiments, simulations, and observations has given rise to an avalanche of algorithmic, visualization and imaging methods. There has also been enormous growth in the introduction of tools that provide interactive interfaces for exploring these data dynamically. Most systems, however, do not support the real-time exploration of patterns and relationships across tools and do not provide guidance on which colors, colormaps or visual metaphors will be most effective. In this paper, we introduce a general architecture for sharing metadata between applications and a "Metadata Mapper" component that allows the analyst to decide how metadata from one component should be represented in another, guided by perceptual rules. This system is designed to support "brushing" [1], in which highlighting a region of interest in one application automatically highlights corresponding values in another, allowing the scientist to develop insights from multiple sources. Our work builds on the component-based iPlant Cyberinfrastructure [2] and provides a general approach to supporting interactive exploration across independent visualization and visual analysis components.
YÜCEL, MERYEM A.; SELB, JULIETTE; COOPER, ROBERT J.; BOAS, DAVID A.
2014-01-01
As near-infrared spectroscopy (NIRS) broadens its application area to different age and disease groups, motion artifacts in the NIRS signal due to subject movement are becoming an important challenge. Motion artifacts generally produce signal fluctuations that are larger than physiological NIRS signals, so it is crucial to correct for them before obtaining an estimate of stimulus-evoked hemodynamic responses. There are various methods for correction, such as principal component analysis (PCA), wavelet-based filtering and spline interpolation. Here, we introduce a new approach to motion artifact correction, targeted principal component analysis (tPCA), which incorporates a PCA filter only on the segments of data identified as motion artifacts. It is expected that this will overcome the issue of filtering out desired signals that plagues standard PCA filtering of entire data sets. We compared the new approach with the most effective motion artifact correction algorithms on a set of data acquired simultaneously with a collodion-fixed probe (low motion artifact content) and a standard Velcro probe (high motion artifact content). Our results show that tPCA gives statistically better results in recovering the hemodynamic response function (HRF) as compared to wavelet-based filtering and spline interpolation for the Velcro probe. It results in a significant reduction in mean-squared error (MSE) and significant enhancement in Pearson’s correlation coefficient to the true HRF. The collodion-fixed fiber probe with no motion correction performed better than the Velcro probe corrected for motion artifacts in terms of MSE and Pearson’s correlation coefficient. Thus, if the experimental study permits, the use of a collodion-fixed fiber probe may be desirable. If the use of a collodion-fixed probe is not feasible, then we suggest the use of tPCA in the processing of motion artifact contaminated data. PMID:25360181
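The core of the tPCA idea, restricting the PCA filter to the artifact-flagged samples and projecting out the dominant components only there, can be written in a few lines of numpy. The toy data, artifact mask, and number of removed components below are illustrative assumptions rather than the paper's processing parameters.

```python
# Minimal numpy sketch of targeted PCA (tPCA): PCA is fit on artifact
# segments only and the top components are removed within those segments.
import numpy as np

def tpca_correct(data, artifact_mask, n_remove=2):
    """data: (time, channels) NIRS array; artifact_mask: boolean (time,)."""
    corrected = data.copy()
    seg = data[artifact_mask]                      # stack of artifact samples
    seg_mean = seg.mean(axis=0)
    centered = seg - seg_mean
    # PCA of the artifact segments via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = vt[:n_remove]                            # dominant artifact components
    # Project out the dominant components within the artifact segments only.
    corrected[artifact_mask] = seg_mean + centered - centered @ top.T @ top
    return corrected

# Toy usage: 1000 samples, 8 channels, with a shared spike-like artifact.
rng = np.random.default_rng(0)
nirs = rng.normal(scale=0.1, size=(1000, 8))
nirs[400:450] += 5.0 * rng.normal(size=(50, 1))    # motion artifact across channels
mask = np.zeros(1000, dtype=bool)
mask[400:450] = True
cleaned = tpca_correct(nirs, mask)
```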
Use of a Principal Components Analysis for the Generation of Daily Time Series.
NASA Astrophysics Data System (ADS)
Dreveton, Christine; Guillou, Yann
2004-07-01
A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges of a generator when used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent correctly the present climate. The results obtained with the generator show that it represents correctly the interannual variance of the observed climate; this is the main result of the work, because one of the main discrepancies of other generators is their inability to accurately represent the observed interannual climate variance, and this discrepancy is not acceptable for an application to weather derivatives. The generator was also tested to calculate conditional probabilities: for example, the knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
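The generation scheme can be illustrated as follows: arrange the multi-year daily record as a years-by-days matrix, decompose it with PCA, sample new component scores independently, and reconstruct synthetic years. The Gaussian sampling of scores and the synthetic input record are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of a PCA-based daily temperature generator.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_years, n_days = 30, 365
doy = np.arange(n_days)
seasonal = 12.0 + 8.0 * np.sin(2 * np.pi * (doy - 100) / 365.0)
temps = seasonal + rng.normal(scale=3.0, size=(n_years, n_days))    # deg C, detrended

pca = PCA(n_components=10)
scores = pca.fit_transform(temps)             # (years x components), uncorrelated

# Generate new years by sampling each component score independently.
new_scores = rng.normal(loc=scores.mean(axis=0),
                        scale=scores.std(axis=0),
                        size=(100, scores.shape[1]))
synthetic_years = pca.inverse_transform(new_scores)
print("interannual std, observed vs synthetic:",
      float(temps.mean(axis=1).std()), float(synthetic_years.mean(axis=1).std()))
```

Comparing the interannual standard deviation of observed and synthetic yearly means, as in the last line, is one simple way to check the property the abstract emphasizes.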
Song, Hui; Peng, Yingwei; Tu, Dongsheng
2017-04-01
Motivated by the joint analysis of longitudinal quality-of-life data and recurrence-free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined to a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit-transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.
Claus, Maren; Dychus, Nicole; Ebel, Melanie; Damaschke, Jürgen; Maydych, Viktoriya; Wolf, Oliver T; Kleinsorge, Thomas; Watzl, Carsten
2016-10-01
The immune system is essential to provide protection from infections and cancer. Disturbances in immune function can therefore directly affect the health of the affected individual. Many extrinsic and intrinsic factors such as exposure to chemicals, stress, nutrition and age have been reported to influence the immune system. These influences can affect various components of the immune system, and we are just beginning to understand the causalities of these changes. To investigate such disturbances, it is therefore essential to analyze the different components of the immune system in a comprehensive fashion. Here, we demonstrate such an approach which provides information about total number of leukocytes, detailed quantitative and qualitative changes in the composition of lymphocyte subsets, cytokine levels in serum and functional properties of T cells, NK cells and monocytes. Using samples from a cohort of 24 healthy volunteers, we demonstrate the feasibility of our approach to detect changes in immune functions.
Automated Environment Generation for Software Model Checking
NASA Technical Reports Server (NTRS)
Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.
2003-01-01
A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.
Cuthbertson, Daniel; Andrews, Preston K.; Reganold, John P.; Davies, Neal M.; Lange, B. Markus
2012-01-01
A gas chromatography–mass spectrometry approach was employed to evaluate the use of metabolite patterns to differentiate fruit from six commercially grown apple cultivars harvested in 2008. Principal component analysis (PCA) of apple fruit peel and flesh data indicated that individual cultivar replicates clustered together and were separated from all other cultivar samples. An independent metabolomics investigation with fruit harvested in 2003 confirmed the separate clustering of fruit from different cultivars. Further evidence for cultivar separation was obtained using a hierarchical clustering analysis. An evaluation of PCA component loadings revealed specific metabolite classes that contributed the most to each principal component, whereas a correlation analysis demonstrated that specific metabolites correlate directly with quality traits such as antioxidant activity, total phenolics, and total anthocyanins, which are important parameters in the selection of breeding germplasm. These data sets lay the foundation for elucidating the metabolic basis of commercially important fruit quality traits. PMID:22881116
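The analysis pattern described above (PCA score plot, loading inspection, and a hierarchical clustering check on cultivar separation) can be sketched on a synthetic metabolite table; the cultivar counts and metabolite numbers below are placeholders, not the study's data.

```python
# PCA plus Ward hierarchical clustering of a simulated metabolite profile table.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
n_cultivars, n_reps, n_metabolites = 6, 4, 120
centers = rng.normal(scale=2.0, size=(n_cultivars, n_metabolites))
X = np.vstack([c + rng.normal(scale=0.3, size=(n_reps, n_metabolites)) for c in centers])

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)

# Loadings indicate which metabolites contribute most to each principal component.
top_loadings = np.argsort(np.abs(pca.components_[0]))[::-1][:5]

# Hierarchical clustering as an independent check on cultivar separation.
clusters = fcluster(linkage(Xs, method="ward"), t=n_cultivars, criterion="maxclust")
print("explained variance:", pca.explained_variance_ratio_,
      "top metabolites on PC1:", top_loadings, "clusters:", clusters)
```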
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulation on computer data and on real fMRI rest data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.
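A bare-bones spatial ICA decomposition of fMRI-like data is sketched below with scikit-learn's FastICA; the paper's linear projection step is only hinted at by correlating the recovered time courses with two task references. The design, maps, and noise level are synthetic placeholders.

```python
# Spatial ICA on simulated two-task fMRI-like data (voxels treated as samples).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_time, n_voxels = 200, 5000
# Two spatially overlapping (hence statistically dependent) task maps.
map_a = rng.binomial(1, 0.05, size=n_voxels).astype(float)
map_b = map_a * 0.5 + rng.binomial(1, 0.05, size=n_voxels)
task_a = np.tile([1.0] * 10 + [0.0] * 10, 10)          # block design A
task_b = np.roll(task_a, 5)                             # block design B
data = np.outer(task_a, map_a) + np.outer(task_b, map_b)
data += 0.5 * rng.normal(size=(n_time, n_voxels))

ica = FastICA(n_components=10, random_state=3, max_iter=1000)
maps = ica.fit_transform(data.T).T        # spatial ICA: components are maps
timecourses = ica.mixing_                  # (time x components)

# Crude association of each component's time course with task reference A.
corr_a = np.corrcoef(np.column_stack([task_a, timecourses]), rowvar=False)[0, 1:]
print("component most related to task A:", int(np.argmax(np.abs(corr_a))))
```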
Ferreiro-González, Marta; Barbero, Gerardo F; Álvarez, José A; Ruiz, Antonio; Palma, Miguel; Ayuso, Jesús
2017-04-01
Adulteration of olive oil is not only a major economic fraud but can also have major health implications for consumers. In this study, a combination of visible spectroscopy with a novel multivariate curve resolution method (CR), principal component analysis (PCA) and linear discriminant analysis (LDA) is proposed for the authentication of virgin olive oil (VOO) samples. VOOs are well-known products with the typical properties of a two-component system due to the two main groups of compounds that contribute to the visible spectra (chlorophylls and carotenoids). Application of the proposed CR method to VOO samples provided the two pure-component spectra for the aforementioned families of compounds. A correlation study of the real spectra and the resolved component spectra was carried out for different types of oil samples (n=118). LDA using the correlation coefficients as variables to discriminate samples allowed the authentication of 95% of virgin olive oil samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
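A compact way to illustrate the discrimination stage is a spectra-to-PCA-to-LDA pipeline; the curve-resolution and correlation-coefficient steps of the paper are replaced here by plain PCA for brevity, and the visible spectra and class labels are simulated.

```python
# Hedged sketch: simulated visible spectra of authentic vs adulterated oils,
# reduced with PCA and classified with linear discriminant analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
wavelengths = np.linspace(400, 700, 150)                 # visible range, nm

def band(center, width):                                  # smooth spectral band
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

n_per_class = 60
chlorophyll, carotenoid = band(670, 20), band(480, 30)
voo = 1.0 * chlorophyll + 0.8 * carotenoid + 0.05 * rng.normal(size=(n_per_class, 150))
adulterated = 0.4 * chlorophyll + 1.2 * carotenoid + 0.05 * rng.normal(size=(n_per_class, 150))
X = np.vstack([voo, adulterated])
y = np.array([1] * n_per_class + [0] * n_per_class)       # 1 = authentic VOO

clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```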
Leucocyte classification for leukaemia detection using image processing techniques.
Putzu, Lorenzo; Caocci, Giovanni; Di Ruberto, Cecilia
2014-11-01
The counting and classification of blood cells allow for the evaluation and diagnosis of a vast number of diseases. The analysis of white blood cells (WBCs) allows for the detection of acute lymphoblastic leukaemia (ALL), a blood cancer that can be fatal if left untreated. Currently, the morphological analysis of blood cells is performed manually by skilled operators. However, this method has numerous drawbacks, such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Few examples of automated systems that can analyse and classify blood cells have been reported in the literature, and most of these systems are only partially developed. This paper presents a complete and fully automated method for WBC identification and classification using microscopic images. In contrast to other approaches that identify the nuclei first, which are more prominent than other components, the proposed approach isolates the whole leucocyte and then separates the nucleus and cytoplasm. This approach is necessary to analyse each cell component in detail. From each cell component, different features, such as shape, colour and texture, are extracted using a new approach for background pixel removal. This feature set was used to train different classification models in order to determine which one is most suitable for the detection of leukaemia. Using our method, 245 of 267 total leucocytes were properly identified (92% accuracy) from 33 images taken with the same camera and under the same lighting conditions. Performing this evaluation using different classification models allowed us to establish that the support vector machine with a Gaussian radial basis kernel is the most suitable model for the identification of ALL, with an accuracy of 93% and a sensitivity of 98%. Furthermore, we evaluated the goodness of our new feature set, which displayed better performance with each evaluated classification model. The proposed method permits the analysis of blood cells automatically via image processing techniques, and it represents a medical tool to avoid the numerous drawbacks associated with manual observation. This process could also be used for counting, as it provides excellent performance and allows for early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
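The final classification stage named above, a support vector machine with a Gaussian radial basis kernel trained on per-cell shape/colour/texture descriptors, is sketched below. The feature table is synthetic and stands in for the descriptors extracted from segmented nuclei and cytoplasm.

```python
# RBF-kernel SVM on a simulated per-cell feature table (1 = ALL-suspect cell).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_cells, n_features = 267, 30
X = rng.normal(size=(n_cells, n_features))
y = rng.binomial(1, 0.3, size=n_cells)            # 1 = lymphoblast (illustrative labels)
X[y == 1, :5] += 1.5                              # injected class separation

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```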
A Direct Approach to In-Plane Stress Separation using Photoelastic Ptychography
NASA Astrophysics Data System (ADS)
Anthony, Nicholas; Cadenazzi, Guido; Kirkwood, Henry; Huwald, Eric; Nugent, Keith; Abbey, Brian
2016-08-01
The elastic properties of materials, either under external load or in a relaxed state, influence their mechanical behaviour. Conventional optical approaches based on techniques such as photoelasticity or thermoelasticity can be used for full-field analysis of the stress distribution within a specimen. The circular polariscope in combination with holographic photoelasticity allows the sum and difference of principal stress components to be determined by exploiting the temporary birefringent properties of materials under load. Phase stepping and interferometric techniques have been proposed as a method for separating the in-plane stress components in two-dimensional photoelasticity experiments. In this paper we describe and demonstrate an alternative approach based on photoelastic ptychography which is able to obtain quantitative stress information from far fewer measurements than is required for interferometric based approaches. The complex light intensity equations based on Jones calculus for this setup are derived. We then apply this approach to the problem of a disc under diametrical compression. The experimental results are validated against the analytical solution derived by Hertz for the theoretical displacement fields for an elastic disc subject to point loading.
A Concurrent Product-Development Approach for Friction-Stir Welded Vehicle-Underbody Structures
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Hariharan, A.; Pandurangan, B.
2012-04-01
High-strength aluminum and titanium alloys with superior blast/ballistic resistance against armor piercing (AP) threats and with high vehicle light-weighting potential are being increasingly used as military-vehicle armor. Due to the complex structure of these vehicles, they are commonly constructed through joining (mainly welding) of the individual components. Unfortunately, these alloys are not very amenable to conventional fusion-based welding technologies [e.g., gas metal arc welding (GMAW)] and to obtain high-quality welds, solid-state joining technologies such as friction-stir welding (FSW) have to be employed. However, since FSW is a relatively new and fairly complex joining technology, its introduction into advanced military-vehicle-underbody structures is not straightforward and entails a comprehensive multi-prong approach which addresses concurrently and interactively all the aspects associated with the component/vehicle-underbody design, fabrication, and testing. One such approach is developed and applied in this study. The approach consists of a number of well-defined steps taking place concurrently and relies on two-way interactions between the various steps. The approach is critically assessed using a strengths, weaknesses, opportunities, and threats (SWOT) analysis.
A Direct Approach to In-Plane Stress Separation using Photoelastic Ptychography
Anthony, Nicholas; Cadenazzi, Guido; Kirkwood, Henry; Huwald, Eric; Nugent, Keith; Abbey, Brian
2016-01-01
The elastic properties of materials, either under external load or in a relaxed state, influence their mechanical behaviour. Conventional optical approaches based on techniques such as photoelasticity or thermoelasticity can be used for full-field analysis of the stress distribution within a specimen. The circular polariscope in combination with holographic photoelasticity allows the sum and difference of principal stress components to be determined by exploiting the temporary birefringent properties of materials under load. Phase stepping and interferometric techniques have been proposed as a method for separating the in-plane stress components in two-dimensional photoelasticity experiments. In this paper we describe and demonstrate an alternative approach based on photoelastic ptychography which is able to obtain quantitative stress information from far fewer measurements than is required for interferometric based approaches. The complex light intensity equations based on Jones calculus for this setup are derived. We then apply this approach to the problem of a disc under diametrical compression. The experimental results are validated against the analytical solution derived by Hertz for the theoretical displacement fields for an elastic disc subject to point loading. PMID:27488605
Temperament and problem solving in a population of adolescent guide dogs.
Bray, Emily E; Sammel, Mary D; Seyfarth, Robert M; Serpell, James A; Cheney, Dorothy L
2017-09-01
It is often assumed that measures of temperament within individuals are more correlated to one another than to measures of problem solving. However, the exact relationship between temperament and problem-solving tasks remains unclear because large-scale studies have typically focused on each independently. To explore this relationship, we tested 119 prospective adolescent guide dogs on a battery of 11 temperament and problem-solving tasks. We then summarized the data using both confirmatory factor analysis and exploratory principal components analysis. Results of confirmatory analysis revealed that a priori separation of tests as measuring either temperament or problem solving led to weak results, poor model fit, some construct validity, and no predictive validity. In contrast, results of exploratory analysis were best summarized by principal components that mixed temperament and problem-solving traits. These components had both construct and predictive validity (i.e., association with success in the guide dog training program). We conclude that there is complex interplay between tasks of "temperament" and "problem solving" and that the study of both together will be more informative than approaches that consider either in isolation.
Rapid test for the detection of hazardous microbiological material
NASA Astrophysics Data System (ADS)
Mordmueller, Mario; Bohling, Christian; John, Andreas; Schade, Wolfgang
2009-09-01
Since attacks with anthrax pathogens have been committed all over the world starting in 2001, the fast detection and determination of biological samples has attracted interest. A very promising method for a rapid test is Laser Induced Breakdown Spectroscopy (LIBS). LIBS is an optical method which uses time-resolved or time-integrated spectral analysis of optical plasma emission after pulsed laser excitation. Even though LIBS is well established for the determination of metals and other inorganic materials, the analysis of microbiological organisms is difficult due to their very similar stoichiometric composition. Computer-assisted chemometrics is a very useful approach for analyzing such similar LIBS spectra. In this paper we report on first results of developing a compact and fully automated rapid test for the detection of hazardous microbiological material. Experiments have been carried out with two setups: a bulky one which is composed of standard laboratory components and a compact one consisting of miniaturized industrial components. Both setups work at an excitation wavelength of λ = 1064 nm (Nd:YAG). Data analysis is done by Principal Component Analysis (PCA) with an adjacent neural network for fully automated sample identification.
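The identification chain (PCA on the emission spectra followed by a small neural network classifier) can be prototyped as below; the simulated spectra, class counts, and network size are illustrative assumptions and do not reflect the actual LIBS instrument data.

```python
# Hedged sketch: PCA dimensionality reduction of LIBS-like spectra followed
# by a multilayer perceptron for automated sample identification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_classes, n_per_class, n_channels = 4, 50, 1024
class_spectra = rng.random((n_classes, n_channels))
X = np.vstack([s + 0.05 * rng.normal(size=(n_per_class, n_channels))
               for s in class_spectra])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=6)
model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                    random_state=6))
model.fit(X_tr, y_tr)
print("held-out identification accuracy:", model.score(X_te, y_te))
```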
Artifact suppression and analysis of brain activities with electroencephalography signals.
Rashed-Al-Mahfuz, Md; Islam, Md Rabiul; Hirose, Keikichi; Molla, Md Khademul Islam
2013-06-05
Brain-computer interface is a communication system that connects the brain with a computer (or other devices) but is not dependent on the normal output of the brain (i.e., peripheral nerve and muscle). The electro-oculogram is a dominant artifact which has a significant negative influence on further analysis of real electroencephalography data. This paper presents a data-adaptive technique for artifact suppression and brain wave extraction from electroencephalography signals to detect regional brain activities. An empirical mode decomposition based adaptive thresholding approach was employed here to suppress the electro-oculogram artifact. Fractional Gaussian noise was used to determine the threshold level, derived from the analysis data without any training. The purified electroencephalography signal was composed of the brain waves, also called rhythmic components, which represent the brain activities. The rhythmic components were extracted from each electroencephalography channel using an adaptive Wiener filter at the original scale. The regional brain activities were mapped on the basis of the spatial distribution of the rhythmic components, and the results showed that different regions of the brain are activated in response to different stimuli. This research analyzed the activities of a single rhythmic component, alpha, with respect to different motor imaginations. The experimental results showed that the proposed method is very efficient in artifact suppression and in identifying individual motor imagery based on the activities of the alpha component.
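The decomposition-then-threshold idea can be illustrated as follows: split an EEG channel into intrinsic mode functions, attenuate samples that exceed a per-mode threshold, and rebuild the signal. The PyEMD package (EMD-signal on PyPI), the blink-shaped artifact, and the simple median-based threshold are assumptions made for illustration; the paper uses a fractional-Gaussian-noise-derived threshold rather than this rule.

```python
# Illustrative EMD + thresholding sketch for suppressing an ocular artifact.
import numpy as np
from PyEMD import EMD   # assumed dependency: pip install EMD-signal

rng = np.random.default_rng(7)
t = np.arange(0, 4, 1 / 256)                               # 4 s at 256 Hz
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)   # alpha + noise
eog = 3.0 * np.exp(-((t - 2.0) ** 2) / 0.05)                # blink-like artifact
signal = eeg + eog

imfs = EMD().emd(signal)                                    # intrinsic mode functions
threshold = 3.0 * np.median(np.abs(imfs), axis=1)           # crude per-IMF threshold
clean_imfs = [np.clip(imf, -th, th) for imf, th in zip(imfs, threshold)]
cleaned = np.sum(clean_imfs, axis=0)
print("residual artifact energy:", float(np.sum((cleaned - eeg) ** 2)))
```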
Improving human activity recognition and its application in early stroke diagnosis.
Villar, José R; González, Silvia; Sedano, Javier; Chira, Camelia; Trejo-Gabriel-Galan, Jose M
2015-06-01
The development of efficient stroke-detection methods is of significant importance in today's society due to the effects and impact of stroke on health and economy worldwide. This study focuses on Human Activity Recognition (HAR), which is a key component in developing an early stroke-diagnosis tool. An overview of the proposed global approach able to discriminate normal resting from stroke-related paralysis is detailed. The main contributions include an extension of the Genetic Fuzzy Finite State Machine (GFFSM) method and a new hybrid feature selection (FS) algorithm involving Principal Component Analysis (PCA) and a voting scheme putting the cross-validation results together. Experimental results show that the proposed approach is a well-performing HAR tool that can be successfully embedded in devices.
Flight-test evaluation of civil helicopter terminal approach operations using differential GPS
NASA Technical Reports Server (NTRS)
Edwards, F. G.; Hegarty, D. M.
1989-01-01
A civil code differential Global Positioning System (DGPS) has been developed and flight-tested by the NASA Ames Research Center. The system was used to evaluate the performance of the DGPS for support of helicopter terminal approach operations. The airborne component of the DGPS was installed in a NASA helicopter. The ground-reference component was installed in a mobile van and equipped with a real-time VHF telemetry data link to transmit correction information to the aircraft system. An extensive series of tests was conducted to evaluate the performance of the system for several different configurations of the airborne navigation filter. This paper will describe the systems, the results of the flight tests, and the results of the posttest analysis.
Rat growth during chronic centrifugation
NASA Technical Reports Server (NTRS)
Pitts, G. C.; Oyama, J.
1978-01-01
Female weanling rats were chronically centrifuged at 4.15 G with controls at terrestrial gravity. Samples were sacrificed for body composition studies at 0, 28, 63, 105 and 308 days of centrifugation. The centrifuged group approached a significantly lower mature body mass than the controls (251 and 318g) but the rate of approach was the same in both groups. Retirement to 1G on the 60th day resulted in complete recovery. Among individual components muscle, bone, skin, CNS, heart, kidneys, body water and body fat were changed in the centrifuged group. However, an analysis of the growth of individual components relative to growth of the total fat-free compartment revealed that only skin (which increased in mass) was responding to centrifugation per se.
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
A Nonlinear Model for Gene-Based Gene-Environment Interaction.
Sa, Jian; Liu, Xu; He, Tao; Liu, Guifen; Cui, Yuehua
2016-06-04
A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single-variant-based approaches, in this work we proposed a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extracted the sparse principal components for the SNPs in a gene, then the effect of each principal component was modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has a nice interpretation property, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset in a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systematic approach to evaluate gene-based G×E interaction.
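The first stage of this pipeline, extracting sparse principal components from the SNPs in a gene so that non-zero loadings flag the driving SNPs, is sketched below. The genotype matrix is simulated and the varying-coefficient regression stage is not shown.

```python
# Sparse PCA of a simulated within-gene genotype matrix; non-zero loadings
# identify which SNPs contribute to each component.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(8)
n_subjects, n_snps = 500, 25
genotypes = rng.binomial(2, 0.3, size=(n_subjects, n_snps)).astype(float)

spca = SparsePCA(n_components=3, alpha=1.0, random_state=8)
components = spca.fit_transform(genotypes - genotypes.mean(axis=0))

# Sparse loadings: SNPs with non-zero weight in each component.
for k, loading in enumerate(spca.components_):
    print("component", k, "uses SNPs", np.flatnonzero(loading).tolist())
```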
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from monitoring, telemetry, and control systems, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing and analysis for subsequent decision-making support. During the design and construction phases of the manufacturing of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure high reliability of signal detection, noise isolation, and erroneous command reduction. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Various types of components perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and eliminates common cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
Optical analysis of laser systems using interferometry
NASA Astrophysics Data System (ADS)
Viswanathan, V. K.; Liberman, I.; Lawrence, G.; Seery, B. D.
1980-06-01
It is noted that previous approaches to predicting focal spot parameters involved the digitization of interference patterns of the optical components and propagation of the complex amplitude and phase of the wave front throughout the system. The present paper describes an approach in which the computational procedure is extended to produce computer plots of the final emerging wave front. It is shown that this enables direct comparison with the experimentally produced wave front of the total system and makes possible the optical analysis, design, and possible optimization of laser systems. A description is given of the computational procedure and of the Twyman-Green and Smartt IR interferometers constructed to verify this approach. Finally, consideration is given to the implications of the results.
Systems Analysis Approach for the NASA Environmentally Responsible Aviation Project
NASA Technical Reports Server (NTRS)
Kimmel, William M.
2011-01-01
This conference paper describes the current systems analysis approach being implemented for the Environmentally Responsible Aviation Project within the Integrated Systems Research Program under the NASA Aeronautics Research Mission Directorate. The scope and purpose of these systems studies are introduced, followed by a methodology overview. The approach involves both top-down and bottom-up components to provide NASA's stakeholders with a rationale for the prioritization and tracking of a portfolio of technologies which enable the future fleet of aircraft to operate with a simultaneous reduction of aviation noise, emissions and fuel-burn impacts to our environment. Examples of key current results and relevant decision support conclusions are presented along with a forecast of the planned analyses to follow.
A multiple technique approach to the analysis of urinary calculi.
Rodgers, A L; Nassimbeni, L R; Mulder, K J
1982-01-01
Ten urinary calculi have been qualitatively and quantitatively analysed using X-ray diffraction, infra-red, scanning electron microscopy, X-ray fluorescence, atomic absorption and density gradient procedures. Constituents and compositional features which often go undetected due to limitations in the particular analytical procedure being used have been identified, and a detailed picture of each stone's composition and structure has been obtained. In all cases at least two components were detected, suggesting that the multiple technique approach might cast some doubt on the existence of "pure" stones. Evidence for a continuous, non-sequential deposition mechanism has been detected. In addition, the usefulness of each technique in the analysis of urinary stones has been assessed, and the multiple technique approach has been evaluated as a whole.
Austin, Christine; Gennings, Chris; Tammimies, Kristiina; Bölte, Sven; Arora, Manish
2017-01-01
Environmental exposures to essential and toxic elements may alter health trajectories, depending on the timing, intensity, and mixture of exposures. In epidemiologic studies, these factors are typically analyzed as a function of elemental concentrations in biological matrices measured at one or more points in time. Such an approach, however, fails to account for the temporal cyclicity in the metabolism of environmental chemicals, which, if perturbed, may lead to adverse health outcomes. Here, we conceptualize and apply a non-linear method, recurrence quantification analysis (RQA), to quantify cyclical components of prenatal and early postnatal exposure profiles for elements essential to normal development, including Zn, Mn, Mg, and Ca, and elements associated with deleterious health effects or narrow tolerance ranges, including Pb, As, and Cr. We found robust evidence of cyclical patterns in the metabolic profiles of nutrient elements, which we validated against randomized twin-surrogate time-series, and further found that nutrient dynamical properties differ from those of Cr, As, and Pb. Furthermore, we extended this approach to provide a novel method of quantifying dynamic interactions between two environmental exposures. To achieve this, we used cross-recurrence quantification analysis (CRQA), and found that elemental nutrient-nutrient interactions differed from those involving toxicants. These rhythmic regulatory interactions, which we characterize in two geographically distinct cohorts, have not previously been uncovered using traditional regression-based approaches, and may provide a critical unit of analysis for environmental and dietary exposures in epidemiological studies. PMID:29112980
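At its core, RQA builds a thresholded distance (recurrence) matrix from a time series and summarizes its structure. The sketch below computes a recurrence matrix, the recurrence rate, and a simple determinism-like statistic for a synthetic exposure series; the absence of phase-space embedding and the choice of a 10th-percentile radius are simplifying assumptions for illustration.

```python
# Minimal recurrence quantification sketch for an exposure time series.
import numpy as np

def recurrence_matrix(x, radius_quantile=0.1):
    d = np.abs(x[:, None] - x[None, :])            # pairwise distances
    radius = np.quantile(d, radius_quantile)
    return (d <= radius).astype(int)

rng = np.random.default_rng(9)
t = np.arange(300)
zinc = np.sin(2 * np.pi * t / 30) + 0.2 * rng.normal(size=t.size)  # cyclical nutrient proxy
R = recurrence_matrix(zinc)

recurrence_rate = R.mean()
# Fraction of recurrent points lying on diagonal line segments of length >= 2.
diag_hits = sum(np.sum(np.diag(R, k)[:-1] * np.diag(R, k)[1:]) for k in range(1, len(t)))
determinism_proxy = diag_hits / max(R.sum(), 1)
print("recurrence rate:", recurrence_rate, "determinism proxy:", float(determinism_proxy))
```

A cross-recurrence variant replaces the single series by two series in the distance computation, which is the spirit of the CRQA step used to quantify nutrient-nutrient interactions.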
Du, Yiyang; He, Bosai; Li, Qing; He, Jiao; Wang, Di; Bi, Kaishun
2017-07-01
Suan-Zao-Ren granule is widely used to treat insomnia in China. However, because of the complexity and diversity of the chemical compositions in a traditional Chinese medicine formula, the comprehensive analysis of constituents in vitro and in vivo is rather difficult. In our study, an ultra high performance liquid chromatography with quadrupole time-of-flight mass spectrometry method, together with the PeakView® software, which uses multiple data processing approaches including product ion filter, neutral loss filter, and mass defect filter, was developed to characterize the ingredients and rat serum metabolites of Suan-Zao-Ren granule. A total of 101 constituents were detected in vitro. Under the same analysis conditions, 68 constituents were characterized in rat serum, including 35 prototype components and 33 metabolites. The metabolic pathways of the main components were also illustrated. Among them, the metabolic pathways of timosaponin AI were revealed for the first time. The bioactive compounds mainly underwent phase I metabolic pathways, including hydroxylation, oxidation, and hydrolysis, and phase II metabolic pathways, including sulfate conjugation, glucuronide conjugation, cysteine conjugation, acetylcysteine conjugation, and glutathione conjugation. In conclusion, our results showed that this analysis approach is extremely useful for the in-depth pharmacological research of Suan-Zao-Ren granule and provides a chemical basis for its rational use. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2011-01-01
Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and installation of the current method provides clinicians and researchers with a low-cost solution to monitor the progression of PD and its response to treatment. In summary, the proposed method provides an alternative way to perform gait analysis for patients with PD. PMID:22074315
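The feature-extraction step, kernel PCA applied to flattened binary silhouettes followed by a simple classifier, can be sketched as below. The simulated silhouettes, labels, kernel parameters, and the linear SVM used for the final decision are illustrative assumptions rather than the study's actual pipeline.

```python
# Hedged sketch: kernel PCA features from binary gait silhouettes, then a
# simple classifier separating "Non-PD" from PD-like gait frames.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
n_frames, h, w = 240, 32, 24
silhouettes = (rng.random((n_frames, h * w)) > 0.7).astype(float)   # binary frames
labels = rng.binomial(1, 0.5, size=n_frames)                        # 1 = PD-like gait
silhouettes[labels == 1, : h * w // 4] *= 0.5                       # injected difference

model = make_pipeline(KernelPCA(n_components=8, kernel="rbf", gamma=1e-3),
                      SVC(kernel="linear"))
print("recognition accuracy:", cross_val_score(model, silhouettes, labels, cv=5).mean())
```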
ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT
The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...
Study of advanced techniques for determining the long term performance of components
NASA Technical Reports Server (NTRS)
1973-01-01
The application of existing and new technology to the problem of determining the long-term performance capability of liquid rocket propulsion feed systems is discussed. The long-term performance of metal-to-metal valve seats in a liquid propellant fuel system is stressed. The approaches taken in conducting the analysis are: (1) advancing the technology of characterizing components through the development of new or more sensitive techniques, and (2) improving the understanding of the physics of degradation.
2017-06-30
along the intermetallic component or at the interface between the two components of the composite. The availability of microscale experimental data in... obtained with the PD model; (c) map of strain energy density; (d) the new quasi-damage index is a predictor of failure. As in the case of FRCs, one... which points are most likely to fail, before actual failure happens. The "quasi-damage index", shown in the formula below, is a point-wise measure
On the dispersion relations for an inhomogeneous waveguide with attenuation
NASA Astrophysics Data System (ADS)
Vatul'yan, A. O.; Yurlov, V. O.
2016-09-01
Some general laws concerning the structure of dispersion relations for solid inhomogeneous waveguides with attenuation are studied. An approach based on the analysis of a first-order matrix differential equation is presented in the framework of the concept of complex moduli. Some laws concerning the structure of components of the dispersion set for a viscoelastic inhomogeneous cylindrical waveguide are studied analytically and numerically, and the asymptotics of components of the dispersion set are constructed for arbitrary inhomogeneity laws in the low-frequency region.
2008-10-01
and UTCHEM (Clement et al., 1998). While all four of these software packages use conservation of mass as the basic principle for tracking NAPL... simulate dissolution of a single NAPL component. UTCHEM can be used to simulate dissolution of multiple NAPL components using either linear or first... parameters. No UTCHEM a/ 3D model, general-purpose NAPL simulator. Yes Virulo a/ Probabilistic model for predicting leaching of viruses in unsaturated
Face Aging Effect Simulation Using Hidden Factor Analysis Joint Sparse Representation.
Yang, Hongyu; Huang, Di; Wang, Yunhong; Wang, Heng; Tang, Yuanyan
2016-06-01
Face aging simulation has received increasing attention in recent years, yet it remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to this issue using hidden factor analysis joint sparse representation. In contrast to the majority of tasks in the literature that handle the facial texture integrally, the proposed aging approach separately models the person-specific facial properties that tend to be stable over a relatively long period and the age-specific clues that gradually change over time. It then transforms the age component to a target age group via sparse reconstruction, yielding aging effects, which are finally combined with the identity component to achieve the aged face. Experiments are carried out on three face aging databases, and the results achieved clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. In addition, a series of evaluations prove its validity with respect to identity preservation and aging effect generation.
NASA Technical Reports Server (NTRS)
Pendley, R. D.; Scheidker, E. J.; Levitt, D. S.; Myers, C. R.; Werking, R. D.
1994-01-01
This analysis defines a complete set of ground support functions based on those practiced in real space flight operations during the on-orbit phase of a mission. These functions are mapped against ground support functions currently in use by NASA and DOD. Software components to provide these functions can be hosted on RISC-based work stations and integrated to provide a modular, integrated ground support system. Such modular systems can be configured to provide as much ground support functionality as desired. This approach to ground systems has been widely proposed and prototyped both by government institutions and commercial vendors. The combined set of ground support functions we describe can be used as a standard to evaluate candidate ground systems. This approach has also been used to develop a prototype of a modular, loosely-integrated ground support system, which is discussed briefly. A crucial benefit to a potential user is that all the components are flight-qualified, thus giving high confidence in their accuracy and reliability.
NASA Astrophysics Data System (ADS)
Pendley, R. D.; Scheidker, E. J.; Levitt, D. S.; Myers, C. R.; Werking, R. D.
1994-11-01
This analysis defines a complete set of ground support functions based on those practiced in real space flight operations during the on-orbit phase of a mission. These functions are mapped against ground support functions currently in use by NASA and DOD. Software components to provide these functions can be hosted on RISC-based work stations and integrated to provide a modular, integrated ground support system. Such modular systems can be configured to provide as much ground support functionality as desired. This approach to ground systems has been widely proposed and prototyped both by government institutions and commercial vendors. The combined set of ground support functions we describe can be used as a standard to evaluate candidate ground systems. This approach has also been used to develop a prototype of a modular, loosely-integrated ground support system, which is discussed briefly. A crucial benefit to a potential user is that all the components are flight-qualified, thus giving high confidence in their accuracy and reliability.
Active Metal Brazing and Adhesive Bonding of Titanium to C/C Composites for Heat Rejection System
NASA Technical Reports Server (NTRS)
Singh, M.; Shpargel, Tarah; Cerny, Jennifer
2006-01-01
Robust assembly and integration technologies are critically needed for the manufacturing of heat rejection system (HRS) components for current and future space exploration missions. Active metal brazing and adhesive bonding technologies are being assessed for the bonding of titanium to high-conductivity carbon-carbon composite subcomponents in various shapes and sizes. Currently a number of different silver- and copper-based active metal brazes and adhesive compositions are being evaluated. The joint microstructures were examined using optical microscopy and scanning electron microscopy (SEM) coupled with energy dispersive spectrometry (EDS). Several mechanical tests have been employed to ascertain the effectiveness of the different brazing and adhesive approaches in tension and in shear; these tests are both simple and representative of the actual system and relatively straightforward in analysis. The results of these mechanical tests along with the fractographic analysis will be discussed. In addition, advantages, technical issues and concerns in using different bonding approaches will also be presented.
Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds
NASA Technical Reports Server (NTRS)
Boppe, Charles W.
1987-01-01
A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides high boundary point density/cost ratio. High resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data using four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.
Economic efficiency of environmental management system operation in industrial companies
NASA Astrophysics Data System (ADS)
Dukmasova, N.; Ershova, I.; Plastinina, I.; Boyarinov, A.
2017-06-01
The article examines the efficiency of environmental management system (EMS) implementation in Russian machine-building companies. The analysis showed that Russia clearly lags behind other developed and developing countries in the number of ISO 14001 certified companies. According to the authors, the main cause of this weak implementation activity is the lack of interest in ISO 14001 certification on the Russian market. Five years of primary (field) research on the environmental priorities of the general public suggests that the image component of the economic benefits increases the economic and financial performance of the company through greater customer loyalty to the products of the EMS adopter. To quantify the economic benefits of EMS implementation, a methodological approach has been developed that accounts for the image component and for the decrease in semi-fixed costs due to the increase in production scale. The approach has been tested at a machine-building electrical equipment manufacturer in Ekaterinburg. Applying it to the collected data leads to the conclusion that an EMS gives its adopters a meaningful additional competitive advantage.
Gender and education impact on brain aging: a general cognitive factor approach.
Proust-Lima, Cécile; Amieva, Hélène; Letenneur, Luc; Orgogozo, Jean-Marc; Jacqmin-Gadda, Hélène; Dartigues, Jean-François
2008-09-01
In cognitive aging research, the study of a general cognitive factor has been shown to have a substantial explanatory power over the study of isolated tests. The authors aimed at differentiating the impact of gender and education on global cognitive change with age from their differential impact on 4 psychometric tests using a new latent process approach, which intermediates between a single-factor longitudinal model for sum scores and an item-response theory approach for longitudinal data. The analysis was conducted on a sample of 2,228 subjects from PAQUID, a population-based cohort of older adults followed for 13 years with repeated measures of cognition. Adjusted for vascular factors, the analysis confirmed that women performed better in tests involving verbal components, while men performed better in tests involving visuospatial skills. In addition, the model suggested that women had a slightly steeper global cognitive decline with oldest age than men, even after excluding incident dementia or death. Subjects with higher education exhibited a better mean score for the 4 tests, but this difference tended to attenuate with age for tests involving a speed component. (c) 2008 APA, all rights reserved
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis
Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.
2017-10-13
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. Here we address the third component, inferring functional forms based on information encoded in the functions, and present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis, that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
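As a hedged illustration of the information measures discussed above, the following sketch uses a plug-in entropy estimator to compute how much information the output of a discrete function carries about its three inputs; the noisy XOR example and all variable names are invented and not taken from the paper.

```python
# Minimal sketch (not the authors' code): plug-in estimate of the information
# a discrete function f(x1, x2, x3) carries about its output, from tabulated data.
import numpy as np
from collections import Counter

def entropy(labels):
    """Plug-in (maximum-likelihood) entropy of a sequence of discrete symbols, in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(inputs, output):
    """I(X1,X2,X3 ; Y) = H(X) + H(Y) - H(X, Y) for jointly observed samples."""
    x = [tuple(row) for row in inputs]              # joint input symbol
    xy = [xi + (yi,) for xi, yi in zip(x, output)]  # joint input-output symbol
    return entropy(x) + entropy(list(output)) - entropy(xy)

# Toy example: a noisy XOR-like function of three binary variables.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 3))
y = (X[:, 0] ^ X[:, 1] ^ X[:, 2]) ^ (rng.random(5000) < 0.05).astype(int)  # 5% noise
print(f"I(inputs; output) = {mutual_information(X, y):.3f} bits")
```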
NASA Technical Reports Server (NTRS)
Fayssal, Safie; Weldon, Danny
2008-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the International Space Station, to the moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support the detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I Launch Vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had major impact on the design and manufacturing of the Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvements for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2013-12-01
Hydrologic modeling often requires the re-use and integration of models from different disciplines to simulate complex environmental systems. Component-based modeling introduces a flexible approach for integrating physical-based processes across disciplinary boundaries. Several hydrologic-related modeling communities have adopted the component-based approach for simulating complex physical systems by integrating model components across disciplinary boundaries in a workflow. However, it is not always straightforward to create these interdisciplinary models due to the lack of sufficient knowledge about a hydrologic process. This shortcoming is a result of using informal methods for organizing and sharing information about a hydrologic process. A knowledge-based ontology provides such standards and is considered the ideal approach for overcoming this challenge. The aims of this research are to present the methodology used in analyzing the basic hydrologic domain in order to identify hydrologic processes, the ontology itself, and how the proposed ontology is integrated with the Water Resources Component (WRC) ontology. The proposed ontology standardizes the definitions of a hydrologic process, the relationships between hydrologic processes, and their associated scientific equations. The objective of the proposed Hydrologic Process (HP) Ontology is to advance the idea of creating a unified knowledge framework for components' metadata by introducing a domain-level ontology for hydrologic processes. The HP ontology is a step toward an explicit and robust domain knowledge framework that can be evolved through the contribution of domain users. Analysis of the hydrologic domain is accomplished using the Formal Concept Approach (FCA), in which the infiltration process, an important hydrologic process, is examined. Two infiltration methods, the Green-Ampt and Philip's methods, were used to demonstrate the implementation of information in the HP ontology. Furthermore, a SPARQL service is provided for semantic-based querying of the ontology.
Feed-forward control of gear mesh vibration using piezoelectric actuators
NASA Technical Reports Server (NTRS)
Montague, Gerald T.; Kascak, Albert F.; Palazzolo, Alan; Manchala, Daniel; Thomas, Erwin
1994-01-01
This paper presents a novel means for suppressing gear mesh-related vibrations. The key components in this approach are piezoelectric actuators and a high-frequency, analog feed-forward controller. Test results are presented and show up to a 70-percent reduction in gear mesh acceleration and vibration control up to 4500 Hz. The principle of the approach is explained by an analysis of a harmonically excited, general linear vibratory system.
Automated quantitative cytological analysis using portable microfluidic microscopy.
Jagannadh, Veerendra Kalyan; Murthy, Rashmi Sreeramachandra; Srinivasan, Rajesh; Gorthi, Sai Siva
2016-06-01
In this article, a portable microfluidic microscopy based approach for automated cytological investigations is presented. Inexpensive optical and electronic components have been used to construct a simple microfluidic microscopy system. In contrast to the conventional slide-based methods, the presented method employs microfluidics to enable automated sample handling and image acquisition. The approach involves the use of simple in-suspension staining and automated image acquisition to enable quantitative cytological analysis of samples. The applicability of the presented approach to research in cellular biology is shown by performing an automated cell viability assessment on a given population of yeast cells. Further, the relevance of the presented approach to clinical diagnosis and prognosis has been demonstrated by performing detection and differential assessment of malaria infection in a given sample. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abdel, Matthew P; von Roth, Philipp; Jennings, Matthew T; Hanssen, Arlen D; Pagnano, Mark W
2016-02-01
Numerous factors influence total hip arthroplasty (THA) stability including surgical approach and soft tissue tension, patient compliance, and component position. One long-held tenet regarding component position is that cup inclination and anteversion of 40° ± 10° and 15° ± 10°, respectively, represent a "safe zone" as defined by Lewinnek that minimizes dislocation after primary THA; however, it is clear that components positioned in this zone can and do dislocate. We sought to determine if these classic radiographic targets for cup inclination and anteversion accurately predicted a safe zone limiting dislocation in a contemporary THA practice. From a cohort of 9784 primary THAs performed between 2003 and 2012 at one institution, we retrospectively identified 206 THAs (2%) that subsequently dislocated. Radiographic parameters including inclination, anteversion, center of rotation, and limb length discrepancy were analyzed. Mean followup was 27 months (range, 0-133 months). The majority (58% [120 of 206]) of dislocated THAs had a socket within the Lewinnek safe zone. Mean cup inclination was 44° ± 8° with 84% within the safe zone for inclination. Mean anteversion was 15° ± 9° with 69% within the safe zone for anteversion. Sixty-five percent of dislocated THAs that were performed through a posterior approach had an acetabular component within the combined acetabular safe zones, whereas this was true for only 33% performed through an anterolateral approach. An acetabular component performed through a posterior approach was three times as likely to be within the combined acetabular safe zones (odds ratio [OR], 1.3; 95% confidence interval [CI], 1.1-1.6) than after an anterolateral approach (OR, 0.4; 95% CI, 0.2-0.7; p < 0.0001). In contrast, acetabular components performed through a posterior approach (OR, 1.6; 95% CI, 1.2-1.9) had an increased risk of dislocation compared with those performed through an anterolateral approach (OR, 0.8; 95% CI, 0.7-0.9; p < 0.0001). The historical target values for cup inclination and anteversion may be useful but should not be considered a safe zone given that the majority of these contemporary THAs that dislocated were within those target values. Stability is likely multifactorial; the ideal cup position for some patients may lie outside the Lewinnek safe zone and more advanced analysis is required to identify the right target in that subgroup. Level III, therapeutic study.
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.
2007-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic License Plate Recognition (ALPR) systems. Several car MMR systems have been proposed in the literature. However these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real time, appearance based, car MMR approach using Two Dimensional Linear Discriminant Analysis that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We have shown that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm surpasses the performance of the PCA based approach.
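The following is a minimal sketch, under assumed data, of the appearance-based 2D-LDA idea: scatter matrices are built directly from 2D image matrices and each image is projected onto the leading generalized eigenvectors. It is not the authors' implementation, and the toy images and class labels are placeholders.

```python
# Illustrative 2D-LDA sketch: image scatter matrices and projection matrix.
import numpy as np

def two_d_lda(images, labels, n_components):
    """images: (N, h, w) array; labels: (N,) class ids; returns projection matrix (w, n_components)."""
    classes = np.unique(labels)
    global_mean = images.mean(axis=0)
    h, w = global_mean.shape
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in classes:
        Ac = images[labels == c]
        mc = Ac.mean(axis=0)
        Sb += len(Ac) * (mc - global_mean).T @ (mc - global_mean)   # between-class scatter
        for A in Ac:
            Sw += (A - mc).T @ (A - mc)                              # within-class scatter
    # Generalized eigenproblem Sb v = lambda Sw v; Sw is lightly regularized for stability.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(w), Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

# Toy usage with random "car images"; real use would load cropped frontal views.
rng = np.random.default_rng(1)
imgs = rng.random((50, 40, 60))
labs = rng.integers(0, 5, size=50)
W = two_d_lda(imgs, labs, n_components=8)
features = imgs @ W          # each image becomes a (40, 8) feature matrix
print(features.shape)        # (50, 40, 8)
```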
Evans, Natalie; Meñaca, Arantza; Koffman, Jonathan; Harding, Richard; Higginson, Irene J; Pool, Robert; Gysels, Marjolein
2012-07-01
Cultural competency is increasingly recommended in policy and practice to improve end-of-life (EoL) care for minority ethnic groups in multicultural societies. It is imperative to critically analyze this approach to understand its underlying concepts. Our aim was to appraise cultural competency approaches described in the British literature on EoL care and minority ethnic groups. This is a critical review. Articles on cultural competency were identified from a systematic review of the literature on minority ethnic groups and EoL care in the United Kingdom. Terms, definitions, and conceptual models of cultural competency approaches were identified and situated according to purpose, components, and origin. Content analysis of definitions and models was carried out to identify key components. One-hundred thirteen articles on minority ethnic groups and EoL care in the United Kingdom were identified. Over half (n=60) contained a term, definition, or model for cultural competency. In all, 17 terms, 17 definitions, and 8 models were identified. The most frequently used term was "culturally sensitive," though "cultural competence" was defined more often. Definitions contained one or more of the components: "cognitive," "implementation," or "outcome." Models were categorized for teaching or use in patient assessment. Approaches were predominantly of American origin. The variety of terms, definitions, and models underpinning cultural competency approaches demonstrates a lack of conceptual clarity, and potentially complicates implementation. Further research is needed to compare the use of cultural competency approaches in diverse cultures and settings, and to assess the impact of such approaches on patient outcomes.
McNab, Duncan; Bowie, Paul; Morrison, Jill; Ross, Alastair
2016-11-01
Participation in projects to improve patient safety is a key component of general practice (GP) specialty training, appraisal and revalidation. Patient safety training priorities for GPs at all career stages are described in the Royal College of General Practitioners' curriculum. Current methods that are taught and employed to improve safety often use a 'find-and-fix' approach to identify components of a system (including humans) where performance could be improved. However, the complex interactions and inter-dependence between components in healthcare systems mean that cause and effect are not always linked in a predictable manner. The Safety-II approach has been proposed as a new way to understand how safety is achieved in complex systems that may improve quality and safety initiatives and enhance GP and trainee curriculum coverage. Safety-II aims to maximise the number of events with a successful outcome by exploring everyday work. Work-as-done often differs from work-as-imagined in protocols and guidelines and various ways to achieve success, dependent on work conditions, may be possible. Traditional approaches to improve the quality and safety of care often aim to constrain variability but understanding and managing variability may be a more beneficial approach. The application of a Safety-II approach to incident investigation, quality improvement projects, prospective analysis of risk in systems and performance indicators may offer improved insight into system performance leading to more effective change. The way forward may be to combine the Safety-II approach with 'traditional' methods to enhance patient safety training, outcomes and curriculum coverage.
2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.
Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen
2017-09-19
A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides about their functions or potentials to become useful drugs. One level is for dealing with the physicochemical properties of drug molecules, while the other level is for dealing with their structural fragments. The predictor has the self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for timely providing various useful clues during the process of drug development.
Geotechnical approaches to coal ash content control in mining of complex structure deposits
NASA Astrophysics Data System (ADS)
Batugin, SA; Gavrilov, VL; Khoyutanov, EA
2017-02-01
Coal deposits with complex structure and nonuniform-quality reserves require improved production quality control. The paper proposes a method of representing coal ash content as components of natural and technological dilution. The studies were carried out at the western site of the Elginsk coal deposit, which is composed of four coal beds of complex structure. The reported estimates of coal ash content in the beds with respect to five components point to the need to account for such data in confirmation exploration, mine planning and actual mining. Basic means of analyzing and controlling overall ash content and its components are discussed.
Steindl, Theodora M; Crump, Carolyn E; Hayden, Frederick G; Langer, Thierry
2005-10-06
The development and application of a sophisticated virtual screening and selection protocol to identify potential, novel inhibitors of the human rhinovirus coat protein employing various computer-assisted strategies are described. A large commercially available database of compounds was screened using a highly selective, structure-based pharmacophore model generated with the program Catalyst. A docking study and a principal component analysis were carried out within the software package Cerius and served to validate and further refine the obtained results. These combined efforts led to the selection of six candidate structures, for which in vitro anti-rhinoviral activity could be shown in a biological assay.
Modelling safety of multistate systems with ageing components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna
An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of the ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multi-state system risk function and the moment when the system exceeds the critical safety state are introduced. An application of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of a consecutive "m out of n: F" system is presented as well.
Modular thought in the circuit analysis
NASA Astrophysics Data System (ADS)
Wang, Feng
2018-04-01
Modular thinking provides a way to simplify complex problems by treating them as a whole. The study of circuits presents a similar situation: the complex connections between components make a circuit problem appear difficult to solve, yet these connections follow regular patterns. This article discusses the application of modular thinking to circuit analysis. First, the paper introduces the definition of a two-terminal network and the concept of equivalent conversion of two-terminal networks. It then summarizes modular approaches for hybrid networks of sources and resistances and for networks containing controlled sources, lists common modules, and analyzes typical examples.
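A small worked example of the two-terminal equivalent conversion mentioned above: a linear one-port is replaced by its Thevenin equivalent, computed here from an invented open-circuit voltage and short-circuit current.

```python
# Two-terminal (one-port) equivalent illustration: Thevenin source from
# open-circuit voltage and short-circuit current. All values are made up.
def thevenin_equivalent(v_open_circuit, i_short_circuit):
    """Return (V_th, R_th) of a linear two-terminal network."""
    r_th = v_open_circuit / i_short_circuit
    return v_open_circuit, r_th

def load_voltage(v_th, r_th, r_load):
    """Voltage across an external load connected to the equivalent source."""
    return v_th * r_load / (r_th + r_load)

v_th, r_th = thevenin_equivalent(v_open_circuit=12.0, i_short_circuit=0.5)  # 12 V, 24 ohm
print(f"Thevenin: {v_th} V in series with {r_th} ohm")
print(f"Load voltage across 100 ohm: {load_voltage(v_th, r_th, 100.0):.2f} V")
```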
Iris recognition based on robust principal component analysis
NASA Astrophysics Data System (ADS)
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performances in both recognition accuracy and computational efficiency.
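As a hedged sketch of the low-rank plus sparse decomposition underlying this method, the following implements principal component pursuit with an inexact augmented Lagrangian scheme. It is not the authors' code, and the toy matrix merely stands in for vectorized iris training images.

```python
# Robust PCA by principal component pursuit (inexact ALM): D = L + S.
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value soft-thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(D, max_iter=500, tol=1e-7):
    n1, n2 = D.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(D).sum())
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)   # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)          # sparse-error update
        residual = D - L - S
        Y += mu * residual                            # dual update
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S

# Toy usage: rows would be vectorized, normalized iris training images.
rng = np.random.default_rng(2)
low_rank = rng.random((80, 5)) @ rng.random((5, 200))
sparse = (rng.random((80, 200)) < 0.05) * rng.normal(0, 5, (80, 200))
L, S = robust_pca(low_rank + sparse)
print(np.linalg.matrix_rank(L, tol=1e-3), np.count_nonzero(np.abs(S) > 1e-6))
```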
Weston, Sara J; Cox, Keith S; Condon, David M; Jackson, Joshua J
2016-10-01
The majority of life narrative research is performed using trained human coders. In contrast, automated linguistic analysis is oft employed in the study of verbal behaviors. These two methodological approaches are directly compared to determine the utility of automated linguistic analysis for the study of life narratives. In a study of in-person interviews (N = 158) and a second study of life stories collected online (N = 242), redemption scores are compared to the output of the Linguistic Inquiry and Word Count (Pennebaker, Francis & Booth, 2001). Additionally, patterns of language are found using exploratory principal components analysis. In both studies, redemption scores are modestly correlated with some LIWC categories and unassociated with the components. Patterns of language do not replicate across samples, indicating that the structure of language does not extend to a broader population. Redemption scores and linguistic components are independent predictors of life satisfaction up to 3 years later. These studies converge on the finding that human-coded redemption and automated linguistic analysis are complementary and nonredundant methods of analyzing life narratives, and considerations for the study of life narratives are discussed. © 2015 Wiley Periodicals, Inc.
Modeling vertebrate diversity in Oregon using satellite imagery
NASA Astrophysics Data System (ADS)
Cablk, Mary Elizabeth
Vertebrate diversity was modeled for the state of Oregon using a parametric approach to regression tree analysis. This exploratory data analysis effectively modeled the non-linear relationships between vertebrate richness and phenology, terrain, and climate. Phenology was derived from time-series NOAA-AVHRR satellite imagery for the year 1992 using two methods: principal component analysis and derivation of EROS data center greenness metrics. These two measures of spatial and temporal vegetation condition incorporated the critical temporal element in this analysis. The first three principal components were shown to contain spatial and temporal information about the landscape and discriminated phenologically distinct regions in Oregon. Principal components 2 and 3, 6 greenness metrics, elevation, slope, aspect, annual precipitation, and annual seasonal temperature difference were investigated as correlates to amphibians, birds, all vertebrates, reptiles, and mammals. Variation explained for each regression tree by taxa were: amphibians (91%), birds (67%), all vertebrates (66%), reptiles (57%), and mammals (55%). Spatial statistics were used to quantify the pattern of each taxa and assess validity of resulting predictions from regression tree models. Regression tree analysis was relatively robust against spatial autocorrelation in the response data and graphical results indicated models were well fit to the data.
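A minimal sketch of the regression-tree step, using scikit-learn's CART implementation and synthetic predictors standing in for the phenology, terrain, and climate variables named in the abstract; the toy response and its coefficients are invented.

```python
# Regression tree on synthetic richness data as a stand-in for the Oregon analysis.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.normal(size=n),          # principal component 2 (phenology)
    rng.normal(size=n),          # principal component 3 (phenology)
    rng.uniform(0, 3000, n),     # elevation (m)
    rng.uniform(0, 40, n),       # slope (deg)
    rng.uniform(200, 3000, n),   # annual precipitation (mm)
])
richness = 50 + 10 * (X[:, 4] > 1500) - 0.005 * X[:, 2] + rng.normal(0, 2, n)

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=30).fit(X, richness)
print(f"variance explained (R^2): {tree.score(X, richness):.2f}")
```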
Desdouits, Nathan; Nilges, Michael; Blondel, Arnaud
2015-02-01
Protein conformation has been recognized as the key feature determining biological function, as it determines the position of the essential groups specifically interacting with substrates. Hence, the shape of the cavities or grooves at the protein surface appears to drive those functions. However, only a few studies describe the geometrical evolution of protein cavities during molecular dynamics simulations (MD), usually with a crude representation. To unveil the dynamics of cavity geometry evolution, we developed an approach combining cavity detection and Principal Component Analysis (PCA). This approach was applied to four systems subjected to MD (lysozyme, sperm whale myoglobin, Dengue envelope protein and EF-CaM complex). PCA on cavities allows us to perform efficient analysis and classification of the geometry diversity explored by a cavity. Additionally, it reveals correlations between the evolutions of the cavities and structures, and can even suggest how to modify the protein conformation to induce a given cavity geometry. It also helps to perform fast and consensual clustering of conformations according to cavity geometry. Finally, using this approach, we show that both carbon monoxide (CO) location and transfer among the different xenon sites of myoglobin are correlated with few cavity evolution modes of high amplitude. This correlation illustrates the link between ligand diffusion and the dynamic network of internal cavities. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
How does spatial extent of fMRI datasets affect independent component analysis decomposition?
Aragri, Adriana; Scarabino, Tommaso; Seifritz, Erich; Comani, Silvia; Cirillo, Sossio; Tedeschi, Gioacchino; Esposito, Fabrizio; Di Salle, Francesco
2006-09-01
Spatial independent component analysis (sICA) of functional magnetic resonance imaging (fMRI) time series can generate meaningful activation maps and associated descriptive signals, which are useful to evaluate datasets of the entire brain or selected portions of it. Besides computational implications, variations in the input dataset combined with the multivariate nature of ICA may lead to different spatial or temporal readouts of brain activation phenomena. By reducing and increasing a volume of interest (VOI), we applied sICA to different datasets from real activation experiments with multislice acquisition and single or multiple sensory-motor task-induced blood oxygenation level-dependent (BOLD) signal sources with different spatial and temporal structure. Using receiver operating characteristics (ROC) methodology for accuracy evaluation and multiple regression analysis as benchmark, we compared sICA decompositions of reduced and increased VOI fMRI time-series containing auditory, motor and hemifield visual activation occurring separately or simultaneously in time. Both approaches yielded valid results; however, the results of the increased VOI approach were spatially more accurate compared to the results of the decreased VOI approach. This is consistent with the capability of sICA to take advantage of extended samples of statistical observations and suggests that sICA is more powerful with extended rather than reduced VOI datasets to delineate brain activity. (c) 2006 Wiley-Liss, Inc.
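A hedged sketch of spatial ICA on a synthetic time-by-voxel matrix, with scikit-learn's FastICA standing in for the sICA implementation used in the study; the block-shaped sources and dimensions are invented.

```python
# Spatial ICA sketch: voxels are treated as observations, time points as mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_time, n_voxels = 120, 5000
# Two synthetic spatial sources with block-shaped activations plus noise.
maps = np.zeros((2, n_voxels))
maps[0, 100:400] = 1.0
maps[1, 3000:3300] = 1.0
timecourses = np.column_stack([np.sin(np.linspace(0, 12, n_time)),
                               np.sign(np.sin(np.linspace(0, 6, n_time)))])
data = timecourses @ maps + 0.2 * rng.normal(size=(n_time, n_voxels))

ica = FastICA(n_components=2, random_state=0)
spatial_maps = ica.fit_transform(data.T).T      # (components, voxels)
component_timecourses = ica.mixing_             # (time, components)
print(spatial_maps.shape, component_timecourses.shape)
```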
NASA Astrophysics Data System (ADS)
Kong, Xianyu; Liu, Yanfang; Jian, Huimin; Su, Rongguo; Yao, Qingzhen; Shi, Xiaoyong
2017-10-01
To realize potential cost savings in coastal monitoring programs and provide timely advice for marine management, there is an urgent need for efficient evaluation tools based on easily measured variables for the rapid and timely assessment of estuarine and offshore eutrophication. In this study, using parallel factor analysis (PARAFAC), principal component analysis (PCA), and discriminant function analysis (DFA) with the trophic index (TRIX) for reference, we developed an approach for rapidly assessing the eutrophication status of coastal waters using easy-to-measure parameters, including chromophoric dissolved organic matter (CDOM), fluorescence excitation-emission matrices, CDOM UV-Vis absorbance, and other water-quality parameters (turbidity, chlorophyll a, and dissolved oxygen). First, we decomposed CDOM excitation-emission matrices (EEMs) by PARAFAC to identify three components. Then, we applied PCA to simplify the complexity of the relationships between the water-quality parameters. Finally, we used the PCA score values as independent variables in DFA to develop a eutrophication assessment model. The developed model yielded classification accuracy rates of 97.1%, 80.5%, 90.3%, and 89.1% for good, moderate, and poor water qualities, and for the overall data sets, respectively. Our results suggest that these easy-to-measure parameters could be used to develop a simple approach for rapid in-situ assessment and monitoring of the eutrophication of estuarine and offshore areas.
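The pipeline described (PARAFAC on the EEMs, PCA on the combined water-quality variables, then a discriminant model on the PCA scores) can be sketched as follows. tensorly and scikit-learn are assumed stand-ins for the software actually used, and all array shapes, labels, and values are invented.

```python
# Hedged sketch of the PARAFAC -> PCA -> DFA pipeline on synthetic data.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_samples, n_ex, n_em = 60, 20, 40
eems = rng.random((n_samples, n_ex, n_em))      # stacked CDOM EEMs (samples x excitation x emission)
other = rng.random((n_samples, 3))              # turbidity, chlorophyll a, dissolved oxygen
labels = rng.integers(0, 3, n_samples)          # good / moderate / poor (placeholder classes)

# (1) PARAFAC: sample-mode scores of three fluorescent components.
weights, factors = parafac(tl.tensor(eems), rank=3, n_iter_max=200)
sample_scores = tl.to_numpy(factors[0])         # (samples, 3)

# (2) PCA to condense the combined water-quality variables.
X = np.column_stack([sample_scores, other])
pca_scores = PCA(n_components=4).fit_transform(X)

# (3) Discriminant function analysis on the PCA scores.
dfa = LinearDiscriminantAnalysis().fit(pca_scores, labels)
print(f"resubstitution accuracy: {dfa.score(pca_scores, labels):.2f}")
```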
Dynamical modeling and analysis of large cellular regulatory networks
NASA Astrophysics Data System (ADS)
Bérenguier, D.; Chaouiya, C.; Monteiro, P. T.; Naldi, A.; Remy, E.; Thieffry, D.; Tichit, L.
2013-06-01
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties, (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
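A minimal sketch of the asynchronous state-transition-graph construction and attractor detection described above, applied to a made-up three-gene Boolean network rather than the T-helper model; GINsim itself is not used here.

```python
# Asynchronous state transition graph of a toy Boolean network; attractors are
# the terminal strongly connected components of that graph.
import itertools
import networkx as nx

# Toy regulatory logic (invented): three genes with mutual inhibition/activation.
rules = {
    0: lambda s: s[2] and not s[1],   # gene0 activated by gene2, inhibited by gene1
    1: lambda s: not s[0],            # gene1 inhibited by gene0
    2: lambda s: s[0] or s[1],        # gene2 activated by gene0 or gene1
}

G = nx.DiGraph()
for state in itertools.product([0, 1], repeat=3):
    G.add_node(state)
    for gene, rule in rules.items():              # asynchronous updating: one gene at a time
        target = int(bool(rule(state)))
        if target != state[gene]:
            nxt = list(state)
            nxt[gene] = target
            G.add_edge(state, tuple(nxt))

# Attractors = strongly connected components with no outgoing edges.
condensed = nx.condensation(G)
attractors = [condensed.nodes[n]["members"] for n in condensed.nodes
              if condensed.out_degree(n) == 0]
print(attractors)
```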
Kumar, Keshav
2017-11-01
Multivariate curve resolution alternating least squares (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least squares (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; however, in principle, this approach requires that the data belong to a sequential process. The initialisation of the spectral profiles is usually carried out using a pure variable approach such as the SIMPLISMA algorithm, which demands that each factor have pure variables in the data set. Despite these limitations, the existing approaches have been quite successful for initiating the MCR-ALS analysis. The present work proposes an alternative approach for the initialisation of the spectral profiles by generating random variables within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach does not require pure variables for each component of the multicomponent system, nor that the concentration direction follow a sequential process. The approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for certain fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores correlate well with the experimental results. In summary, the present work proposes an alternative way to initiate the MCR-ALS analysis.
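A minimal sketch of the proposed initialisation: spectral profiles are drawn uniformly within the per-channel minima and maxima of the data and then refined by a plain non-negative ALS loop. This is an illustrative MCR-ALS implementation, not the authors' software, and the bilinear toy data are invented.

```python
# Random spectral initialisation within data limits, followed by non-negative ALS.
import numpy as np
from scipy.optimize import nnls

def random_spectral_init(D, n_components, rng):
    """Draw each spectral channel uniformly between its observed min and max."""
    lo, hi = D.min(axis=0), D.max(axis=0)
    return rng.uniform(lo, hi, size=(n_components, D.shape[1]))

def mcr_als(D, n_components, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    S = random_spectral_init(D, n_components, rng)        # (components, wavelengths)
    for _ in range(n_iter):
        C = np.array([nnls(S.T, d)[0] for d in D])        # concentrations, row by row
        S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])]).T
    return C, S

# Toy bilinear data: two overlapping Gaussian "spectra" with smooth concentration profiles.
wl = np.linspace(0, 1, 120)
spectra = np.vstack([np.exp(-(wl - 0.4) ** 2 / 0.01), np.exp(-(wl - 0.55) ** 2 / 0.02)])
conc = np.column_stack([np.linspace(1, 0, 30), np.linspace(0, 1, 30)])
D = conc @ spectra + 0.01 * np.random.default_rng(1).normal(size=(30, 120))
C_hat, S_hat = mcr_als(D, n_components=2)
print(C_hat.shape, S_hat.shape)
```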
The Productivity Analysis of Chennai Automotive Industry Cluster
NASA Astrophysics Data System (ADS)
Bhaskaran, E.
2014-07-01
Chennai, also called the Detroit of India, is India's second fastest growing auto market and exports auto components and vehicles to the US, Germany, Japan and Brazil. For inclusive growth and sustainable development, 250 auto component industries in the Ambattur, Thirumalisai and Thirumudivakkam Industrial Estates located in Chennai have adopted the Cluster Development Approach called the Automotive Component Cluster. The objective is to study the value chain and correlations and to apply Data Envelopment Analysis, determining the technical efficiency, peer weights, and input and output slacks of 100 auto component industries in the three estates. The methodology uses the output-oriented Banker-Charnes-Cooper Data Envelopment Analysis model, taking net worth, fixed assets and employment as inputs and gross output as the output. The non-zero values represent the peer weights of the efficient units. Higher slacks reveal excess net worth, fixed assets and employment, and a shortage in gross output. To conclude, the variables are highly correlated, and the inefficient industries should increase their gross output or decrease their fixed assets or employment. Moreover, for sustainable development, the cluster should strengthen infrastructure, technology, procurement, production and marketing interrelationships to decrease costs and to increase productivity and efficiency to compete in the domestic and export markets.
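A hedged sketch of the output-oriented Banker-Charnes-Cooper model solved as a linear program with SciPy; the input and output figures below are invented rather than taken from the cluster survey.

```python
# Output-oriented BCC (variable returns to scale) DEA sketch with scipy.linprog.
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y):
    """X: (n, m) inputs (e.g., net worth, fixed assets, employment); Y: (n, s) outputs.
    Returns phi for each DMU; technical efficiency = 1 / phi."""
    n, m = X.shape
    s = Y.shape[1]
    phis = []
    for o in range(n):
        c = np.r_[-1.0, np.zeros(n)]                      # maximize phi
        A_ub, b_ub = [], []
        for i in range(m):                                # sum_j lam_j x_ij <= x_io
            A_ub.append(np.r_[0.0, X[:, i]])
            b_ub.append(X[o, i])
        for r in range(s):                                # phi*y_ro - sum_j lam_j y_rj <= 0
            A_ub.append(np.r_[Y[o, r], -Y[:, r]])
            b_ub.append(0.0)
        A_eq = [np.r_[0.0, np.ones(n)]]                   # convexity constraint (VRS)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      A_eq=np.array(A_eq), b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1), method="highs")
        phis.append(res.x[0])
    return np.array(phis)

rng = np.random.default_rng(6)
inputs = rng.uniform(1, 10, size=(12, 3))
outputs = rng.uniform(1, 10, size=(12, 1))
phi = bcc_output_efficiency(inputs, outputs)
print(np.round(1.0 / phi, 3))     # efficiency scores in (0, 1]
```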
Buslaev, Pavel; Gordeliy, Valentin; Grudinin, Sergei; Gushchin, Ivan
2016-03-08
Molecular dynamics simulations of lipid bilayers are ubiquitous nowadays. Usually, either global properties of the bilayer or some particular characteristics of each lipid molecule are evaluated in such simulations, but the structural properties of the molecules as a whole are rarely studied. Here, we show how a comprehensive quantitative description of conformational space and dynamics of a single lipid molecule can be achieved via the principal component analysis (PCA). We illustrate the approach by analyzing and comparing simulations of DOPC bilayers obtained using eight different force fields: all-atom generalized AMBER, CHARMM27, CHARMM36, Lipid14, and Slipids and united-atom Berger, GROMOS43A1-S3, and GROMOS54A7. Similarly to proteins, most of the structural variance of a lipid molecule can be described by only a few principal components. These major components are similar in different simulations, although there are notable distinctions between the older and newer force fields and between the all-atom and united-atom force fields. The DOPC molecules in the simulations generally equilibrate on the time scales of tens to hundreds of nanoseconds. The equilibration is the slowest in the GAFF simulation and the fastest in the Slipids simulation. Somewhat unexpectedly, the equilibration in the united-atom force fields is generally slower than in the all-atom force fields. Overall, there is a clear separation between the more variable previous generation force fields and significantly more similar new generation force fields (CHARMM36, Lipid14, Slipids). We expect that the presented approaches will be useful for quantitative analysis of conformations and dynamics of individual lipid molecules in other simulations of lipid bilayers.
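A minimal sketch of the per-molecule conformational PCA: each (assumed pre-aligned) lipid conformation is flattened into a coordinate vector and decomposed with scikit-learn's PCA; the synthetic coordinates stand in for trajectory frames.

```python
# Conformational PCA of one molecule across trajectory frames (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_frames, n_atoms = 2000, 54                     # e.g., heavy atoms of one DOPC molecule
coords = rng.normal(size=(n_frames, n_atoms, 3))
coords += 0.5 * np.sin(np.linspace(0, 20, n_frames))[:, None, None]  # a slow collective mode

X = coords.reshape(n_frames, -1)                 # (frames, 3*atoms), assumed already aligned
pca = PCA(n_components=10).fit(X)
print(np.round(pca.explained_variance_ratio_[:3], 3))   # a few PCs dominate
projection = pca.transform(X)[:, 0]              # time series along the first PC
```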
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration and these images were non-rigidly co-registered. PCA was performed for the CT density histograms, from which the components with the highest eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
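A conceptual sketch of the PCA-adjusted PRM idea under stated assumptions: data-driven thresholds are taken at the extrema of the summed high-eigenvalue components of the subjects' CT density histograms and then used for PRM-style voxel classification. All HU values, histogram shapes, and the standardization choice below are invented, and the exact threshold rule may differ from the authors' implementation.

```python
# PCA of CT density histograms -> data-driven thresholds -> PRM-like classification.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
hu_bins = np.arange(-1000, -500, 5)                        # histogram bin centers (HU)
histograms = rng.random((20, hu_bins.size))                # one density histogram per subject
histograms /= histograms.sum(axis=1, keepdims=True)

# Standardize so that component eigenvalues are comparable to the Kaiser (>1) criterion.
Z = (histograms - histograms.mean(axis=0)) / histograms.std(axis=0)
pca = PCA().fit(Z)
keep = pca.explained_variance_ > 1.0
curve = pca.components_[keep].sum(axis=0)                  # summed high-eigenvalue components

thr_insp = hu_bins[np.argmin(curve)]                       # thresholds at the extrema of the
thr_exp = hu_bins[np.argmax(curve)]                        # summed component curve (assumed rule)

# PRM-style voxel classification on co-registered inspiration/expiration volumes.
insp = rng.uniform(-1000, -500, size=10000)
exp_ = rng.uniform(-1000, -500, size=10000)
emphysema = (insp < thr_insp) & (exp_ < thr_exp)
small_airways = (insp >= thr_insp) & (exp_ < thr_exp)
print(emphysema.mean(), small_airways.mean())
```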
Transcriptomics provides unique solutions for understanding the impact of complex mixtures and their components on aquatic systems. Here we describe the application of transcriptomics analysis of in situ fathead minnow exposures for assessing biological impacts of wastewater trea...
Teaching Analytical Method Development in an Undergraduate Instrumental Analysis Course
ERIC Educational Resources Information Center
Lanigan, Katherine C.
2008-01-01
Method development and assessment, central components of carrying out chemical research, require problem-solving skills. This article describes a pedagogical approach for teaching these skills through the adaptation of published experiments and application of group-meeting style discussions to the curriculum of an undergraduate instrumental…
Multiple Hypnotizabilities: Differentiating the Building Blocks of Hypnotic Response
ERIC Educational Resources Information Center
Woody, Erik Z.; Barnier, Amanda J.; McConkey, Kevin M.
2005-01-01
Although hypnotizability can be conceptualized as involving component subskills, standard measures do not differentiate them from a more general unitary trait, partly because the measures include limited sets of dichotomous items. To overcome this, the authors applied full-information factor analysis, a sophisticated analytic approach for…
Sousa, João Carlos; Costa, Manuel João; Palha, Joana Almeida
2010-03-01
The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. The understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure. Internet-available resources can bridge the divide between the molecular details and the ECM's biological properties and associated processes. This article presents an approach to teaching the ECM developed for first year medical undergraduates who, working in teams: (i) Explore a specific molecular component of the matrix, (ii) identify a disease in which the component is implicated, (iii) investigate how the component's structure/function contributes to the ECM's supramolecular organization in physiological and in pathological conditions, and (iv) share their findings with colleagues. The approach, designated i-cell-MATRIX, is focused on the contribution of individual components to the overall organization and biological functions of the ECM. i-cell-MATRIX is student centered and uses 5 hours of class time. Summary of results and take home message: A "1-minute paper" has been used to gather student feedback on the impact of i-cell-MATRIX. Qualitative analysis of student feedback gathered in three consecutive years revealed that students appreciate the approach's reliance on self-directed learning, the interactivity embedded and the demand for deeper insights on the ECM. Learning how to use internet biomedical resources is another positive outcome. Ninety percent of students recommend the activity for subsequent years. i-cell-MATRIX is adaptable by other medical schools looking for an approach that achieves higher student engagement with the ECM. Copyright © 2010 International Union of Biochemistry and Molecular Biology, Inc.
Clustering analysis strategies for electron energy loss spectroscopy (EELS).
Torruella, Pau; Estrader, Marta; López-Ortega, Alberto; Baró, Maria Dolors; Varela, Maria; Peiró, Francesca; Estradé, Sònia
2018-02-01
In this work, the use of cluster analysis algorithms, widely applied in the field of big data, is proposed to explore and analyze electron energy loss spectroscopy (EELS) data sets. Three different data clustering approaches have been tested both with simulated and experimental data from Fe3O4/Mn3O4 core/shell nanoparticles. The first method consists of applying data clustering directly to the acquired spectra. A second approach is to analyze spectral variance with principal component analysis (PCA) within a given data cluster. Lastly, data clustering on PCA score maps is discussed. The advantages and requirements of each approach are studied. Results demonstrate how clustering is able to recover compositional and oxidation state information from EELS data with minimal user input, giving great prospects for its usage in EEL spectroscopy. Copyright © 2017 Elsevier B.V. All rights reserved.
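A minimal sketch of the three clustering strategies described, using scikit-learn on synthetic spectra that stand in for a spectrum image of core/shell particles; this is not the authors' processing chain.

```python
# (a) k-means on raw spectra, (b) PCA within one cluster, (c) k-means on PCA scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
n_pixels, n_channels = 4096, 300
base_a = np.exp(-0.5 * ((np.arange(n_channels) - 120) / 8.0) ** 2)   # "core" edge shape
base_b = np.exp(-0.5 * ((np.arange(n_channels) - 180) / 8.0) ** 2)   # "shell" edge shape
mix = rng.random(n_pixels)[:, None]
spectra = mix * base_a + (1 - mix) * base_b + 0.05 * rng.normal(size=(n_pixels, n_channels))

# (a) Cluster spectra directly.
labels_direct = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)

# (b) PCA inside one data cluster to examine its internal variance.
cluster0 = spectra[labels_direct == 0]
within_cluster_pca = PCA(n_components=3).fit(cluster0)

# (c) Cluster on PCA scores of the whole data set.
scores = PCA(n_components=5).fit_transform(spectra)
labels_on_scores = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels_direct), np.bincount(labels_on_scores))
```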
NASA Technical Reports Server (NTRS)
Hopkins, D. A.
1984-01-01
A unique upward-integrated top-down-structured approach is presented for nonlinear analysis of high-temperature multilayered fiber composite structures. Based on this approach, a special purpose computer code was developed (nonlinear COBSTRAN) which is specifically tailored for the nonlinear analysis of tungsten-fiber-reinforced superalloy (TFRS) composite turbine blade/vane components of gas turbine engines. Special features of this computational capability include accounting for micro- and macro-heterogeneity, nonlinear (stress-temperature-time dependent) and anisotropic material behavior, and fiber degradation. A demonstration problem is presented to manifest the utility of the upward-integrated top-down-structured approach, in general, and to illustrate the present capability represented by the nonlinear COBSTRAN code. Preliminary results indicate that nonlinear COBSTRAN provides the means for relating the local nonlinear and anisotropic material behavior of the composite constituents to the global response of the turbine blade/vane structure.
NASA Astrophysics Data System (ADS)
Popov, V. N.; Botygin, I. A.; Kolochev, A. S.
2017-01-01
The approach allows representing data of international codes for exchange of meteorological information using metadescription as the formalism associated with certain categories of resources. Development of metadata components was based on an analysis of the data of surface meteorological observations, atmosphere vertical sounding, atmosphere wind sounding, weather radar observing, observations from satellites and others. A common set of metadata components was formed including classes, divisions and groups for a generalized description of the meteorological data. The structure and content of the main components of a generalized metadescription are presented in detail by the example of representation of meteorological observations from land and sea stations. The functional structure of a distributed computing system is described. It allows organizing the storage of large volumes of meteorological data for their further processing in the solution of problems of the analysis and forecasting of climatic processes.
Ma, Yibao; Zhao, Yong; Zhao, Ruiming; Zhang, Weiping; He, Yawen; Wu, Yingliang; Cao, Zhijian; Guo, Lin; Li, Wenxin
2010-07-01
Scorpion venoms contain a vast untapped reservoir of natural products, which have the potential for medicinal value in drug discovery. In this study, toxin components from the scorpion Heterometrus petersii venom were evaluated by transcriptome and proteome analysis. Ten known families of venom peptides and proteins were identified, which include: two families of potassium channel toxins, four families of antimicrobial and cytolytic peptides, and one family from each of the calcium channel toxins, La1-like peptides, phospholipase A2, and the serine proteases. In addition, we also identified 12 atypical families, which include the acid phosphatases, diuretic peptides, and ten orphan families. From the data presented here, the extreme diversity and convergence of toxic components in scorpion venom was uncovered. Our work demonstrates the power of combining transcriptomic and proteomic approaches in the study of animal venoms.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology of analysing the qualitative importance of components in the functional and reliability structures of the system. We present basic importance measures, i.e. the Birnbaum's structural measure, the order of the smallest minimal cut-set, the repetition count of an i-th event in the Fault Tree and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a Fault Tree which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
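A minimal sketch of Birnbaum's structural importance measure computed by enumerating component states; the series-parallel structure function below is a toy stand-in, not the fuel-supply subsystem analysed in the paper.

```python
# Birnbaum structural importance by enumeration of the other components' states.
import itertools

def structure(x):
    """Toy system: component 0 in series with a parallel pair (1, 2)."""
    return x[0] and (x[1] or x[2])

def birnbaum_structural(structure_fn, n_components):
    importance = []
    for i in range(n_components):
        critical = 0
        others = [j for j in range(n_components) if j != i]
        for rest in itertools.product([0, 1], repeat=n_components - 1):
            state = dict(zip(others, rest))
            state_up, state_dn = dict(state), dict(state)
            state_up[i], state_dn[i] = 1, 0
            vec_up = [state_up[j] for j in range(n_components)]
            vec_dn = [state_dn[j] for j in range(n_components)]
            critical += int(structure_fn(vec_up)) - int(structure_fn(vec_dn))
        importance.append(critical / 2 ** (n_components - 1))
    return importance

print(birnbaum_structural(structure, 3))   # component 0 is the most important
```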
Economic sustainability assessment in semi-steppe rangelands.
Mofidi Chelan, Morteza; Alijanpour, Ahmad; Barani, Hossein; Motamedi, Javad; Azadi, Hossein; Van Passel, Steven
2018-05-08
This study was conducted to determine indices and components of economic sustainability assessment in the pastoral units of the Sahand summer rangelands. The method was based on a descriptive-analytical survey of experts and researchers using questionnaires. Analysis of variance showed that the mean values of the economic components are significantly different from each other and that the efficiency component has the highest mean value (0.57). The analysis of rangeland pastoral units with the technique for order-preference by similarity to ideal solution (TOPSIS) indicated that, from an economic sustainability standpoint, the Garehgol (Ci = 0.519) and Badir Khan (Ci = 0.129) pastoral units ranked first and last, respectively. This study provides policy makers with a clear understanding of existing resources and opportunities, which is crucial for approaching economically sustainable development. Accordingly, this study can help better define sustainable development goals and monitor the progress of achieving them. Copyright © 2018 Elsevier B.V. All rights reserved.
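A minimal TOPSIS sketch showing how the closeness coefficient Ci is computed; the decision matrix, criterion weights, and benefit/cost directions are invented, not values from the study.

```python
# TOPSIS ranking sketch with an invented decision matrix.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: (alternatives, criteria); benefit[j] True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)         # vector normalisation
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                          # closeness coefficient Ci

scores = np.array([[0.6, 0.5, 0.7],      # unit A: efficiency, income, cost (made up)
                   [0.4, 0.7, 0.5],
                   [0.3, 0.2, 0.9]])
ci = topsis(scores, weights=np.array([0.5, 0.3, 0.2]), benefit=np.array([True, True, False]))
print(np.round(ci, 3), "ranking:", np.argsort(ci)[::-1])
```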
Govindan, R B; Kota, Srinivas; Al-Shargabi, Tareq; Massaro, An N; Chang, Taeun; du Plessis, Adre
2016-09-01
Electroencephalogram (EEG) signals are often contaminated by the electrocardiogram (ECG) interference, which affects quantitative characterization of EEG. We propose null-coherence, a frequency-based approach, to attenuate the ECG interference in EEG using simultaneously recorded ECG as a reference signal. After validating the proposed approach using numerically simulated data, we apply this approach to EEG recorded from six newborns receiving therapeutic hypothermia for neonatal encephalopathy. We compare our approach with an independent component analysis (ICA), a previously proposed approach to attenuate ECG artifacts in the EEG signal. The power spectrum and the cortico-cortical connectivity of the ECG attenuated EEG was compared against the power spectrum and the cortico-cortical connectivity of the raw EEG. The null-coherence approach attenuated the ECG contamination without leaving any residual of the ECG in the EEG. We show that the null-coherence approach performs better than ICA in attenuating the ECG contamination without enhancing cortico-cortical connectivity. Our analysis suggests that using ICA to remove ECG contamination from the EEG suffers from redistribution problems, whereas the null-coherence approach does not. We show that both the null-coherence and ICA approaches attenuate the ECG contamination. However, the EEG obtained after ICA cleaning displayed higher cortico-cortical connectivity compared with that obtained using the null-coherence approach. This suggests that null-coherence is superior to ICA in attenuating the ECG interference in EEG for cortico-cortical connectivity analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
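An illustrative sketch in the spirit of the reference-based frequency-domain attenuation described above: a per-frequency transfer function from the ECG reference to the EEG is estimated from averaged cross-spectra and the ECG-coherent part is subtracted. This is not the authors' exact null-coherence algorithm, and the signals are simulated.

```python
# Frequency-domain attenuation of a reference (ECG) signal from EEG.
import numpy as np

def attenuate_reference(eeg, ecg, seg_len=1024):
    n_seg = len(eeg) // seg_len
    eeg_f = np.fft.rfft(eeg[:n_seg * seg_len].reshape(n_seg, seg_len), axis=1)
    ecg_f = np.fft.rfft(ecg[:n_seg * seg_len].reshape(n_seg, seg_len), axis=1)
    # Least-squares transfer function H(f) = <ECG* x EEG> / <|ECG|^2>.
    H = (np.conj(ecg_f) * eeg_f).mean(axis=0) / (np.abs(ecg_f) ** 2).mean(axis=0)
    cleaned = eeg_f - H * ecg_f                        # remove the ECG-coherent part
    return np.fft.irfft(cleaned, n=seg_len, axis=1).ravel()

fs = 256
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(10)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21                # crude QRS-like reference
eeg = rng.normal(size=t.size) + 0.5 * ecg              # EEG contaminated by ECG
cleaned = attenuate_reference(eeg, ecg)
print(np.corrcoef(eeg[:cleaned.size], ecg[:cleaned.size])[0, 1],
      np.corrcoef(cleaned, ecg[:cleaned.size])[0, 1])
```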
Network Analysis Reveals Putative Genes Affecting Meat Quality in Angus Cattle.
Mateescu, Raluca G; Garrick, Dorian J; Reecy, James M
2017-01-01
Improvements in eating satisfaction will benefit consumers and should increase beef demand which is of interest to the beef industry. Tenderness, juiciness, and flavor are major determinants of the palatability of beef and are often used to reflect eating satisfaction. Carcass qualities are used as indicator traits for meat quality, with higher quality grade carcasses expected to relate to more tender and palatable meat. However, meat quality is a complex concept determined by many component traits making interpretation of genome-wide association studies (GWAS) on any one component challenging to interpret. Recent approaches combining traditional GWAS with gene network interactions theory could be more efficient in dissecting the genetic architecture of complex traits. Phenotypic measures of 23 traits reflecting carcass characteristics, components of meat quality, along with mineral and peptide concentrations were used along with Illumina 54k bovine SNP genotypes to derive an annotated gene network associated with meat quality in 2,110 Angus beef cattle. The efficient mixed model association (EMMAX) approach in combination with a genomic relationship matrix was used to directly estimate the associations between 54k SNP genotypes and each of the 23 component traits. Genomic correlated regions were identified by partial correlations which were further used along with an information theory algorithm to derive gene network clusters. Correlated SNP across 23 component traits were subjected to network scoring and visualization software to identify significant SNP. Significant pathways implicated in the meat quality complex through GO term enrichment analysis included angiogenesis, inflammation, transmembrane transporter activity, and receptor activity. These results suggest that network analysis using partial correlations and annotation of significant SNP can reveal the genetic architecture of complex traits and provide novel information regarding biological mechanisms and genes that lead to complex phenotypes, like meat quality, and the nutritional and healthfulness value of beef. Improvements in genome annotation and knowledge of gene function will contribute to more comprehensive analyses that will advance our ability to dissect the complex architecture of complex traits.
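A hedged sketch of the partial-correlation step mentioned above: pairwise partial correlations given all other traits are obtained from the inverse covariance (precision) matrix. Random data stand in for the 23 component traits, and the 0.05 threshold is arbitrary.

```python
# Partial correlations from the precision matrix of the trait data.
import numpy as np

def partial_correlations(data):
    """data: (samples, variables); returns the matrix of pairwise partial correlations."""
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(11)
traits = rng.normal(size=(2110, 23))        # placeholder for the 23 component traits
pcor = partial_correlations(traits)
strong_pairs = np.argwhere(np.triu(np.abs(pcor) > 0.05, k=1))
print(pcor.shape, len(strong_pairs))
```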
NASA Astrophysics Data System (ADS)
Ketcham, Richard A.
2017-04-01
Anisotropy in three-dimensional quantities such as geometric shape and orientation is commonly quantified using principal components analysis, in which a second-order tensor determines the orientations of orthogonal components and their relative magnitudes. This approach has many advantages, such as simplicity, the ability to accommodate many forms of data, and resilience to data sparsity. However, when data are sufficiently plentiful and precise, they sometimes show that aspects of the principal components approach are oversimplifications that may affect how the data are interpreted or extrapolated for mathematical or physical modeling. High-resolution X-ray computed tomography (CT) can effectively extract thousands of measurements from a single sample, providing a data density sufficient to examine how anisotropy on the hand-sample scale and smaller can be quantified, and the extent to which the simplifications applied to the data remain faithful to the underlying distributions. Features within CT data can be considered as discrete objects or continuum fabrics; the latter can be characterized using a variety of metrics, such as the most commonly used mean intercept length, as well as the more specialized star length and star volume distributions. Each method posits a different scaling among components that affects the measured degree of anisotropy. The star volume distribution is the most sensitive to anisotropy, and commonly distinguishes strong fabric components that are not orthogonal. Although these data are well presented using a stereoplot, 3D rose diagrams are another visualization option that can often help identify these components. This talk presents examples from a number of cases, starting with trabecular bone and extending to geological features such as fractures and brittle and ductile fabrics, in which non-orthogonal principal components identified using CT provide insight into the origin of the underlying structures and how they should be interpreted and potentially up-scaled.
Kann, Birthe; Windbergs, Maike
2013-04-01
Confocal Raman microscopy is an analytical technique with a steadily increasing impact in the field of pharmaceutics, as the instrumental setup allows for nondestructive visualization of component distribution within drug delivery systems. Here, attention is mainly focused on classic solid carrier systems like tablets, pellets, or extrudates. Due to the opacity of these systems, Raman analysis is restricted either to exterior surfaces or to cross sections. As Raman spectra are only recorded from one focal plane at a time, the sample is usually altered to create a smooth and even surface. However, this manipulation can lead to misinterpretation of the analytical results. Here, we present a trendsetting approach to overcome these analytical pitfalls with a combination of confocal Raman microscopy and optical profilometry. By acquiring a topography profile of the sample area of interest prior to Raman spectroscopy, the profile height information allowed the focal plane to be leveled to the sample surface for each spectrum acquisition. We first demonstrated the basic principle of this complementary approach in a case study using a tilted silica wafer. In a second step, we successfully adapted the two techniques to investigate an extrudate and a lyophilisate as two exemplary solid drug carrier systems. Component distribution analysis with the novel analytical approach was hampered neither by the curvature of the cylindrical extrudate nor by the highly structured surface of the lyophilisate. Therefore, the combined analytical approach bears great potential to be implemented in diversified fields of pharmaceutical sciences.
Hemmateenejad, Bahram; Yazdani, Mahdieh
2009-02-16
Steroids are widely distributed in nature and are found in abundance in plants, animals, and fungi. A data set consisting of a diverse set of steroids was used to develop quantitative structure-electrochemistry relationship (QSER) models for their half-wave reduction potential. Modeling was established by means of multiple linear regression (MLR) and principal component regression (PCR) analyses. In the MLR analysis, models were constructed by first grouping descriptors and then applying stepwise selection of variables from each group (MLR1), and by stepwise selection of predictor variables from the pool of all calculated descriptors (MLR2). A similar procedure was used in the PCR analysis, so that the principal components (or features) were extracted from the different groups of descriptors (PCR1) and from the entire set of descriptors (PCR2). The resulting models were evaluated using cross-validation, chance correlation, prediction of the reduction potential of test samples, and assessment of the applicability domain. Both MLR approaches produced accurate results; however, the model found by MLR1 was statistically more significant. The PCR1 approach produced a model as accurate as the MLR approaches, whereas less accurate results were obtained with the PCR2 approach. Overall, the correlation coefficients of cross-validation and prediction for the models resulting from the MLR1, MLR2, and PCR1 approaches were higher than 90%, which shows the strong ability of the models to predict the reduction potential of the studied steroids.
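For readers unfamiliar with principal component regression, the following sketch shows the generic PCR workflow (standardize descriptors, project onto a few principal components, regress, and cross-validate). The descriptor matrix, response values, and number of retained components are placeholders, not the paper's data or settings.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 200))    # 60 compounds x 200 calculated descriptors
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(60)   # simulated response

# PCR pipeline: scale descriptors, keep 10 principal components, then regress.
pcr = make_pipeline(StandardScaler(), PCA(n_components=10), LinearRegression())
q2 = cross_val_score(pcr, X, y, cv=5, scoring="r2")   # cross-validated R^2
print("mean cross-validated R^2:", q2.mean())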
Inter-subject phase synchronization for exploratory analysis of task-fMRI.
Bolt, Taylor; Nomi, Jason S; Vij, Shruti G; Chang, Catie; Uddin, Lucina Q
2018-08-01
Analysis of task-based fMRI data is conventionally carried out using a hypothesis-driven approach, where blood-oxygen-level dependent (BOLD) time courses are correlated with a hypothesized temporal structure. In some experimental designs, this temporal structure can be difficult to define. In other cases, experimenters may wish to take a more exploratory, data-driven approach to detecting task-driven BOLD activity. In this study, we demonstrate the efficiency and power of an inter-subject synchronization approach for exploratory analysis of task-based fMRI data. Combining the tools of instantaneous phase synchronization and independent component analysis, we characterize whole-brain task-driven responses in terms of group-wise similarity in temporal signal dynamics of brain networks. We applied this framework to fMRI data collected during performance of a simple motor task and a social cognitive task. Analyses using an inter-subject phase synchronization approach revealed a large number of brain networks that dynamically synchronized to various features of the task, often not predicted by the hypothesized temporal structure of the task. We suggest that this methodological framework, along with readily available tools in the fMRI community, provides a powerful exploratory, data-driven approach for analysis of task-driven BOLD activity. Copyright © 2018 Elsevier Inc. All rights reserved.
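A minimal sketch of the core computation, assuming band-limited BOLD signals: the instantaneous phase is taken from the analytic (Hilbert-transformed) signal, and inter-subject phase synchronization is the magnitude of the mean unit phase vector across subjects at each time point. The sampling rate, band limits, and array shapes are illustrative assumptions.

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 0.5                                   # assumed sampling rate for TR = 2 s (Hz)
rng = np.random.default_rng(2)
bold = rng.standard_normal((20, 300))      # 20 subjects x 300 time points (one region)

# Band-pass the signals, then take the instantaneous phase of the analytic signal.
b, a = butter(3, [0.03, 0.07], btype="band", fs=fs)
filtered = filtfilt(b, a, bold, axis=1)
phase = np.angle(hilbert(filtered, axis=1))

# Inter-subject phase synchronization at each time point:
# magnitude of the mean unit phase vector across subjects (0 = none, 1 = perfect).
isps = np.abs(np.mean(np.exp(1j * phase), axis=0))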
Smoothing of the bivariate LOD score for non-normal quantitative traits.
Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John
2005-12-30
Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.
NASA Astrophysics Data System (ADS)
Bhushan, A.; Sharker, M. H.; Karimi, H. A.
2015-07-01
In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors into subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in this type of spatiotemporal data stream. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
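The two proposed methods are not detailed in the abstract; the sketch below shows only the generic ingredient they build on, incremental PCA over mini-batches with a reconstruction-error outlier score, using placeholder stream data and an arbitrary threshold.

import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(3)
stream = rng.standard_normal((5000, 12))        # 5000 readings x 12 sensors

ipca = IncrementalPCA(n_components=3, batch_size=200)
scores = []
for start in range(0, stream.shape[0], 200):
    batch = stream[start:start + 200]
    ipca.partial_fit(batch)                     # update the subspace incrementally
    recon = ipca.inverse_transform(ipca.transform(batch))
    scores.append(np.linalg.norm(batch - recon, axis=1))   # reconstruction error per sample

scores = np.concatenate(scores)
outliers = scores > scores.mean() + 3 * scores.std()       # simple threshold rule (assumed)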
Advanced Modeling Strategies for the Analysis of Tile-Reinforced Composite Armor
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Chen, Tzi-Kang
1999-01-01
A detailed investigation of the deformation mechanisms in tile-reinforced armored components was conducted to develop the most efficient modeling strategies for the structural analysis of large components of the Composite Armored Vehicle. The limitations of conventional finite elements with respect to the analysis of tile-reinforced structures were examined, and two complementary optimal modeling strategies were developed. These strategies are element layering and the use of a tile-adhesive superelement. Element layering is a technique that uses stacks of shear deformable shell elements to obtain the proper transverse shear distributions through the thickness of the laminate. The tile-adhesive superelement consists of a statically condensed substructure model designed to take advantage of periodicity in tile placement patterns to eliminate numerical redundancies in the analysis. Both approaches can be used simultaneously to create unusually efficient models that accurately predict the global response by incorporating the correct local deformation mechanisms.
A component-centered meta-analysis of family-based prevention programs for adolescent substance use.
Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien
2016-04-01
Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may enhance program efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Foch, Eric; Milner, Clare E
2014-01-03
Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls, using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for the frontal plane hip angle. Principal component one accounted for the overall magnitude of hip adduction, which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences between groups were detected in the remaining retained principal component scores. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
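A compact sketch of the waveform-PCA workflow described here, with simulated stance-phase curves standing in for the hip-adduction waveforms: retain the first three principal components and compare the component scores between groups with independent t-tests. Group sizes and waveform lengths are illustrative.

import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
waveforms = rng.standard_normal((40, 101))   # 40 runners x 101 points of stance phase
group = np.array([0] * 20 + [1] * 20)        # 0 = control, 1 = previous ITBS (placeholder labels)

pca = PCA(n_components=3)
scores = pca.fit_transform(waveforms)        # principal component scores per runner
print("variance explained by 3 PCs:", pca.explained_variance_ratio_.sum())

for k in range(3):
    t, p = ttest_ind(scores[group == 0, k], scores[group == 1, k])
    print(f"PC{k + 1}: t = {t:.2f}, p = {p:.3f}")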
EMPCA and Cluster Analysis of Quasar Spectra: Construction and Application to Simulated Spectra
NASA Astrophysics Data System (ADS)
Marrs, Adam; Leighly, Karen; Wagner, Cassidy; Macinnis, Francis
2017-01-01
Quasars have complex spectra with emission lines influenced by many factors. Therefore, fully describing the spectrum requires specification of a large number of parameters, such as line equivalent width, blueshift, and ratios. Principal Component Analysis (PCA) aims to construct eigenvectors, or principal components, from the data with the goal of finding a few key parameters that can be used to predict the rest of the spectrum fairly well. Analysis of simulated quasar spectra was used to verify and justify our modified application of PCA. We used a variant of PCA called Weighted Expectation Maximization PCA (EMPCA; Bailey 2012) along with k-means cluster analysis to analyze simulated quasar spectra. Our approach combines both analytical methods to address two known problems with classical PCA. EMPCA uses weights to account for uncertainty and missing points in the spectra. K-means groups similar spectra together to address the nonlinearity of quasar spectra, specifically variance in the blueshifts and widths of the emission lines. In producing and analyzing simulations, we first tested the effects of varying equivalent widths and blueshifts on the derived principal components, and explored the differences between standard PCA and EMPCA. We also tested the effects of varying signal-to-noise ratio. Next, we used the results of fits to composite quasar spectra (see accompanying poster by Wagner et al.) to construct a set of realistic simulated spectra, and subjected those spectra to the EMPCA/k-means analysis. We concluded that our approach was validated when we found that the mean spectra from our k-means clusters, derived from PCA projection coefficients, reproduced the trends observed in the composite spectra. Furthermore, our method needed only two eigenvectors to identify both sets of correlations used to construct the simulations, as well as indicating the linear and nonlinear segments. Comparing this to regular PCA, which can require a dozen or more components, or to direct spectral analysis that may need measurement of 20 fit parameters, shows why the dual application of these two techniques is such a powerful tool.
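The following sketch illustrates the projection-plus-clustering idea on simulated spectra; ordinary PCA stands in for weighted EMPCA (so per-pixel uncertainties are not propagated), and the numbers of components, clusters, and pixels are arbitrary assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
spectra = rng.standard_normal((500, 1000))        # 500 simulated spectra x 1000 pixels

# Project onto two principal components, cluster the projection coefficients,
# then inspect the mean spectrum of each cluster.
coeffs = PCA(n_components=2).fit_transform(spectra)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coeffs)
cluster_means = np.array([spectra[labels == k].mean(axis=0) for k in range(4)])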
A semi-supervised classification algorithm using the TAD-derived background as training data
NASA Astrophysics Data System (ADS)
Fan, Lei; Ambeau, Brittany; Messinger, David W.
2013-05-01
In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT, area and the University of Pavia scene.
Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem
NASA Technical Reports Server (NTRS)
Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex
NASA Technical Reports Server (NTRS)
Shelhamer, M.
2001-01-01
It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.
Analysis of spectrally resolved autofluorescence images by support vector machines
NASA Astrophysics Data System (ADS)
Mateasik, A.; Chorvat, D.; Chorvatova, A.
2013-02-01
Spectral analysis of autofluorescence images of isolated cardiac cells was performed to evaluate and classify the metabolic state of the cells with respect to their responses to metabolic modulators. The classification was done using a machine learning approach based on a support vector machine, with a set of features calculated automatically from the recorded spectral profiles of the autofluorescence images. This classification method was compared with the classical approach, in which the individual spectral components contributing to cell autofluorescence are estimated by spectral analysis, namely by blind source separation using non-negative matrix factorization. Comparison of the two methods showed that machine learning can effectively classify the spectrally resolved autofluorescence images without the need for detailed knowledge about the sources of autofluorescence and their spectral properties.
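As a generic illustration of the machine-learning route, the sketch below cross-validates a support vector machine on per-cell spectral profiles; the simulated spectra, labels, and kernel settings are placeholders rather than the study's features or data.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
spectra = rng.standard_normal((120, 64))        # 120 cells x 64 spectral channels
state = rng.integers(0, 2, 120)                 # metabolic state label per cell (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, spectra, state, cv=5)
print("cross-validated accuracy:", acc.mean())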
Linkage Analysis of Urine Arsenic Species Patterns in the Strong Heart Family Study
Gribble, Matthew O.; Voruganti, Venkata Saroja; Cole, Shelley A.; Haack, Karin; Balakrishnan, Poojitha; Laston, Sandra L.; Tellez-Plaza, Maria; Francesconi, Kevin A.; Goessler, Walter; Umans, Jason G.; Thomas, Duncan C.; Gilliland, Frank; North, Kari E.; Franceschini, Nora; Navas-Acien, Ana
2015-01-01
Arsenic toxicokinetics are important for disease risks in exposed populations, but their genetic determinants are not fully understood. We examined urine arsenic species patterns measured by HPLC-ICPMS among 2189 Strong Heart Study participants 18 years of age and older with data on ∼400 genome-wide microsatellite markers spaced ∼10 cM apart and arsenic speciation (683 participants from Arizona, 684 from Oklahoma, and 822 from North and South Dakota). We logit-transformed % arsenic species (% inorganic arsenic, %MMA, and %DMA) and also conducted principal component analyses of the logit % arsenic species. We used inverse-normalized residuals from multivariable-adjusted polygenic heritability analysis for multipoint variance components linkage analysis. We also examined the contribution of polymorphisms in the arsenic metabolism gene AS3MT via conditional linkage analysis. We localized a quantitative trait locus (QTL) on chromosome 10 (LOD 4.12 for %MMA, 4.65 for %DMA, and 4.84 for the first principal component of logit % arsenic species). This peak was partially but not fully explained by measured AS3MT variants. We also localized a QTL for the second principal component of logit % arsenic species on chromosome 5 (LOD 4.21) that was not evident from considering the % arsenic species individually. Some other loci were suggestive or significant for one geographical area but not overall across all areas, indicating possible locus heterogeneity. This genome-wide linkage scan suggests genetic determinants of arsenic toxicokinetics that could be identified by future fine-mapping, and illustrates the utility of principal component analysis as a novel approach that considers % arsenic species jointly. PMID:26209557
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in each of two different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths, the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially, and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
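A simplified sketch of a percentile bootstrap for the indirect effect in this design, assuming difference scores in M and Y: path a is the mean mediator difference and path b is the slope of the Y difference on the centered M difference. This omits the centered mediator-sum term of the full path-analytic model, so it illustrates the bootstrap logic rather than the authors' macros.

import numpy as np

rng = np.random.default_rng(6)
n = 100
m_diff = rng.standard_normal(n)                  # M(condition 2) - M(condition 1), simulated
y_diff = 0.5 * m_diff + rng.standard_normal(n)   # Y(condition 2) - Y(condition 1), simulated

def indirect(idx):
    m, y = m_diff[idx], y_diff[idx]
    a = m.mean()                                            # path a: condition -> M
    X = np.column_stack([np.ones(n), m - m.mean()])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]             # path b: M difference -> Y difference
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
ci = np.percentile(boot, [2.5, 97.5])                       # percentile bootstrap CI
print("95% bootstrap CI for the indirect effect:", ci)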
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation, regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
Malaquias, José B; Ramalho, Francisco S; Dos S Dias, Carlos T; Brugger, Bruno P; S Lira, Aline Cristina; Wilcken, Carlos F; Pachú, Jéssica K S; Zanuncio, José C
2017-02-09
The relationship between pests and natural enemies on cotton at different row spacings has not yet been documented using multivariate analysis. Multivariate approaches make it possible to optimize strategies for controlling Aphis gossypii at different crop spacings, through better use of aphid sampling strategies as well as the conservation and release of its natural enemies. The aims of the study were (i) to characterize the temporal abundance data of aphids and their natural enemies using principal components, (ii) to analyze the degree of correlation between the insects and between groups of variables (pests and natural enemies), (iii) to identify the main natural enemies responsible for regulating A. gossypii populations, and (iv) to investigate the similarities in arthropod occurrence patterns at different spacings of cotton crops over two seasons. High correlations between the occurrence of Scymnus rubicundus and aphids are shown through principal component analysis and through the important role the species plays in canonical correlation analysis. Clustering of the presence of apterous aphids matches the pattern observed for Chrysoperla externa at the three different spacings between rows. Our results indicate that S. rubicundus is the main candidate to regulate the aphid populations at all spacings studied.
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks of discrete-time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. The directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction, which suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
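A small sketch of the truncated-SVD deconvolution step, assuming a plane-wave steering matrix G relating source signals s to microphone pressures p = G s; the array size, number of plane waves, and truncation rank are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(8)
G = rng.standard_normal((24, 8)) + 1j * rng.standard_normal((24, 8))   # steering matrix: 24 mics, 8 plane waves
s_true = rng.standard_normal(8) + 1j * rng.standard_normal(8)          # plane-wave source signals
p = G @ s_true + 0.01 * (rng.standard_normal(24) + 1j * rng.standard_normal(24))   # noisy measured pressures

# Truncated-SVD pseudoinverse: keep only the k largest singular values to
# regularize the deconvolution of the source signals.
U, sv, Vh = np.linalg.svd(G, full_matrices=False)
k = 6                                                   # truncation rank (assumed)
s_hat = Vh[:k].conj().T @ ((U[:, :k].conj().T @ p) / sv[:k])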
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1992-01-01
The mobility power flow approach that was previously applied in the derivation of expressions for the vibrational power flow between coupled plate substructures forming an L configuration and subjected to mechanical loading is generalized. Using the generalized expressions, both point and distributed mechanical loads on one or both of the plates can be considered. The generalized approach is extended to deal with acoustic excitation of one of the plate substructures. In this case, the forces (acoustic pressures) acting on the structure are dependent on the response of the structure because of the scattered pressure component. The interaction between the plate structure and the acoustic fluid leads to the derivation of a corrected mode shape for the plates' normal surface velocity and also for the structure mobility functions. The determination of the scattered pressure components in the expressions for the power flow represents an additional component in the power flow balance for the source plate and the receiver plate. This component represents the radiated acoustical power from the plate structure. For a number of coupled plate substructures, the acoustic pressure generated by one substructure will interact with the motion of another substructure. That is, in the case of the L-shaped plate, acoustic interaction exists between the two plate substructures due to the generation of the acoustic waves by each of the substructures. An approach to deal with this phenomenon is described.
NASA Technical Reports Server (NTRS)
McClung, R. Craig; Lee, Yi-Der; Cardinal, Joseph W.; Guo, Yajun
2012-01-01
The elastic stress intensity factor (SIF, commonly denoted as K) is the foundation of practical fracture mechanics (FM) analysis for aircraft structures. This single parameter describes the first-order effects of stress magnitude and distribution as well as the geometry of both structure/component and crack. Hence, the calculation of K is often the most significant step in fatigue analysis based on FM. This presentation will provide several reflections on the current state-of-the-art in SIF solution methods used for practical aerospace applications, including a brief historical perspective, descriptions of some recent and ongoing advances, and comments on some remaining challenges. Newman and Raju made significant early contributions to practical structural analysis by developing closed-form SIF equations for surface and corner cracks in simplified geometries, often based on empirical fits of finite element (FE) solutions. Those solutions (and others like them) were sometimes revised as new analyses were conducted or limitations discovered. The foundational solutions have exhibited striking longevity, despite the relatively "coarse" FE models employed many decades ago. However, in recent years, the accumulation of different generations of solutions for the same nominal geometry has led to some confusion (which solution is correct?), and steady increases in computational capabilities have facilitated the discovery of inaccuracies in some (not all!) of the legacy solutions. Some examples of problems and solutions are presented and discussed, including the challenge of maintaining consistency with legacy design applications. As computational power has increased, the prospect of calculating large numbers of SIF solutions for specific complex geometries with advanced numerical methods has grown more attractive. Fawaz and Andersson, for example, have been generating literally millions of new SIF solutions for different combinations of multiple cracks under simplified loading schemes using p-version FE methods. These data are invaluable, but questions remain about their practical use, because the tabular databases of key results needed to support practical life analysis can occupy gigabytes of storage for only a few classes of geometries. The prospect of using such advanced numerical methods to calculate in real time only those K solutions actually needed to support a specific crack growth analysis is also tempting, but the stark reality is that the computational cost is still so high that the approach is not practical except for specific, critical application problems. Some thoughts are offered about alternative paradigms. Compounding approaches are some of the earliest building blocks of SIF development for more complex geometries. These approaches are especially attractive because of their very low computational cost and their conceptual robustness; they are, in some ways, an intriguing contrast and complement to the brute-force numerical methods. In recent years, researchers at NRC-Canada have published remarkable results showing how compounding approaches can be used to generate accurate solutions for very difficult problems. Examples are provided of some successes--and some limitations--using this approach. These closed-form, tabulated numerical, and compounding approaches have typically been used for simple remote loading with simple load paths to the crack. However, many significant cracks occur in complex stress gradient fields. 
This is a job for weight function (WF) methods, where the arbitrary stress distribution on the crack plane in the corresponding uncracked body (typically determined using FE methods) is used to determine K. Several significant recent advances in WF methods and solutions are highlighted here. Fueled by advanced 3D numerical methods, many new solutions have been generated for classic geometries such as surface and corner cracks with wide ranges of geometrical validity. A new WF formulation has also been developed for part-through cracks considering the arbitrary stress gradients in all directions in the crack plane (so-called bivariant solutions). Basic WF methods have recently been combined with analytical expressions for crack plane stresses to develop a large family of accurate SIF solutions for corner, surface, and through cracks at internal or external notches with very wide ranges of shapes, sizes, acuities, and offsets. Finally, WF solutions are much faster than FE or boundary element solutions, but can still be much slower than simple closed-form solutions, especially for bivariant solutions that can require 2D numerical integration. Novel pre-integration and dynamic tabular methods have been developed that substantially increase the speed of these advanced WF solutions. The practical utility of advanced SIF methods, including both WF and direct numerical methods, is greatly enhanced if the FM life analysis can be directly and efficiently linked with digital models of the actual structure or component (e.g., FE models for stress analysis). Two recent advances of this type will be described. One approach directly interfaces the FM life analysis with the FE model of the uncracked component (including stress results). Through a powerful graphical user interface, simplified FM life models can be constructed (and visualized) directly on the component model, with the computer collecting the geometry and stress gradient information needed for the life calculation. An even more powerful paradigm uses expert logic to automatically build an optimum simple fracture model at any and every desired location in the component model, perform the life calculation, and even generate fatigue crack growth life contour maps, all with minimal user intervention. This paradigm has also been extended to the automatic calculation of fracture risk, considering uncertainty or variability in key input parameters such as initial crack size or location. Another new integrated approach links the engineering life analysis, the component model, and a 3D numerical fracture analysis built with the same component model to generate a table of SIF values at a specific location that can then be employed efficiently to perform the life calculation. Some attention must be given to verification and validation (V&V) issues and challenges: how good are these SIF solutions, how good is good enough, and does anyone believe the life answer? It is important to think critically about the different sources of error or uncertainty and to perform V&V in a hierarchical, building-block manner. Some accuracy issues for SIF solutions, for example, may actually involve independent material behavior issues, such as constraint loss effects for crack fronts near component surfaces, and can be a source of confusion. Recommendations are proposed for improved V&V approaches.
This presentation will briefly but critically survey the range of issues and advances mentioned above, with a particular view towards assembling an integrated approach that combines different methods to create practical tools for real-world design and analysis problems. Examples will be selectively drawn from the recent literature, from recent enhancements in the NASGRO and DARWIN computer codes, and from previously unpublished research.
Prades, Joan; Ferro, Tàrsila; Gil, Francisco; Borras, Josep M
2014-10-01
This study sought to assess the impact of health care professional (HCP) communication on breast cancer patients across the acute care process, as perceived by patients. The methodological approach was based on eight focus groups conducted with a sample of patients (n = 37) drawn from 15 Spanish regions; thematic analysis was undertaken using the National Cancer Institute (NCI) framework of HCP communication as the theoretical basis. A relevant result of this study was the identification of four main communication components: (1) reassurance in coping with uncertainty after symptom detection and prompt access until confirmed diagnosis; (2) fostering involvement before delivering treatments, by anticipating information on practical and emotional illness-related issues; (3) guidance on the different therapeutic options, through use of clinical scenarios; and (4) eliciting the feeling of emotional exhaustion after ending treatments and addressing the management of potential treatment-related effects. These communication-related components highlighted the need for a comprehensive approach in this area of cancer care. Copyright © 2014 Elsevier Ltd. All rights reserved.
Orlando, Ron
2010-01-01
The ability to quantitatively determine changes is an essential component of comparative glycomics. Multiple strategies are available by which this can be accomplished. These include label-free approaches and strategies where an isotopic label is incorporated into the glycans prior to analysis. The focus of this chapter is to describe each of these approaches while providing insight into their strengths and weaknesses, so that glycomic investigators can make an educated choice of the strategy that is best suited for their particular application.
Quantitative analysis of glycoprotein glycans.
Orlando, Ron
2013-01-01
The ability to quantitatively determine changes in the N- and O-linked glycans is an essential component of comparative glycomics. Multiple strategies are available by which this can be accomplished, including both label-free approaches and isotopic labeling strategies. The focus of this chapter is to describe each of these approaches while providing insight into their strengths and weaknesses, so that glycomic investigators can make an educated choice of the strategy that is best suited for their particular application.
Equilibrium Phase Behavior of the Square-Well Linear Microphase-Forming Model.
Zhuang, Yuan; Charbonneau, Patrick
2016-07-07
We have recently developed a simulation approach to calculate the equilibrium phase diagram of particle-based microphase formers. Here, this approach is used to calculate the phase behavior of the square-well linear model for different strengths and ranges of the linear long-range repulsive component. The results are compared with various theoretical predictions for microphase formation. The analysis further allows us to better understand the mechanism for microphase formation in colloidal suspensions.
Zhou, Jin J.; Cho, Michael H.; Lange, Christoph; Lutz, Sharon; Silverman, Edwin K.; Laird, Nan M.
2015-01-01
Many correlated disease variables are analyzed jointly in genetic studies in the hope of increasing power to detect causal genetic variants. One approach involves assessing the relationship between each phenotype and each single nucleotide polymorphism (SNP) individually and using a Bonferroni correction for the effective number of tests conducted. Alternatively, one can apply a multivariate regression or a dimension reduction technique, such as principal component analysis (PCA), and test for association with the principal components (PCs) of the phenotypes rather than the individual phenotypes. Inspired by previous approaches that combine phenotypes to maximize heritability at individual SNPs, in this paper we propose to construct a maximally heritable phenotype (MaxH) by taking advantage of the estimated total heritability and co-heritability. The heritability and co-heritability only need to be estimated once; therefore, our method is applicable to genome-wide scans. The MaxH phenotype is a linear combination of the individual phenotypes with increased heritability and power over the phenotypes being combined. Simulations show that the heritability and power achieved agree well with the theory for large samples and two phenotypes. We compare our approach with commonly used methods and assess both the heritability and the power of the MaxH phenotype. Moreover, we provide suggestions for how to choose the phenotypes for combination. An application of our approach to a COPD genome-wide association study demonstrates its practical relevance. PMID:26111731
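One plausible reading of the MaxH construction, sketched below under stated assumptions: given an estimated genetic covariance G and phenotypic covariance P, the linear combination maximizing heritability w'Gw / w'Pw is the leading generalized eigenvector. The toy matrices are placeholders, and this may differ from the paper's exact estimator.

import numpy as np
from scipy.linalg import eigh

P = np.array([[1.0, 0.3], [0.3, 1.0]])     # phenotypic covariance for 2 traits (placeholder)
G = np.array([[0.4, 0.25], [0.25, 0.5]])   # genetic covariance (heritabilities on the diagonal)

vals, vecs = eigh(G, P)                    # generalized eigenproblem G w = lambda P w
w = vecs[:, -1]                            # weights of the maximally heritable combination
h2_max = vals[-1]                          # heritability achieved by that combination
print("MaxH weights:", w, "heritability:", h2_max)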
Tensorial extensions of independent component analysis for multisubject FMRI analysis.
Beckmann, C F; Smith, S M
2005-03-01
We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundstrom, J.; Tash, B.; Murakami, T.
2009-01-01
The molecular function of occludin, an integral membrane component of tight junctions, remains unclear. VEGF-induced phosphorylation sites were mapped on occludin by combining MS data analysis with bioinformatics. In vivo phosphorylation of Ser490 was validated and protein interaction studies combined with crystal structure analysis suggest that Ser490 phosphorylation attenuates the interaction between occludin and ZO-1. This study demonstrates that combining MS data and bioinformatics can successfully identify novel phosphorylation sites from limiting samples.
Candida parapsilosis biofilm identification by Raman spectroscopy.
Samek, Ota; Mlynariková, Katarina; Bernatová, Silvie; Ježek, Jan; Krzyžánek, Vladislav; Šiler, Martin; Zemánek, Pavel; Růžička, Filip; Holá, Veronika; Mahelová, Martina
2014-12-22
Colonies of Candida parapsilosis on culture plates were probed directly in situ using Raman spectroscopy for rapid identification of specific strains separated by given time intervals (up to months apart). To classify the Raman spectra, data analysis was performed using principal component analysis (PCA). The analysis of the data sets generated during the scans of individual colonies reveals that, despite the inhomogeneity of the biological samples, unambiguous associations to individual strains (two biofilm-positive and two biofilm-negative) could be made.
A model of objective weighting for EIA.
Ying, L G; Liu, Y C
1995-06-01
In spite of progress achieved in research on environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not yet been properly solved. This paper presents an approach to objective weighting using a procedure of P_ij principal component-factor analysis (P_ij PCFA), which is specifically suited to parameters measured directly on physical scales. The P_ij PCFA weighting procedure reforms conventional weighting practice in two respects: first, subjective expert judgment is replaced by the standardized measure P_ij as the original input to weight processing; second, principal component-factor analysis is introduced to evaluate the environmental parameters for their respective contributions to the totality of the regional ecosystem. The P_ij PCFA weighting is not only logical in its theoretical reasoning but also suits practically all levels of professional routine in natural environmental assessment and impact analysis. Having demonstrated objectivity and accuracy in an EIA case study of Chuansha County in Shanghai, China, the P_ij PCFA weighting procedure has the potential to be applied in other geographical fields that need to assign weights to parameters measured on physical scales.
Osis, Sean T; Hettinga, Blayne A; Ferber, Reed
2016-05-01
An ongoing challenge in the application of gait analysis to clinical settings is the standardized detection of temporal events, with unobtrusive and cost-effective equipment, for a wide range of gait types. The purpose of the current study was to investigate a targeted machine learning approach for the prediction of timing for foot strike (or initial contact) and toe-off, using only kinematics, for walking, forefoot running, and heel-toe running. Data were categorized by gait type and split into a training set (∼30%) and a validation set (∼70%). A principal component analysis was performed, and separate linear models were trained and validated for foot strike and toe-off, using ground reaction force data as a gold standard for event timing. Results indicate that the model predicted both foot strike and toe-off timing to within 20 ms of the gold standard for more than 95% of cases in walking and running gaits. The machine learning approach continues to provide robust timing predictions for clinical use, and may offer a flexible methodology to handle new events and gait types. Copyright © 2016 Elsevier B.V. All rights reserved.
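A hedged sketch of the targeted approach described here: reduce flattened kinematic curves with PCA, fit a linear model to predict event time against a force-derived gold standard, and check the 20 ms criterion. The feature layout, component count, and split proportions are illustrative assumptions.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.standard_normal((300, 606))           # 300 strides x flattened kinematic curves (placeholder)
t_foot_strike = rng.uniform(0.0, 0.05, 300)   # gold-standard foot strike times from force data (s)

# Roughly 30% of strides for training, 70% held out for validation.
X_tr, X_te, y_tr, y_te = train_test_split(X, t_foot_strike, test_size=0.7, random_state=0)
model = make_pipeline(PCA(n_components=20), LinearRegression()).fit(X_tr, y_tr)

err = np.abs(model.predict(X_te) - y_te)
print("fraction of validation strides within 20 ms:", np.mean(err < 0.020))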