Advanced Treatment Monitoring for Olympic-Level Athletes Using Unsupervised Modeling Techniques
Siedlik, Jacob A.; Bergeron, Charles; Cooper, Michael; Emmons, Russell; Moreau, William; Nabhan, Dustin; Gallagher, Philip; Vardiman, John P.
2016-01-01
Context Analysis of injury and illness data collected at large international competitions provides the US Olympic Committee and the national governing bodies for each sport with information to best prepare for future competitions. Research in which authors have evaluated medical contacts to provide the expected level of medical care and sports medicine services at international competitions is limited. Objective To analyze the medical-contact data for athletes, staff, and coaches who participated in the 2011 Pan American Games in Guadalajara, Mexico, using unsupervised modeling techniques to identify underlying treatment patterns. Design Descriptive epidemiology study. Setting Pan American Games. Patients or Other Participants A total of 618 US athletes (337 males, 281 females) participated in the 2011 Pan American Games. Main Outcome Measure(s) Medical data were recorded from the injury-evaluation and injury-treatment forms used by clinicians assigned to the central US Olympic Committee Sport Medicine Clinic and satellite locations during the operational 17-day period of the 2011 Pan American Games. We used principal components analysis and agglomerative clustering algorithms to identify and define grouped modalities. Lift statistics were calculated for within-cluster subgroups. Results Principal component analyses identified 3 components, accounting for 72.3% of the variability in datasets. Plots of the principal components showed that individual contacts focused on 4 treatment clusters: massage, paired manipulation and mobilization, soft tissue therapy, and general medical. Conclusions Unsupervised modeling techniques were useful for visualizing complex treatment data and provided insights for improved treatment modeling in athletes. Given its ability to detect clinically relevant treatment pairings in large datasets, unsupervised modeling should be considered a feasible option for future analyses of medical-contact data from international competitions. 
PMID:26794628
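The PCA step this abstract describes can be sketched in a few lines of NumPy. Everything below is a synthetic stand-in (hypothetical per-contact modality counts, not the study's medical-contact data); it only illustrates how contacts are projected onto a handful of components before clustering:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic medical-contact matrix: rows are contacts, columns are
# hypothetical treatment modalities (counts per contact).
n_contacts, n_modalities = 200, 8
X = rng.poisson(1.0, size=(n_contacts, n_modalities)).astype(float)
# Inject one correlated modality pair (e.g. manipulation + mobilization)
# so PCA has structure to find.
X[:100, 1] += X[:100, 2]

# PCA via SVD on the column-centred matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("variance explained by first 3 PCs:", explained[:3].sum())
scores = Xc @ Vt[:3].T   # per-contact coordinates for plotting/clustering
print("score matrix shape:", scores.shape)
```

The `scores` matrix is what an agglomerative clustering step would then group into treatment clusters.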
Taguchi, Y-h; Iwadate, Mitsuo; Umeyama, Hideaki
2015-04-30
Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, which is usual in supervised FE, is generally harder than in the two-class problem. Developing sample-classification-independent unsupervised methods would solve many of these problems. Two principal component analysis (PCA)-based FE methods were tested as sample-classification-independent unsupervised FE methods: variational Bayes PCA (VBPCA), extended here to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs was identified that shows aberrant expression between treatment and control samples, and significant, negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE were also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods. Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data and outperformed two conventional supervised FE methods on a real data set. Thus, these two methods appear equally well suited to FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.
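A minimal sketch of the CPCA-based unsupervised FE idea, as read from this abstract: features (not samples) are embedded into PC space and those with outlying projections are selected. The matrix, the shift size, and the 3-sigma cutoff below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix: rows are features (genes), columns are samples.
n_genes, n_samples = 1000, 12
X = rng.normal(size=(n_genes, n_samples))
X[:20, 6:] += 4.0            # 20 genes shifted in the second sample group

# Centre each sample, then embed *features* into PC space via the
# left singular vectors; flag features with outlying PC1 scores.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = U[:, 0] * s[0]

z = (pc1 - pc1.mean()) / pc1.std()
selected = np.where(np.abs(z) > 3)[0]
print("number of selected features:", len(selected))
```

No sample labels were used anywhere, which is the sense in which the selection is sample-classification independent.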
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
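The paper's specific rule (presynaptic activity times an averaged postsynaptic activity) is not reproduced here; as a hedged illustration of the same claim, the classic Oja rule, another stabilized Hebbian scheme, is shown converging to the principal component of a stationary input sequence:

```python
import numpy as np

rng = np.random.default_rng(2)

# Inputs with a dominant principal direction.
C = np.array([[3.0, 1.0], [1.0, 1.0]])      # input covariance
L = np.linalg.cholesky(C)
xs = (L @ rng.normal(size=(2, 5000))).T

w = rng.normal(size=2)                       # synaptic weights
eta = 0.01
for x in xs:
    y = w @ x                                # postsynaptic activity
    w += eta * y * (x - y * w)               # Oja's rule: Hebb + normalization

# Compare the learned weights with the true principal eigenvector.
evals, evecs = np.linalg.eigh(C)
v = evecs[:, np.argmax(evals)]
cos = abs(w @ v) / (np.linalg.norm(w) * np.linalg.norm(v))
print("cosine with principal eigenvector:", round(cos, 3))
```

As in the abstract's model, no explicit weight-decay term is needed to keep the weights bounded; the stabilizing term falls out of the learning rule itself.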
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Wenjian; Singh, Rajiv R. P.; Scalettar, Richard T.
Here, we apply unsupervised machine learning techniques, mainly principal component analysis (PCA), to compare and contrast the phase behavior and phase transitions in several classical spin models - the square and triangular-lattice Ising models, the Blume-Capel model, a highly degenerate biquadratic-exchange spin-one Ising (BSI) model, and the 2D XY model, and examine critically what machine learning is teaching us. We find that quantified principal components from PCA not only allow exploration of different phases and symmetry-breaking, but can distinguish phase transition types and locate critical points. We show that the corresponding weight vectors have a clear physical interpretation, which is particularly interesting in the frustrated models such as the triangular antiferromagnet, where they can point to incipient orders. Unlike the other well-studied models, the properties of the BSI model are less well known. Using both PCA and conventional Monte Carlo analysis, we demonstrate that the BSI model shows an absence of phase transition and macroscopic ground-state degeneracy. The failure to capture the 'charge' correlations (vorticity) in the BSI model (XY model) from raw spin configurations points to some of the limitations of PCA. Finally, we employ a nonlinear unsupervised machine learning procedure, the 'autoencoder method', and demonstrate that it too can be trained to capture phase transitions and critical points.
NASA Astrophysics Data System (ADS)
Hu, Wenjian; Singh, Rajiv R. P.; Scalettar, Richard T.
2017-06-01
We apply unsupervised machine learning techniques, mainly principal component analysis (PCA), to compare and contrast the phase behavior and phase transitions in several classical spin models—the square- and triangular-lattice Ising models, the Blume-Capel model, a highly degenerate biquadratic-exchange spin-1 Ising (BSI) model, and the two-dimensional XY model—and we examine critically what machine learning is teaching us. We find that quantified principal components from PCA not only allow the exploration of different phases and symmetry-breaking, but they can distinguish phase-transition types and locate critical points. We show that the corresponding weight vectors have a clear physical interpretation, which is particularly interesting in the frustrated models such as the triangular antiferromagnet, where they can point to incipient orders. Unlike the other well-studied models, the properties of the BSI model are less well known. Using both PCA and conventional Monte Carlo analysis, we demonstrate that the BSI model shows an absence of phase transition and macroscopic ground-state degeneracy. The failure to capture the "charge" correlations (vorticity) in the BSI model (XY model) from raw spin configurations points to some of the limitations of PCA. Finally, we employ a nonlinear unsupervised machine learning procedure, the "autoencoder method," and we demonstrate that it too can be trained to capture phase transitions and critical points.
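A toy version of the spin-model experiment: PCA applied to raw spin configurations separates an ordered from a disordered phase, with the leading component playing the role of the magnetization. The configurations below are crude stand-ins for Monte Carlo samples, not actual Ising simulations:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_per_phase = 100, 80

# Ordered-phase proxy: mostly aligned spins with a 5% flip rate.
ordered = np.sign(rng.normal(size=(n_per_phase, 1))) * np.ones((1, n_sites))
flips = rng.random((n_per_phase, n_sites)) < 0.05
ordered = np.where(flips, -ordered, ordered)
# Disordered-phase proxy: independent random spins.
disordered = rng.choice([-1.0, 1.0], size=(n_per_phase, n_sites))

X = np.vstack([ordered, disordered])
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]

# |PC1| behaves like the magnetization: large when ordered, small otherwise.
print("mean |PC1|, ordered:   ", round(np.abs(pc1[:n_per_phase]).mean(), 2))
print("mean |PC1|, disordered:", round(np.abs(pc1[n_per_phase:]).mean(), 2))
```

The leading weight vector `Vt[0]` comes out nearly uniform over sites, which is the "clear physical interpretation" the abstract refers to for ferromagnetic order.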
Taguchi, Y-H
2016-05-10
MicroRNA (miRNA)-mRNA interactions are important for understanding many biological processes, including development, differentiation and disease progression, but their identification is highly context-dependent. When computationally derived from sequence information alone, the identification should be verified by integrated analyses of mRNA and miRNA expression. The drawback of this strategy is the vast number of identified interactions, which prevents an experimental or detailed investigation of each pair. In this paper, we overcome this difficulty using the recently proposed principal component analysis (PCA)-based unsupervised feature extraction (FE), which reduces the number of identified miRNA-mRNA interactions that properly discriminate between patients and healthy controls without losing biological feasibility. The approach is applied to six cancers: hepatocellular carcinoma, non-small cell lung cancer, esophageal squamous cell carcinoma, prostate cancer, colorectal/colon cancer and breast cancer. In PCA-based unsupervised FE, the significance does not depend on the number of samples (as in the standard case) but on the number of features, which approximates the number of miRNAs/mRNAs. To our knowledge, this is the first identification of miRNA-mRNA interactions in multiple cancers based on a single common (universal) criterion. Moreover, the number of identified interactions was sufficiently small to be sequentially curated by literature searches.
Tsakpinoglou, Florence; Poulin, François
2017-10-01
Best friends exert a substantial influence on rising alcohol and marijuana use during adolescence. Two mechanisms occurring within friendship - friend pressure and unsupervised co-deviancy - may partially capture the way friends influence one another. The current study aims to: (1) examine the psychometric properties of a new instrument designed to assess pressure from a youth's best friend and unsupervised co-deviancy; (2) investigate the relative contribution of these processes to alcohol and marijuana use; and (3) determine whether gender moderates these associations. Data were collected through self-report questionnaires completed by 294 Canadian youths (62% female) across two time points (ages 15-16). Principal component analysis yielded a two-factor solution corresponding to friend pressure and unsupervised co-deviancy. Logistic regressions subsequently showed that unsupervised co-deviancy was predictive of an increase in marijuana use one year later. Neither process predicted an increase in alcohol use. Results did not differ as a function of gender.
Spectral gene set enrichment (SGSE).
Frost, H Robert; Li, Zhigang; Moore, Jason H
2015-03-03
Gene set testing is typically performed in a supervised context to quantify the association between groups of genes and a clinical phenotype. In many cases, however, a gene set-based interpretation of genomic data is desired in the absence of a phenotype variable. Although methods exist for unsupervised gene set testing, they predominantly compute enrichment relative to clusters of the genomic variables with performance strongly dependent on the clustering algorithm and number of clusters. We propose a novel method, spectral gene set enrichment (SGSE), for unsupervised competitive testing of the association between gene sets and empirical data sources. SGSE first computes the statistical association between gene sets and principal components (PCs) using our principal component gene set enrichment (PCGSE) method. The overall statistical association between each gene set and the spectral structure of the data is then computed by combining the PC-level p-values using the weighted Z-method with weights set to the PC variance scaled by Tracy-Widom test p-values. Using simulated data, we show that the SGSE algorithm can accurately recover spectral features from noisy data. To illustrate the utility of our method on real data, we demonstrate the superior performance of the SGSE method relative to standard cluster-based techniques for testing the association between MSigDB gene sets and the variance structure of microarray gene expression data. Unsupervised gene set testing can provide important information about the biological signal held in high-dimensional genomic data sets. Because it uses the association between gene sets and sample PCs to generate a measure of unsupervised enrichment, the SGSE method is independent of cluster or network creation algorithms and, most importantly, is able to utilize the statistical significance of PC eigenvalues to ignore elements of the data most likely to represent noise.
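The combination step of SGSE, merging PC-level p-values with the weighted Z-method (Liptak's method), can be written compactly. The p-values and weights below are made up for illustration; in SGSE the weights would be PC variances scaled by Tracy-Widom p-values:

```python
import numpy as np
from math import erf, sqrt

def norm_sf(z):
    """Standard-normal survival function."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def norm_isf(p, lo=-40.0, hi=40.0):
    """Inverse survival function by bisection (avoids a SciPy dependency)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_sf(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def weighted_z(pvals, weights):
    """Weighted Z-method: combine one-sided p-values with given weights."""
    z = np.array([norm_isf(p) for p in pvals])
    w = np.asarray(weights, dtype=float)
    z_comb = (w @ z) / np.sqrt((w ** 2).sum())
    return norm_sf(z_comb)

# Hypothetical PC-level p-values for one gene set, with weights that
# decay like the PC variances.
pvals = [0.001, 0.20, 0.60]
weights = [0.50, 0.30, 0.05]
print("combined p-value:", weighted_z(pvals, weights))
```

Because the weights shrink for later PCs, a strong association with a low-variance (likely noise) component contributes little to the combined statistic, which is the behavior the abstract highlights.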
Differential principal component analysis of ChIP-seq.
Ji, Hongkai; Li, Xia; Wang, Qian-fei; Ning, Yang
2013-04-23
We propose differential principal component analysis (dPCA) for analyzing multiple ChIP-sequencing datasets to identify differential protein-DNA interactions between two biological conditions. dPCA integrates unsupervised pattern discovery, dimension reduction, and statistical inference into a single framework. It uses a small number of principal components to summarize concisely the major multiprotein synergistic differential patterns between the two conditions. For each pattern, it detects and prioritizes differential genomic loci by comparing the between-condition differences with the within-condition variation among replicate samples. dPCA provides a unique tool for efficiently analyzing large amounts of ChIP-sequencing data to study dynamic changes of gene regulation across different biological conditions. We demonstrate this approach through analyses of differential chromatin patterns at transcription factor binding sites and promoters as well as allele-specific protein-DNA interactions.
Taguchi, Y-H
2018-05-08
Even though coexistence of multiple phenotypes sharing the same genomic background is interesting, it remains incompletely understood. Epigenomic profiles may represent key factors, with unknown contributions to the development of multiple phenotypes, and social-insect castes are a good model for elucidation of the underlying mechanisms. Nonetheless, previous studies have failed to identify genes associated with aberrant gene expression and methylation profiles because of the lack of suitable methodology that can address this problem properly. A recently proposed principal component analysis (PCA)-based and tensor decomposition (TD)-based unsupervised feature extraction (FE) can solve this problem because these two approaches can deal with gene expression and methylation profiles even when a small number of samples is available. PCA-based and TD-based unsupervised FE methods were applied to the analysis of gene expression and methylation profiles in the brains of two social insects, Polistes canadensis and Dinoponera quadriceps. Genes associated with differential expression and methylation between castes were identified, and analysis of enrichment of Gene Ontology terms confirmed reliability of the obtained sets of genes from the biological standpoint. Biologically relevant genes, shown to be associated with significant differential gene expression and methylation between castes, were identified here for the first time. The identification of these genes may help understand the mechanisms underlying epigenetic control of development of multiple phenotypes under the same genomic conditions.
Genetic Classification of Populations Using Supervised Learning
Bridges, Michael; Heron, Elizabeth A.; O'Dushlaine, Colm; Segurado, Ricardo; Morris, Derek; Corvin, Aiden; Gill, Michael; Pinto, Carlos
2011-01-01
There are many instances in genetics in which we wish to determine whether two candidate populations are distinguishable on the basis of their genetic structure. Examples include populations which are geographically separated, case–control studies and quality control (when participants in a study have been genotyped at different laboratories). This latter application is of particular importance in the era of large scale genome wide association studies, when collections of individuals genotyped at different locations are being merged to provide increased power. The traditional method for detecting structure within a population is some form of exploratory technique such as principal components analysis. Such methods, which do not utilise our prior knowledge of the membership of the candidate populations, are termed unsupervised. Supervised methods, on the other hand, are able to utilise this prior knowledge when it is available. In this paper we demonstrate that in such cases modern supervised approaches are a more appropriate tool for detecting genetic differences between populations. We apply two such methods (neural networks and support vector machines) to the classification of three populations (two from Scotland and one from Bulgaria). The sensitivity exhibited by both these methods is considerably higher than that attained by principal components analysis and in fact comfortably exceeds a recently conjectured theoretical limit on the sensitivity of unsupervised methods. In particular, our methods can distinguish between the two Scottish populations, where principal components analysis cannot. We suggest, on the basis of our results, that a supervised learning approach should be the method of choice when classifying individuals into pre-defined populations, particularly in quality control for large scale genome wide association studies. PMID:21589856
Penalized unsupervised learning with outliers
Witten, Daniela M.
2013-01-01
We consider the problem of performing unsupervised learning in the presence of outliers – that is, observations that do not come from the same distribution as the rest of the data. It is known that in this setting, standard approaches for unsupervised learning can yield unsatisfactory results. For instance, in the presence of severe outliers, K-means clustering will often assign each outlier to its own cluster, or alternatively may yield distorted clusters in order to accommodate the outliers. In this paper, we take a new approach to extending existing unsupervised learning techniques to accommodate outliers. Our approach is an extension of a recent proposal for outlier detection in the regression setting. We allow each observation to take on an “error” term, and we penalize the errors using a group lasso penalty in order to encourage most of the observations’ errors to exactly equal zero. We show that this approach can be used in order to develop extensions of K-means clustering and principal components analysis that result in accurate outlier detection, as well as improved performance in the presence of outliers. These methods are illustrated in a simulation study and on two gene expression data sets, and connections with M-estimation are explored. PMID:23875057
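A simplified sketch of the idea in this abstract: each observation receives an error vector penalized by a group lasso, so most errors are exactly zero and the nonzero ones flag outliers. The alternating updates and the fixed initialization below are illustrative simplifications, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def outlier_kmeans(X, k, lam, init, n_iter=50):
    """K-means with group-lasso-penalized per-observation error terms:
    observations whose error vector is nonzero are flagged as outliers."""
    centers = init.astype(float).copy()
    E = np.zeros_like(X)
    for _ in range(n_iter):
        # Assign each "cleaned" observation X - E to its nearest centre.
        d2 = (((X - E)[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = (X - E)[labels == j].mean(0)
        # Group soft-thresholding: e_i = (1 - lam/||r_i||)_+ * r_i.
        R = X - centers[labels]
        norms = np.linalg.norm(R, axis=1, keepdims=True)
        E = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12)) * R
    return labels, E

# Two tight clusters plus two gross outliers at rows 100 and 101.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2)),
               [[20.0, 20.0], [-15.0, 8.0]]])
labels, E = outlier_kmeans(X, k=2, lam=3.0,
                           init=np.array([[0.0, 0.0], [5.0, 5.0]]))
flagged = np.where(np.linalg.norm(E, axis=1) > 0)[0]
print("rows flagged as outliers:", flagged.tolist())
```

The group soft-thresholding step is the proximal operator of the group-lasso penalty, which is why most error vectors come out exactly zero rather than merely small.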
Faster tissue interface analysis from Raman microscopy images using compressed factorisation
NASA Astrophysics Data System (ADS)
Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.
2013-06-01
The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme accelerates data-mining methods such as principal component analysis, as they can be performed on the compressed data representation, decreasing the factorisation time of a single image from five minutes to under a second. Using this workflow, the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between the bone and tissue that provides improved mechanical load tolerance; a similar interface was found in the ligament construct.
A novel unsupervised spike sorting algorithm for intracranial EEG.
Yadav, R; Shah, A K; Loeb, J A; Swamy, M N S; Agarwal, R
2011-01-01
This paper presents a novel, unsupervised spike classification algorithm for intracranial EEG. The method combines template matching and principal component analysis (PCA) for building a dynamic patient-specific codebook without a priori knowledge of the spike waveforms. The problem of misclassification due to overlapping classes is resolved by identifying similar classes in the codebook using hierarchical clustering. Cluster quality is visually assessed by projecting inter- and intra-clusters onto a 3D plot. Intracranial EEG from 5 patients was utilized to optimize the algorithm. The resulting codebook retains 82.1% of the detected spikes in non-overlapping and disjoint clusters. Initial results suggest a definite role of this method for both rapid review and quantitation of interictal spikes that could enhance both clinical treatment and research studies on epileptic patients.
An introduction to kernel-based learning algorithms.
Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B
2001-01-01
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
Unsupervised pattern recognition methods in ciders profiling based on GCE voltammetric signals.
Jakubowska, Małgorzata; Sordoń, Wanda; Ciepiela, Filip
2016-07-15
This work presents a complete methodology for distinguishing between different brands of cider and degrees of ageing, based on voltammetric signals, utilizing dedicated data preprocessing procedures and unsupervised multivariate analysis. It was demonstrated that voltammograms recorded on a glassy carbon electrode in Britton-Robinson buffer at pH 2 are reproducible for each brand. By application of clustering algorithms and principal component analysis, clearly visible, homogeneous clusters were obtained. An advanced signal processing strategy, which included automatic baseline correction, interval scaling and continuous wavelet transform with a dedicated mother wavelet, was a key step in the correct recognition of the objects. The results show that voltammetry combined with optimized univariate and multivariate data processing is a sufficient tool to distinguish between ciders from various brands and to evaluate their freshness.
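One common form of automatic baseline correction, iterative polynomial fitting with clamping, can be sketched as follows. The paper's exact preprocessing (and its wavelet step) may well differ; the voltammogram here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def polynomial_baseline(y, degree=3, n_iter=20):
    """Iterative polynomial baseline estimation: fit a polynomial, then
    clamp the signal to the fit so peaks are progressively excluded.
    A common automatic scheme, not necessarily the paper's."""
    x = np.arange(len(y), dtype=float)
    work = y.copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, degree)
        base = np.polyval(coeffs, x)
        work = np.minimum(work, base)   # suppress points above the fit
    return base

# Synthetic voltammogram: smooth drifting baseline plus two narrow peaks.
x = np.linspace(0.0, 1.0, 400)
baseline = 2.0 + 3.0 * x - 2.0 * x**2
peaks = (1.5 * np.exp(-((x - 0.3) / 0.02) ** 2)
         + 1.0 * np.exp(-((x - 0.7) / 0.02) ** 2))
y = baseline + peaks + 0.01 * rng.normal(size=x.size)

corrected = y - polynomial_baseline(y)
print("tallest corrected peak:", round(corrected.max(), 2))
```

After the drift is removed, the corrected signal is what scaling and wavelet-based feature extraction would operate on.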
Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder
NASA Astrophysics Data System (ADS)
Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao
2018-04-01
Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning approaches using unsupervised learning have made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis whitening to reduce the dimensionality of the spectrogram. Finally, we apply a two-layer autoencoder to extract the features of the spectrogram. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
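The PCA-whitening step mentioned above, projecting onto the leading components and rescaling each to unit variance, looks like this in NumPy. The random matrix stands in for flattened spectrogram frames, which the real pipeline would supply:

```python
import numpy as np

rng = np.random.default_rng(5)

def pca_whiten(X, n_components, eps=1e-5):
    """Project rows of X onto the top principal components and rescale
    so each retained component has unit variance (PCA whitening)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    var = (s[:n_components] ** 2) / (len(X) - 1)
    return (Xc @ components.T) / np.sqrt(var + eps)

# Stand-in for 300 flattened spectrogram frames of 64 bins each, with
# correlated columns (real heart-sound spectrograms would replace this).
X = rng.normal(size=(300, 64)) @ rng.normal(size=(64, 64))
Z = pca_whiten(X, n_components=10)
print("whitened covariance is ~ identity:",
      np.allclose(np.cov(Z, rowvar=False), np.eye(10), atol=1e-2))
```

Whitening decorrelates the inputs, which typically makes the subsequent autoencoder training better conditioned.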
Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Spinelli, Antonello E.; Boschi, Federico
2011-12-01
Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and implemented using Interactive Data Language 8.1. We show that cluster analysis allows us to obtain good agreement between the clustered regions and the corresponding emission regions, such as the bladder, the liver, and the tumor. We also show a good correspondence between the time activity curves of the different regions obtained by using CA and manual region-of-interest analysis on dCLI and PCA images. We conclude that CA provides an automatic unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence image data.
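The k-means step on dynamic image data can be illustrated by clustering per-pixel time-activity curves. The two kinetic shapes and the fixed initialization below are assumptions for the demo, not the IVIS data:

```python
import numpy as np

rng = np.random.default_rng(6)

def kmeans(X, k, init, n_iter=50):
    """Plain k-means on rows of X; init holds the starting centres."""
    centers = init.astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Synthetic dynamic image: each "pixel" is a 20-point time-activity
# curve following either washout (decay) or uptake (rise) kinetics.
t = np.linspace(0.0, 1.0, 20)
washout = np.exp(-3.0 * t)
uptake = 1.0 - np.exp(-3.0 * t)
curves = np.vstack([washout + 0.05 * rng.normal(size=(60, 20)),
                    uptake + 0.05 * rng.normal(size=(60, 20))])

labels, centers = kmeans(curves, k=2, init=curves[[0, -1]])
print("washout pixels share one cluster:", len(set(labels[:60].tolist())) == 1)
print("uptake pixels share one cluster: ", len(set(labels[60:].tolist())) == 1)
```

The recovered cluster centres are themselves time-activity curves, which is why they can be compared directly against manual region-of-interest curves as the abstract describes.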
NASA Astrophysics Data System (ADS)
Yi, Wei-song; Cui, Dian-sheng; Li, Zhi; Wu, Lan-lan; Shen, Ai-guo; Hu, Ji-ming
2013-01-01
This manuscript investigates the application of near-infrared (NIR) spectroscopy for the differentiation of gastric cancer. Ninety spectra from cancerous and normal tissues were collected from a total of 30 surgical specimens using Fourier transform near-infrared spectroscopy (FT-NIR) equipped with a fiber-optic probe. Major spectral differences were observed in the CH-stretching second overtone (9000-7000 cm-1), CH-stretching first overtone (6000-5200 cm-1), and CH-stretching combination (4500-4000 cm-1) regions. By use of unsupervised pattern recognition, such as principal component analysis (PCA) and cluster analysis (CA), all spectra were classified into cancerous and normal tissue groups with accuracy up to 81.1%. The sensitivity and specificity were 100% and 68.2%, respectively. These results indicate that the CH-stretching first overtone, combination band, and second overtone regions can serve as diagnostic markers for gastric cancer.
Using Machine Learning Techniques in the Analysis of Oceanographic Data
NASA Astrophysics Data System (ADS)
Falcinelli, K. E.; Abuomar, S.
2017-12-01
Acoustic Doppler Current Profilers (ADCPs) are oceanographic tools capable of collecting large amounts of current profile data. Using unsupervised machine learning techniques such as principal component analysis, fuzzy c-means clustering, and self-organizing maps, patterns and trends in an ADCP dataset are found. Cluster validity algorithms such as visual assessment of cluster tendency and clustering index are used to determine the optimal number of clusters in the ADCP dataset. These techniques prove to be useful in analysis of ADCP data and demonstrate potential for future use in other oceanographic applications.
Advanced methods in NDE using machine learning approaches
NASA Astrophysics Data System (ADS)
Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank
2018-04-01
Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new or leverage existing algorithms that learn from training data and give accurate predictions, or find patterns, particularly in new and unseen similar data, fits Non-Destructive Evaluation (NDE) perfectly. The advantages of ML in NDE are obvious in such tasks as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic, or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis. The approach has been applied to a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Afterwards, sensor signals from unknown samples can be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). Algorithms are still trained on an ordinary PC; the trained algorithms then run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can then be done by the machine.
By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
Steingass, Christof Björn; Jutzi, Manfred; Müller, Jenny; Carle, Reinhold; Schmarr, Hans-Georg
2015-03-01
Ripening-dependent changes of pineapple volatiles were studied in a nontargeted profiling analysis. Volatiles were isolated via headspace solid phase microextraction and analyzed by comprehensive 2D gas chromatography and mass spectrometry (HS-SPME-GC×GC-qMS). Profile patterns presented in the contour plots were evaluated applying image processing techniques and subsequent multivariate statistical data analysis. Statistical methods comprised unsupervised hierarchical cluster analysis (HCA) and principal component analysis (PCA) to classify the samples. Supervised partial least squares discriminant analysis (PLS-DA) and partial least squares (PLS) regression were applied to discriminate different ripening stages and describe the development of volatiles during postharvest storage, respectively. Hereby, substantial chemical markers allowing for class separation were revealed. The workflow permitted the rapid distinction between premature green-ripe pineapples and postharvest-ripened sea-freighted fruits. In a PCA with only two principal components, volatile profiles of fully ripe air-freighted pineapples were similar to those of green-ripe fruits postharvest-ripened for 6 days after simulated sea freight export. However, a PCA that also considered the third principal component allowed differentiation between air-freighted fruits and the four progressing postharvest maturity stages of sea-freighted pineapples.
Principal component analysis for fermionic critical points
NASA Astrophysics Data System (ADS)
Costa, Natanael C.; Hu, Wenjian; Bai, Z. J.; Scalettar, Richard T.; Singh, Rajiv R. P.
2017-11-01
We use determinant quantum Monte Carlo (DQMC), in combination with the principal component analysis (PCA) approach to unsupervised learning, to extract information about phase transitions in several of the most fundamental Hamiltonians describing strongly correlated materials. We first explore the zero-temperature antiferromagnet to singlet transition in the periodic Anderson model, the Mott insulating transition in the Hubbard model on a honeycomb lattice, and the magnetic transition in the 1/6-filled Lieb lattice. We then discuss the prospects for learning finite temperature superconducting transitions in the attractive Hubbard model, for which there is no sign problem. Finally, we investigate finite temperature charge density wave (CDW) transitions in the Holstein model, where the electrons are coupled to phonon degrees of freedom, and carry out a finite size scaling analysis to determine Tc. We examine the different behaviors associated with Hubbard-Stratonovich auxiliary field configurations on the entire space-time lattice and on a single imaginary time slice, as well as other quantities, such as equal-time Green's and pair-pair correlation functions.
Unsupervised spike sorting based on discriminative subspace learning.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-01-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering that learns a discriminative 1-dimensional subspace for clustering in each level of the hierarchy until achieving almost unimodal distribution in the subspace. The algorithms are tested on synthetic and in-vivo data, and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in lower dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.
Mwangi, Benson; Soares, Jair C; Hasan, Khader M
2014-10-30
Neuroimaging machine learning studies have largely utilized supervised algorithms, meaning they require both neuroimaging scan data and corresponding target variables (e.g. healthy vs. diseased) to be successfully 'trained' for a prediction task. However, this approach may not be optimal or possible when the global structure of the data is not well known and the researcher does not have an a priori model to fit the data. We set out to investigate the utility of an unsupervised machine learning technique, t-distributed stochastic neighbour embedding (t-SNE), in identifying 'unseen' sample population patterns that may exist in high-dimensional neuroimaging data. Multimodal neuroimaging scans from 92 healthy subjects were pre-processed using atlas-based methods, integrated and input into the t-SNE algorithm. Patterns and clusters discovered by the algorithm were visualized using a 2D scatter plot and further analyzed using the K-means clustering algorithm. t-SNE was evaluated against classical principal component analysis. Remarkably, based on unlabelled multimodal scan data, t-SNE separated study subjects into two very distinct clusters which corresponded to subjects' gender labels (cluster silhouette index value=0.79). The resulting clusters were used to develop an unsupervised minimum-distance clustering model which correctly identified the gender of 93.5% of subjects. Notably, from a neuropsychiatric perspective this method may allow discovery of data-driven disease phenotypes or sub-types of treatment responders. Copyright © 2014 Elsevier B.V. All rights reserved.
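The pipeline in this abstract (embed high-dimensional data, cluster, score the separation with a silhouette index) can be sketched as below. For brevity and determinism, PCA stands in for t-SNE here, and two synthetic groups of 46 samples stand in for the 92 multimodal scans; all names and numbers are illustrative.

```python
import numpy as np

# Embed 20-dimensional data into 2D, cluster with k-means, and score
# the clustering with a hand-rolled silhouette index.
rng = np.random.default_rng(1)
grp1 = rng.normal(0.0, 0.3, size=(46, 20))
grp2 = rng.normal(1.5, 0.3, size=(46, 20))   # two latent "populations"
X = np.vstack([grp1, grp2])

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
emb = Xc @ Vt[:2].T                          # 2D embedding (PCA here)

# Lloyd's k-means, k=2, seeded from the two ends of the data
centers = emb[[0, -1]]
for _ in range(20):
    d = np.linalg.norm(emb[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([emb[labels == k].mean(axis=0) for k in (0, 1)])

def silhouette(P, lab):
    """Mean silhouette over all points (two-cluster case)."""
    D = np.linalg.norm(P[:, None] - P[None], axis=2)
    s = []
    for i in range(len(P)):
        same = lab == lab[i]
        a = D[i, same & (np.arange(len(P)) != i)].mean()  # intra-cluster
        b = D[i, ~same].mean()                            # nearest other cluster
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

print(silhouette(emb, labels))  # well-separated clusters give values near 1
```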
Unsupervised learning in general connectionist systems.
Dente, J A; Mendes, R Vilela
1996-01-01
There is a common framework in which different connectionist systems may be treated in a unified way. The general system in which they may all be mapped is a network which, in addition to the connection strengths, has an adaptive node parameter controlling the output intensity. In this paper we generalize two neural network learning schemes to networks with node parameters. In generalized Hebbian learning we find improvements to the convergence rate for small eigenvalues in principal component analysis. For competitive learning the use of node parameters also seems useful in that, by emphasizing or de-emphasizing the dominance of winning neurons, either improved robustness or discrimination is obtained.
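A minimal example of the kind of Hebbian PCA learning this paper generalizes is Oja's rule, whose weight vector converges to the first principal direction. This is the textbook version on synthetic data, without the paper's node parameters:

```python
import numpy as np

# Oja's rule: Hebbian update with a weight-decay term that keeps the
# weight vector bounded and drives it toward the leading eigenvector.
rng = np.random.default_rng(2)
C = np.array([[3.0, 1.0], [1.0, 1.0]])       # covariance with a dominant axis
L = np.linalg.cholesky(C)
X = rng.normal(size=(5000, 2)) @ L.T         # samples with covariance C

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)               # Hebb term + Oja decay

# Compare with the leading eigenvector of the sample covariance
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, np.argmax(evals)]
cosine = abs(w @ pc1) / np.linalg.norm(w)
print(cosine)  # close to 1: w aligned with the first principal direction
```

The abstract's node parameters would add a per-unit gain to this update; the convergence-rate claim for small eigenvalues concerns exactly this kind of iteration.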
Physician performance assessment using a composite quality index.
Liu, Kaibo; Jain, Shabnam; Shi, Jianjun
2013-07-10
Assessing physician performance is important for the purposes of measuring and improving quality of service and reducing healthcare delivery costs. In recent years, physician performance scorecards have been used to provide feedback on individual measures; however, one key challenge is how to develop a composite quality index that combines multiple measures for overall physician performance evaluation. A controversy arises over establishing appropriate weights for combining indicators across multiple dimensions, and it cannot be easily resolved. In this study, we propose a generic unsupervised learning approach to develop a single composite index for physician performance assessment using non-negative principal component analysis. We developed a new algorithm, named iterative quadratic programming, to solve the numerical issue in the non-negative principal component analysis approach. We conducted real case studies to demonstrate the performance of the proposed method, and we provide interpretations from both statistical and clinical perspectives to evaluate the developed composite ranking score in practice. In addition, we implemented root cause assessment techniques to explain physician performance for improvement purposes. Copyright © 2012 John Wiley & Sons, Ltd.
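The paper solves non-negative PCA with an iterative quadratic programming algorithm. As a much simpler stand-in, a projected power iteration gives the flavour of the idea: a single all-non-negative weight vector over the indicators, usable as a composite index. The data and the projection heuristic below are illustrative assumptions, not the published method.

```python
import numpy as np

# Non-negative first principal direction via projected power iteration:
# repeatedly multiply by the covariance matrix, clip negatives, renormalize.
rng = np.random.default_rng(3)
# 9 positively correlated "quality indicators" for 300 physicians
base = rng.normal(size=(300, 1))
X = base + 0.5 * rng.normal(size=(300, 9))

S = np.cov(X.T)                       # indicator covariance matrix
w = np.ones(9) / np.sqrt(9)
for _ in range(200):
    w = S @ w
    w = np.clip(w, 0.0, None)         # project onto the non-negative orthant
    w /= np.linalg.norm(w)

composite = X @ w                     # one composite score per physician
print(w.min() >= 0.0, composite.shape)
```

Because every weight is non-negative, each indicator can only pull the composite score in one direction, which is what makes the index interpretable for feedback.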
System Biology Approach: Gene Network Analysis for Muscular Dystrophy.
Censi, Federica; Calcagnini, Giovanni; Mattei, Eugenio; Giuliani, Alessandro
2018-01-01
Phenotypic changes at different organizational levels, from the cell to the entire organism, are associated with changes in the pattern of gene expression. These changes involve the entire genome expression pattern and rely heavily upon correlation patterns among genes. The classical approach used to analyze gene expression data builds upon the application of supervised statistical techniques to detect genes differentially expressed between two or more phenotypes (e.g., normal vs. disease). The use of an a posteriori, unsupervised approach based on principal component analysis (PCA) and the subsequent construction of gene correlation networks can shed light on unexpected behaviour of the gene regulation system while maintaining a more naturalistic view of the studied system. In this chapter we applied an unsupervised method to discriminate DMD patients and controls. The genes having the highest absolute scores in the discrimination between the groups were then analyzed in terms of gene expression networks, on the basis of their mutual correlation in the two groups. The correlation network structures suggest two different modes of gene regulation in the two groups, reminiscent of important aspects of DMD pathogenesis.
Metabolic profiles are principally different between cancers of the liver, pancreas and breast.
Budhu, Anuradha; Terunuma, Atsushi; Zhang, Geng; Hussain, S Perwez; Ambs, Stefan; Wang, Xin Wei
2014-01-01
Molecular profiling of primary tumors may facilitate the classification of patients with cancer into more homogenous biological groups to aid clinical management. Metabolomic profiling has been shown to be a powerful tool in characterizing the biological mechanisms underlying a disease but has not been evaluated for its ability to classify cancers by their tissue of origin. Thus, we assessed metabolomic profiling as a novel tool for multiclass cancer characterization. Global metabolic profiling was employed to identify metabolites in paired tumor and non-tumor liver (n=60), breast (n=130) and pancreatic (n=76) tissue specimens. Unsupervised principal component analysis showed that metabolites are principally unique to each tissue and cancer type. Such a difference can also be observed even among early stage cancers, suggesting a significant and unique alteration of global metabolic pathways associated with each cancer type. Our global high-throughput metabolomic profiling study shows that specific biochemical alterations distinguish liver, pancreatic and breast cancer and could be applied as cancer classification tools to differentiate tumors based on tissue of origin.
GO-PCA: An Unsupervised Method to Explore Gene Expression Data Using Prior Knowledge
Wagner, Florian
2015-01-01
Method Genome-wide expression profiling is a widely used approach for characterizing heterogeneous populations of cells, tissues, biopsies, or other biological specimen. The exploratory analysis of such data typically relies on generic unsupervised methods, e.g. principal component analysis (PCA) or hierarchical clustering. However, generic methods fail to exploit prior knowledge about the molecular functions of genes. Here, I introduce GO-PCA, an unsupervised method that combines PCA with nonparametric GO enrichment analysis, in order to systematically search for sets of genes that are both strongly correlated and closely functionally related. These gene sets are then used to automatically generate expression signatures with functional labels, which collectively aim to provide a readily interpretable representation of biologically relevant similarities and differences. The robustness of the results obtained can be assessed by bootstrapping. Results I first applied GO-PCA to datasets containing diverse hematopoietic cell types from human and mouse, respectively. In both cases, GO-PCA generated a small number of signatures that represented the majority of lineages present, and whose labels reflected their respective biological characteristics. I then applied GO-PCA to human glioblastoma (GBM) data, and recovered signatures associated with four out of five previously defined GBM subtypes. My results demonstrate that GO-PCA is a powerful and versatile exploratory method that reduces an expression matrix containing thousands of genes to a much smaller set of interpretable signatures. In this way, GO-PCA aims to facilitate hypothesis generation, design of further analyses, and functional comparisons across datasets. PMID:26575370
Exploiting Secondary Sources for Unsupervised Record Linkage
2004-01-01
In this paper, we present an extension to Apollo's active learning component. Using secondary sources, a system can autonomously answer questions posed by its active learning component and thereby address the issue of user involvement. Moreover, we present how Apollo utilizes the identified secondary sources in an unsupervised active learning process.
Butaciu, Sinziana; Senila, Marin; Sarbu, Costel; Ponta, Michaela; Tanaselia, Claudiu; Cadar, Oana; Roman, Marius; Radu, Emil; Sima, Mihaela; Frentiu, Tiberiu
2017-04-01
The study proposes a combined model based on diagrams (Gibbs, Piper, Stuyfzand Hydrogeochemical Classification System) and unsupervised statistical approaches (Cluster Analysis, Principal Component Analysis, Fuzzy Principal Component Analysis, Fuzzy Hierarchical Cross-Clustering) to describe the natural enrichment of inorganic arsenic and co-occurring species in groundwater in the Banat Plain, southwestern Romania. Speciation of inorganic As (arsenite, arsenate) and measurements of ion concentrations (Na+, K+, Ca2+, Mg2+, HCO3-, Cl-, F-, SO42-, PO43-, NO3-), pH, redox potential, conductivity and total dissolved substances were performed. Classical diagrams provided the hydrochemical characterization, while the statistical approaches were helpful to establish (i) the natural mechanism behind the occurrence of As and F- species and the anthropogenic one for NO3-, SO42-, PO43- and K+ and (ii) a classification of groundwater based on the content of arsenic species. The HCO3- type of local groundwater and alkaline pH (8.31-8.49) were found to be responsible for the enrichment of arsenic species and the occurrence of F-, but by different paths. The PO43--AsO43- ion exchange and water-rock interaction (silicate hydrolysis and desorption from clay) were associated with arsenate enrichment in the oxidizing aquifer. Fuzzy Hierarchical Cross-Clustering was the strongest tool for the rapid simultaneous classification of groundwaters as a function of arsenic content and hydrogeochemical characteristics. The approach indicated the Na+-F--pH cluster as a marker for groundwater with naturally elevated As and highlighted which parameters need to be monitored. A chemical conceptual model illustrating the natural and anthropogenic paths and enrichment of As and co-occurring species in the local groundwater, supported by mineralogical analysis of rocks, was established. Copyright © 2016 Elsevier Ltd. All rights reserved.
Feature extraction for ultrasonic sensor based defect detection in ceramic components
NASA Astrophysics Data System (ADS)
Kesharaju, Manasa; Nagarajah, Romesh
2014-02-01
High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, multiple defects are often misinterpreted as single defects in X-ray images. To address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for signal classification. Two different classifiers, an artificial neural network (supervised) and clustering (unsupervised), are supplied with features selected using principal component analysis (PCA), and their classification performance is compared. This investigation establishes experimentally that PCA can be used effectively as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection, in comparison with the X-ray technique.
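The described pipeline (sub-band features from ultrasonic signals, PCA compression, then classification) can be sketched as follows. FFT band energies stand in for wavelet sub-band coefficients and a nearest-centroid rule stands in for the ANN; the simulated A-scans and all thresholds are illustrative assumptions.

```python
import numpy as np

# Simulate ultrasonic A-scans, extract band-energy features, compress
# with PCA, and separate "defective" from "good" signals.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 256, endpoint=False)

def ascan(defect):
    sig = np.sin(2 * np.pi * 20 * t)              # front-wall echo band
    if defect:
        sig += 0.8 * np.sin(2 * np.pi * 60 * t)   # extra echo from a flaw
    return sig + 0.2 * rng.normal(size=t.size)

signals = [ascan(d) for d in [False] * 30 + [True] * 30]
# Feature vector: spectral energy in 8 frequency bands
feats = np.array([[np.sum(np.abs(np.fft.rfft(s))[i*16:(i+1)*16] ** 2)
                   for i in range(8)] for s in signals])

Xc = feats - feats.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[:2].T                                 # PCA-compressed features

good, bad = z[:30].mean(axis=0), z[30:].mean(axis=0)
pred = (np.linalg.norm(z - bad, axis=1) <
        np.linalg.norm(z - good, axis=1)).astype(int)
truth = np.array([0] * 30 + [1] * 30)
print((pred == truth).mean())                     # high accuracy expected
```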
NASA Astrophysics Data System (ADS)
Tibaduiza, D.-A.; Torres-Arredondo, M.-A.; Mujica, L. E.; Rodellar, J.; Fritzen, C.-P.
2013-12-01
This article is concerned with the practical use of Multiway Principal Component Analysis (MPCA), the Discrete Wavelet Transform (DWT), Squared Prediction Error (SPE) measures and Self-Organizing Maps (SOM) to detect and classify damage in mechanical structures. The formalism is based on a distributed piezoelectric active sensor network for the excitation and detection of structural dynamic responses. Statistical models are built using PCA when the structure is known to be healthy, either directly from the dynamic responses or from wavelet coefficients at different scales representing time-frequency information. Different damage states on the tested structures are simulated by adding masses at different positions. The data from the structure in different states (damaged or not) are then projected into the different principal component models by each actuator in order to obtain the input feature vectors for a SOM from the scores and the SPE measures. An aircraft fuselage from an Airbus A320 and a multi-layered carbon fiber reinforced plastic (CFRP) plate are used as examples to test the approaches. Results are presented, compared and discussed in order to determine their potential in structural health monitoring. These results showed that all the simulated damage states were detectable and the selected features proved capable of separating all damage conditions from the undamaged state for both approaches.
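The PCA-plus-SPE part of this scheme can be sketched in a few lines: fit a PCA model to healthy-state responses, then flag new responses whose squared prediction error (the residual off the retained subspace) exceeds a baseline threshold. The data, the retained-component count and the percentile threshold below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# PCA model of the healthy state + SPE-based damage flagging.
rng = np.random.default_rng(5)
modes = rng.normal(size=(3, 40))                  # healthy "dynamic modes"
healthy = (rng.normal(size=(120, 3)) @ modes
           + 0.05 * rng.normal(size=(120, 40)))   # 120 healthy responses

mu = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mu, full_matrices=False)
P = Vt[:3].T                                      # retained loading matrix

def spe(x):
    r = (x - mu) - P @ (P.T @ (x - mu))           # residual off the PCA plane
    return float(r @ r)

limit = np.percentile([spe(x) for x in healthy], 99)  # baseline threshold

damaged = healthy[0] + 0.8 * rng.normal(size=40)  # simulated added-mass change
print(spe(damaged) > limit)                       # True: flagged as damage
```

In the article this SPE value, together with the PCA scores, forms the feature vector fed to the SOM for classifying which damage condition is present.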
Ciucci, Sara; Ge, Yan; Durán, Claudio; Palladini, Alessandra; Jiménez-Jiménez, Víctor; Martínez-Sánchez, Luisa María; Wang, Yuting; Sales, Susanne; Shevchenko, Andrej; Poser, Steven W.; Herbig, Maik; Otto, Oliver; Androutsellis-Theotokis, Andreas; Guck, Jochen; Gerl, Mathias J.; Cannistraci, Carlo Vittorio
2017-01-01
Omic science is rapidly growing, and one of the most employed techniques to explore differential patterns in omic datasets is principal component analysis (PCA). However, a method to illuminate the network of omic features that contribute most to the sample separation obtained by PCA has been missing. An alternative is to build correlation networks between univariately selected significant omic features, but this neglects the multivariate unsupervised feature compression responsible for the PCA sample segregation. Biologists and medical researchers often prefer effective methods that offer an immediate interpretation over complicated algorithms that in principle promise an improvement but in practice are difficult to apply and interpret. Here we present PC-corr: a simple algorithm that associates to any PCA segregation a discriminative network of features. Such a network can be inspected in search of functional modules useful in the definition of combinatorial and multiscale biomarkers from multifaceted omic data in systems and precision biomedicine. We offer proofs of PC-corr efficacy on lipidomic, metagenomic, developmental genomic, population genetic, cancer promoteromic and cancer stem-cell mechanomic data. Finally, PC-corr is a general functional network inference approach that can be easily adopted for big data exploration in computer science and analysis of complex systems in physics. PMID:28287094
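The PC-corr idea, attaching a discriminative feature network to a PCA segregation, can be caricatured as: take the features with the largest loadings on the discriminating component, then keep the strong edges of their correlation network. The loading selection and the correlation threshold below are illustrative simplifications, not the published algorithm's exact rules.

```python
import numpy as np

# PCA loadings pick discriminative features; a thresholded correlation
# matrix among them gives the feature network.
rng = np.random.default_rng(6)
driver = np.repeat([0.0, 3.0], 100)[:, None]      # two sample classes
X = rng.normal(size=(200, 15))
X[:, :4] += driver                                # 4 features carry the split

Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
loadings = Vt[0]                                  # PC1 separates the classes
top = np.argsort(np.abs(loadings))[-4:]           # most discriminative features

corr = np.corrcoef(Xs[:, top].T)
edges = [(int(top[i]), int(top[j]))
         for i in range(len(top)) for j in range(i + 1, len(top))
         if abs(corr[i, j]) > 0.5]
print(sorted(edges))   # edges connect the class-driving features
```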
Jung, Brian C.; Choi, Soo I.; Du, Annie X.; Cuzzocreo, Jennifer L.; Geng, Zhuo Z.; Ying, Howard S.; Perlman, Susan L.; Toga, Arthur W.; Prince, Jerry L.
2014-01-01
Although “cerebellar ataxia” is often used in reference to a disease process, presumably there are different underlying pathogenetic mechanisms for different subtypes. Indeed, spinocerebellar ataxia (SCA) types 2 and 6 demonstrate complementary phenotypes, thus predicting a different anatomic pattern of degeneration. Here, we show that an unsupervised classification method, based on principal component analysis (PCA) of cerebellar shape characteristics, can be used to separate SCA2 and SCA6 into two classes, which may represent disease-specific archetypes. Patients with SCA2 (n=11) and SCA6 (n=7) were compared against controls (n=15) using PCA to classify cerebellar anatomic shape characteristics. Within the first three principal components, SCA2 and SCA6 differed from controls and from each other. In a secondary analysis, we studied five additional subjects and found that these patients were consistent with the previously defined archetypal clusters of clinical and anatomical characteristics. Secondary analysis of five subjects with related diagnoses showed that disease groups that were clinically and pathophysiologically similar also shared similar anatomic characteristics. Specifically, Archetype #1 consisted of SCA3 (n=1) and SCA2, suggesting that cerebellar syndromes accompanied by atrophy of the pons may be associated with a characteristic pattern of cerebellar neurodegeneration. In comparison, Archetype #2 was comprised of disease groups with pure cerebellar atrophy (episodic ataxia type 2 (n=1), idiopathic late-onset cerebellar ataxias (n=3), and SCA6). This suggests that cerebellar shape analysis could aid in discriminating between different pathologies. Our findings further suggest that magnetic resonance imaging is a promising imaging biomarker that could aid in the diagnosis and therapeutic management in patients with cerebellar syndromes. PMID:22258915
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
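The paper's core observation, that clusters overlapping in the leading PCA subspace can be well separated in a discriminative subspace, is easy to demonstrate on synthetic data. Here Fisher LDA with known labels stands in, as an oracle, for the paper's iterative unsupervised subspace learning:

```python
import numpy as np

# Two "spike" clusters separated along a low-variance axis, masked by a
# high-variance nuisance axis: PC1 picks the noise, LDA picks the split.
rng = np.random.default_rng(7)
n = 400
labels = np.repeat([0, 1], n // 2)
x_sep = labels * 1.5 + 0.3 * rng.normal(size=n)   # separating axis, low variance
x_noise = 4.0 * rng.normal(size=n)                # shared high-variance axis
X = np.column_stack([x_sep, x_noise])

def overlap(z, lab):
    """Fraction misassigned by the midpoint threshold between class means."""
    m0, m1 = z[lab == 0].mean(), z[lab == 1].mean()
    mid = (m0 + m1) / 2
    pred = (z > mid).astype(int) if m1 > m0 else (z < mid).astype(int)
    return float((pred != lab).mean())

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_err = overlap(Xc @ Vt[0], labels)             # PC1 = noise axis: poor

# Fisher LDA direction: Sw^-1 (mu1 - mu0)
mu0, mu1 = Xc[labels == 0].mean(axis=0), Xc[labels == 1].mean(axis=0)
Sw = np.cov(Xc[labels == 0].T) + np.cov(Xc[labels == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
lda_err = overlap(Xc @ w, labels)                 # near zero

print(pca_err, lda_err)
```

The published algorithm finds such a direction without labels, by alternating subspace selection with Gaussian-mixture clustering.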
Integration of multispectral satellite and hyperspectral field data for aquatic macrophyte studies
NASA Astrophysics Data System (ADS)
John, C. M.; Kavya, N.
2014-11-01
Aquatic macrophytes (AM) can serve as useful indicators of water pollution along littoral zones. The spectral signatures of various AM were investigated to determine whether species could be discriminated by remote sensing. In this study, spectral readings of the different AM communities identified were taken with an ASD FieldSpec® HandHeld spectroradiometer in the wavelength range of 325-1075 nm. The collected reflectance spectra were applied to spaceborne multispectral remote sensing data from WorldView-2, acquired on 26 March 2011. Dimensionality reduction of the spectro-radiometric data was carried out using principal components analysis (PCA); of the PCA axes generated, the first axis explained 93.472% of the variance in the spectra. Spectral derivative analysis was performed to identify the wavelengths showing the greatest differences in reflectance: 510, 690, 720, 756, 806, 885, 907 and 923 nm. The outputs of the PCA and derivative analyses were applied to the WorldView-2 satellite data for spectral subsetting. Unsupervised classification was used to classify the AM species using the different spectral subsets, and the accuracy of the classification results was assessed and compared. The overall accuracy of the unsupervised classification using the band combinations Red-Edge, Green, Coastal blue and Red-Edge, Yellow, Blue was 100%. The band combinations NIR-1, Green, Coastal blue and NIR-1, Yellow, Blue yielded an accuracy of 82.35%. Existing vegetation indices and new hyperspectral indices for the different AM communities were computed. Overall, the results of this study suggest that high spectral and spatial resolution images provide useful information for natural resource managers, especially with regard to locating and mapping the distribution of macrophyte species and their communities.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different kinds of dimension reduction methods are first used to obtain the subspace of the hyperspectral data: (1) unsupervised feature extraction, including principal component analysis (PCA), independent component analysis (ICA) and minimum noise fraction (MNF); (2) supervised feature extraction, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE) and nonparametric weighted feature extraction (NWFE); and (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral dataset is used. Experimental results show that the proposed approach using the GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Karisa M.; Wright, Bob W.; Synovec, Robert E.
2007-02-02
First, simulated chromatographic separations with declining retention time precision were used to study the performance of the piecewise retention time alignment algorithm and to demonstrate an unsupervised parameter optimization method. The average correlation coefficient between the first chromatogram and every other chromatogram in the data set was used to optimize the alignment parameters. This correlation method does not require a training set, so it is unsupervised and automated. This frees the user from needing to provide class information and makes the alignment algorithm more generally applicable to classifying completely unknown data sets. For a data set of simulated chromatograms where the average chromatographic peak was shifted past two neighboring peaks between runs, the average correlation coefficient of the raw data was 0.46 ± 0.25. After automated, optimized piecewise alignment, the average correlation coefficient was 0.93 ± 0.02. Additionally, a relative shift metric and principal component analysis (PCA) were used to independently quantify and categorize the alignment performance, respectively. The relative shift metric was defined as four times the standard deviation of a given peak's retention time in all of the chromatograms, divided by the peak-width-at-base. The raw simulated data sets that were studied contained peaks with average relative shifts ranging between 0.3 and 3.0. Second, a "real" data set of gasoline separations was gathered using three different GC methods to induce severe retention time shifting. In these gasoline separations, retention time precision improved ~8 fold following alignment. Finally, piecewise alignment and the unsupervised correlation optimization method were applied to severely shifted GC separations of reformate distillation fractions. The effect of piecewise alignment on peak heights and peak areas is also reported.
Piecewise alignment either did not change the peak height, or caused it to slightly decrease. The average relative difference in peak height after piecewise alignment was –0.20%. Piecewise alignment caused the peak areas to either stay the same, slightly increase, or slightly decrease. The average absolute relative difference in area after piecewise alignment was 0.15%.
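The unsupervised correlation score at the heart of the optimization is easy to sketch. In this toy example, a single Gaussian peak and a global (not piecewise) shift correction are simplifying assumptions; the score that is maximized is the average correlation between the first chromatogram and all others, exactly as described above, with no class labels needed:

```python
import numpy as np

def avg_corr(reference, chromatograms):
    """Average Pearson correlation between a reference chromatogram and the rest."""
    return float(np.mean([np.corrcoef(reference, c)[0, 1] for c in chromatograms]))

# Synthetic single-peak "chromatograms": each later run drifts by +7 points.
t = np.arange(200)
peak = lambda center: np.exp(-0.5 * ((t - center) / 4.0) ** 2)
reference = peak(100)
runs = [peak(107) for _ in range(3)]

# Unsupervised parameter optimization: try candidate shift corrections and
# keep the one that maximizes the average correlation to the reference.
scores = {s: avg_corr(reference, [np.roll(r, s) for r in runs])
          for s in range(-10, 11)}
best_shift = max(scores, key=scores.get)
```

Because the objective is computed only from the chromatograms themselves, the same search works on completely unknown data sets.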
Tessem, May-Britt; Bathen, Tone F; Cejková, Jitka; Midelfart, Anna
2005-03-01
This study was conducted to investigate metabolic changes in aqueous humor from rabbit eyes exposed to either UV-A or -B radiation, by using (1)H nuclear magnetic resonance (NMR) spectroscopy and unsupervised pattern recognition methods. Both eyes of adult albino rabbits were irradiated with UV-A (366 nm, 0.589 J/cm(2)) or UV-B (312 nm, 1.667 J/cm(2)) radiation for 8 minutes, once a day for 5 days. Three days after the last irradiation, samples of aqueous humor were aspirated, and the metabolic profiles analyzed with (1)H NMR spectroscopy. The metabolic concentrations in the exposed and control materials were statistically analyzed and compared, with multivariate methods and one-way ANOVA. UV-B radiation caused statistically significant alterations of betaine, glucose, ascorbate, valine, isoleucine, and formate in the rabbit aqueous humor. By using principal component analysis, the UV-B-irradiated samples were clearly separated from the UV-A-irradiated samples and the control group. No significant metabolic changes were detected in UV-A-irradiated samples. This study demonstrates the potential of using unsupervised pattern recognition methods to extract valuable metabolic information from complex (1)H NMR spectra. UV-B irradiation of rabbit eyes led to significant metabolic changes in the aqueous humor detected 3 days after the last exposure.
Local Prediction Models on Mid-Atlantic Ridge MORB by Principal Component Regression
NASA Astrophysics Data System (ADS)
Ling, X.; Snow, J. E.; Chin, W.
2017-12-01
The isotopic compositions of the daughter isotopes of long-lived radioactive systems (Sr, Nd, Hf, and Pb) can be used to map the scale and history of mantle heterogeneities beneath mid-ocean ridges. Our goal is to relate the multidimensional structure in the existing isotopic dataset to an underlying physical reality of mantle sources. The numerical technique of Principal Component Analysis is useful for reducing the linear dependence of the data to a minimal set of orthogonal eigenvectors encapsulating the information contained (cf. Agranier et al. 2005). The dataset used for this study covers almost all MORBs along the mid-Atlantic Ridge (MAR), from 54°S to 77°N and 8.8°W to -46.7°W, replicating the published dataset of Agranier et al. (2005) plus 53 basalt samples dredged and analyzed since then (data from PetDB). The principal components PC1 and PC2 account for 61.56% and 29.21%, respectively, of the total isotope-ratio variability. Samples with compositions similar to HIMU, EM, and DM were identified to better interpret the PCs. PC1 and PC2 account for HIMU and EM, whereas PC2 has limited control over the DM source; PC3 is more strongly controlled by the depleted mantle source than PC2. All three principal components are therefore significant with respect to the established mantle sources. We also tested the relationship between mantle heterogeneity and sample locality. The k-means clustering algorithm is a type of unsupervised learning that finds groups in the data based on feature similarity. The PC factor scores of each sample were clustered into three groups. Clusters one and three alternate along the northern and southern MAR; cluster two appears from 45.18°N to 0.79°N and -27.9°W to -30.40°W, alternating with cluster one. The ridge has been preliminarily divided into 16 sections considering both the clusters and the ridge segments. Principal component regression models each section based on 6 isotope ratios and the PCs. The prediction residual is about 1-2 km, meaning that the combined isotope ratios are a strong predictor of geographic location along the ridge, a slightly surprising result. PCR is a robust and powerful method for both visualizing and manipulating the multidimensional representation of isotope data.
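The PCA-then-k-means workflow described above can be sketched on synthetic data. The three "source" compositions, the isotope-ratio columns, and the noise levels below are illustrative assumptions, not the study's values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in for a table of isotope ratios (rows = basalt samples,
# columns = e.g. Sr, Nd, and Pb ratios); three synthetic "mantle sources".
centers = np.array([[0.703, 0.5132, 18.5],
                    [0.705, 0.5126, 19.5],
                    [0.704, 0.5129, 20.5]])
X = np.vstack([c + rng.normal(scale=[1e-4, 1e-5, 0.05], size=(40, 3))
               for c in centers])

# Standardize, reduce with PCA, then group the factor scores with k-means.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
explained = pca.explained_variance_ratio_.sum()
```

Standardization matters here because raw isotope ratios live on very different numeric scales, which would otherwise let one column dominate the PCs.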
Valdés, Julio J; Barton, Alan J
2007-05-01
A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
Magagna, Federico; Guglielmetti, Alessandro; Liberto, Erica; Reichenbach, Stephen E; Allegrucci, Elena; Gobino, Guido; Bicchi, Carlo; Cordero, Chiara
2017-08-02
This study investigates the chemical information of the volatile fractions of high-quality cocoa (Theobroma cacao L., Malvaceae) from different origins (Mexico, Ecuador, Venezuela, Colombia, Java, Trinidad, and São Tomé) produced for fine chocolate. It explores the evolution of the entire pattern of volatiles in relation to cocoa processing (raw, roasted, steamed, and ground beans). Advanced chemical fingerprinting (e.g., combined untargeted and targeted fingerprinting) with comprehensive two-dimensional gas chromatography coupled with mass spectrometry enables pattern recognition for classification, discrimination, and sensory-quality characterization. The entire data set is analyzed for 595 reliable two-dimensional peak regions, including 130 known analytes and 13 potent odorants. Multivariate analysis with unsupervised exploration (principal component analysis) and simple supervised discrimination methods (Fisher ratios and linear regression trees) reveals informative patterns of similarities and differences and identifies characteristic compounds related to sample origin and manufacturing step.
Bagur, M G; Morales, S; López-Chicano, M
2009-11-15
Unsupervised and supervised pattern recognition techniques such as hierarchical cluster analysis, principal component analysis, factor analysis, and linear discriminant analysis have been applied to water samples collected in the Rodalquilar mining district (southern Spain) in order to identify different sources of environmental pollution caused by the abandoned mining industry. The effect of the mining activity on the waters was monitored by determining the concentrations of eleven elements (Mn, Ba, Co, Cu, Zn, As, Cd, Sb, Hg, Au, and Pb) by inductively coupled plasma mass spectrometry (ICP-MS). The Box-Cox transformation was used to bring the data set closer to a normal distribution, minimizing the non-normality of the geochemical data. The environmental impact is driven mainly by the mining activity in the zone, by acid drainage, and finally by the chemical treatment used for gold recovery.
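A minimal sketch of the Box-Cox step, using synthetic lognormal concentrations as a stand-in for the skewed geochemical data; `scipy.stats.boxcox` estimates the transformation parameter by maximum likelihood:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Skewed, strictly positive concentrations (e.g. trace metals); a lognormal
# draw is a common stand-in for geochemical distributions.
conc = rng.lognormal(mean=1.0, sigma=0.8, size=500)

# Box-Cox requires positive data; with lmbda=None the parameter is fitted.
transformed, lam = stats.boxcox(conc)

skew_before = stats.skew(conc)
skew_after = stats.skew(transformed)
```

After the transform, the near-symmetric data are far better suited to PCA, factor analysis, and LDA, all of which implicitly assume roughly elliptical distributions.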
Spectral analysis of stellar light curves by means of neural networks
NASA Astrophysics Data System (ADS)
Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.
1999-06-01
Periodicity analysis of unevenly sampled data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves, which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well in unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components which, in turn, are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant to noise and also works well with few points in the sequence. We benchmark the system on synthetic and real signals against the periodogram and the Cramer-Rao lower bound. This work was partially supported by IIASS, by MURST 40%, and by the Italian Space Agency.
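The Hebbian extraction of principal components can be illustrated with Oja's rule, the classic unsupervised Hebbian algorithm; this linear 2-D toy is not the paper's nonlinear network, and the data scale and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Zero-mean 2-D data with a dominant variance direction along (1, 1)/sqrt(2).
n = 2000
direction = np.array([1.0, 1.0]) / np.sqrt(2)
X = np.outer(rng.normal(size=n) * 2.0, direction) + rng.normal(scale=0.3, size=(n, 2))
X -= X.mean(axis=0)

# Oja's rule: a single Hebbian unit whose weight vector converges to the
# first principal component of the input distribution.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.005
for epoch in range(10):
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)  # Hebbian growth with a normalizing decay term

# Compare against the leading eigenvector of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, -1]
alignment = abs(w @ pc1) / np.linalg.norm(w)
```

In the paper's pipeline, components extracted this way feed a frequency estimator; here we only verify that the Hebbian unit recovers the principal direction.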
Di Girolamo, Francesco; Masotti, Andrea; Lante, Isabella; Scapaticci, Margherita; Calvano, Cosima Damiana; Zambonin, Carlo; Muraca, Maurizio; Putignani, Lorenza
2015-09-01
Extra virgin olive oil (EVOO), with its nutraceutical characteristics, substantially contributes as a major nutrient to the health benefit of the Mediterranean diet. Unfortunately, the adulteration of EVOO with less expensive oils (e.g., peanut and corn oils) has become one of the biggest sources of agricultural fraud in the European Union, with important health implications for consumers, mainly due to the introduction of seed-oil-derived allergens causing severe food allergy phenomena, especially in children. In this regard, revealing adulterations of EVOO is of fundamental importance for health care and prevention. To this aim, effective analytical methods to assess EVOO purity are necessary. Here, we propose a simple, rapid, robust, and very sensitive method suitable for a non-specialized mass spectrometry laboratory, based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) coupled to unsupervised hierarchical clustering (UHC), principal component analysis (PCA), and Pearson's correlation analyses, to reveal corn oil (CO) adulterations in EVOO at very low levels (down to 0.5%).
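The unsupervised hierarchical clustering step can be sketched on synthetic spectra; the two-peak profiles standing in for pure and adulterated oils, and the Ward linkage choice, are assumptions for illustration only:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
mz = np.linspace(0, 1, 300)

def spectrum(ratio, n):
    """Toy mass-spectrum-like profiles: the second peak's height encodes adulteration."""
    base = (np.exp(-0.5 * ((mz - 0.3) / 0.02) ** 2)
            + ratio * np.exp(-0.5 * ((mz - 0.7) / 0.02) ** 2))
    return base + rng.normal(scale=0.02, size=(n, mz.size))

pure = spectrum(0.2, 8)         # hypothetical "pure EVOO" profiles
adulterated = spectrum(1.0, 8)  # hypothetical corn-oil-spiked profiles
X = np.vstack([pure, adulterated])

# Unsupervised hierarchical clustering (Ward linkage), cut into two groups.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

No labels are supplied; the dendrogram cut alone separates the two composition classes when the spectral difference exceeds the noise.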
Anastasiadou, Maria N; Christodoulakis, Manolis; Papathanasiou, Eleftherios S; Papacostas, Savvas S; Mitsis, Georgios D
2017-09-01
This paper proposes supervised and unsupervised algorithms for automatic muscle artifact detection and removal from long-term EEG recordings, which combine canonical correlation analysis (CCA) and wavelets with random forests (RF). The proposed algorithms first perform CCA and continuous wavelet transform of the canonical components to generate a number of features which include component autocorrelation values and wavelet coefficient magnitude values. A subset of the most important features is subsequently selected using RF and labelled observations (supervised case) or synthetic data constructed from the original observations (unsupervised case). The proposed algorithms are evaluated using realistic simulation data as well as 30-min epochs of non-invasive EEG recordings obtained from ten patients with epilepsy. We assessed the performance of the proposed algorithms using classification performance and goodness-of-fit values for noisy and noise-free signal windows. In the simulation study, where the ground truth was known, the proposed algorithms yielded almost perfect performance. In the case of experimental data, where expert marking was performed, the results suggest that both the supervised and unsupervised algorithm versions were able to remove artifacts without considerably affecting noise-free channels, outperforming standard CCA, independent component analysis (ICA) and Lagged Auto-Mutual Information Clustering (LAMIC). The proposed algorithms achieved excellent performance for both simulation and experimental data. Importantly, for the first time to our knowledge, we were able to perform entirely unsupervised artifact removal, i.e. without using already marked noisy data segments, achieving performance that is comparable to the supervised case.
Overall, the results suggest that the proposed algorithms yield significant future potential for improving EEG signal quality in research or clinical settings without the need for marking by expert neurophysiologists, EMG signal recording and user visual inspection.
Bluestein, Blake M; Morrish, Fionnuala; Graham, Daniel J; Guenthoer, Jamie; Hockenbery, David; Porter, Peggy L; Gamble, Lara J
2016-03-21
Imaging time-of-flight secondary ion mass spectrometry (ToF-SIMS) and principal component analysis (PCA) were used to investigate two sets of pre- and post-chemotherapy human breast tumor tissue sections to characterize lipids associated with tumor metabolic flexibility and response to treatment. The micron spatial resolution imaging capability of ToF-SIMS provides a powerful approach to attain spatially-resolved molecular and cellular data from cancerous tissues not available with conventional imaging techniques. Three ca. 1 mm(2) areas per tissue section were analyzed by stitching together 200 μm × 200 μm raster area scans. A method to isolate and analyze specific tissue regions of interest by utilizing PCA of ToF-SIMS images is presented, which allowed separation of cellularized areas from stromal areas. These PCA-generated regions of interest were then used as masks to reconstruct representative spectra from specifically stromal or cellular regions. The advantage of this unsupervised selection method is a reduction in scatter in the spectral PCA results when compared to analyzing all tissue areas or analyzing areas highlighted by a pathologist. Utilizing this method, stromal and cellular regions of breast tissue biopsies taken pre- versus post-chemotherapy demonstrate chemical separation using negatively-charged ion species. In this sample set, the cellular regions were predominantly all cancer cells. Fatty acids (i.e. palmitic, oleic, and stearic), monoacylglycerols, diacylglycerols and vitamin E profiles were distinctively different between the pre- and post-therapy tissues. These results validate a new unsupervised method to isolate and interpret biochemically distinct regions in cancer tissues using imaging ToF-SIMS data. 
In addition, the method developed here can provide a framework to compare a variety of tissue samples using imaging ToF-SIMS, especially where there is section-to-section variability that makes it difficult to use a serial hematoxylin and eosin (H&E) stained section to direct the SIMS analysis.
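A minimal sketch of using PCA scores as a region mask, on a toy multichannel image whose two halves stand in for stromal and cellular regions (all channel intensity profiles are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Toy multichannel image: 32x32 pixels with 6 "ion" channels; the left half
# mimics stroma and the right half mimics cellular tissue via different
# channel intensity profiles.
h, w, ch = 32, 32, 6
stroma = np.array([1.0, 0.2, 0.1, 0.8, 0.3, 0.1])
cellular = np.array([0.2, 1.0, 0.9, 0.1, 0.4, 0.8])
img = np.empty((h, w, ch))
img[:, : w // 2] = stroma
img[:, w // 2 :] = cellular
img += rng.normal(scale=0.05, size=img.shape)

# PCA over pixels-as-observations; thresholding the PC1 scores yields a
# binary region-of-interest mask, then each region's average "spectrum"
# is reconstructed from its own pixels only.
pixels = img.reshape(-1, ch)
scores = PCA(n_components=1).fit_transform(pixels)[:, 0]
mask = scores > 0
region_a = pixels[mask].mean(axis=0)
region_b = pixels[~mask].mean(axis=0)
mask_image = mask.reshape(h, w)
```

The payoff, as in the study, is that spectra averaged within a PCA-derived mask mix far less signal across tissue types than whole-area averages do.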
Wang, Junping; Xie, Xinfang; Feng, Jinsong; Chen, Jessica C; Du, Xin-jun; Luo, Jiangzhao; Lu, Xiaonan; Wang, Shuo
2015-07-02
Listeria monocytogenes is a facultatively anaerobic, Gram-positive, rod-shaped foodborne bacterium causing invasive infection, listeriosis, in susceptible populations. Rapid and high-throughput detection of this pathogen in dairy products is critical, as milk and other dairy products have been implicated as food vehicles in several outbreaks. Here we evaluated confocal micro-Raman spectroscopy (785 nm laser) coupled with chemometric analysis to distinguish six closely related Listeria species, including L. monocytogenes, in both liquid media and milk. Raman spectra of different Listeria species and other bacteria (i.e., Staphylococcus aureus, Salmonella enterica and Escherichia coli) were collected to create two independent databases for detection in media and milk, respectively. Unsupervised chemometric models including principal component analysis and hierarchical cluster analysis were applied to differentiate L. monocytogenes from other Listeria species and other bacteria. To further evaluate the performance and reliability of the unsupervised chemometric analyses, supervised chemometrics were performed, including two discriminant analyses (DA) and soft independent modeling of class analogies (SIMCA). By analyzing Raman spectra via two DA-based chemometric models, average identification accuracies of 97.78% and 98.33% for L. monocytogenes in media, and 95.28% and 96.11% in milk, were obtained, respectively. SIMCA analysis also yielded satisfactory average classification accuracies (over 93% in both media and milk). This Raman spectroscopy-based detection of L. monocytogenes in media and milk can be completed within a few hours and requires no extensive sample preparation.
Classification of earth terrain using polarimetric synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Lim, H. H.; Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Shin, R. T.; Van Zyl, J. J.
1989-01-01
Supervised and unsupervised classification techniques are developed and used to classify the earth terrain components from SAR polarimetric images of San Francisco Bay and Traverse City, Michigan. The supervised techniques include the Bayes classifiers, normalized polarimetric classification, and simple feature classification using discriminants such as the absolute and normalized magnitude response of individual receiver channel returns and the phase difference between receiver channels. An algorithm is developed as an unsupervised technique which classifies terrain elements based on the relationship between the orientation angle and the handedness of the transmitting and receiving polarization states. It is found that supervised classification produces the best results when accurate classifier training data are used, while unsupervised classification may be applied when training data are not available.
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Arevalo, John; Basavanhally, Ajay; Madabhushi, Anant; González, Fabio
2015-01-01
Learning data representations directly from the data itself is an approach that has shown great success in different pattern recognition problems, outperforming state-of-the-art feature extraction schemes for different tasks in computer vision, speech recognition, and natural language processing. Representation learning applies unsupervised and supervised machine learning methods to large amounts of data to find building blocks that better represent the information in it. Digitized histopathology images represent a very good testbed for representation learning since they involve large amounts of highly complex visual data. This paper presents a comparative evaluation of different supervised and unsupervised representation learning architectures to specifically address open questions about which type of learning architecture (deep or shallow) and which type of learning (unsupervised or supervised) is optimal. In this paper we limit ourselves to addressing these questions in the context of distinguishing between anaplastic and non-anaplastic medulloblastomas from routine haematoxylin and eosin stained images. The unsupervised approaches evaluated were sparse autoencoders and topographic reconstruct independent component analysis, and the supervised approach was convolutional neural networks. Experimental results show that shallow architectures with more neurons are better than deeper architectures that do not take local spatial invariances into account, and that topographic constraints provide features invariant to scale and rotation that are useful for efficient tumor differentiation.
Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.
Orbán, Levente L; Chartier, Sylvain
2015-01-01
Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
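A decompose-and-reconstruct loop of the kind described can be sketched with scikit-learn's FastICA on synthetic mixed sources; the square/sawtooth waves and the mixing matrix are illustrative assumptions, not the study's floral test patterns:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 8, 2000)

# Two independent, non-Gaussian "pattern" sources mixed into two observations.
S = np.c_[np.sign(np.sin(3 * t)), ((2 * t) % 2) - 1]
A = np.array([[1.0, 0.5], [0.4, 1.0]])  # hypothetical mixing matrix
X = S @ A.T

# Decompose with ICA, then reconstruct the observations from the estimated
# components; the reconstruction error measures how much of the original
# structure the decomposition captured, mirroring the reconstruction-quality
# comparison used in the behavioural study.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
X_rec = ica.inverse_transform(S_est)
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

With as many components as channels the reconstruction is nearly exact; dropping components would raise `rel_err`, which is the quantity the model comparison turns on.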
Unsupervised Bayesian linear unmixing of gene expression microarrays.
Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O
2013-03-19
This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model for the data samples, which are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. First, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Second, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals were inoculated with influenza A/H3N2/Wisconsin. We show that the uBLU method significantly outperforms the other methods on the simulated and real data sets considered here. The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method when compared to other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF).
The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.
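Of the factor decomposition baselines mentioned, NMF is the easiest to sketch. This toy example shows only the shared non-negativity constraint; it does not reproduce uBLU's simplex constraint on the scores or its Bayesian estimation of the number of factors, and the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)

# Synthetic "expression" matrix: 60 samples x 50 genes built from 3
# non-negative signatures with non-negative mixing scores.
n_samples, n_genes, k = 60, 50, 3
signatures = rng.random((k, n_genes))
mix = rng.dirichlet(np.ones(k), size=n_samples)  # rows sum to one
V = mix @ signatures + 0.01 * rng.random((n_samples, n_genes))

# NMF factorizes V ~ W @ H with both factors constrained non-negative.
model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Unlike uBLU, `k` must be supplied here by hand, which is one of the gaps the Bayesian treatment closes.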
NASA Technical Reports Server (NTRS)
Quattrochi, D. A.
1985-01-01
The capabilities of TM data for discriminating land covers within three particular cultural and ecological realms were assessed. The agricultural investigation in Poinsett County, Arkansas, illustrates that TM data can successfully be used to discriminate a variety of crop cover types within the study area. The single-date TM classification produced results that were significantly better than those developed from multitemporal MSS data. For the Reelfoot Lake area of Tennessee, TM data, processed using unsupervised signature development techniques, produced a detailed classification of forested wetlands with excellent accuracy. Even in a small city of approximately 15,000 people (Union City, Tennessee), TM data can successfully be used to spectrally distinguish specific urban classes. Furthermore, the principal components analysis evaluation of the data shows that, through photointerpretation, it is possible to distinguish individual buildings and roof responses with the TM.
Radosavljevic-Stevanovic, Natasa; Markovic, Jelena; Agatonovic-Kustrin, Snezana; Razic, Slavica
2014-01-01
Illicit production and trade of Cannabis sativa affect many societies. This drug is the most popular and among the easiest to produce. Important information for the authorities includes the production locality and indicators of a particular production. This work is an attempt to recognise correlations between the metal content in the different parts of C. sativa L., in the soils where the plants were cultivated, and the cannabinoid content, as a potential indicator. The organic fraction of the leaves of Cannabis plants was investigated by GC-FID analysis. In addition, the determination of Cu, Fe, Cr, Mn, Zn, Ca and Mg was carried out by spectroscopic techniques (FAAS and GFAAS). In this study, numerous correlations between metal content in plants and soil, already confirmed in previous publications, were analysed by applying unsupervised chemometric methods, that is, principal component analysis, factor analysis and cluster analysis, in order to highlight their role in the biosynthesis of cannabinoids.
Inferring the interplay between network structure and market effects in Bitcoin
NASA Astrophysics Data System (ADS)
Kondor, Dániel; Csabai, István; Szüle, János; Pósfai, Márton; Vattay, Gábor
2014-12-01
A main focus in economics research is understanding the time series of prices of goods and assets. While statistical models using only the properties of the time series itself have been successful in many aspects, we expect to gain a better understanding of the phenomena involved if we can model the underlying system of interacting agents. In this article, we consider the history of Bitcoin, a novel digital currency system, for which the complete list of transactions is available for analysis. Using this dataset, we reconstruct the transaction network between users and analyze changes in the structure of the subgraph induced by the most active users. Our approach is based on the unsupervised identification of important features of the time variation of the network. Applying the widely used method of Principal Component Analysis to the matrix constructed from snapshots of the network at different times, we are able to show how structural changes in the network accompany significant changes in the exchange price of bitcoins.
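PCA of a matrix of network snapshots can be sketched with a plain SVD; the weekly-activity matrix below, with a single global growth trend, is an invented stand-in for the Bitcoin transaction-network data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "network snapshot" matrix: rows are weekly snapshots, columns are
# per-user activity measures. One shared growth trend plus noise stands in
# for the dominant structural mode of the transaction network.
n_weeks, n_users = 52, 40
trend = np.linspace(0, 1, n_weeks)   # shared growth over time
profile = rng.random(n_users)        # how strongly each user follows it
M = np.outer(trend, profile) + rng.normal(scale=0.02, size=(n_weeks, n_users))

# PCA via SVD of the centered snapshot matrix: the first left singular
# vector is the dominant time course of structural change, and its singular
# value measures how much variance that single mode explains.
Mc = M - M.mean(axis=0)
U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
explained = s[0] ** 2 / np.sum(s ** 2)
mode1 = U[:, 0] * s[0]               # PC1 score for each snapshot
```

In the study, time courses like `mode1` are what get compared against the bitcoin exchange price; here we only verify that PC1 recovers the planted trend.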
Anomaly Detection of Electromyographic Signals.
Ijaz, Ahsan; Choi, Jongeun
2018-04-01
In this paper, we provide a robust framework to detect anomalous electromyographic (EMG) signals and identify contamination types. As a first step for feature selection, an optimally selected Lawton wavelet transform is applied. Robust principal component analysis (rPCA) is then performed on these wavelet coefficients to obtain features in a lower dimension. The rPCA-based features are used for constructing a self-organizing map (SOM). Finally, hierarchical clustering is applied on the SOM, which separates anomalous signals residing in the smaller clusters and breaks them into logical units for contamination identification. The proposed methodology is tested using synthetic and real-world EMG signals. The synthetic EMG signals are generated using a heteroscedastic process mimicking desired experimental setups. A sub-part of these synthetic signals is introduced with anomalies. These results are followed by real EMG signals introduced with synthetic anomalies. Finally, a heterogeneous real-world data set with known quality issues is used under an unsupervised setting. The framework provides recall of 90% (±3.3) and precision of 99% (±0.4).
Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan
2014-10-01
Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined separately with supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) was used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves in the elongating stage were collected from the field, composed of 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (50 negative and 106 positive) were randomly selected as the validation set; the remaining samples (100 negative and 200 positive, 300 in total) were used as the modeling set, which was then subdivided into calibration (50 negative and 100 positive, 150 in total) and prediction sets (50 negative and 100 positive, 150 in total) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, leaving a total of 264 smoothing modes for screening. Pairwise combinations of the first three principal components were used, and the optimal combination of principal components was selected according to model performance. Based on all divisions of the calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, model validation was performed with the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved.
For the optimal SG-PCA-LDA model, the recognition rates of positive and negative validation samples were 94.3% and 96.0%; for the optimal SG-PCA-HCA model, they were 92.5% and 98.0%, respectively. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, providing a convenient screening method for transgenic sugarcane breeding.
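The SG-PCA-LDA pipeline can be sketched end to end with standard tools; the synthetic leaf spectra, smoothing window, and component count below are illustrative assumptions, not the study's optimized parameters:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
wav = np.linspace(0, 1, 200)

def leaves(center, n):
    """Toy Vis-NIR spectra: a broad absorption band plus channel noise."""
    band = np.exp(-0.5 * ((wav - center) / 0.08) ** 2)
    return band + rng.normal(scale=0.1, size=(n, wav.size))

neg = leaves(0.45, 60)   # stand-in for non-transgenic leaf spectra
pos = leaves(0.55, 120)  # stand-in for transgenic leaf spectra
X = np.vstack([neg, pos])
y = np.array([0] * 60 + [1] * 120)

# SG smoothing (window 11, 2nd-order polynomial) along each spectrum, then
# PCA scores feed a supervised LDA classifier: the SG-PCA-LDA chain.
Xs = savgol_filter(X, window_length=11, polyorder=2, axis=1)
scores = PCA(n_components=3).fit_transform(Xs)
acc = LinearDiscriminantAnalysis().fit(scores, y).score(scores, y)
```

The study's screening step amounts to repeating this chain over many SG window/derivative modes and many calibration/prediction splits, then keeping the mode with the best average prediction effect.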
de Carvalho Rocha, Werickson Fortunato; Schantz, Michele M.; Sheen, David A.; Chu, Pamela M.; Lippa, Katrice A.
2017-01-01
As feedstocks transition from conventional oil to unconventional petroleum sources and biomass, it will be necessary to determine whether a particular fuel or fuel blend is suitable for use in engines. Certifying a fuel as safe for use is time-consuming and expensive and must be performed for each new fuel. In principle, the suitability of a fuel should be completely determined by its chemical composition. This composition can be probed through use of detailed analytical techniques such as gas chromatography-mass spectrometry (GC-MS). In traditional analysis, chromatograms would be used to determine the details of the composition. In the approach taken in this paper, the chromatogram is assumed to be entirely representative of the composition of a fuel and is used directly as the input to an algorithm in order to develop a model that is predictive of a fuel's suitability. When a new fuel is proposed for service, its suitability for any application could then be ascertained by using this model to compare its chromatogram with those of the fuels already known to be suitable for that application. In this paper, we lay the mathematical and informatics groundwork for a predictive model of hydrocarbon properties. The objective of this work was to develop a reliable model for unsupervised classification of the hydrocarbons as a prelude to developing a predictive model of their engine-relevant physical and chemical properties. A set of hydrocarbons including biodiesel fuels, gasoline, highway and marine diesel fuels, and crude oils was collected and GC-MS profiles obtained. These profiles were then analyzed using multi-way principal components analysis (MPCA), parallel factor analysis (PARAFAC), and a self-organizing map (SOM), which is a kind of artificial neural network.
It was found that, while MPCA and PARAFAC were able to recover descriptive models of the fuels, their linear nature obscured some of the finer physical details due to the widely varying composition of the fuels. The SOM was able to find a descriptive classification model which has the potential for practical recognition and perhaps prediction of fuel properties. PMID:28603295
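A self-organizing map of the kind used above can be written in a few dozen lines of NumPy. This is a minimal sketch, not the authors' implementation: the grid size, decay schedules, and synthetic two-cluster data are all assumptions.

```python
# Minimal self-organizing map (SOM): weights of a small grid of units are
# pulled toward each sample, with a Gaussian neighborhood that shrinks over
# training so the map unfolds onto the data.
import numpy as np

def train_som(X, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.standard_normal((rows * cols, X.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(X)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            frac = step / n_steps
            lr = lr0 * (1 - frac)               # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5   # shrinking neighborhood radius
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))  # neighborhood function
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
W = train_som(X)
# Well-separated clusters should map to different regions of the grid
bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
```

In applications like the fuel study above, each chromatogram would be a high-dimensional input vector rather than a 2-D point, but the training loop is the same.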
NASA Astrophysics Data System (ADS)
Gjaja, Marin N.
1997-11-01
Neural networks for supervised and unsupervised learning are developed and applied to problems in remote sensing, continuous map learning, and speech perception. Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART networks synthesize fuzzy logic and neural networks, and supervised ARTMAP networks incorporate ART modules for prediction and classification. New ART and ARTMAP methods resulting from analyses of data structure, parameter specification, and category selection are developed. Architectural modifications providing flexibility for a variety of applications are also introduced and explored. A new methodology for automatic mapping from Landsat Thematic Mapper (TM) and terrain data, based on fuzzy ARTMAP, is developed. System capabilities are tested on a challenging remote sensing problem, prediction of vegetation classes in the Cleveland National Forest from spectral and terrain features. After training at the pixel level, performance is tested at the stand level, using sites not seen during training. Results are compared to those of maximum likelihood classifiers, back propagation neural networks, and K-nearest neighbor algorithms. Best performance is obtained using a hybrid system based on a convex combination of fuzzy ARTMAP and maximum likelihood predictions. This work forms the foundation for additional studies exploring fuzzy ARTMAP's capability to estimate class mixture composition for non-homogeneous sites. Exploratory simulations apply ARTMAP to the problem of learning continuous multidimensional mappings. A novel system architecture retains basic ARTMAP properties of incremental and fast learning in an on-line setting while adding components to solve this class of problems. The perceptual magnet effect is a language-specific phenomenon arising early in infant speech development that is characterized by a warping of speech sound perception. 
An unsupervised neural network model is proposed that embodies two principal hypotheses supported by experimental data--that sensory experience guides language-specific development of an auditory neural map and that a population vector can predict psychological phenomena based on map cell activities. Model simulations show how a nonuniform distribution of map cell firing preferences can develop from language-specific input and give rise to the magnet effect.
NASA Astrophysics Data System (ADS)
Karmakar, Mampi; Maiti, Saumen; Singh, Amrita; Ojha, Maheswar; Maity, Bhabani Sankar
2017-07-01
Modeling and classification of the subsurface lithology is very important to understand the evolution of the earth system. However, precise classification and mapping of lithology using a single framework are difficult due to the complexity and the nonlinearity of the problem driven by limited core sample information. Here, we implement a joint approach by combining the unsupervised and the supervised methods in a single framework for better classification and mapping of rock types. In the unsupervised method, we use principal component analysis (PCA), K-means cluster analysis (K-means), dendrogram analysis, Fuzzy C-means (FCM) cluster analysis and the self-organizing map (SOM). In the supervised method, we use Bayesian neural networks (BNN) optimized by the Hybrid Monte Carlo (HMC) (BNN-HMC) and the scaled conjugate gradient (SCG) (BNN-SCG) techniques. We use P-wave velocity, density, neutron porosity, resistivity and gamma ray logs of well U1343E of the Integrated Ocean Drilling Program (IODP) Expedition 323 in the Bering Sea slope region. While the SOM algorithm allows us to visualize the clustering results in the spatial domain, the combined classification schemes (supervised and unsupervised) uncover the different patterns of lithology, such as clayey-silt, diatom-silt and silty-clay, from an un-cored section of the drilled hole. In addition, the BNN approach is capable of estimating uncertainty in the predictive modeling of three types of rocks over the entire lithology section at site U1343. Alternate succession of clayey-silt, diatom-silt and silty-clay may be representative of crustal inhomogeneity in general and thus could be a basis for a detailed study related to the productivity of methane gas in the oceans worldwide. Moreover, at 530 m depth below seafloor (DSF), the transition from Pliocene to Pleistocene could be linked to lithological alternation between the clayey-silt and the diatom-silt.
The present results could provide the basis for a detailed study to gain deeper insight into the Bering Sea's sediment deposition and sequence.
Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej
2011-01-01
A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. Sparseness constraint implies duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it takes implicitly into account nonlinearities present in the image (ie, they are not required to be known) and it increases spectral diversity (ie, contrast) between materials, due to increased dimensionality of the mapped space. This is expected to improve performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders of the experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in the spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. PMID:21708116
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become popular in computer vision because it is an important step in most medical image analysis tasks. This paper compares different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to improve the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models to segment the blast cells from the red blood cells and background regions in the leukemia images. The different colour components were analyzed in order to identify the component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region-growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is best at segmenting the nuclei of the blast cells in acute leukemia images, compared with the other colour components of the RGB and HSI colour models.
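The segmentation idea above, clustering on the saturation component of the HSI model, can be sketched as follows. Plain k-means stands in for the paper's moving k-means variant, and the synthetic "image" (a saturated square on a grey background) and k=2 are assumptions.

```python
# Segment an image by k-means clustering of the HSI saturation channel.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = np.full((40, 40, 3), 0.6) + 0.02 * rng.standard_normal((40, 40, 3))
img[10:20, 10:20] = [0.6, 0.1, 0.6]          # saturated "blast cell" region
img = img.clip(0, 1)

# HSI saturation: S = 1 - 3*min(R,G,B)/(R+G+B)
r, g, b = img[..., 0], img[..., 1], img[..., 2]
sat = 1 - 3 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-9)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    sat.reshape(-1, 1)).reshape(40, 40)
# The saturated square should fall in one cluster, the grey background in the other
```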
Yang, Guang; Raschke, Felix; Barrick, Thomas R; Howe, Franklyn A
2015-09-01
To investigate whether nonlinear dimensionality reduction improves unsupervised classification of ¹H MRS brain tumor data compared with a linear method. In vivo single-voxel ¹H magnetic resonance spectroscopy (55 patients) and ¹H magnetic resonance spectroscopic imaging (MRSI) (29 patients) data were acquired from histopathologically diagnosed gliomas. Data reduction using Laplacian eigenmaps (LE) or independent component analysis (ICA) was followed by k-means clustering or agglomerative hierarchical clustering (AHC) for unsupervised learning to assess tumor grade and for tissue type segmentation of MRSI data. An accuracy of 93% in classification of glioma grade II and grade IV, with 100% accuracy in distinguishing tumor and normal spectra, was obtained by LE with unsupervised clustering, but not with the combination of k-means and ICA. With ¹H MRSI data, LE provided a more linear distribution of data for cluster analysis and better cluster stability than ICA. LE combined with k-means or AHC provided 91% accuracy for classifying tumor grade and 100% accuracy for identifying normal tissue voxels. Color-coded visualization of normal brain, tumor core, and infiltration regions was achieved with LE combined with AHC. The LE method is promising for unsupervised clustering to separate brain and tumor tissue with automated color-coding for visualization of ¹H MRSI data after cluster analysis. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana
2015-06-01
Low-contrast images, such as color microscopic images of unstained histological specimens, are composed of objects with highly correlated spectral profiles. Such images are very hard to segment. Here, we present a method that nonlinearly maps low-contrast color image into an image with an increased number of non-physical channels and a decreased correlation between spectral profiles. The method is a proof-of-concept validated on the unsupervised segmentation of color images of unstained specimens, in which case the tissue components appear colorless when viewed under the light microscope. Specimens of human hepatocellular carcinoma, human liver with metastasis from colon and gastric cancer and mouse fatty liver were used for validation. The average correlation between the spectral profiles of the tissue components was greater than 0.9985, and the worst case correlation was greater than 0.9997. The proposed method can potentially be applied to the segmentation of low-contrast multichannel images with high spatial resolution that arise in other imaging modalities.
Unsupervised Learning of Overlapping Image Components Using Divisive Input Modulation
Spratling, M. W.; De Meyer, K.; Kompass, R.
2009-01-01
This paper demonstrates that nonnegative matrix factorisation is mathematically related to a class of neural networks that employ negative feedback as a mechanism of competition. This observation inspires a novel learning algorithm which we call Divisive Input Modulation (DIM). The proposed algorithm provides a mathematically simple and computationally efficient method for the unsupervised learning of image components, even in conditions where these elementary features overlap considerably. To test the proposed algorithm, a novel artificial task is introduced which is similar to the frequently-used bars problem but employs squares rather than bars to increase the degree of overlap between components. Using this task, we investigate how the proposed method performs on the parsing of artificial images composed of overlapping features, given the correct representation of the individual components; and secondly, we investigate how well it can learn the elementary components from artificial training images. We compare the performance of the proposed algorithm with its predecessors including variations on these algorithms that have produced state-of-the-art performance on the bars problem. The proposed algorithm is more successful than its predecessors in dealing with overlap and occlusion in the artificial task that has been used to assess performance. PMID:19424442
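The NMF side of this connection is easy to make concrete. Below is a minimal Lee-Seung multiplicative-update NMF recovering two deliberately overlapping nonnegative components from synthetic mixtures; the component shapes and sizes are assumptions, and this is not the DIM algorithm itself.

```python
# Nonnegative matrix factorisation via multiplicative updates: V ≈ W @ H with
# all factors nonnegative, on synthetic mixtures of two overlapping components.
import numpy as np

def nmf(V, k, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # multiplicative updates keep
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # the factors nonnegative
    return W, H

# Two overlapping "image components" (1-D patterns sharing pixels 10..14)
comp = np.zeros((2, 25))
comp[0, :15] = 1.0
comp[1, 10:] = 1.0
rng = np.random.default_rng(1)
V = rng.random((40, 2)) @ comp            # 40 nonnegative mixtures
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Despite the overlap, the rank-2 factorisation reconstructs the mixtures accurately, which is the property the DIM work builds on.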
Taguchi, Y-h
2015-01-01
Transgenerational epigenetics (TGE) are currently considered important in disease, but the mechanisms involved are not yet fully understood. TGE abnormalities expected to cause disease are likely to be initiated during development and to be mediated by aberrant gene expression associated with aberrant promoter methylation that is heritable between generations. However, because methylation is removed and then re-established during development, it is not easy to identify promoter methylation abnormalities by comparing normal lineages with those expected to exhibit TGE abnormalities. This study applied the recently proposed principal component analysis (PCA)-based unsupervised feature extraction to previously reported and publicly available gene expression/promoter methylation profiles of rat primordial germ cells, between E13 and E16 of the F3 generation vinclozolin lineage that is expected to exhibit TGE abnormalities, to identify multiple genes that exhibited aberrant gene expression/promoter methylation during development. The biological feasibility of the identified genes was tested via enrichment analyses of various biological concepts, including pathway analysis, gene ontology terms and protein-protein interactions. All validations suggested superiority of the proposed method over three conventional and popular supervised methods that employed the t test, limma and significance analysis of microarrays, respectively. The identified genes were globally related to tumors, the prostate, kidney, testis and the immune system, and were previously reported to be related to various diseases caused by TGE. Among the genes reported by PCA-based unsupervised feature extraction, we propose that chemokine signaling pathways and leucine-rich repeat proteins are key factors that initiate transgenerational epigenetic-mediated diseases, because multiple genes included in these two categories were identified in this study.
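The core selection step of PCA-based unsupervised feature extraction can be sketched as follows: genes are embedded by PCA of the expression matrix, and genes with outlying scores on a principal component are flagged. The synthetic matrix, in which 10 of 500 genes are shifted in one sample group, and the 3-sigma threshold are both assumptions for illustration.

```python
# PCA-based unsupervised feature (gene) extraction: PCA over genes, then
# select genes whose PC1 scores are outliers.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_genes, n_samples = 500, 12
X = rng.standard_normal((n_genes, n_samples))   # genes x samples expression
X[:10, 6:] += 4.0                               # 10 "aberrant" genes in group 2

pc = PCA(n_components=2).fit_transform(X)[:, 0]           # PC1 score per gene
selected = np.where(np.abs(pc - pc.mean()) > 3 * pc.std())[0]  # outlier genes
```

No sample labels are used anywhere, which is the point of the method: the aberrant genes surface purely because they dominate a variance direction.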
Supervised detection of exoplanets in high-contrast imaging sequences
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, C. A.; Absil, O.; Van Droogenbroeck, M.
2018-06-01
Context. Post-processing algorithms play a key role in pushing the detection limits of high-contrast imaging (HCI) instruments. State-of-the-art image processing approaches for HCI enable the production of science-ready images relying on unsupervised learning techniques, such as low-rank approximations, for generating a model point spread function (PSF) and subtracting the residual starlight and speckle noise. Aims: In order to maximize the detection rate of HCI instruments and survey campaigns, advanced algorithms with higher sensitivities to faint companions are needed, especially for the speckle-dominated innermost region of the images. Methods: We propose a reformulation of the exoplanet detection task (for ADI sequences) that builds on well-established machine learning techniques to take HCI post-processing from an unsupervised to a supervised learning context. In this new framework, we present algorithmic solutions using two different discriminative models: SODIRF (random forests) and SODINN (neural networks). We test these algorithms on real ADI datasets from VLT/NACO and VLT/SPHERE HCI instruments. We then assess their performances by injecting fake companions and using receiver operating characteristic analysis. This is done in comparison with state-of-the-art ADI algorithms, such as ADI principal component analysis (ADI-PCA). Results: This study shows the improved sensitivity versus specificity trade-off of the proposed supervised detection approach. At the diffraction limit, SODINN improves the true positive rate by a factor ranging from 2 to 10 (depending on the dataset and angular separation) with respect to ADI-PCA when working at the same false-positive level. Conclusions: The proposed supervised detection framework outperforms state-of-the-art techniques in the task of discriminating planet signal from speckles. 
In addition, it offers the possibility of re-processing existing HCI databases to maximize their scientific return and potentially improve the demographics of directly imaged exoplanets.
Warren, Frederick J; Perston, Benjamin B; Galindez-Najera, Silvia P; Edwards, Cathrina H; Powell, Prudence O; Mandalari, Giusy; Campbell, Grant M; Butterworth, Peter J; Ellis, Peter R
2015-01-01
Infrared microspectroscopy is a tool with potential for studies of the microstructure, chemical composition and functionality of plants at a subcellular level. Here we present the use of high-resolution benchtop-based infrared microspectroscopy to investigate the microstructure of Triticum aestivum L. (wheat) kernels and Arabidopsis leaves. Images of isolated wheat kernel tissues and whole wheat kernels following hydrothermal processing and simulated gastric and duodenal digestion were generated, as well as images of Arabidopsis leaves at different points during a diurnal cycle. Individual cells and cell walls were resolved, and large structures within cells, such as starch granules and protein bodies, were clearly identified. Contrast was provided by converting the hyperspectral image cubes into false-colour images using either principal component analysis (PCA) overlays or correlation analysis. The unsupervised PCA approach provided a clear view of the sample microstructure, whereas the correlation analysis was used to confirm the identity of different anatomical structures using the spectra from isolated components. It was then demonstrated that gelatinized and native starch within cells could be distinguished, and that the loss of starch during wheat digestion could be observed, as well as the accumulation of starch in leaves during a diurnal period. PMID:26400058
Prasain, J K; Wilson, L S; Arabshahi, A; Grubbs, C; Barnes, S
2017-08-01
The purpose of this study was to investigate the effect of radiation on the content of animal diet constituents using global metabolomics. Aqueous methanolic extracts of control and cobalt-60-irradiated Teklad 7001 diets were comprehensively analyzed using nano-liquid chromatography-MS/MS. Among the over 2000 ions revealed by XCMS followed by data preprocessing, 94 positive and 143 negative metabolite ions had greater than 1.5-fold changes and p-values <0.01. Use of MetaboAnalyst statistical software demonstrated complete separation of the irradiated and non-irradiated diets in unsupervised principal components analysis and supervised partial least squares discriminant analysis. Irradiation led to an increase in the content of phytochemicals such as glucosinolates and oxidized lipids in the diet. Twenty-eight metabolites that were significantly changed in the irradiated samples were putatively identified at the level of molecular formulae by MS/MS. MS/MS ALL analysis of chloroform-methanol extracts of the irradiated diet showed increased levels of a number of unique linoleic acid-derived branched fatty acid esters of hydroxy fatty acids. These data imply that gamma irradiation of animal diets causes chemical changes to dietary components, which in turn may influence the risk of mammary cancer. Copyright © 2017 John Wiley & Sons, Ltd.
Jiang, Jheng Jie; Lee, Chon Lin; Fang, Meng Der; Boyd, Kenneth G.; Gibb, Stuart W.
2015-01-01
This paper presents a methodology based on multivariate data analysis for characterizing potential source contributions of emerging contaminants (ECs) detected in 26 river water samples across multi-scape regions during dry and wet seasons. Based on this methodology, we unveil an approach toward potential source contributions of ECs, a concept we refer to as the “Pharmaco-signature.” Exploratory analysis of data points has been carried out by unsupervised pattern recognition (hierarchical cluster analysis, HCA) and a receptor model (principal component analysis-multiple linear regression, PCA-MLR) in an attempt to demonstrate significant source contributions of ECs in different land-use zones. Robust cluster solutions grouped the database according to different EC profiles. PCA-MLR identified that 58.9% of the mean summed ECs were contributed by domestic impact, 9.7% by antibiotics application, and 31.4% by drug abuse. Diclofenac, ibuprofen, codeine, ampicillin, tetracycline, and erythromycin-H2O have significant pollution risk quotients (RQ>1), indicating potentially high risk to aquatic organisms in Taiwan. PMID:25874375
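The PCA-MLR step can be sketched as follows: principal component scores of the concentration matrix are regressed against the summed concentrations. The three synthetic "sources" and their mixing proportions are assumptions; a full receptor model would additionally convert the fitted regression into per-source percentage contributions.

```python
# PCA-MLR sketch: PCA scores of a sample-by-analyte concentration matrix are
# used as regressors for the total summed concentration of each sample.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
profiles = rng.random((3, 12))                     # 3 source profiles x 12 ECs
strengths = rng.random((50, 3)) * [5.0, 2.0, 1.0]  # per-sample source strengths
C = strengths @ profiles + 0.01 * rng.standard_normal((50, 12))  # concentrations

scores = PCA(n_components=3).fit_transform(C)          # PCA step
mlr = LinearRegression().fit(scores, C.sum(axis=1))    # MLR on summed ECs
r2 = mlr.score(scores, C.sum(axis=1))
```

Because the synthetic data are generated from exactly three sources, three retained components explain essentially all of the summed-concentration variance.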
Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.
2008-01-01
Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event in both near-real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest-neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices that cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar
2018-01-15
Quality of saffron, a valuable food additive, could considerably affect consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin-layer chromatographic (TLC) patterns was introduced. This includes performing a series of image pre-processing techniques on TLC images, such as compression, inversion, elimination of the general baseline (using asymmetric least squares (AsLS)), removal of spot shift and concavity (by correlation optimized warping (COW)), and finally conversion to RGB chromatograms. Subsequently, an unsupervised multivariate data analysis including principal component analysis (PCA) and k-means clustering was utilized to investigate the effect of soil salinity, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain the chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography-diode array detection (HPLC-DAD). Accordingly, the saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
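The AsLS baseline-elimination step mentioned above has a compact standard form (iteratively reweighted penalized least squares). The sketch below follows that general recipe on a synthetic chromatogram; the `lam` and `p` values are typical textbook choices, not the paper's settings.

```python
# Asymmetric least squares (AsLS) baseline estimation: a smoothness-penalized
# fit whose weights are iteratively biased toward points below the fit, so the
# result tracks the baseline rather than the peaks.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    L = len(y)
    # second-difference operator for the smoothness penalty
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(L - 2, L), format="csc")
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.diags(w, 0, format="csc")
        z = spsolve(W + lam * D.T @ D, w * y)
        w = p * (y > z) + (1 - p) * (y < z)   # asymmetric reweighting
    return z

x = np.arange(500, dtype=float)
true_base = 2.0 + 0.004 * x                               # drifting baseline
signal = 5 * np.exp(-((x - 150) ** 2) / 50) + 3 * np.exp(-((x - 350) ** 2) / 50)
y = true_base + signal
baseline = asls_baseline(y)
corrected = y - baseline                                   # peaks only
```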
A Self-Organizing Incremental Neural Network based on local distribution learning.
Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi
2016-12-01
In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, and the number of nodes will not grow without limit. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chakraborty, Somsubhra; Weindorf, David C; Li, Bin; Ali Aldabaa, Abdalsamad Abdalsatar; Ghosh, Rakesh Kumar; Paul, Sathi; Nasim Ali, Md
2015-05-01
Using 108 petroleum-contaminated soil samples, this pilot study proposed a new analytical approach combining visible near-infrared diffuse reflectance spectroscopy (VisNIR DRS) and portable X-ray fluorescence spectrometry (PXRF) for rapid and improved quantification of soil petroleum contamination. Results indicated that an advanced fused model, in which VisNIR DRS spectra-based penalized spline regression (PSR) was used to predict total petroleum hydrocarbon and PXRF elemental data-based random forest regression was used to model the PSR residuals, outperformed (R²=0.78, residual prediction deviation (RPD)=2.19) all other models tested, even producing better generalization than using VisNIR DRS alone (RPDs of 1.64, 1.86, and 1.96 for random forest, penalized spline regression, and partial least squares regression, respectively). Additionally, unsupervised principal component analysis using the PXRF+VisNIR DRS system qualitatively separated contaminated soils from control samples. Fusion of PXRF elemental data and VisNIR derivative spectra produced an optimized model for total petroleum hydrocarbon quantification in soils. Copyright © 2015 Elsevier B.V. All rights reserved.
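The residual-fusion structure of the winning model can be sketched generically: a first model predicts the target from one sensor's features, and a random forest then models that model's residuals from the second sensor's features. Here Ridge regression stands in for penalized spline regression, and all data are synthetic assumptions.

```python
# Two-stage sensor fusion: stage 1 fits the target from "spectral" features;
# stage 2 fits the stage-1 residuals from "elemental" features; predictions
# are summed.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 300
spectra = rng.standard_normal((n, 20))    # stand-in "VisNIR" features
elements = rng.standard_normal((n, 5))    # stand-in "PXRF" features
# target: linear in one spectral feature, nonlinear in one elemental feature
y = 2 * spectra[:, 0] + np.sin(3 * elements[:, 0]) + 0.05 * rng.standard_normal(n)

Xs_tr, Xs_te, Xe_tr, Xe_te, y_tr, y_te = train_test_split(
    spectra, elements, y, random_state=0)

base = Ridge().fit(Xs_tr, y_tr)                               # stage 1
resid = y_tr - base.predict(Xs_tr)
rf = RandomForestRegressor(random_state=0).fit(Xe_tr, resid)  # stage 2
fused_pred = base.predict(Xs_te) + rf.predict(Xe_te)

r2_base = r2_score(y_te, base.predict(Xs_te))
r2_fused = r2_score(y_te, fused_pred)
```

The fused model should beat the spectral model alone whenever the second sensor carries signal that the first one misses, which mirrors the RPD improvement reported above.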
Diomande, Didier; Antheaume, Ingrid; Leroux, Maël; Lalande, Julie; Balayssac, Stéphane; Remaud, Gérald S; Tea, Illa
2015-12-01
Multi-element stable isotope ratios have been assessed as a means to distinguish between fermented cocoa beans from different geographical and varietal origins. Isotope ratios and percentage composition for C and N were measured in different tissues (cotyledons, shells) and extracts (pure theobromine, defatted cocoa solids, protein, lipids) obtained from fermented cocoa bean samples. Sixty-one samples from 24 different geographical origins covering all four continental areas producing cocoa were analyzed. Treatment of the data with unsupervised (Principal Component Analysis) and supervised (Partial Least Squares Discriminant Analysis) multiparametric statistical methods allowed the cocoa beans from different origins to be distinguished. The most discriminant variables identified as responsible for geographical and varietal differences were the δ¹⁵N and δ¹³C values of cocoa beans and some extracts and tissues. It can be shown that the isotope ratios are correlated with the altitude and precipitation conditions found in the different cocoa-growing regions. Copyright © 2015 Elsevier Ltd. All rights reserved.
GLS-Finder: A Platform for Fast Profiling of Glucosinolates in Brassica Vegetables.
Sun, Jianghao; Zhang, Mengliang; Chen, Pei
2016-06-01
Mass spectrometry combined with related tandem techniques has become the most popular method for plant secondary metabolite characterization. We introduce a new strategy based on in-database searching, mass fragmentation behavior study, and formula prediction for fast profiling of glucosinolates, a class of important compounds in Brassica vegetables. A MATLAB script-based expert system computer program, "GLS-Finder", was developed. It is capable of qualitative and semi-quantitative analyses of glucosinolates in samples using data generated by ultrahigh-performance liquid chromatography-high-resolution accurate mass with multi-stage mass fragmentation (UHPLC-HRAM/MS(n)). A suite of bioinformatic tools was integrated into the "GLS-Finder" to perform raw data deconvolution, peak alignment, glucosinolate putative assignments, semi-quantitation, and unsupervised principal component analysis (PCA). GLS-Finder was successfully applied to identify intact glucosinolates in 49 commonly consumed Brassica vegetable samples in the United States. It is believed that this work introduces a new way of fast data processing and interpretation for qualitative and quantitative analyses of glucosinolates, with greatly improved efficiency compared with manual identification.
Spike sorting based upon machine learning algorithms (SOMA).
Horton, P M; Nicol, A U; Kendrick, K M; Feng, J F
2007-02-15
We have developed a spike sorting method, using a combination of various machine learning algorithms, to analyse electrophysiological data and automatically determine the number of sampled neurons from an individual electrode, and discriminate their activities. We discuss extensions to a standard unsupervised learning algorithm (Kohonen), as using a simple application of this technique would only identify a known number of clusters. Our extra techniques automatically identify the number of clusters within the dataset, and their sizes, thereby reducing the chance of misclassification. We also discuss a new pre-processing technique, which transforms the data into a higher-dimensional feature space revealing separable clusters. Using principal component analysis (PCA) alone may not achieve this. Our new approach appends the features acquired using PCA with features describing the geometric shapes that constitute a spike waveform. To validate our new spike sorting approach, we have applied it to multi-electrode array datasets acquired from the rat olfactory bulb and from the sheep infero-temporal cortex, as well as to simulated data. The SOMA software is available at http://www.sussex.ac.uk/Users/pmh20/spikes.
Washio, Kana; Oka, Takashi; Abdalkader, Lamia; Muraoka, Michiko; Shimada, Akira; Oda, Megumi; Sato, Hiaki; Takata, Katsuyoshi; Kagami, Yoshitoyo; Shimizu, Norio; Kato, Seiichi; Kimura, Hiroshi; Nishizaki, Kazunori; Yoshino, Tadashi; Tsukahara, Hirokazu
2017-11-01
The human herpesvirus Epstein-Barr virus (EBV) is a known oncogenic virus and plays important roles in life-threatening T/NK-cell lymphoproliferative disorders (T/NK-cell LPD) such as hypersensitivity to mosquito bite (HMB), chronic active EBV infection (CAEBV), and NK/T-cell lymphoma/leukemia. During the clinical courses of HMB and CAEBV, patients frequently develop malignant lymphomas and the diseases progress sequentially. In the present study, gene expression of CD16(-)CD56(+), EBV(+) HMB, CAEBV, NK-lymphoma, and NK-leukemia cell lines, which were established from patients, was analyzed using oligonucleotide microarrays and compared to that of CD56(bright)CD16(dim/-) NK cells from healthy donors. Principal components analysis showed that CAEBV and NK-lymphoma cells were located relatively close together, indicating that they had similar expression profiles. Unsupervised hierarchical clustering analyses of microarray data and gene ontology analysis revealed specific gene clusters and identified several candidate disease-responsible genes that can be used to discriminate each category of NK-LPD and NK-cell lymphoma/leukemia.
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M
2015-01-01
Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. Accordingly, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
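A minimal sketch of the non-structured GMM variant, on toy two-feature "voxels" rather than real multi-sequence MR data (the class means and noise level here are invented for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Toy two-feature "voxels" drawn from 3 tissue-like classes (values invented)
means = np.array([[0.2, 0.1], [0.5, 0.6], [0.9, 0.4]])
true_labels = rng.integers(0, 3, size=4096)
voxels = means[true_labels] + rng.normal(scale=0.05, size=(4096, 2))

# Unsupervised segmentation: fit a GMM, then assign each voxel to a component
gmm = GaussianMixture(n_components=3, random_state=0).fit(voxels)
segmentation = gmm.predict(voxels)
```

In the paper's pipeline a postprocessing step then decides which fitted components correspond to tumour tissue; here the components are simply left as anonymous cluster labels.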
Discovery of Deep Structure from Unlabeled Data
2014-11-01
GPU processors. To evaluate the unsupervised learning component of the algorithms (which has become of less importance in the era of "big data" … representations to those in biological visual, auditory, and somatosensory cortex; and ran numerous control experiments investigating the impact of …
Warren, Frederick J; Perston, Benjamin B; Galindez-Najera, Silvia P; Edwards, Cathrina H; Powell, Prudence O; Mandalari, Giusy; Campbell, Grant M; Butterworth, Peter J; Ellis, Peter R
2015-11-01
Infrared microspectroscopy is a tool with potential for studies of the microstructure, chemical composition and functionality of plants at a subcellular level. Here we present the use of high-resolution benchtop infrared microspectroscopy to investigate the microstructure of Triticum aestivum L. (wheat) kernels and Arabidopsis leaves. Images of isolated wheat kernel tissues and whole wheat kernels following hydrothermal processing and simulated gastric and duodenal digestion were generated, as well as images of Arabidopsis leaves at different points during a diurnal cycle. Individual cells and cell walls were resolved, and large structures within cells, such as starch granules and protein bodies, were clearly identified. Contrast was provided by converting the hyperspectral image cubes into false-colour images using either principal component analysis (PCA) overlays or by correlation analysis. The unsupervised PCA approach provided a clear view of the sample microstructure, whereas the correlation analysis was used to confirm the identity of different anatomical structures using the spectra from isolated components. It was then demonstrated that gelatinized and native starch within cells could be distinguished, and that the loss of starch during wheat digestion could be observed, as well as the accumulation of starch in leaves during a diurnal period. © 2015 The Authors The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
Analysis of the mutations induced by conazole fungicides in vivo.
Ross, Jeffrey A; Leavitt, Sharon A
2010-05-01
The mouse liver tumorigenic conazole fungicides triadimefon and propiconazole have previously been shown to be in vivo mouse liver mutagens in the Big Blue transgenic mutation assay when administered in feed at tumorigenic doses, whereas the non-tumorigenic conazole myclobutanil was not mutagenic. DNA sequencing of the mutants recovered from each treatment group as well as from animals receiving control diet was conducted to gain additional insight into the mode of action by which tumorigenic conazoles induce mutations. Relative dinucleotide mutabilities (RDMs) were calculated for each possible dinucleotide in each treatment group and then examined by multivariate statistical analysis techniques. Unsupervised hierarchical clustering analysis of RDM values segregated two independent control groups together, along with the non-tumorigen myclobutanil. The two tumorigenic conazoles clustered together in a distinct grouping. Partitioning around medoids of RDM values into two clusters also groups the triadimefon and propiconazole together in one cluster and the two control groups and myclobutanil together in a second cluster. Principal component analysis of these results identifies two components that account for 88.3% of the variability in the points. Taken together, these results are consistent with the hypothesis that propiconazole- and triadimefon-induced mutations do not represent clonal expansion of background mutations and support the hypothesis that they arise from the accumulation of reactive electrophilic metabolic intermediates within the liver in vivo.
Mat-Desa, Wan N S; Ismail, Dzulkiflee; NicDaeid, Niamh
2011-10-15
Three different medium petroleum distillate (MPD) products (white spirit, paint brush cleaner, and lamp oil) were purchased from commercial stores in Glasgow, Scotland. Samples of 10, 25, 50, 75, 90, and 95% evaporated product were prepared, resulting in 56 samples in total, which were analyzed using gas chromatography-mass spectrometry. Data sets from the chromatographic patterns were examined and preprocessed for unsupervised multivariate analyses using principal component analysis (PCA), hierarchical cluster analysis (HCA), and a self-organizing feature map (SOFM) artificial neural network. It was revealed that data sets comprised of higher boiling point hydrocarbon compounds provided a good means for the classification of the samples and successfully linked highly weathered samples back to their unevaporated counterpart in every case. The classification abilities of SOFM were further tested and validated for their predictive abilities, where one set of weathered data in each case was withdrawn from the sample set and used as a test set for the retrained network. This revealed SOFM to be an outstanding mechanism for sample discrimination and linkage over the more conventional PCA and HCA methods often suggested for such data analysis. SOFM also has the advantage of providing additional information through the evaluation of component planes, facilitating the investigation of underlying variables that account for the classification. © 2011 American Chemical Society
Widlak, Piotr; Mrukwa, Grzegorz; Kalinowska, Magdalena; Pietrowska, Monika; Chekan, Mykola; Wierzgon, Janusz; Gawin, Marta; Drazek, Grzegorz; Polanska, Joanna
2016-06-01
Intra-tumor heterogeneity is a pressing problem of molecular oncology that could be addressed by imaging mass spectrometry. Here we aimed to assess the molecular heterogeneity of oral squamous cell carcinoma and to detect signatures discriminating normal and cancerous epithelium. Tryptic peptides were analyzed by MALDI-IMS in tissue specimens from five patients with oral cancer. A novel algorithm for IMS data analysis was developed and implemented, which included Gaussian mixture modeling for detection of spectral components and an iterative k-means algorithm for unsupervised spectra clustering, performed in a domain reduced to a subset of the most dispersed components. About 4% of the detected peptides showed significantly different abundances between normal epithelium and tumor, and could be considered as a molecular signature of oral cancer. Moreover, unsupervised clustering revealed two major sub-regions within expert-defined tumor areas. One of them showed molecular similarity with histologically normal epithelium. The other one showed similarity with connective tissue, yet was markedly different from normal epithelium. Pathologist's re-inspection of tissue specimens confirmed distinct features in both tumor sub-regions: foci of actual cancer cells or cancer microenvironment-related cells prevailed in the corresponding areas. Hence, molecular differences detected during automated segmentation of IMS data had an apparent reflection in real structures present in the tumor. © 2016 The Authors. Proteomics Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
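The reduce-then-cluster step described above (restricting k-means to the most dispersed components) can be sketched on synthetic spectra. Here per-channel dispersion stands in for the paper's Gaussian-mixture component selection, and all sizes are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Toy "spectra": 500 pixels x 120 channels, two underlying tissue types
base = rng.normal(size=(2, 120))
tissue = rng.integers(0, 2, size=500)
spectra = base[tissue] + rng.normal(scale=0.3, size=(500, 120))

# Reduce to the most dispersed channels (stand-in for GMM component selection)
dispersion = spectra.std(axis=0)
top = np.argsort(dispersion)[-20:]

# Unsupervised spectra clustering in the reduced domain
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spectra[:, top]).labels_
```

Clustering in the reduced domain keeps the segmentation focused on channels that actually vary across the tissue, which is the intuition behind selecting dispersed components.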
Using Structural Equation Modeling To Fit Models Incorporating Principal Components.
ERIC Educational Resources Information Center
Dolan, Conor; Bechger, Timo; Molenaar, Peter
1999-01-01
Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…
Linear unmixing of multidate hyperspectral imagery for crop yield estimation
USDA-ARS?s Scientific Manuscript database
In this paper, we have evaluated an unsupervised unmixing approach, vertex component analysis (VCA), for the application of crop yield estimation. The results show that abundance maps of the vegetation extracted by the approach are strongly correlated to the yield data (the correlation coefficients ...
A semi-supervised classification algorithm using the TAD-derived background as training data
NASA Astrophysics Data System (ADS)
Fan, Lei; Ambeau, Brittany; Messinger, David W.
2013-05-01
In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
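The overall idea, large unsupervised components reused as training ROIs for a simple supervised classifier, might look like this on toy two-band "pixels". Note the stand-ins: k-means replaces the TAD graph model, MDM is the classifier, and the scene is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)

# Toy two-band "pixels": 3 background materials plus a few anomalous pixels
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
background = centers[rng.integers(0, 3, size=997)] + rng.normal(scale=0.2, size=(997, 2))
anomalies = rng.normal(loc=10.0, scale=0.2, size=(3, 2))
pixels = np.vstack([background, anomalies])

# Stand-in for TAD's background components: the largest clusters become ROIs
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
sizes = np.bincount(km.labels_, minlength=4)
roi_ids = np.argsort(sizes)[-3:]

# Minimum Distance to the Mean: classify every pixel against the ROI means
roi_means = np.stack([pixels[km.labels_ == i].mean(axis=0) for i in roi_ids])
distances = np.linalg.norm(pixels[:, None, :] - roi_means[None, :, :], axis=2)
classes = distances.argmin(axis=1)
```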
Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827
ERIC Educational Resources Information Center
Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.
2013-01-01
Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…
Pati, Sumati; Maity, A; Banerji, P; Majumder, S B
2014-04-07
In the present work we have grown highly textured, ultra-thin, nano-crystalline zinc oxide thin films using a metal organic chemical vapor deposition technique and addressed their selectivity towards hydrogen, carbon dioxide and methane gas sensing. Structural and microstructural characteristics of the synthesized films were investigated utilizing X-ray diffraction and electron microscopy techniques respectively. Using a dynamic flow gas sensing measurement setup, the sensing characteristics of these films were investigated as a function of gas concentration (10-1660 ppm) and operating temperature (250-380 °C). ZnO thin film sensing elements were found to be sensitive to all of these gases. Thus at a sensor operating temperature of ~300 °C, the response% of the ZnO thin films were ~68, 59, and 52% for hydrogen, carbon monoxide and methane gases respectively. The data matrices extracted from fast Fourier transform (FFT) analyses of the conductance transients were used as input parameters in a linear unsupervised principal component analysis (PCA) pattern recognition technique. We have demonstrated that FFT combined with PCA is an excellent tool for the differentiation of these reducing gases.
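The FFT-then-PCA feature pipeline can be sketched on synthetic conductance transients; the time constants, noise level, and sample counts below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Toy conductance transients: 30 exposures x 256 time points, 3 gas "classes"
t = np.linspace(0.0, 1.0, 256)
gas = np.repeat(np.arange(3), 10)
tau = np.array([0.1, 0.3, 0.6])[gas]    # class-dependent recovery time constant
transients = np.exp(-t[None, :] / tau[:, None]) + rng.normal(scale=0.02, size=(30, 256))

# Feature extraction: magnitude of the FFT of each conductance transient
fft_features = np.abs(np.fft.rfft(transients, axis=1))

# Linear unsupervised pattern recognition: PCA on the FFT feature matrix
scores = PCA(n_components=2).fit_transform(fft_features)
```

Working in the frequency domain compresses each transient's shape into a few coefficients, which is what makes the subsequent linear PCA separation effective.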
Assessment of self-organizing maps to analyze sole-carbon source utilization profiles.
Leflaive, Joséphine; Céréghino, Régis; Danger, Michaël; Lacroix, Gérard; Ten-Hage, Loïc
2005-07-01
Community-level physiological profiles obtained with Biolog microplates are widely employed to assess the functional diversity of bacterial communities. Biolog produces a great amount of data, the analysis of which has been the subject of many studies. In most cases, after some transformations, these data were investigated with classical multivariate analyses. Here we provide an alternative to this method, namely the use of an artificial intelligence technique, the Self-Organizing Map (SOM, an unsupervised neural network). We used data from a microcosm study of algae-associated bacterial communities placed in various nutritive conditions. Analyses were carried out on the net absorbances at two incubation times for each substrate and on the chemical guild categorization of the total bacterial activity. Compared to Principal Components Analysis and cluster analysis, SOM appeared to be a valuable tool for community classification and for establishing clear relationships between clusters of bacterial communities and sole-carbon source utilization. Specifically, SOM offered a clear bidimensional projection of a relatively large volume of data and was easier to interpret than plots commonly obtained with multivariate analyses. It would be recommended for patterning the temporal evolution of communities' functional diversity.
An Evaluation of Feature Learning Methods for High Resolution Image Classification
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Montoya, J.; Schindler, K.
2012-07-01
Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
Floris, Patrick; McGillicuddy, Nicola; Albrecht, Simone; Morrissey, Brian; Kaisermayer, Christian; Lindeberg, Anna; Bones, Jonathan
2017-09-19
An untargeted LC-MS/MS platform was implemented for monitoring variations in CHO cell culture media upon exposure to high temperature short time (HTST) treatment, a commonly used viral clearance upstream strategy. Chemically defined (CD) and hydrolysate-supplemented media formulations were not visibly altered by the treatment. The absence of solute precipitation effects during media treatment and the very modest shifts in pH values observed indicated sufficient compatibility of the formulations evaluated with the HTST-processing conditions. Unsupervised chemometric analysis of LC-MS/MS data, however, revealed clear separation of HTST-treated samples from untreated counterparts, as observed from principal component analysis and hierarchical clustering of the samples. An increased presence of Maillard products in HTST-treated formulations contributed to the observed differences; these included organic acids, observed particularly in chemically defined formulations, and furans, pyridines, pyrazines, and pyrrolidines, which were determined in hydrolysate-supplemented formulations. The presence of Maillard products in media did not affect cell culture performance, with similar growth and viability profiles observed for CHO-K1 and CHO-DP12 cells when cultured using both HTST-treated and untreated media formulations.
Automatic age and gender classification using supervised appearance model
NASA Astrophysics Data System (ADS)
Bukar, Ali Maina; Ugail, Hassan; Connah, David
2016-11-01
Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, thus, it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. Toward this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
Sánchez-Peña, Carolina M; Luna, Guadalupe; García-González, Diego L; Aparicio, Ramón
2005-04-01
The influence of volatile compounds on the characterization of Spanish and French dry-cured hams was studied. Thirty volatiles were quantified in each of four locations (biceps femoris, semimembranosus and semitendinosus muscles and subcutaneous fat) of 29 dry-cured hams by solid-phase microextraction gas chromatography (SPME-GC). The Brown-Forsythe univariate test allowed determination of the volatiles that could individually characterize (p<0.05) the samples by their geographical origin (France, Spain) and breed type (Iberian, white). A stepwise linear discriminant procedure, under very strict conditions (F-to-Enter for an F-distribution>0.95), then selected the most remarkable volatile compounds. Four compounds from the subcutaneous fat (methyl benzene and octanol) and the semitendinosus muscle (2-butanone and 2-octanone) allowed 100% correct classification by geographic origin. On the other hand, only two compounds from the subcutaneous fat (octanol) and the biceps femoris muscle (3-methyl 1-butanol) correctly classified all the samples by breed type. The ability of these variables to classify the samples was checked by the unsupervised procedure of principal component analysis.
Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.
1981-01-01
A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of E Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors
On the robustness of EC-PC spike detection method for online neural recording.
Zhou, Yin; Wu, Tong; Rastegarnia, Amir; Guan, Cuntai; Keefer, Edward; Yang, Zhi
2014-09-30
Online spike detection is an important step to compress neural data and perform real-time neural information decoding. An unsupervised, automatic, yet robust signal processing method is strongly desired, so that it can support a wide range of applications. We have developed a novel spike detection algorithm called "exponential component-polynomial component" (EC-PC) spike detection. We first evaluate the robustness of the EC-PC spike detector under different firing rates and SNRs. Second, we show that the detection precision can be quantitatively derived without requiring additional user input parameters. We have realized the algorithm (including training) in a 0.13 μm CMOS chip, where an unsupervised, nonparametric operation has been demonstrated. Both simulated data and real data are used to evaluate the method under different firing rates (FRs) and SNRs. The results show that the EC-PC spike detector is the most robust in comparison with some popular detectors. Moreover, the EC-PC detector can track changes in the background noise due to its ability to re-estimate the neural data distribution. Both real and synthesized data have been used to test the proposed algorithm in comparison with other methods, including the absolute thresholding detector (AT), median absolute deviation detector (MAD), nonlinear energy operator detector (NEO), and continuous wavelet detector (CWD). Comparative testing results reveal that the EC-PC detection algorithm performs better than the other algorithms regardless of recording conditions. The EC-PC spike detector can be considered an unsupervised and robust online spike detection method. It is also suitable for hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, Dong-Youl; Lee, Jong-Hwan
2014-05-01
A data-driven unsupervised learning method such as independent component analysis can be gainfully applied to blood-oxygenation-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) data compared to a model-based general linear model (GLM). This is due to the ability of unsupervised learning to extract meaningful neuronal activity from the BOLD signal, which is a mixture of confounding non-neuronal artifacts such as head motions and physiological artifacts as well as neuronal signals. In this study, we support this claim by identifying neuronal underpinnings of cigarette craving and cigarette resistance. The fMRI data were acquired from heavy cigarette smokers (n = 14) while they alternately watched images with and without cigarette smoking. During acquisition of two fMRI runs, they were asked to crave when they watched cigarette-smoking images or to resist the urge to smoke. Data-driven approaches of the group independent component analysis (GICA) method based on temporal concatenation (TC) and TC-GICA with an extension of iterative dual regression (TC-GICA-iDR) were applied to the data. From the results, cigarette craving- and cigarette resistance-related neuronal activations were identified in the visual area and superior frontal areas, respectively, with greater statistical significance from the TC-GICA-iDR method than the TC-GICA method. On the other hand, the neuronal activity levels in many of these regions were not statistically different between cigarette craving and cigarette resistance with the GLM method, due to potentially aberrant BOLD signals.
NASA Astrophysics Data System (ADS)
Ratha, Debanshu; Bhattacharya, Avik; Frery, Alejandro C.
2018-01-01
In this letter, we propose a novel technique for obtaining scattering components from Polarimetric Synthetic Aperture Radar (PolSAR) data using the geodesic distance on the unit sphere. This geodesic distance is obtained between an elementary target and the observed Kennaugh matrix, and it is further utilized to compute a similarity measure between scattering mechanisms. The normalized similarity measure for each elementary target is then modulated with the total scattering power (Span). This measure is used to categorize pixels into three categories, i.e. odd-bounce, double-bounce and volume, depending on which of the above scattering mechanisms dominates. Then the maximum likelihood classifier of [J.-S. Lee, M. R. Grunes, E. Pottier, and L. Ferro-Famil, Unsupervised terrain classification preserving polarimetric scattering characteristics, IEEE Trans. Geos. Rem. Sens., vol. 42, no. 4, pp. 722-731, April 2004.] based on the complex Wishart distribution is iteratively used for each category. Dominant scattering mechanisms are thus preserved in this classification scheme. We show results for L-band AIRSAR and ALOS-2 datasets acquired over San Francisco and Mumbai, respectively. The scattering mechanisms are better preserved using the proposed methodology than the unsupervised classification results using the Freeman-Durden scattering powers on an orientation angle (OA) corrected PolSAR image. Furthermore, (1) the scattering similarity is a completely non-negative quantity, unlike the negative powers that might occur in the double-bounce and odd-bounce scattering components under Freeman-Durden decomposition (FDD), and (2) the methodology can be extended to more canonical targets as well as to bistatic scattering.
Leibig, Christian; Wachtler, Thomas; Zeck, Günther
2016-09-15
Unsupervised identification of action potentials in multi-channel extracellular recordings, in particular from high-density microelectrode arrays with thousands of sensors, is an unresolved problem. While independent component analysis (ICA) achieves rapid unsupervised sorting, it ignores the convolutive structure of extracellular data, thus limiting the unmixing to a subset of neurons. Here we present a spike sorting algorithm based on convolutive ICA (cICA) that retrieves a larger number of accurately sorted neurons than instantaneous ICA while accounting for signal overlaps. Spike sorting was applied to datasets with varying signal-to-noise ratios (SNR: 3-12) and 27% spike overlaps, sampled at either 11.5 or 23 kHz on 4365 electrodes. We demonstrate how the instantaneity assumption in ICA-based algorithms has to be relaxed in order to improve spike sorting performance for high-density microelectrode array recordings: reformulating the convolutive mixture as an instantaneous mixture by jointly modeling several delayed samples is necessary to increase the signal-to-noise ratio. Our results emphasize that different cICA algorithms are not equivalent. Spike sorting performance was assessed with ground-truth data generated from experimentally derived templates. The presented spike sorter extracted ≈90% of the true spike trains with an error rate below 2%. It was superior to two alternative (c)ICA methods (≈80% accurately sorted neurons) and comparable to a supervised sorting. Our new algorithm represents a fast solution to overcome the current bottleneck in spike sorting of large datasets generated by simultaneous recordings with thousands of electrodes. Copyright © 2016 Elsevier B.V. All rights reserved.
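The key relaxation, reformulating a convolutive mixture as an instantaneous one by jointly modeling delayed samples, can be sketched as a delay embedding followed by ordinary ICA. The toy data and dimensions are assumptions, and no spike-waveform screening is performed:

```python
import numpy as np
from sklearn.decomposition import FastICA

def delay_embed(X, n_delays):
    # Stack delayed copies of every channel so a convolutive mixture can
    # be treated as an instantaneous one -- the relaxation of the
    # instantaneity assumption described in the abstract.
    n, _ = X.shape
    cols = [X[d : d + n - n_delays] for d in range(n_delays + 1)]
    return np.hstack(cols)

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 4))      # toy multichannel recording
X_emb = delay_embed(X, n_delays=2)      # (998, 12)

# Instantaneous ICA on the delay-embedded data approximates convolutive
# ICA; a real spike sorter would then screen components for spikes.
S = FastICA(n_components=6, random_state=0, max_iter=1000).fit_transform(X_emb)
```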
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method of determining the 'best' equation. A worked example shows how to perform principal component regression analysis with SPSS 10.0, covering the full calculation process of principal component regression and the corresponding linear regression, factor analysis, descriptives, compute-variable, and bivariate-correlations procedures in SPSS 10.0. Principal component regression can be used to overcome the disturbance of multicollinearity, and performing it in SPSS makes the analysis simpler, faster, and statistically accurate.
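The same principal component regression workflow the paper walks through in SPSS can be reproduced in a few lines elsewhere; sketched here in Python on synthetic collinear predictors (an assumption for illustration):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Two nearly collinear predictors plus one independent predictor,
# mimicking the multicollinearity problem the paper addresses.
x1 = rng.standard_normal(100)
x2 = x1 + 0.01 * rng.standard_normal(100)
x3 = rng.standard_normal(100)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 3.0 * x3 + 0.1 * rng.standard_normal(100)

# Principal component regression: regress on the leading principal
# components instead of the raw, collinear predictors.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
r2 = pcr.score(X, y)        # fit remains good despite the collinearity
```

Dropping the near-duplicate direction (keeping 2 of 3 components) is exactly how PCR sidesteps the unstable coefficients that ordinary least squares would produce here.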
Multi-Omics Factor Analysis-a framework for unsupervised integration of multi-omics data sets.
Argelaguet, Ricard; Velten, Britta; Arnol, Damien; Dietrich, Sascha; Zenz, Thorsten; Marioni, John C; Buettner, Florian; Huber, Wolfgang; Stegle, Oliver
2018-06-20
Multi-omics studies promise the improved characterization of biological processes across molecular layers. However, methods for the unsupervised integration of the resulting heterogeneous data sets are lacking. We present Multi-Omics Factor Analysis (MOFA), a computational method for discovering the principal sources of variation in multi-omics data sets. MOFA infers a set of (hidden) factors that capture biological and technical sources of variability. It disentangles axes of heterogeneity that are shared across multiple modalities and those specific to individual data modalities. The learnt factors enable a variety of downstream analyses, including identification of sample subgroups, data imputation and the detection of outlier samples. We applied MOFA to a cohort of 200 patient samples of chronic lymphocytic leukaemia, profiled for somatic mutations, RNA expression, DNA methylation and ex vivo drug responses. MOFA identified major dimensions of disease heterogeneity, including immunoglobulin heavy-chain variable region status, trisomy of chromosome 12 and previously underappreciated drivers, such as response to oxidative stress. In a second application, we used MOFA to analyse single-cell multi-omics data, identifying coordinated transcriptional and epigenetic changes along cell differentiation. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
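As a rough analogue of the shared-latent-factor idea (not the actual MOFA model, which fits view-specific weight matrices with sparsity priors), one can concatenate two synthetic "views" driven by a common factor and recover that factor with plain factor analysis:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 50

# One shared latent factor z drives two hypothetical omics "views".
z = rng.standard_normal(n)
view_rna = np.outer(z, rng.standard_normal(20)) + 0.1 * rng.standard_normal((n, 20))
view_meth = np.outer(z, rng.standard_normal(15)) + 0.1 * rng.standard_normal((n, 15))

# Concatenate views and fit plain factor analysis; one of the learnt
# factors should track the shared driver z.
X = np.hstack([view_rna, view_meth])
factors = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
recovery = max(abs(np.corrcoef(factors[:, k], z)[0, 1]) for k in range(2))
```

MOFA additionally disentangles which factors are shared across modalities versus modality-specific, which this concatenation sketch cannot do.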
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we proposed a novel integrative probabilistic model based on low-rank approximation to quickly find the shared principal subspace across multiple data types; the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster, with better clustering performance, than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis shows that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
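The general strategy, find a low-dimensional shared subspace and then cluster samples there, can be sketched generically with a truncated SVD and k-means; this is not LRAcluster's probabilistic low-rank model, and the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "multi-omics" matrix: two sample groups with shifted means in
# a high-dimensional feature space (a stand-in for TCGA-scale data).
group_a = rng.standard_normal((40, 500)) + 1.5
group_b = rng.standard_normal((40, 500)) - 1.5
X = np.vstack([group_a, group_b])

# Step 1: project samples into a low-rank subspace.
subspace = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)
# Step 2: cluster in the reduced subspace to propose candidate subtypes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(subspace)
```

Clustering in the 3-dimensional subspace rather than the 500-dimensional feature space is what makes this tractable at pan-cancer scale.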
Monakhova, Yulia B; Fareed, Jawed; Yao, Yiming; Diehl, Bernd W K
2018-05-10
Nuclear magnetic resonance (NMR) spectroscopy is regarded as one of the most powerful and versatile analytical approaches for assuring the quality of heparin preparations. In particular, it was recently demonstrated that, by using ¹H NMR coupled with chemometrics, heparin and low-molecular-weight heparin (LMWH) samples derived from three major animal species (porcine, ovine, and bovine) can be differentiated [Y.B. Monakhova et al. J. Pharm. Anal. 149 (2018) 114-119]. In this study, significant improvement of the existing chemometric models was achieved by switching to 2D NMR experiments (heteronuclear multiple-quantum correlation (HMQC) and diffusion-ordered spectroscopy (DOSY)). Two representative data sets (sixty-nine heparin and twenty-two LMWH samples), belonging to different batches and distributed by different commercial companies, were investigated. A trend toward animal-species differentiation was observed in the principal component analysis (PCA) score plot built from the DOSY data. A superior model was constructed using the HMQC experiments, in which individual heparin (LMWH) clusters as well as their blends were clearly differentiated. The predictive power of different classification methods, as well as of unsupervised techniques (independent component analysis, ICA), clearly demonstrated the applicability of the model for routine heparin and LMWH analysis. The switch from 1D to 2D NMR techniques provides a wealth of additional information, which is beneficial for multivariate modeling of NMR spectroscopic data for heparin preparations. Copyright © 2018 Elsevier B.V. All rights reserved.
Li, Yan; Zhang, Ji; Zhao, Yanli; Liu, Honggao; Wang, Yuanzhong; Jin, Hang
2016-01-01
In this study, the geographical differentiation of dried sclerotia of the medicinal mushroom Wolfiporia extensa, obtained from different regions in Yunnan Province, China, was explored using Fourier-transform infrared (FT-IR) spectroscopy coupled with multivariate data analysis. The FT-IR spectra of 97 samples were obtained for wavenumbers ranging from 4000 to 400 cm⁻¹. The fingerprint region of 1800-600 cm⁻¹, rather than the full spectrum, was then analyzed. Different pretreatments were applied to the spectra, and a discriminant analysis model based on the Mahalanobis distance was developed to select an optimal pretreatment combination. Two unsupervised pattern recognition procedures, principal component analysis and hierarchical cluster analysis, were applied to enhance the authenticity of discrimination of the specimens. The results showed that excellent classification could be obtained after optimizing the spectral pretreatment. The tested samples were successfully discriminated according to their geographical locations, and the chemical properties of dried sclerotia of W. extensa were clearly dependent on the mushroom's geographical origins. Furthermore, an interesting finding implied that the elevation of the collection areas may affect the chemical components of wild W. extensa sclerotia. Overall, this study highlights the feasibility of FT-IR spectroscopy combined with multivariate data analysis, in particular for exploring the distinction among W. extensa sclerotia samples from different regions. This research could also serve as a basis for the exploitation and utilization of medicinal mushrooms.
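The two unsupervised steps named above, PCA followed by hierarchical clustering, can be sketched on synthetic "spectra" over the 1800-600 cm⁻¹ fingerprint region; the band positions and group structure are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
wavenumbers = np.linspace(1800, 600, 300)   # fingerprint region, cm^-1

# Synthetic spectra for two hypothetical origins: a shared band plus an
# origin-specific band (purely illustrative, not real FT-IR data).
shared = np.exp(-((wavenumbers - 1650) / 40.0) ** 2)
specific = np.exp(-((wavenumbers - 1050) / 30.0) ** 2)
origin_a = shared + specific + 0.05 * rng.standard_normal((10, 300))
origin_b = shared + 0.05 * rng.standard_normal((10, 300))
spectra = np.vstack([origin_a, origin_b])

# Step 1: unsupervised PCA of the spectra.
scores = PCA(n_components=2).fit_transform(spectra)
# Step 2: hierarchical (agglomerative) clustering of the PCA scores.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(scores)
```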
Del Coco, Laura; Schena, Francesco Paolo; Fanizzi, Francesco Paolo
2012-01-01
Multivariate analysis of 1H NMR data has been used for the characterization of 12 blended olive oils commercially available in the U.S. as Italian products. Chemometric methods such as unsupervised principal component analysis (PCA) allowed good discrimination and indicated affinities between the U.S.-market olive oils and single-cultivar extra virgin olive oils such as Coratina and Ogliarola from Apulia (one of Italy's leading olive-oil-producing regions), Picual (Spain), Kalamata (Greece), and Sfax (Tunisia). The olive oils commercially available as Italian products in the U.S. market clustered into 3 groups. Among them, only the first (7 samples) and second (2 samples) groups showed PCA ranges similar to the European references; two oils of the third group (3 samples) were more similar to the Tunisian references. In conclusion, our study revealed that most of the extra virgin olive oils (EVOOs) tested were closer to Greek (in particular) and Spanish olive oils than to Apulian EVOO. The PCA loadings disclose the components responsible for the discrimination as unsaturated (oleic, linoleic, linolenic) and saturated fatty acids, all of great importance because of their nutritional value and differential effects on the oxidative stability of oils. It is evident that this approach has the potential to reveal the origin of EVOO, although the results support the need for a larger database, including EVOOs from other Italian regions. PMID:22690321
Bauml, Joshua; Kim, Jiyoung; Zhang, Xiaochen; Aggarwal, Charu; Cohen, Roger B; Schmitz, Kathryn
2017-08-01
Patients with human papillomavirus (HPV)-related head and neck cancer (HNC) have a better prognosis relative to other types of HNC, making survivorship an emerging and critical issue. Exercise is a core component of survivorship care, but little is known about how many survivors of HPV-related HNC can safely be advised to start exercising on their own, as opposed to needing further evaluation or supervised exercise. We utilized guidelines to identify health issues that would warrant further evaluation before unsupervised exercise could safely be prescribed. We performed a retrospective chart review of 150 patients with HPV-related HNC to assess health issues 6 months after completion of definitive therapy. Patients with at least one health issue were deemed appropriate to receive further evaluation prior to a prescription for unsupervised exercise. We used logistic regression to identify clinical and demographic factors associated with the need for further evaluation, likely performed by outpatient rehabilitation clinicians. In this cohort, 39.3% of patients could safely be prescribed unsupervised exercise 6 months after completing definitive therapy. On multivariable regression, older age, BMI >30, and receipt of radiation were associated with an increased likelihood of requiring further evaluation or supervised exercise. Over half of patients with HPV-related HNC would benefit from referral to physical therapy or an exercise professional for further evaluation to determine the most appropriate level of exercise supervision, based upon current guidelines. Development of such referral systems will be essential to enhance survivorship outcomes for patients who have completed treatment.
An assessment of Landsat MSS and TM data for urban and near-urban land-cover digital classification
NASA Technical Reports Server (NTRS)
Haack, Barry; Bryant, Nevin; Adams, Steven
1987-01-01
The information content of Landsat TM and MSS data was examined to assess the ability to digitally differentiate urban and near-urban land covers around Miami, FL. This examination included comparisons of unsupervised signature extractions for various cover types, training site statistics for intraclass and interclass separability, and band and band combination selection from an 11-band multisensor data set. The principal analytical tool used in this study was transformed divergence calculations. The TM digital data are typically more useful than the MSS data in the homogeneous near-urban land-covers and less useful in the heterogeneous urban areas.
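Transformed divergence, the separability measure used in that study, is conventionally computed from class signature means and covariances and rescaled to a 0-2000 range; a sketch under that convention (the 4-band signatures below are hypothetical):

```python
import numpy as np

def transformed_divergence(m1, C1, m2, C2):
    # Divergence between two Gaussian class signatures (mean, covariance),
    # rescaled to the conventional 0-2000 transformed-divergence range.
    C1i, C2i = np.linalg.inv(C1), np.linalg.inv(C2)
    dm = (m1 - m2).reshape(-1, 1)
    D = 0.5 * np.trace((C1 - C2) @ (C2i - C1i)) \
        + 0.5 * np.trace((C1i + C2i) @ (dm @ dm.T))
    return 2000.0 * (1.0 - np.exp(-D / 8.0))

# Identical signatures score 0; well-separated signatures approach the
# 2000 ceiling, indicating reliably separable land-cover classes.
m, C = np.zeros(4), np.eye(4)
td_same = transformed_divergence(m, C, m, C)
td_far = transformed_divergence(m, C, m + 10.0, C)
```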
Evaluation of Deep Learning Representations of Spatial Storm Data
NASA Astrophysics Data System (ADS)
Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.
2017-12-01
The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. 
Further evaluations are being performed to determine how the choice of input variables affects the results.
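The PCA-encoding branch of that comparison, encode each storm patch with principal components and then train a conventional classifier on the codes, can be sketched with synthetic patches (the blob signal and dimensions are assumptions, not the NCAR data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic 16x16 "storm patches", flattened: hail cases (y = 1) carry a
# stronger central blob than non-hail cases.
yy, xx = np.mgrid[0:16, 0:16]
blob = np.exp(-((yy - 8) ** 2 + (xx - 8) ** 2) / 20.0).ravel()
n = 200
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 256)) + np.outer(y * 2.0, blob)

# Encode the spatial fields with PCA, then feed the low-dimensional codes
# to a conventional classifier, mirroring the PCA branch of the study.
codes = PCA(n_components=10, random_state=0).fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(codes, y)
train_acc = clf.score(codes, y)
```

A GAN or convolutional-network encoder would replace the PCA step here; the downstream classifier stays the same, which is what makes the encodings directly comparable.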
Schmidt, Mette L K; Østergren, Peter; Cormie, Prue; Ragle, Anne-Mette; Sønksen, Jens; Midtgaard, Julie
2018-06-21
Regular exercise is recommended to mitigate the adverse effects of androgen deprivation therapy in men with prostate cancer. The purpose of this study was to explore the experience of transition to unsupervised, community-based exercise among men who had participated in a hospital-based supervised exercise programme in order to propose components that supported transition to unsupervised exercise. Participants were selected by means of purposive, criteria-based sampling. Men undergoing androgen deprivation therapy who had completed a 12-week hospital-based, supervised, group exercise intervention were invited to participate. The programme involved aerobic and resistance training using machines and included a structured transition to a community-based fitness centre. Data were collected by means of semi-structured focus group interviews and analysed using thematic analysis. Five focus group interviews were conducted with a total of 29 men, of whom 25 reported to have continued to exercise at community-based facilities. Three thematic categories emerged: Development and practice of new skills; Establishing social relationships; and Familiarising with bodily well-being. These were combined into an overarching theme: From learning to doing. Components suggested to support transition were as follows: a structured transition involving supervised exercise sessions at a community-based facility; strategies to facilitate peer support; transferable tools including an individual exercise chart; and access to 'check-ups' by qualified exercise specialists. Hospital-based, supervised exercise provides a safe learning environment. Transferring to community-based exercise can be experienced as a confrontation with the real world and can be eased through securing a structured transition, having transferable tools, sustained peer support and monitoring.
On the Fallibility of Principal Components in Research
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong
2017-01-01
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
2016-10-01
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the standpoint of principle, the main contribution is that the framework achieves unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From the standpoint of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly including the key components, their spatial relations, and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can produce a classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments were conducted on face images, and good performance was achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks for other objects with learning ability.
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve reconstruction precision and better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper; the principal components weighted by visual features serve as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. The color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is therefore smaller than that obtained with traditional principal component analysis, and the reconstructed color is more consistent with human vision.
Principal Component and Linkage Analysis of Cardiovascular Risk Traits in the Norfolk Isolate
Cox, Hannah C.; Bellis, Claire; Lea, Rod A.; Quinlan, Sharon; Hughes, Roger; Dyer, Thomas; Charlesworth, Jac; Blangero, John; Griffiths, Lyn R.
2009-01-01
Objective(s) An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. Methods This investigation focussed on 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits. Multipoint variance-component methods, implemented in SOLAR, were used to assess genome-wide linkage for the derived factors. A total of 285 of the 377 related individuals were informative for linkage analysis. Results A total of 4 principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol, and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h2 = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosomes 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) segregated with principal components 1 (h2 = 0.33) and 4 (h2 = 0.42), respectively. Conclusion(s) This study investigated a number of CVD risk traits in a unique isolated population. The findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c, and total triglyceride levels. PMID:19339786
Unsupervised classification of operator workload from brain signals.
Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin
2016-06-01
In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSPs) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects' error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC, cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering and they extracted physiologically plausible components. The performance of the unsupervised cSPoC is significantly increased by augmenting it with PPM features. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.
Raman spectroscopy of normal oral buccal mucosa tissues: study on intact and incised biopsies
NASA Astrophysics Data System (ADS)
Deshmukh, Atul; Singh, S. P.; Chaturvedi, Pankaj; Krishna, C. Murali
2011-12-01
Oral squamous cell carcinoma is among the top 10 malignancies. Optical spectroscopy, including Raman spectroscopy, is being actively pursued as an alternative or adjunct for cancer diagnosis. Earlier studies demonstrated the feasibility of classifying normal, premalignant, and malignant oral tissues ex vivo. Spectral features showed a predominance of lipids in normal conditions and of proteins in cancerous conditions, attributed to membrane lipids and surface proteins, respectively. In view of recent developments in deep-tissue Raman spectroscopy, we recorded Raman spectra from the superior and inferior surfaces of 10 normal oral tissues on intact biopsies, as well as on incised biopsies after separation of the epithelium from the connective tissue. Spectral variations and similarities among the groups were explored by unsupervised (principal component analysis) and supervised (linear discriminant analysis, factorial discriminant analysis) methodologies. Clusters of spectra from the superior and inferior surfaces of intact tissues show a high overlap, whereas spectra from the separated epithelium and connective tissue sections yielded clear clusters, though these also overlap with the clusters of intact tissues. Spectra of all four groups of normal tissues gave exclusive clusters when tested against malignant spectra. Thus, this study demonstrates that spectra recorded from the superior surface of an intact tissue may have contributions from deeper layers, but this has no bearing on the classification of malignant tissues.
Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan
2015-01-01
Gene expression data are typically large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings for data sets with undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further employ the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and Bozdogan's information-theoretic measure of complexity (ICOMP) criterion. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, in performing PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
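Choosing the number of probabilistic PCs with an information criterion can be sketched with scikit-learn's PPCA log-likelihood (PCA.score) and an AIC-style penalty; the parameter count below is a crude assumption, not the paper's exact AIC/CAIC/ICOMP formulas or its maximum-entropy covariance estimator:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Undersized toy data: 30 samples, 100 "genes", 3 true latent directions.
Z = rng.standard_normal((30, 3))
W = rng.standard_normal((3, 100))
X = Z @ W + 0.3 * rng.standard_normal((30, 100))

def aic(n_comp):
    # PCA.score returns the average probabilistic-PCA log-likelihood per
    # sample; penalize with an AIC-style term. The parameter count k is a
    # crude assumption (roughly p loadings per retained component).
    pca = PCA(n_components=n_comp).fit(X)
    log_lik = pca.score(X) * X.shape[0]
    k = n_comp * X.shape[1]
    return -2.0 * log_lik + 2.0 * k

best = min(range(1, 8), key=aic)    # number of PPCs retained
```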
2014-01-01
Background Non-small cell lung cancer (NSCLC) remains lethal despite the development of numerous drug therapy technologies. About 85% to 90% of lung cancers are NSCLC, and the 5-year survival rate is at best still below 50%. Thus, it is important to find druggable target genes for NSCLC in order to develop an effective therapy. Results Integrated analysis of publicly available gene expression and promoter methylation patterns of two highly aggressive NSCLC cell lines generated by in vivo selection was performed. We selected eleven critical genes that may mediate metastasis using the recently proposed principal-component-analysis-based unsupervised feature extraction. The eleven selected genes were significantly related to cancer diagnosis. The tertiary protein structures of the selected genes were inferred by the Full Automatic Modeling System, a profile-based protein structure inference software, to determine protein functions and to identify genes that could be potential drug targets. Conclusions We identified eleven potentially critical genes that may mediate NSCLC metastasis using bioinformatic analysis of publicly available data sets. These genes are potential target genes for NSCLC therapy. Among the eleven genes, TINAGL1 and B3GALNT1 are possible candidates for drug compounds that inhibit their gene expression. PMID:25521548
Assessment of Vulnerability to Coccidioidomycosis in Arizona and California.
Shriber, Jennifer; Conlon, Kathryn C; Benedict, Kaitlin; McCotter, Orion Z; Bell, Jesse E
2017-06-23
Coccidioidomycosis is a fungal infection endemic to the southwestern United States, particularly Arizona and California. Its incidence has increased, potentially due in part to the effects of changing climatic variables on fungal growth and spore dissemination. This study aims to quantify the county-level vulnerability to coccidioidomycosis in Arizona and California and to assess the relationships between population vulnerability and climate variability. The variables representing exposure, sensitivity, and adaptive capacity were combined to calculate county-level vulnerability indices. Three methods were used: (1) principal components analysis; (2) quartile weighting; and (3) percentile weighting. Two sets of indices, "unsupervised" and "supervised", were created. Each index was correlated with coccidioidomycosis incidence data from 2000-2014. The supervised percentile index had the highest correlation; it was then correlated with variability measures for temperature, precipitation, and drought. The supervised percentile index was significantly correlated ( p < 0.05) with coccidioidomycosis incidence in both states. Moderate, positive significant associations ( p < 0.05) were found between index scores and climate variability when both states were concurrently analyzed and when California was analyzed separately. This research adds to the body of knowledge that could be used to target interventions to vulnerable counties and provides support for the hypothesis that population vulnerability to coccidioidomycosis is associated with climate variability.
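The percentile-weighting method mentioned above can be sketched as follows; the indicator names, the random values, and the equal weighting of the three dimensions are illustrative assumptions, not taken from the study.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
# Hypothetical county-level indicators (illustrative stand-ins).
n_counties = 30
exposure = rng.random(n_counties)
sensitivity = rng.random(n_counties)
adaptive_capacity = rng.random(n_counties)  # higher capacity -> less vulnerable

def percentile_rank(x):
    """Map raw values to percentile ranks in [0, 1]."""
    return (rankdata(x) - 1) / (len(x) - 1)

# Negate adaptive capacity so that all three terms point toward
# higher vulnerability, then average the percentile ranks.
vulnerability = (percentile_rank(exposure)
                 + percentile_rank(sensitivity)
                 + percentile_rank(-adaptive_capacity)) / 3
```

Each county's score then lies in [0, 1] and can be correlated with incidence data, as the study does.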
Centered Kernel Alignment Enhancing Neural Network Pretraining for MRI-Based Dementia Diagnosis
Cárdenas-Peña, David; Collazos-Huertas, Diego; Castellanos-Dominguez, German
2016-01-01
Dementia is a growing problem that affects elderly people worldwide. More accurate evaluation of dementia diagnosis can help during the medical examination. Several methods for computer-aided dementia diagnosis have been proposed using magnetic resonance imaging (MRI) scans to discriminate between patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and healthy controls (NC). Nonetheless, computer-aided diagnosis is especially challenging because of the heterogeneous and intermediate nature of MCI. We address automated dementia diagnosis by introducing a novel supervised pretraining approach that takes advantage of the artificial neural network (ANN) for complex classification tasks. The proposal initializes an ANN based on linear projections to achieve more discriminating spaces. Such projections are estimated by maximizing the centered kernel alignment criterion, which assesses the affinity between the resonance imaging data kernel matrix and the label target matrix. As a result, the performed linear embedding accounts for the features that contribute most to MCI class discrimination. We compare the supervised pretraining approach to two unsupervised initialization methods (autoencoders and principal component analysis) and against the four best-performing classification methods of the 2014 CADDementia challenge. Our proposal outperforms all of the baselines (by 7% in classification accuracy and in area under the receiver-operating-characteristic curve) while also reducing class bias. PMID:27148392
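The centered kernel alignment criterion named above has a compact closed form: double-center both kernel matrices and take their normalized Frobenius inner product. A minimal sketch, with an illustrative random data kernel and a one-hot label kernel (not the paper's MRI data):

```python
import numpy as np

def centered_kernel_alignment(K, L):
    """Alignment between two kernel matrices after double-centering."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

rng = np.random.default_rng(12)
X = rng.normal(size=(30, 5))          # toy features
y = rng.integers(0, 3, 30)            # toy class labels
K = X @ X.T                           # linear data kernel
Y = np.eye(3)[y]                      # one-hot labels
L = Y @ Y.T                           # label target kernel
cka = centered_kernel_alignment(K, L)
```

The paper maximizes this quantity over linear projections of the imaging data; here it is only evaluated, to show the criterion itself.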
In vivo Raman spectroscopic identification of premalignant lesions in oral buccal mucosa
NASA Astrophysics Data System (ADS)
Singh, S. P.; Deshmukh, Atul; Chaturvedi, Pankaj; Murali Krishna, C.
2012-10-01
Cancers of the oral cavity are among the most common malignancies in India and other South Asian countries. Tobacco habits are the main etiological factors for oral cancer. Identification of premalignant lesions is required for improving survival rates related to oral cancer. Optical spectroscopy methods are projected as an alternative or adjunct for cancer diagnosis. Earlier studies have demonstrated the feasibility of classifying normal, premalignant, and malignant oral ex-vivo tissues. We intend to evaluate the potential of Raman spectroscopy in detecting premalignant conditions. Spectra were recorded from premalignant patches, contralateral normal (opposite to tumor site), and cancerous sites of subjects with oral cancers and also from age-matched healthy subjects with and without tobacco habits. A total of 861 spectra from 104 subjects were recorded using a fiber-optic probe-coupled HE-785 Raman spectrometer. Spectral differences in the 1200- to 1800-cm(-1) region were subjected to unsupervised principal component analysis and supervised linear discriminant analysis, followed by validation with leave-one-out cross-validation and an independent test data set. Results suggest that premalignant conditions can be objectively discriminated from both normal and cancerous sites as well as from healthy controls with and without tobacco habits. Findings of the study further support the efficacy of Raman spectroscopic approaches in oral-cancer applications.
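The analysis pipeline described above (unsupervised PCA for dimension reduction, supervised LDA for classification, leave-one-out validation) can be sketched with scikit-learn on synthetic stand-in "spectra"; the three shifted classes and all sizes are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for spectra in the fingerprint region:
# 60 "spectra" x 120 intensity bins, three classes with shifted means.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 120)) + np.repeat(np.arange(3), 20)[:, None] * 0.5
y = np.repeat(np.arange(3), 20)   # 0 = normal, 1 = premalignant, 2 = malignant

# Unsupervised PCA, then supervised LDA, validated with leave-one-out.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
```

Wrapping both steps in a pipeline ensures the PCA is refit inside every leave-one-out fold, so no test spectrum leaks into the dimension reduction.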
Slow feature analysis: unsupervised learning of invariances.
Wiskott, Laurenz; Sejnowski, Terrence J
2002-04-01
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
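The core SFA computation described above (whiten the signal, then find the direction whose time derivative has the least variance) fits in a short numpy sketch. This is the linear special case, omitting the nonlinear expansion the paper applies first; the two-channel toy signal is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
# Input: one slowly varying latent signal plus fast noise channels.
t = np.linspace(0, 4 * np.pi, 2000)
slow = np.sin(t)
x = np.column_stack([slow + 0.1 * rng.normal(size=t.size),
                     rng.normal(size=t.size)])

# Linear SFA: 1) whiten the input, 2) diagonalize the covariance of the
# time derivative, 3) the direction with the SMALLEST derivative
# variance is the slowest feature.
xc = x - x.mean(axis=0)
d, E = np.linalg.eigh(np.cov(xc.T))
W = E / np.sqrt(d)                    # whitening matrix
z = xc @ W
dz = np.diff(z, axis=0)               # discrete time derivative
dd, V = np.linalg.eigh(np.cov(dz.T))
slow_feature = z @ V[:, 0]            # smallest derivative-variance direction
```

With the nonlinear expansion of the input prepended, the same eigen-decomposition steps yield the full SFA algorithm.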
Gaye, M. M.; Valentine, S. J.; Hu, Y.; Mirjankar, N.; Hammoud, Z. T.; Mechref, Y.; Lavine, B. K.; Clemmer, D. E.
2012-01-01
Serum samples from three disease phenotypes, Barrett’s esophagus (BE), high-grade dysplasia (HGD), and esophageal adenocarcinoma (EAC), together with a set of normal control (NC) samples, are examined using a combination of ion mobility spectrometry (IMS), mass spectrometry (MS), and principal component analysis (PCA) techniques. Samples from a total of 136 individuals were examined, including: 7 characterized as BE, 12 as HGD, 56 as EAC and 61 as NC. In typical datasets it was possible to assign ~20 to 30 glycan ions based on MS measurements. Ion mobility distributions for these ions show multiple features. In some cases, such as the [S1H5N4+3Na]3+ and [S1F1H5N4+3Na]3+ glycan ions, the ratios of intensities of high-mobility features to low-mobility features vary significantly across groups. The degree to which such variations in mobility profiles can be used to distinguish phenotypes is evaluated for eleven N-linked glycan ions. An outlier analysis on each sample class followed by an unsupervised PCA using a genetic algorithm for pattern recognition reveals that EAC samples are separated from NC samples based on 46 features originating from the 11-glycan composite IMS distribution. PMID:23126309
Ho, Ying Swan; Yip, Lian Yee; Basri, Nurhidayah; Chong, Vivian Su Hui; Teo, Chin Chye; Tan, Eddy; Lim, Kah Ling; Tan, Gek San; Yang, Xulei; Yeo, Si Yong; Koh, Mariko Si Yue; Devanand, Anantham; Takano, Angela; Tan, Eng Huat; Tan, Daniel Shao Weng; Lim, Tony Kiat Hon
2016-10-14
Cytology and histology form the cornerstone for the diagnosis of non-small cell lung cancer (NSCLC), but obtaining sufficient tumour cells or tissue biopsies for these tests remains a challenge. We investigate the lipidome of lung pleural effusion (PE) for unique metabolic signatures to discriminate benign versus malignant PE and EGFR versus non-EGFR malignant subgroups, to identify novel diagnostic markers that are independent of tumour cell availability. Using liquid chromatography mass spectrometry, we profiled the lipidomes of the PE of 30 benign and 41 malignant cases with or without EGFR mutation. Unsupervised principal component analysis revealed distinctive differences between the lipidomes of benign and malignant PE as well as between EGFR mutants and non-EGFR mutants. Docosapentaenoic acid and docosahexaenoic acid gave superior sensitivity and specificity for detecting NSCLC when used singly. Additionally, several 20- and 22-carbon polyunsaturated fatty acids and phospholipid species were significantly elevated in the EGFR mutants compared to non-EGFR mutants. A 7-lipid panel showed great promise in the stratification of EGFR from non-EGFR malignant PE. Our data revealed novel lipid candidate markers in the non-cellular fraction of PE that hold potential to aid the diagnosis of benign, EGFR mutation positive and negative NSCLC.
Asiimwe, Stephen; Oloya, James; Song, Xiao; Whalen, Christopher C
2014-12-01
Unsupervised HIV self-testing (HST) has potential to increase knowledge of HIV status; however, its accuracy is unknown. To estimate the accuracy of unsupervised HST in field settings in Uganda, we performed a non-blinded, randomized controlled, non-inferiority trial of unsupervised compared with supervised HST among selected high HIV risk fisherfolk (22.1 % HIV prevalence) in three fishing villages in Uganda between July and September 2013. The study enrolled 246 participants and randomized them in a 1:1 ratio to unsupervised HST or provider-supervised HST. In an intent-to-treat analysis, the HST sensitivity was 90 % in the unsupervised arm and 100 % among the provider-supervised, yielding a difference of -10 % (90 % CI -21 %, 1 %); non-inferiority was not shown. In a per protocol analysis, the difference in sensitivity was -5.6 % (90 % CI -14.4 %, 3.3 %) and did show non-inferiority. We conclude that unsupervised HST is feasible in rural Africa and may be non-inferior to provider-supervised HST.
Maurer, Christian; Federolf, Peter; von Tscharner, Vinzenz; Stirling, Lisa; Nigg, Benno M
2012-05-01
Changes in gait kinematics have often been analyzed using pattern recognition methods such as principal component analysis (PCA). It is usually just the first few principal components that are analyzed, because they describe the main variability within a dataset and thus represent the main movement patterns. However, while subtle changes in gait pattern (for instance, due to different footwear) may not change main movement patterns, they may affect movements represented by higher principal components. This study was designed to test two hypotheses: (1) speed and gender differences can be observed in the first principal components, and (2) small interventions such as changing footwear change the gait characteristics of higher principal components. Kinematic changes due to different running conditions (speed - 3.1 m/s and 4.9 m/s, gender, and footwear - control shoe and adidas MicroBounce shoe) were investigated by applying PCA and support vector machine (SVM) to a full-body reflective marker setup. Differences in speed changed the basic movement pattern, as was reflected by a change in the time-dependent coefficient derived from the first principal component. Gender was differentiated by using the time-dependent coefficient derived from intermediate principal components. (Intermediate principal components are characterized by limb rotations of the thigh and shank.) Different shoe conditions were identified in higher principal components. This study showed that different interventions can be analyzed using a full-body kinematic approach. Within the well-defined vector space spanned by the data of all subjects, higher principal components should also be considered because these components show the differences that result from small interventions such as footwear changes. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
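The PCA-then-SVM scheme described above, including the point that a subtle intervention need not show up in the first component, can be sketched on synthetic stand-in waveforms; the data, the localized class difference, and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for time-normalized full-body marker trajectories:
# 40 trials x 200 waveform samples; the two "conditions" differ only in
# one localized feature, mimicking a subtle footwear effect.
rng = np.random.default_rng(4)
base = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)
base[y == 1, 100] += 2.0              # small, localized difference

pca = PCA(n_components=20)
scores = pca.fit_transform(base)       # time-dependent PC coefficients
acc = cross_val_score(SVC(kernel="linear"), scores, y, cv=5).mean()

# The component that best separates the conditions is not necessarily
# the first one -- the study's motivation for keeping higher components.
sep = np.array([abs(scores[y == 0, i].mean() - scores[y == 1, i].mean())
                for i in range(20)])
best_pc = int(np.argmax(sep))
```

Retaining many components before the SVM keeps the low-variance directions where such small intervention effects can live.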
Unsupervised segmentation with dynamical units.
Rao, A Ravishankar; Cecchi, Guillermo A; Peck, Charles C; Kozloski, James R
2008-01-01
In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.
NASA Astrophysics Data System (ADS)
Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi
2013-02-01
A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.
Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo
2017-01-01
This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
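A discretized sketch of the functional PCA idea above: ordinary PCA applied to curves sampled on a common time grid, with the component scores serving as the per-patient summaries used for clustering and outlier detection. The simulated eGFR trajectories (two smooth modes: overall level and slope) are illustrative assumptions, not the clinical data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical eGFR trajectories: 50 recipients x 24 monthly time points,
# built from two smooth modes of variation (level and slope) plus noise.
rng = np.random.default_rng(5)
tt = np.linspace(0, 2, 24)                       # years post-transplant
level = rng.normal(size=(50, 1))
slope = rng.normal(size=(50, 1))
curves = 60 + 10 * level + 5 * slope * tt + rng.normal(scale=0.5, size=(50, 24))

# Discretized FPCA: PCA on the sampled curves; the scores summarize each
# curve and can feed clustering or ordering-based outlier detection.
fpca = PCA(n_components=2)
scores = fpca.fit_transform(curves)
explained = fpca.explained_variance_ratio_.sum()
```

Because the simulated variation is two-dimensional by construction, two components recover nearly all of it; a full FPCA would additionally smooth the eigenfunctions and handle irregular observation times.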
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in the static as well as in the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial features and resembles factor analysis in the sense of extracting the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results indicate that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
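The combination described above can be sketched with a hand-rolled one-level Haar transform (used here so the example stays dependency-free; the paper does not specify this particular wavelet) followed by PCA on the approximation subband. The random "faces" and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform; returns the low-low
    (approximation) subband, quartering the pixel count."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # average adjacent rows
    return (a[:, 0::2] + a[:, 1::2]) / 2.0    # average adjacent columns

# Hypothetical face dataset: 20 images of 32x32 pixels.
rng = np.random.default_rng(6)
faces = rng.random((20, 32, 32))

# The wavelet step shrinks each image to 16x16 before PCA ("eigenfaces"
# on the approximation subband), cutting the eigen-decomposition cost.
approx = np.array([haar_dwt2(f) for f in faces]).reshape(20, -1)
eigenfaces = PCA(n_components=10).fit(approx)
codes = eigenfaces.transform(approx)
```

Recognition would then compare the 10-dimensional codes (e.g., by nearest neighbor) instead of raw 1024-pixel images, which is where the computational saving comes from.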
The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores
ERIC Educational Resources Information Center
Velicer, Wayne F.
1976-01-01
Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)
The Butterflies of Principal Components: A Case of Ultrafine-Grained Polyphase Units
NASA Astrophysics Data System (ADS)
Rietmeijer, F. J. M.
1996-03-01
Dusts in the accretion regions of chondritic interplanetary dust particles [IDPs] consisted of three principal components: carbonaceous units [CUs], carbon-bearing chondritic units [GUs] and carbon-free silicate units [PUs]. Among others, differences among chondritic IDP morphologies and variable bulk C/Si ratios reflect variable mixtures of principal components. The spherical shapes of the initially amorphous principal components remain visible in many chondritic porous IDPs, but fusion was documented for CUs, GUs and PUs. The PUs occur as coarse- and ultrafine-grained units that include so-called GEMS. Spherical principal components preserved in an IDP as recognisable textural units have unique properties with important implications for their petrological evolution from pre-accretion processing to protoplanet alteration and dynamic pyrometamorphism. Throughout their lifetime the units behaved as closed systems without chemical exchange with other units. This behaviour is reflected in their mineralogies, while the bulk compositions of principal components define the environments wherein they were formed.
Lu, Ake Tzu-Hui; Austin, Erin; Bonner, Ashley; Huang, Hsin-Hsiung; Cantor, Rita M
2014-09-01
Machine learning methods (MLMs), designed to develop models using high-dimensional predictors, have been used to analyze genome-wide genetic and genomic data to predict risks for complex traits. We summarize the results from six contributions to our Genetic Analysis Workshop 18 working group; these investigators applied MLMs and data mining to analyses of rare and common genetic variants measured in pedigrees. To develop risk profiles, group members analyzed blood pressure traits along with single-nucleotide polymorphisms and rare variant genotypes derived from sequence and imputation analyses in large Mexican American pedigrees. Supervised MLMs included penalized regression with varying penalties, support vector machines, and permanental classification. Unsupervised MLMs included sparse principal components analysis and sparse graphical models. Entropy-based components analyses were also used to mine these data. None of the investigators fully capitalized on the genetic information provided by the complete pedigrees. Their approaches either corrected for the nonindependence of the individuals within the pedigrees or analyzed only those who were independent. Some methods allowed for covariate adjustment, whereas others did not. We evaluated these methods using a variety of metrics. Four contributors conducted primary analyses on the real data, and the other two research groups used the simulated data with and without knowledge of the underlying simulation model. One group used the answers to the simulated data to assess power and type I errors. Although the MLMs applied were substantially different, each research group concluded that MLMs have advantages over standard statistical approaches with these high-dimensional data. © 2014 WILEY PERIODICALS, INC.
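Of the unsupervised MLMs listed above, sparse principal components analysis is the most direct to sketch: an L1 penalty drives most loadings to exactly zero, so each component implicates only a small set of variants. The genotype-like matrix and the penalty weight below are illustrative assumptions, not the workshop data.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Illustrative genotype-like matrix: 100 individuals x 50 variants,
# coded 0/1/2 (minor-allele counts).
rng = np.random.default_rng(7)
X = rng.binomial(2, 0.3, size=(100, 50)).astype(float)

# Sparse PCA: the alpha penalty controls how many loadings are zeroed;
# the value here is illustrative.
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
spca.fit(X)
nonzero = np.count_nonzero(spca.components_)   # surviving variant loadings
```

In a risk-profiling workflow the nonzero loadings name candidate variants, and the component scores (`spca.transform(X)`) become low-dimensional predictors for the trait.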
Foch, Eric; Milner, Clare E
2014-01-03
Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
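The waveform-PCA-plus-t-test design described above can be sketched on synthetic stance-phase "hip angle" curves; the curve shape, group sizes, and the magnitude difference between groups are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

# Synthetic stand-in for frontal-plane hip angle waveforms over stance
# (40 runners x 101 time points); the "previous ITBS" group has a
# smaller overall magnitude, which should load on the first component.
rng = np.random.default_rng(8)
shape = np.sin(np.linspace(0, np.pi, 101))          # common waveform shape
magnitude = np.concatenate([rng.normal(10, 1, 20),  # controls
                            rng.normal(7, 1, 20)])  # previous ITBS
waveforms = magnitude[:, None] * shape + rng.normal(scale=0.3, size=(40, 101))

# Retain the first three principal components and compare the PC1
# scores between groups with an independent t-test, as in the study.
pc_scores = PCA(n_components=3).fit_transform(waveforms)
t_stat, p_value = ttest_ind(pc_scores[:20, 0], pc_scores[20:, 0])
```

Because PC1 here captures overall waveform magnitude, a group difference in that score corresponds to the "less hip adduction throughout stance" finding, rather than a difference at one discrete time point.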
Basati, Zahra; Jamshidi, Bahareh; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef
2018-05-30
The presence of sunn pest-damaged grains in a wheat mass reduces the quality of flour and bread produced from it. Therefore, it is essential to assess the quality of the samples in collecting and storage centers of wheat and flour mills. In this research, the capability of visible/near-infrared (Vis/NIR) spectroscopy combined with pattern recognition methods was investigated for discriminating wheat samples with different percentages of sunn pest-damaged grains. To this end, various samples belonging to five classes (healthy and 5%, 10%, 15% and 20% unhealthy) were analyzed using Vis/NIR spectroscopy (wavelength range of 350-1000 nm) based on both supervised and unsupervised pattern recognition methods. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) as the unsupervised techniques and soft independent modeling of class analogies (SIMCA) and partial least squares-discriminant analysis (PLS-DA) as supervised methods were used. The results showed that Vis/NIR spectra of healthy samples were correctly clustered using both PCA and HCA. Due to the high overlap between the four unhealthy classes (5%, 10%, 15% and 20%), it was not possible to discriminate all the unhealthy samples into individual classes. However, when considering only the two main categories of healthy and unhealthy, an acceptable degree of separation between the classes can be obtained after classification with the supervised pattern recognition methods SIMCA and PLS-DA. SIMCA based on PCA modeling correctly classified samples into the two classes of healthy and unhealthy with a classification accuracy of 100%. Moreover, the wavelengths of 839 nm, 918 nm and 995 nm had more power than other wavelengths to discriminate the two classes of healthy and unhealthy. It was also concluded that PLS-DA provides excellent classification results for healthy and unhealthy samples (R2 = 0.973 and RMSECV = 0.057).
Therefore, Vis/NIR spectroscopy based on pattern recognition techniques can be useful for rapidly distinguishing healthy wheat samples from those damaged by sunn pest in maintenance and processing centers. Copyright © 2018 Elsevier B.V. All rights reserved.
Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel; Chew, Peter A.
This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts of speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a Hidden Markov Model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints into the DEDICOM model directly and to solve the data-fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches, and model requirements are satisfied as a post-processing step.
Katwal, Santosh B; Gore, John C; Marois, Rene; Rogers, Baxter P
2013-09-01
We present novel graph-based visualizations of self-organizing maps for unsupervised functional magnetic resonance imaging (fMRI) analysis. A self-organizing map is an artificial neural network model that transforms high-dimensional data into a low-dimensional (often a 2-D) map using unsupervised learning. However, a postprocessing scheme is necessary to correctly interpret similarity between neighboring node prototypes (feature vectors) on the output map and delineate clusters and features of interest in the data. In this paper, we used graph-based visualizations to capture fMRI data features based upon 1) the distribution of data across the receptive fields of the prototypes (density-based connectivity); and 2) temporal similarities (correlations) between the prototypes (correlation-based connectivity). We applied this approach to identify task-related brain areas in an fMRI reaction time experiment involving a visuo-manual response task, and we correlated the time-to-peak of the fMRI responses in these areas with reaction time. Visualization of self-organizing maps outperformed independent component analysis and voxelwise univariate linear regression analysis in identifying and classifying relevant brain regions. We conclude that the graph-based visualizations of self-organizing maps help in advanced visualization of cluster boundaries in fMRI data enabling the separation of regions with small differences in the timings of their brain responses.
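The two-stage scheme described above, SOM training followed by a connectivity graph over the node prototypes, can be sketched in plain numpy with a minimal 1-D map (the paper uses 2-D maps; the toy "voxel time courses", map size, learning rate, and 0.7 correlation threshold are all illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(10)
# Toy "voxel time courses": two latent responses plus noise.
t = np.arange(60)
resp = np.stack([np.sin(t / 5.0), np.cos(t / 7.0)])
data = np.vstack([resp[i] + 0.3 * rng.normal(size=60)
                  for i in rng.integers(0, 2, 200)])

# Minimal 1-D SOM: find the best-matching unit for each sample and pull
# it (and, more weakly, its neighbors) toward the sample.
k, lr = 8, 0.2
W = rng.normal(size=(k, 60))                      # node prototypes
for epoch in range(20):
    for x in data[rng.permutation(len(data))]:
        b = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        for j in range(k):                         # Gaussian neighborhood
            W[j] += lr * np.exp(-((j - b) ** 2) / 2.0) * (x - W[j])

# Correlation-based connectivity between prototypes: the postprocessing
# the abstract uses to delineate clusters on the output map.
corr = np.corrcoef(W)
adjacency = (corr > 0.7) & ~np.eye(k, dtype=bool)
```

Connected components of this adjacency graph then play the role of clusters of map nodes with similar fMRI time courses; density-based connectivity would instead weight edges by how many samples map near each pair of prototypes.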
Unsupervised laparoscopic appendicectomy by surgical trainees is safe and time-effective.
Wong, Kenneth; Duncan, Tristram; Pearson, Andrew
2007-07-01
Open appendicectomy is the traditional standard treatment for appendicitis. Laparoscopic appendicectomy is perceived as a procedure with greater potential for complications and longer operative times. This paper examines the hypothesis that unsupervised laparoscopic appendicectomy by surgical trainees is a safe and time-effective alternative. Medical records, operating theatre records and histopathology reports of all patients undergoing laparoscopic and open appendicectomy over a 15-month period in two hospitals within an area health service were retrospectively reviewed. Data were analysed to compare patient features, pathology findings, operative times, complications, readmissions and mortality between laparoscopic and open groups and between unsupervised surgical trainee operators versus consultant surgeon operators. A total of 143 laparoscopic and 222 open appendicectomies were reviewed. Unsupervised trainees performed 64% of the laparoscopic appendicectomies and 55% of the open appendicectomies. There were no significant differences in complication rates, readmissions, mortality and length of stay between laparoscopic and open appendicectomy groups or between trainee and consultant surgeon operators. Conversion rates (laparoscopic to open approach) were similar for trainees and consultants. Unsupervised senior surgical trainees did not take significantly longer to perform laparoscopic appendicectomy when compared to unsupervised trainee-performed open appendicectomy. Unsupervised laparoscopic appendicectomy by surgical trainees is safe and time-effective.
1992-12-23
[Indexing snippet; the full abstract was not recovered.] The fragments concern three-dimensional object recognition: a predominance of structural models of recognition, of which a recent example is the Recognition By Components (RBC) theory (Biederman, 1987); and a method related to recent statistical theory (Huber, 1985; Friedman, 1987), derived from a biologically motivated computational theory (Bienenstock et al.) and applied to three-dimensional object recognition (Intrator and Gold, 1991).
McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra; ...
2017-05-23
Hyperspectral image analysis has benefited from an array of methods that take advantage of the increased spectral depth compared to multispectral sensors; however, the focus of these developments has been on supervised classification methods. Lack of a priori knowledge regarding land cover characteristics can make unsupervised classification methods preferable under certain circumstances. An unsupervised classification technique is presented in this paper that utilizes physically relevant basis functions to model the reflectance spectra. These fit parameters used to generate the basis functions allow clustering based on spectral characteristics rather than spectral channels and provide both noise and data reduction. Histogram splitting of the fit parameters is then used as a means of producing an unsupervised classification. Unlike current unsupervised classification techniques that rely primarily on Euclidian distance measures to determine similarity, the unsupervised classification technique uses the natural splitting of the fit parameters associated with the basis functions creating clusters that are similar in terms of physical parameters. The data set used in this work utilizes the publicly available data collected at Indian Pines, Indiana. This data set provides reference data allowing for comparisons of the efficacy of different unsupervised data analysis. The unsupervised histogram splitting technique presented in this paper is shown to be better than the standard unsupervised ISODATA clustering technique with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. Finally, this improvement is also seen as an improvement of kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA.
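The histogram-splitting idea above, cutting a fit-parameter axis at natural gaps rather than measuring Euclidean distances, can be sketched in one dimension; the two-mode "fit parameter" values and the zero-count split rule are illustrative assumptions, simpler than the paper's multi-parameter procedure.

```python
import numpy as np

rng = np.random.default_rng(11)
# Stand-in "fit parameter" for each pixel: two well-separated modes,
# mimicking two land-cover classes (illustrative, not the paper's data).
param = np.concatenate([rng.normal(0.0, 0.3, 500), rng.normal(3.0, 0.3, 500)])

# Histogram splitting: cut the parameter axis wherever the histogram
# drops to zero between populated regions, then label pixels by segment.
counts, edges = np.histogram(param, bins=40)
splits = [edges[i + 1] for i in range(1, len(counts) - 1) if counts[i] == 0]
labels = np.digitize(param, splits)
n_clusters = len(np.unique(labels))
```

Because the cut points come from gaps in the parameter distribution itself, the resulting clusters are homogeneous in the physical fit parameter, which is the paper's stated advantage over distance-based clustering such as ISODATA.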
DOE Office of Scientific and Technical Information (OSTI.GOV)
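The histogram-splitting idea above can be sketched in a few lines: when a fit parameter's distribution is multimodal, split it at the valley between histogram peaks, so clusters follow physically meaningful parameters rather than raw channel distances. A minimal one-parameter sketch (the synthetic values and the `histogram_split` helper are our own, not from the paper):

```python
import random

random.seed(0)
# Two synthetic populations of a single fit parameter (a stand-in for
# the paper's basis-function fit parameters; the helper is our own).
params = [random.gauss(0.2, 0.05) for _ in range(200)] + \
         [random.gauss(0.8, 0.05) for _ in range(200)]

def histogram_split(values, bins=20):
    """Split a bimodal 1-D parameter set at the emptiest bin between
    the two histogram peaks."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    peak1 = counts.index(max(counts[:bins // 2]))
    peak2 = bins // 2 + counts[bins // 2:].index(max(counts[bins // 2:]))
    valley = min(range(peak1 + 1, peak2), key=counts.__getitem__)
    threshold = lo + (valley + 0.5) * width
    return [0 if v < threshold else 1 for v in values]

labels = histogram_split(params)
```

Real applications split the joint histogram of several fit parameters, but the valley-finding step is the same in each dimension.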
Unsupervised Categorization in a Sample of Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Edwards, Darren J.; Perlman, Amotz; Reed, Phil
2012-01-01
Studies of supervised categorization have demonstrated limited categorization performance in participants with autism spectrum disorders (ASD); however, little research has been conducted regarding unsupervised categorization in this population. This study explored unsupervised categorization using two stimulus sets that differed in their…
Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.
Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling
2018-04-01
In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer severe performance degradation. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework that has two main contributions: 1) we convert the unsupervised DH model into a supervised one by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximally preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over the state-of-the-art unsupervised hashing algorithms.
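For contrast with the deep model above, the classical unsupervised baseline such methods improve on fits in a few lines: binarize projections onto the top PCA directions (essentially ITQ without the learned rotation). The toy "image features" below are our own invention, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two feature clusters standing in for visually similar image groups.
A = rng.normal(0, 0.1, (50, 16)) + 1.0
B = rng.normal(0, 0.1, (50, 16)) - 1.0
X = np.vstack([A, B])
Xc = X - X.mean(axis=0)                    # center features
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
# Hash bit b = sign of the projection onto principal direction b.
codes = (Xc @ Vt[:8].T > 0).astype(int)    # 8-bit binary codes
```

The first bit (largest-variance direction) separates the two groups, so similar items agree on the most informative bits; deep hashing methods aim to make all bits this meaningful.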
Nonlinear Principal Components Analysis: Introduction and Application
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.
2007-01-01
The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…
USDA-ARS?s Scientific Manuscript database
Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
Similarities between principal components of protein dynamics and random diffusion
NASA Astrophysics Data System (ADS)
Hess, Berk
2000-12-01
Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
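The cosine resemblance described above is easy to reproduce numerically. A small sketch under our own toy settings (50 dimensions, 2000 steps) projects a random walk onto its leading principal component and compares it with a half-period cosine:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 2000, 50
# High-dimensional random diffusion: D independent random walks.
traj = np.cumsum(rng.standard_normal((T, D)), axis=0)
X = traj - traj.mean(axis=0)              # center each coordinate
w, V = np.linalg.eigh(X.T @ X / T)        # covariance eigendecomposition
pc1 = X @ V[:, -1]                        # projection on the first PC
pc1 /= np.linalg.norm(pc1)
# Hess's result: the first PC of pure diffusion is close to a
# half-period cosine over the trajectory length.
t = (np.arange(T) + 0.5) / T
cosine = np.cos(np.pi * t)
cosine /= np.linalg.norm(cosine)
overlap = abs(pc1 @ cosine)               # 1.0 would be a perfect cosine
```

The overlap is close to 1 even though the input is pure noise, which is exactly why cosine-like principal components in a protein simulation signal unconverged sampling rather than real collective motion.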
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions, showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA-dependent RNA polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
Tian, Moqian; Grill-Spector, Kalanit
2015-01-01
Recognizing objects is difficult because it requires both linking views of an object that can be different and distinguishing objects with similar appearance. Interestingly, people can learn to recognize objects across views in an unsupervised way, without feedback, just from the natural viewing statistics. However, there is intense debate regarding what information during unsupervised learning is used to link among object views. Specifically, researchers argue whether temporal proximity, motion, or spatiotemporal continuity among object views during unsupervised learning is beneficial. Here, we untangled the role of each of these factors in unsupervised learning of novel three-dimensional (3-D) objects. We found that after unsupervised training with 24 object views spanning a 180° view space, participants showed significant improvement in their ability to recognize 3-D objects across rotation. Surprisingly, there was no advantage to unsupervised learning with spatiotemporal continuity or motion information over training with temporal proximity. However, we discovered that when participants were trained with just a third of the views spanning the same view space, unsupervised learning via spatiotemporal continuity yielded significantly better recognition performance on novel views than learning via temporal proximity. These results suggest that while it is possible to obtain view-invariant recognition just from observing many views of an object presented in temporal proximity, spatiotemporal information enhances performance by producing representations with broader view tuning than learning via temporal association. Our findings have important implications for theories of object recognition and for the development of computational algorithms that learn from examples. PMID:26024454
Pothos, Emmanuel M; Bailey, Todd M
2009-07-01
Naïve observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported models of supervised categorization, the generalized context model (GCM). Considering different category assignments for a set of instances, the authors asked how well the GCM can predict the classification of each instance on the basis of all the other instances. The category assignment that results in the smallest prediction error is interpreted as the most intuitive for the GCM-the authors refer to this way of applying the GCM as "unsupervised GCM." The authors systematically compared predictions of category intuitiveness from the unsupervised GCM and two models of unsupervised categorization: the simplicity model and the rational model. The unsupervised GCM compared favorably with the simplicity model and the rational model. This success of the unsupervised GCM illustrates that the distinction between supervised and unsupervised categorization may need to be reconsidered. However, no model emerged as clearly superior, indicating that there is more work to be done in understanding and modeling category intuitiveness.
An Introductory Application of Principal Components to Cricket Data
ERIC Educational Resources Information Center
Manage, Ananda B. W.; Scariano, Stephen M.
2013-01-01
Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…
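The ranking approach described in this article reduces to scoring each player by the first principal component of standardized statistics. A hedged sketch with invented data (the latent-skill setup below is ours, not the IPL data):

```python
import numpy as np

rng = np.random.default_rng(1)
skill = rng.normal(size=30)                 # hypothetical latent ability
# Three correlated stats per player (e.g., runs, average, strike rate).
stats = np.column_stack([50 * skill + rng.normal(0, 5, 30),
                         10 * skill + rng.normal(0, 1, 30),
                         5 * skill + rng.normal(0, 1, 30)])
Z = (stats - stats.mean(axis=0)) / stats.std(axis=0)   # standardize
w, V = np.linalg.eigh(np.cov(Z, rowvar=False))
pc1 = V[:, -1]                              # leading principal direction
pc1 *= np.sign(pc1.sum())                   # orient so bigger = better
scores = Z @ pc1                            # first-PC ranking scores
ranking = np.argsort(-scores)               # best player first
```

Because the stats are positively correlated, the first component is roughly a weighted average, which is why the resulting scores track overall ability.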
Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.
ERIC Educational Resources Information Center
Olson, Jeffery E.
Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
Identifying apple surface defects using principal components analysis and artificial neural networks
USDA-ARS?s Scientific Manuscript database
Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...
Twellmann, Thorsten; Meyer-Baese, Anke; Lange, Oliver; Foo, Simon; Nattkemper, Tim W.
2008-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has become an important tool in breast cancer diagnosis, but evaluation of multitemporal 3D image data holds new challenges for human observers. To aid the image analysis process, we apply supervised and unsupervised pattern recognition techniques for computing enhanced visualizations of suspicious lesions in breast MRI data. These techniques represent an important component of future sophisticated computer-aided diagnosis (CAD) systems and support the visual exploration of spatial and temporal features of DCE-MRI data stemming from patients with confirmed lesion diagnosis. By taking into account the heterogeneity of cancerous tissue, these techniques reveal signals with malignant, benign and normal kinetics. They also provide a regional subclassification of pathological breast tissue, which is the basis for pseudo-color presentations of the image data. Intelligent medical systems are expected to have substantial implications in healthcare politics by contributing to the diagnosis of indeterminate breast lesions by non-invasive imaging. PMID:19255616
Multi-temporal MRI carpal bone volumes analysis by principal axes registration
NASA Astrophysics Data System (ADS)
Ferretti, Roberta; Dellepiane, Silvana
2016-03-01
In this paper, a principal axes registration technique is presented, with the relevant application to segmented volumes. The purpose of the proposed registration is to compare multi-temporal volumes of carpal bones from Magnetic Resonance Imaging (MRI) acquisitions. Starting from the study of the second-order moment matrix, the eigenvectors are calculated to allow the rotation of volumes with respect to reference axes. Then the volumes are spatially translated to become perfectly overlapped. A quantitative evaluation of the results is carried out by computing classical indices from the confusion matrix, which provide similarity measures between the volumes of the same organ as extracted from MRI acquisitions executed at different moments. Within the medical field, the way a registration can be used to compare multi-temporal images is of great interest, since it provides the physician with a tool which allows visual monitoring of disease evolution. The segmentation method used herein is based on graph theory and is a robust, unsupervised, and parameter-independent method. Patients affected by rheumatic diseases have been considered.
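The rotation step described above (eigenvectors of the second-order moment matrix) can be sketched on a point cloud rather than a voxel volume; the synthetic "bone" below is our own stand-in for the segmented MRI data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Elongated cloud with known axis lengths, then arbitrarily rotated.
cloud = rng.standard_normal((500, 3)) * np.array([5.0, 1.0, 0.5])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
rotated = cloud @ R.T                      # "acquired" orientation
# Principal axes = eigenvectors of the second-order moment matrix.
centered = rotated - rotated.mean(axis=0)  # translate to the centroid
M = centered.T @ centered / len(centered)
w, V = np.linalg.eigh(M)
aligned = centered @ V[:, ::-1]            # longest axis first
```

After this transform, two acquisitions of the same bone share a common frame, so voxel-overlap indices (the confusion-matrix measures in the abstract) can be computed directly.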
Finding Planets in K2: A New Method of Cleaning the Data
NASA Astrophysics Data System (ADS)
Currie, Miles; Mullally, Fergal; Thompson, Susan E.
2017-01-01
We present a new method of removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes the light curves into their principal components (eigenvectors), each with an associated eigenvalue whose magnitude is correlated with how much influence the basis vector has on the shape of the light curve. This method assumes that the most influential basis vectors will correspond to the unwanted systematic variations in the light curve produced by K2’s constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components. The strongest principal components generally correspond to the flux variations that result from the motion of the star in the field of view. Our primary method of calculating the strongest principal components to correct for in the raw light curve estimates the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which computes the combined photometric precision value (SG-CDPP value) used in classic Kepler. We calculate this value after correcting the raw light curve for each element in a list of cumulative sums of principal components, so that we have as many noise estimates as there are principal components. We then take the derivative of the list of SG-CDPP values and take the number of principal components that corresponds to the point at which the derivative effectively goes to zero. This is the optimal number of principal components to exclude from the refitting of the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2’s light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
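The core correction step (fit the strongest pixel-level principal components to the raw light curve and subtract them) can be sketched with numpy. The one-component toy systematic below is ours, and the SG-CDPP stopping rule from the abstract is replaced by a fixed component count:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_pix = 500, 20
systematic = np.sin(np.linspace(0, 40, T))          # shared motion signal
# Each pixel sees the systematic with its own amplitude, plus noise.
pixels = np.outer(systematic, rng.uniform(0.5, 2.0, n_pix))
pixels += 0.05 * rng.standard_normal((T, n_pix))
raw = pixels.sum(axis=1)                            # raw light curve

X = pixels - pixels.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 1                                               # strongest components
basis = U[:, :k]                                    # systematic time series
# Fit the strongest components to the light curve and subtract them.
coeffs, *_ = np.linalg.lstsq(basis, raw - raw.mean(), rcond=None)
corrected = raw - raw.mean() - basis @ coeffs
```

In the full method, `k` would be chosen where the SG-CDPP noise estimate stops improving as components are added.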
NASA Astrophysics Data System (ADS)
Langer, H. K.; Falsaperla, S. M.; Behncke, B.; Messina, A.; Spampinato, S.
2009-12-01
Artificial Intelligence (AI) has found broad applications in volcano observatories worldwide with the aim of reducing volcanic hazard. The need to process ever larger quantities of data indeed makes AI techniques appealing for monitoring purposes. Tools based on Artificial Neural Networks and Support Vector Machines have proved particularly successful in the classification of seismic events and of volcanic tremor changes heralding eruptive activity, such as paroxysmal explosions and lava fountaining at Stromboli and Mt Etna, Italy (e.g., Falsaperla et al., 1996; Langer et al., 2009). Building on the excellent results obtained from these applications, we present KKAnalysis, a MATLAB-based software package which combines several unsupervised pattern classification methods, exploiting routines of the SOM Toolbox 2 for MATLAB (http://www.cis.hut.fi/projects/somtoolbox). KKAnalysis is based on Self-Organizing Maps (SOM) and clustering methods consisting of K-Means, Fuzzy C-Means, and a scheme based on a metric accounting for correlation between components of the feature vector. We show examples of applications of this tool to volcanic tremor data recorded at Mt Etna between 2007 and 2009. This time span - during which Strombolian explosions, 7 episodes of lava fountaining, and effusive activity occurred - is particularly interesting, as it encompassed different states of volcanic activity (i.e., non-eruptive, and eruptive according to different styles) for the unsupervised classifier to identify, highlighting their development in time. Even subtle changes in the signal characteristics allow the unsupervised classifier to recognize features belonging to the different classes and stages of volcanic activity. A convenient color-code representation highlights the temporal development of the different classes of signal, making this method extremely helpful for monitoring purposes and surveillance.
Though developed for volcanic tremor classification, KKAnalysis is generally applicable to any type of physical or chemical pattern, provided that feature vectors are given in numerical form. References: Falsaperla, S., S. Graziani, G. Nunnari, and S. Spampinato (1996). Automatic classification of volcanic earthquakes by using multi-layered neural networks. Natural Hazards, 13, 205-228. Langer, H., S. Falsaperla, M. Masotti, R. Campanini, S. Spampinato, and A. Messina (2009). Synopsis of supervised and unsupervised pattern classification techniques applied to volcanic tremor data at Mt Etna, Italy. Geophys. J. Int., doi:10.1111/j.1365-246X.2009.04179.x.
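Of the clustering schemes KKAnalysis combines, plain K-Means is the simplest to sketch. The 2-D "tremor feature" vectors and the deterministic initialization below are invented for illustration:

```python
import random

random.seed(8)
# Invented 2-D feature vectors around three activity regimes.
regimes = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
data = [(random.gauss(cx, 0.3), random.gauss(cy, 0.3))
        for cx, cy in regimes for _ in range(40)]

def kmeans(points, centers, iters=20):
    """Plain K-Means: assign each point to its nearest center, then
    recompute each center as the mean of its group."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

# Deterministic initialization: one seed point from each regime.
centers, groups = kmeans(data, [data[0], data[40], data[80]])
```

In the monitoring setting, the recovered groups would correspond to signal classes (non-eruptive, eruptive styles), and their membership over time gives the color-coded temporal development the abstract describes.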
Recognizing patterns of visual field loss using unsupervised machine learning
NASA Astrophysics Data System (ADS)
Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher
2014-03-01
Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM; EM estimates the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3 the optimal numbers of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
Fossil Signatures Using Elemental Abundance Distributions and Bayesian Probabilistic Classification
NASA Technical Reports Server (NTRS)
Hoover, Richard B.; Storrie-Lombardi, Michael C.
2004-01-01
Elemental abundances (C6, N7, O8, Na11, Mg12, Al13, P15, S16, Cl17, K19, Ca20, Ti22, Mn25, Fe26, and Ni28) were obtained for a set of terrestrial fossils and the rock matrix surrounding them. Principal Component Analysis extracted five factors accounting for 92.5% of the data variance, i.e. information content, of the elemental abundance data. Hierarchical Cluster Analysis provided unsupervised sample classification distinguishing fossil from matrix samples on the basis of either raw abundances or PCA input that agreed strongly with visual classification. A stochastic, non-linear Artificial Neural Network produced a Bayesian probability of correct sample classification. The results provide a quantitative probabilistic methodology for discriminating terrestrial fossils from the surrounding rock matrix using chemical information. To demonstrate the applicability of these techniques to the assessment of meteoritic samples or in situ extraterrestrial exploration, we present preliminary data on samples of the Orgueil meteorite. In both systems an elemental signature produces target classification decisions remarkably consistent with morphological classification by a human expert using only structural (visual) information. We discuss the possibility of implementing a complexity analysis metric capable of automating certain image analysis and pattern recognition abilities of the human eye using low-magnification optical microscopy images and discuss the extension of this technique across multiple scales.
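A minimal stand-in for the Hierarchical Cluster Analysis step: complete-linkage agglomeration of synthetic two-element abundance vectors. The carbon-rich/iron-rich values are invented for illustration; the real inputs were 15-element abundances:

```python
import random

random.seed(4)
# Invented two-element abundance vectors: "fossils" carbon-rich,
# "matrix" iron-rich.
fossil = [[random.gauss(40, 2), random.gauss(5, 1)] for _ in range(6)]
matrix = [[random.gauss(5, 1), random.gauss(30, 2)] for _ in range(6)]
points = fossil + matrix

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(points, k):
    """Complete-linkage hierarchical clustering down to k clusters."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters

clusters = agglomerate(points, 2)
```

With well-separated chemistries the two recovered clusters match the fossil/matrix split, mirroring the unsupervised separation reported in the abstract.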
Casu, Fabio; Watson, Aaron M; Yost, Justin; Leffler, John W; Gaylord, Thomas Gibson; Barrows, Frederic T; Sandifer, Paul A; Denson, Michael R; Bearden, Daniel W
2017-07-07
We investigated the metabolic effects of four different commercial soy-based protein products on red drum fish (Sciaenops ocellatus) using nuclear magnetic resonance (NMR) spectroscopy-based metabolomics along with unsupervised principal component analysis (PCA) to evaluate metabolic profiles in liver, muscle, and plasma tissues. Specifically, during a 12 week feeding trial, juvenile red drum maintained in an indoor recirculating aquaculture system were fed four different commercially available soy formulations, containing the same amount of crude protein, and two reference diets as performance controls: a 60% soybean meal diet that had been used in a previous trial in our lab and a natural diet. Red drum liver, muscle, and plasma tissues were sampled at multiple time points to provide a more accurate snapshot of specific metabolic states during the grow-out. PCA score plots derived from NMR spectroscopy data sets showed significant differences between fish fed the natural diet and the soy-based diets, in both liver and muscle tissues. While red drum tolerated the inclusion of soy with good feed conversion ratios, a comparison to fish fed the natural diet revealed that the soy-fed fish in this study displayed a distinct metabolic signature characterized by increased protein and lipid catabolism, suggesting an energetic imbalance. Furthermore, among the soy-based formulations, one diet showed a more pronounced catabolic signature.
Deep learning algorithms for detecting explosive hazards in ground penetrating radar data
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2014-05-01
Buried explosive hazards (BEHs) have been, and continue to be, one of the most deadly threats in modern conflicts. Current handheld sensors rely on a highly trained operator for them to be effective in detecting BEHs. New algorithms are needed to reduce the burden on the operator and improve the performance of handheld BEH detectors. Traditional anomaly detection and discrimination algorithms use "hand-engineered" feature extraction techniques to characterize and classify threats. In this work we use a Deep Belief Network (DBN) to transcend the traditional approaches of BEH detection (e.g., principal component analysis and real-time novelty detection techniques). DBNs are pretrained using an unsupervised learning algorithm to generate compressed representations of unlabeled input data and form feature detectors. They are then fine-tuned using a supervised learning algorithm to form a predictive model. Using ground penetrating radar (GPR) data collected by a robotic cart swinging a handheld detector, our research demonstrates that relatively small DBNs can learn to model GPR background signals and detect BEHs with an acceptable false alarm rate (FAR). In this work, our DBNs achieved 91% probability of detection (Pd) with 1.4 false alarms per square meter when evaluated on anti-tank and anti-personnel targets at temperate and arid test sites. This research demonstrates that DBNs are a viable approach to detect and classify BEHs.
NASA Astrophysics Data System (ADS)
Li, Dong; Tang, Cheng; Xia, Chunlei; Zhang, Hua
2017-02-01
Artificial reefs (ARs) are effective means to maintain fishery resources and to restore ecological environment in coastal waters. ARs have been widely constructed along the Chinese coast. However, understanding of benthic habitats in the vicinity of ARs is limited, hindering effective fisheries and aquacultural management. Multibeam echosounder (MBES) is an advanced acoustic instrument capable of efficiently generating large-scale maps of benthic environments at fine resolutions. The objective of this study is to develop a technical approach to characterize, classify, and map shallow coastal areas with ARs using an MBES. An automated classification method is designed and tested to process bathymetric and backscatter data from MBES and transform the variables into simple, easily visualized maps. To reduce the redundancy in acoustic variables, a principal component analysis (PCA) is used to condense the highly collinear dataset. An acoustic benthic map of bottom sediments is classified using an iterative self-organizing data analysis technique (ISODATA). The approach is tested with MBES surveys in a 1.15 km2 fish farm with a high density of ARs off the Yantai coast in northern China. Using this method, 3 basic benthic habitats (sandy bottom, muddy sediments, and ARs) are distinguished. The results of the classification are validated using sediment samples and underwater surveys. Our study shows that the use of MBES is an effective method for acoustic mapping and classification of ARs.
Static vs. dynamic decoding algorithms in a non-invasive body-machine interface
Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B.; Abdollahi, Farnaz; Pedersen, Jessica; Mussa-Ivaldi, Ferdinando A.
2017-01-01
In this study, we consider a non-invasive body-machine interface that captures body motions still available to people with spinal cord injury (SCI) and maps them into a set of signals for controlling a computer user interface while engaging in a sustained level of mobility and exercise. We compare the effectiveness of two decoding algorithms that transform a high-dimensional body-signal vector into a lower dimensional control vector on 6 subjects with high-level SCI and 8 controls. One algorithm is based on a static map from current body signals to the current value of the control vector set through principal component analysis (PCA), the other on dynamic mapping a segment of body signals to the value and the temporal derivatives of the control vector set through a Kalman filter. SCI and control participants performed straighter and smoother cursor movements with the Kalman algorithm during center-out reaching, but their movements were faster and more precise when using PCA. All participants were able to use the BMI’s continuous, two-dimensional control to type on a virtual keyboard and play pong, and performance with both algorithms was comparable. However, seven of eight control participants preferred PCA as their method of virtual wheelchair control. The unsupervised PCA algorithm was easier to train and seemed sufficient to achieve a higher degree of learnability and perceived ease of use. PMID:28092564
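The static PCA map in the study above is straightforward to sketch: project the high-dimensional body-signal vector onto the two leading principal components to obtain the 2-D control vector. The toy signals below are ours, not the study's sensor data:

```python
import numpy as np

rng = np.random.default_rng(5)
latent = rng.standard_normal((300, 2))     # two true control DOFs
mixing = rng.standard_normal((2, 12))      # body signals mix the DOFs
signals = latent @ mixing + 0.01 * rng.standard_normal((300, 12))
Xc = signals - signals.mean(axis=0)        # center the body signals
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
control = Xc @ Vt[:2].T                    # static map to 2-D control
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

Because the map is a fixed linear projection of the current sample, it is memoryless; the Kalman alternative in the study instead estimates the control vector and its derivatives from a segment of recent signals.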
SFRP1 is a possible candidate for epigenetic therapy in non-small cell lung cancer.
Taguchi, Y-H; Iwadate, Mitsuo; Umeyama, Hideaki
2016-08-12
Non-small cell lung cancer (NSCLC) remains a lethal disease despite many proposed treatments. Recent studies have indicated that epigenetic therapy, which targets epigenetic effects, might be a new therapeutic methodology for NSCLC. However, it is not clear which objects (e.g., genes) this treatment specifically targets. Secreted frizzled-related proteins (SFRPs) are promising candidates for epigenetic therapy in many cancers, but there have been no reports of SFRPs targeted by epigenetic therapy for NSCLC. This study performed a meta-analysis of reprogrammed NSCLC cell lines instead of the direct examination of epigenetic therapy treatment to identify epigenetic therapy targets. In addition, mRNA expression/promoter methylation profiles were processed by the recently proposed principal component analysis-based unsupervised feature extraction and categorical regression analysis-based feature extraction. The Wnt/β-catenin signalling pathway was extensively enriched among the 32 genes identified by feature extraction. Among the genes identified, SFRP1 was specifically indicated to target β-catenin, and thus might be targeted by epigenetic therapy in NSCLC cell lines. A histone deacetylase inhibitor might reactivate SFRP1 based upon the re-analysis of a public domain data set. Numerical computation validated the binding of SFRP1 to WNT1 to suppress Wnt signalling pathway activation in NSCLC. The meta-analysis of reprogrammed NSCLC cell lines identified SFRP1 as a promising target of epigenetic therapy for NSCLC.
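In spirit, PCA-based unsupervised feature extraction embeds genes with PCA and selects those with outlying loadings. A simplified sketch on synthetic expression data (the fixed 95% quantile cutoff is our simplification of the published criterion, and the "target" genes are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
n_genes, n_samples = 200, 20
X = rng.standard_normal((n_genes, n_samples))
# Make 10 hypothetical target genes differ between two sample groups.
X[:10, :10] += 5.0
Xc = X - X.mean(axis=1, keepdims=True)    # center each gene profile
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = U[:, 0]                        # gene-side loadings on PC1
cutoff = np.quantile(np.abs(loadings), 0.95)
selected = set(np.where(np.abs(loadings) > cutoff)[0])
```

No sample labels are used at any point; the group-difference genes surface simply because they dominate the leading component, which is what makes the extraction unsupervised.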
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
NASA Astrophysics Data System (ADS)
Jirásko, Robert; Holčapek, Michal; Khalikova, Maria; Vrána, David; Študent, Vladimír; Prouzová, Zuzana; Melichar, Bohuslav
2017-08-01
Matrix-assisted laser desorption/ionization coupled with Orbitrap mass spectrometry (MALDI-Orbitrap-MS) is used for the clinical study of patients with renal cell carcinoma (RCC), the most common type of kidney cancer. Significant changes in sulfoglycosphingolipid abundances between tumor and autologous normal kidney tissues are observed. First, sulfoglycosphingolipid species in the studied RCC samples are identified using high mass accuracy full scan and tandem mass spectra. Subsequently, optimization, method validation, and statistical evaluation of MALDI-MS data for 158 tissues of 80 patients are discussed. More than 120 sulfoglycosphingolipids containing one to five hexosyl units are identified in human RCC samples based on the systematic study of their fragmentation behavior. Many of them are recorded here for the first time. Multivariate data analysis (MDA) methods, i.e., unsupervised principal component analysis (PCA) and supervised orthogonal partial least square discriminant analysis (OPLS-DA), are used for the visualization of differences between normal and tumor samples to reveal the most up- and downregulated lipids in tumor tissues. The results obtained are closely correlated with MALDI mass spectrometry imaging (MSI) and histologic staining. Important steps of the present MALDI-Orbitrap-MS approach are also discussed, such as the selection of the best matrix, correct normalization, validation for the semiquantitative study, and problems with possible isobaric interferences at closely spaced masses in full scan mass spectra.
Building and using a statistical 3D motion atlas for analyzing myocardial contraction in MRI
NASA Astrophysics Data System (ADS)
Rougon, Nicolas F.; Petitjean, Caroline; Preteux, Francoise J.
2004-05-01
We address the issue of modeling and quantifying myocardial contraction from 4D MR sequences, and present an unsupervised approach for building and using a statistical 3D motion atlas for the normal heart. This approach relies on a state-of-the-art variational nonrigid registration (NRR) technique using generalized information measures, which allows for robust intra-subject motion estimation and inter-subject anatomical alignment. The atlas is built from a collection of jointly acquired tagged and cine MR exams in short- and long-axis views. Subject-specific nonparametric motion estimates are first obtained by incremental NRR of tagged images onto the end-diastolic (ED) frame. Individual motion data are then transformed into the coordinate system of a reference subject using subject-to-reference mappings derived by NRR of cine ED images. Finally, principal component analysis of the aligned motion data is performed for each cardiac phase, yielding a mean model and a set of eigenfields encoding kinematic variability. The latter define an organ-dedicated hierarchical motion basis which enables parametric motion measurement from arbitrary tagged MR exams. To this end, the atlas is transformed into subject coordinates by reference-to-subject NRR of ED cine frames. Atlas-based motion estimation is then achieved by parametric NRR of tagged images onto the ED frame, yielding a compact description of myocardial contraction during diastole.
MRE11 and RAD50, but not NBS1, are essential for gene targeting in the moss Physcomitrella patens.
Kamisugi, Yasuko; Schaefer, Didier G; Kozak, Jaroslav; Charlot, Florence; Vrielynck, Nathalie; Holá, Marcela; Angelis, Karel J; Cuming, Andrew C; Nogué, Fabien
2012-04-01
The moss Physcomitrella patens is unique among plant models for the high frequency with which targeted transgene insertion occurs via homologous recombination. Transgene integration is believed to utilize existing machinery for the detection and repair of DNA double-strand breaks (DSBs). We undertook targeted knockout of the Physcomitrella genes encoding components of the principal sensor of DNA DSBs, the MRN complex. Loss of function of PpMRE11 or PpRAD50 strongly and specifically inhibited gene targeting, whilst rates of untargeted transgene integration were relatively unaffected. In contrast, disruption of the PpNBS1 gene retained the wild-type capacity to integrate transforming DNA efficiently at homologous loci. Analysis of the kinetics of DNA-DSB repair in wild-type and mutant plants by single-nucleus agarose gel electrophoresis revealed that bleomycin-induced fragmentation of genomic DNA was repaired at approximately equal rates in each genotype, although both the Ppmre11 and Pprad50 mutants exhibited severely restricted growth and development and enhanced sensitivity to UV-B and bleomycin-induced DNA damage, compared with wild-type and Ppnbs1 plants. This implies that, while extensive DNA repair can occur in the absence of a functional MRN complex, this repair is unsupervised in nature and results in the accumulation of deleterious mutations incompatible with normal growth and development.
Canopy reflectance modeling in a tropical wooded grassland
NASA Technical Reports Server (NTRS)
Simonett, David
1988-01-01
The Li-Strahler canopy reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density in two bioclimatic zones in Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field and in the imagery. Reflectance properties of the trees were measured in the study sites using a pole-mounted radiometer. The measurements showed that the assumptions of the simple Li-Strahler model are reasonable for these woodlands. The field radiometer measurements were used to calculate the normalized difference vegetation index (NDVI), and the integrated NDVI over the canopy was related to crown volume. Predictions of tree size and density from the canopy model were used with allometric equations from the literature to estimate woody biomass and potential foliar biomass for the sites and for the regions. Estimates were compared with independent measurements made in the Sahelian sites, and to typical values from the literature for these regions and for similar woodlands. In order to apply the inversion procedure regionally, an area must first be stratified into woodland cover classes, and dry-season TM data were used to generate a stratum map of the study areas with reasonable accuracy. The method used was unsupervised classification of multi-date principal-components images.
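The NDVI computation mentioned in the abstract above is standard; a minimal sketch follows, where the reflectance values are hypothetical illustrations, not measurements from the study:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances for a green canopy vs. bare soil
canopy = ndvi(nir=0.50, red=0.08)   # dense vegetation -> NDVI near 0.72
soil = ndvi(nir=0.30, red=0.25)     # bare soil -> NDVI near 0.09

print(round(float(canopy), 2), round(float(soil), 2))
```

Vegetated surfaces reflect strongly in the near-infrared and absorb red light, so NDVI separates canopy from background before the integrated index is related to crown volume.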
Na, Kyoung-Sae; Lee, Soyoung Irene; Hong, Hyun Ju; Oh, Myoung-Ja; Bahn, Geon Ho; Ha, Kyunghee; Shin, Yun Mi; Song, Jungeun; Park, Eun Jin; Yoo, Heejung; Kim, Hyunsoo; Kyung, Yun-Mi
2014-06-01
In the last few decades, changing socioeconomic and family structures have increasingly left children alone without adult supervision. Carefully prepared and limited periods of unsupervised time are not harmful for children. However, long unsupervised periods have harmful effects, particularly for those children at high risk for inattention and problem behaviors. In this study, we examined the influence of unsupervised time on behavior problems by studying a sample of elementary school children at high risk for inattention and problem behaviors. The study analyzed data from the Children's Mental Health Promotion Project, which was conducted in collaboration with education, government, and mental health professionals. The child behavior checklist (CBCL) was administered to assess problem behaviors among first- and fourth-grade children. Multivariate logistic regression analysis was used to evaluate the influence of unsupervised time on children's behavior. A total of 3,270 elementary school children (1,340 first-graders and 1,930 fourth-graders) were available for this study; 1,876 of the 3,270 children (57.4%) reportedly spent a significant amount of time unsupervised during the day. Unsupervised time exceeding 2 h per day increased the risk of delinquency, aggressive behaviors, and somatic complaints, as well as externalizing and internalizing problems. Carefully planned afterschool programming and care should be provided to children at high risk for inattention and problem behaviors. Also, a more comprehensive approach is needed to identify the possible mechanisms by which unsupervised time aggravates behavior problems in children predisposed to these behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.
Directly reconstructing principal components of heterogeneous particles from cryo-EM images.
Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali
2015-08-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. Copyright © 2015 Elsevier Inc. All rights reserved.
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule... management plan. (c) Operator training and qualification. (d) Emission limitations and operating limits. (e...
40 CFR 60.2570 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... Construction On or Before November 30, 1999 Use of Model Rule § 60.2570 What are the principal components of... (k) of this section. (a) Increments of progress toward compliance. (b) Waste management plan. (c...
Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin
2014-06-01
Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently zero-training methods have become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping. A simulation study compares the proposed probabilistic zero framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) are investigated. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance--competitive to a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP-applications of BCI.
Insight into the Peopling of Mainland Southeast Asia from Thai Population Genetic Structure
Chaichoompu, Kridsadakorn; Ngamphiw, Chumpol; Assawamakin, Anunchai; Nuinoon, Manit; Sripichai, Orapan; Svasti, Saovaros; Fucharoen, Suthat; Praphanphoj, Verayuth; Tongsima, Sissades
2013-01-01
There is considerable ethno-linguistic and genetic variation among human populations in Asia, although tracing the origins of this diversity is complicated by migration events. Thailand is at the center of Mainland Southeast Asia (MSEA), a region within Asia that has not been extensively studied. Genetic substructure may exist in the Thai population, since waves of migration from southern China throughout its recent history may have contributed to substantial gene flow. Autosomal SNP data were collated for 438,503 markers from 992 Thai individuals. Using the available self-reported regional origin, four Thai subpopulations genetically distinct from each other and from other Asian populations were resolved by Neighbor-Joining analysis using a 41,569 marker subset. Using an independent Principal Components-based unsupervised clustering approach, four major MSEA subpopulations were resolved in which regional bias was apparent. A major ancestry component was common to these MSEA subpopulations and distinguishes them from other Asian subpopulations. On the other hand, these MSEA subpopulations were admixed with other ancestries, in particular one shared with Chinese. Subpopulation clustering using only Thai individuals and the complete marker set resolved four subpopulations, which are distributed differently across Thailand. A Sino-Thai subpopulation was concentrated in the Central region of Thailand, although this constituted a minority in an otherwise diverse region. Among the most highly differentiated markers which distinguish the Thai subpopulations, several map to regions known to affect phenotypic traits such as skin pigmentation and susceptibility to common diseases. The subpopulation patterns elucidated have important implications for evolutionary and medical genetics. The subpopulation structure within Thailand may reflect the contributions of different migrants throughout the history of MSEA. The information will also be important for genetic association studies to account for population-structure confounding effects. PMID:24223962
Maisuradze, Gia G; Leitner, David M
2007-05-15
Dihedral principal component analysis (dPCA) has recently been developed and shown to display complex features of the free energy landscape of a biomolecule that may be absent in the free energy landscape plotted in principal component space due to mixing of internal and overall rotational motion that can occur in principal component analysis (PCA) [Mu et al., Proteins: Struct Funct Bioinfo 2005;58:45-52]. Another difficulty in the implementation of PCA is sampling convergence, which we address here for both dPCA and PCA using a tetrapeptide as an example. We find that for both methods the sampling convergence can be reached over a similar time. Minima in the free energy landscape in the space of the two largest dihedral principal components often correspond to unique structures, though we also find some distinct minima to correspond to the same structure. 2007 Wiley-Liss, Inc.
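The dihedral-PCA transformation described above (replacing each angle by its cosine and sine before ordinary PCA, following Mu et al.) can be sketched as follows; the trajectory data here are synthetic stand-ins, not from the tetrapeptide study:

```python
import numpy as np

def dihedral_pca(angles):
    """Dihedral PCA: map each angle to (cos, sin) to respect periodicity,
    then run ordinary PCA on the transformed coordinates.
    angles: (n_frames, n_dihedrals) array in radians."""
    X = np.hstack([np.cos(angles), np.sin(angles)])  # (n_frames, 2*n_dihedrals)
    Xc = X - X.mean(axis=0)
    # PCA via SVD of the centered data matrix
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    projections = U * S              # frames projected onto principal components
    variances = S**2 / (len(X) - 1)  # variance captured by each component
    return projections, variances, Vt

rng = np.random.default_rng(0)
# Synthetic "trajectory": two dihedrals hopping between metastable values
angles = rng.choice([-1.2, 1.0], size=(500, 2)) + 0.05 * rng.standard_normal((500, 2))
proj, var, components = dihedral_pca(angles)
print(proj.shape, bool(var[0] > var[1]))
```

Because the (cos, sin) lift avoids the 2π wrap-around, minima in the first two dihedral principal components correspond to metastable angular states rather than artifacts of angle unwrapping or overall rotation.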
Interdisciplinary education approach to the human science
NASA Astrophysics Data System (ADS)
Szu, Harold; Zheng, Yufeng; Zhang, Nian
2012-06-01
We have introduced the human sciences as components and integrated them into an interdisciplinary endeavor over the past decades. This year, we built a website to systematically maintain this educational research service. We have captured the human-science components in the SPIE proceedings over the last decades, including: (i) ear- and eye-like adaptive wavelets; (ii) brain-like unsupervised learning via independent component analysis (ICA); (iii) compressive sampling for spatiotemporal sparse information processing; (iv) nanoengineering approaches to sensing components; (v) systems-biology measurements; and (vi) biomedical wellness applications. To serve the interdisciplinary community better, our approach has former recipients invite the next recipients to deliver review talks and panel discussions. Since only the former recipients of each component can lead the nomination committees and make the final selections, we also created a leadership award that may be nominated by any conference attendee, subject to approval by the conference organizing committee.
Fast, Exact Bootstrap Principal Component Analysis for p > 1 million
Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim
2015-01-01
Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
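The key observation above, that every bootstrap resample of subjects lies in the same n-dimensional subspace as the original sample, can be illustrated numerically. This is a minimal sketch with synthetic data, not the authors' full pipeline (which also covers centering, scores, and uncertainty summaries):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 5000, 40                      # many measurements, few subjects
X = rng.standard_normal((p, n))      # columns = subjects (synthetic data)

# One-time SVD of the original sample: X = V @ diag(d) @ Ut
V, d, Ut = np.linalg.svd(X, full_matrices=False)   # V is p x n
coords = np.diag(d) @ Ut                           # n x n low-dim coordinates

# A bootstrap resample of subjects only repeats columns of `coords`,
# so the bootstrap PCs live in span(V) and need only an n x n SVD.
idx = rng.integers(0, n, size=n)
A, s, Bt = np.linalg.svd(coords[:, idx], full_matrices=False)
pcs_fast = V @ A                     # p-dimensional bootstrap principal components

# Check against the direct (expensive) SVD of the p-dimensional resample
V_direct, s_direct, _ = np.linalg.svd(X[:, idx], full_matrices=False)
print(bool(np.allclose(s, s_direct)),
      bool(np.allclose(np.abs(pcs_fast[:, 0]), np.abs(V_direct[:, 0]))))
```

The expensive p-dimensional SVD is done once; each bootstrap replicate then costs only an n x n SVD, which is what makes thousands of replicates feasible when p is in the millions.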
ERIC Educational Resources Information Center
Oplatka, Izhar
2017-01-01
Purpose: In order to fill the gap in theoretical and empirical knowledge about the characteristics of principal workload, the purpose of this paper is to explore the components of principal workload as well as its determinants and the coping strategies commonly used by principals to face this personal state. Design/methodology/approach:…
Atherton, Olivia E; Schofield, Thomas J; Sitka, Angela; Conger, Rand D; Robins, Richard W
2016-04-01
Despite widespread speculation about the detrimental effect of unsupervised self-care on adolescent outcomes, little is known about which children are particularly prone to problem behaviors when left at home without adult supervision. The present research used data from a longitudinal study of 674 Mexican-origin children residing in the United States to examine the prospective effect of unsupervised self-care on conduct problems, and the moderating roles of hostile aggression and gender. Results showed that unsupervised self-care was related to increases over time in conduct problems such as lying, stealing, and bullying. However, unsupervised self-care only led to conduct problems for boys and for children with an aggressive temperament. The main and interactive effects held for both mother-reported and observer-rated hostile aggression and after controlling for potential confounds. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
Separated Component-Based Restoration of Speckled SAR Images
2013-01-01
Unsupervised universal steganalyzer for high-dimensional steganalytic features
NASA Astrophysics Data System (ADS)
Hou, Xiaodan; Zhang, Tao
2016-11-01
The research in developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because the unsupervised method cannot distinguish embedding distortion from varying levels of noise caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on the existing steganalytic features. First, cover images with some statistical properties similar to those of a given test image are searched from a retrieval cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training. Thus, it does not suffer from model mismatch. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains universality but also exhibits superior performance when applied to high-dimensional steganalytic features.
Video mining using combinations of unsupervised and supervised learning techniques
NASA Astrophysics Data System (ADS)
Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou
2003-12-01
We discuss the meaning and significance of the video mining problem, and present our work on some aspects of video mining. A simple definition of video mining is unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in stage 1. We discuss the target applications and find that purely unsupervised approaches are too computationally complex to implement on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns that are useful to the end-user of the application. We target consumer video browsing applications such as commercial message detection, sports highlights extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual event discovery enables accurate supervised detection of desired events. Our techniques are computationally simple and robust to common variations in production styles, etc.
Vega-Fernandez, Patricia; Vanderburgh White, Shana; Zelko, Frank; Ruth, Natasha M; Levy, Deborah M; Muscal, Eyal; Klein-Gitelman, Marisa S; Huber, Adam M; Tucker, Lori B; Roebuck-Spencer, Tresa; Ying, Jun; Brunner, Hermine I
2015-08-01
To develop and initially validate a global cognitive performance score (CPS) for the Pediatric Automated Neuropsychological Assessment Metrics (PedANAM) to serve as a screening tool of cognition in childhood lupus. Patients (n = 166) completed the 9 subtests of the PedANAM battery, each of which provides 3 principal performance parameters (accuracy, mean reaction time for correct responses, and throughput). Cognitive ability was measured by formal neurocognitive testing or estimated by the Pediatric Perceived Cognitive Function Questionnaire-43 to determine the presence or absence of neurocognitive dysfunction (NCD). A subset of the data was used to develop 4 candidate PedANAM-CPS indices with supervised or unsupervised statistical approaches: PedANAM-CPSUWA, i.e., unweighted averages of the accuracy scores of all PedANAM subtests; PedANAM-CPSPCA, i.e., accuracy scores of all PedANAM subtests weighted through principal components analysis; PedANAM-CPSlogit, i.e., an algorithm derived from logistic models to estimate NCD status based on the accuracy scores of all of the PedANAM subtests; and PedANAM-CPSmultiscore, i.e., an algorithm derived from logistic models to estimate NCD status based on select PedANAM performance parameters. PedANAM-CPS candidates were validated using the remaining data. PedANAM-CPS indices were moderately correlated with each other (|r| > 0.65). All of the PedANAM-CPS indices discriminated children by NCD status across data sets (P < 0.036). The PedANAM-CPSmultiscore had the highest area under the receiver operating characteristic curve (AUC) across all data sets for identifying NCD status (AUC > 0.74), followed by the PedANAM-CPSlogit, the PedANAM-CPSPCA, and the PedANAM-CPSUWA, respectively. Based on preliminary validation and considering ease of use, the PedANAM-CPSmultiscore and the PedANAM-CPSPCA appear to be best suited as global measures of PedANAM performance. © 2015, American College of Rheumatology.
The Influence Function of Principal Component Analysis by Self-Organizing Rule.
Higuchi; Eguchi
1998-07-28
This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule has been proposed and its robustness observed through the simulation study by Xu and Yuille (1995). In this article, the robustness of the algorithm against outliers is investigated by using the theory of influence function. The influence function of the principal component vector is given in an explicit form. Through this expression, the method is shown to be robust against any directions orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.
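The self-organizing rule analyzed above is, in its classical form, Oja's learning rule for extracting the first principal component. A minimal numerical sketch with synthetic 2-D data (not necessarily the exact Xu-Yuille variant studied in the article) shows convergence to the unit-norm leading principal component vector:

```python
import numpy as np

rng = np.random.default_rng(3)
# Data with a dominant direction along (1, 1)/sqrt(2):
# eigenvalues of C are 5 (along (1,1)) and 1 (along (1,-1))
C = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=20000)

# Oja's self-organizing rule: w <- w + eta * y * (x - y * w), with y = w.x
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 0.001
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

top = np.array([1.0, 1.0]) / np.sqrt(2.0)  # true leading eigenvector
print(round(float(abs(w @ top)), 3), round(float(np.linalg.norm(w)), 2))
```

The subtractive term -eta * y**2 * w keeps the weight vector near unit norm, so the rule self-normalizes while rotating toward the leading eigenvector; the influence-function analysis in the article asks how single outlying samples perturb this fixed point.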
Brown, C. Erwin
1993-01-01
Correlation analysis in conjunction with principal-component and multiple-regression analyses was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine dimensions of property variation of samples, and to filter the variables containing similar information. Principal-component and correlation analyses showed that porosity is related to other measured variables and that permeability is most related to porosity and grain size. Four principal components are found to be significant in explaining the variance of the data. Stepwise multiple-regression analysis was used to see how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not totally predicted by the other variables, but the regression is significant at the 5% significance level. © 1993.
NASA Technical Reports Server (NTRS)
Parada, N. D. J.; Novo, E. M. L. M.
1983-01-01
Two sets of MSS/LANDSAT data with solar elevation ranging from 22 deg to 41 deg were used at the Image-100 System to implement the Eliason et al. technique for extracting the topographic modulation component. An unsupervised cluster analysis was used to obtain an average brightness image for each channel. Analysis of the enhanced images shows that the technique for extracting the topographic modulation component is more appropriate for MSS data obtained under high sun elevation angles. Low sun elevation increases the variance of each cluster, so the average brightness does not represent its albedo properties. The topographic modulation component applied to low sun elevation angles degrades rather than enhances topographic information. Better results were produced for channels 4 and 5 than for channels 6 and 7.
Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba
2003-01-01
A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. The principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used for the selection of the best set of extracted principal components. A feed forward artificial neural network with a back-propagation of error algorithm was used to process the nonlinear relationship between the selected principal components and biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.
Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo
2016-07-12
Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
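The FPCA-with-Fourier-basis approach described above can be sketched as a two-step procedure: fit each city's weekly load curve with a truncated Fourier basis by least squares, then run ordinary PCA on the basis coefficients. The sketch below uses simulated data; the city count matches the study, but the curve shapes, noise levels, and omission of the paper's smoothing-parameter selection are all assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the 42-city x 7-day drug-load matrix.
n_cities, n_days = 42, 7
t = np.arange(n_days) / n_days
signal = np.sin(2 * np.pi * (t - 0.6))     # a common weekly shape
X = rng.normal(1.0, 0.3, (n_cities, 1)) * signal + rng.normal(0, 0.1, (n_cities, n_days))

# Step 1: smooth each curve with a truncated Fourier basis (constant, sin, cos)
B = np.column_stack([np.ones(n_days), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
C, *_ = np.linalg.lstsq(B, X.T, rcond=None)   # basis coefficients, shape (3, n_cities)

# Step 2: ordinary PCA on the coefficient matrix = a basic FPCA
Cc = C.T - C.T.mean(axis=0)
U, s, Vt = np.linalg.svd(Cc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The leading functional principal component, evaluated as a curve over the week
fpc1 = B @ Vt[0]
print(np.round(explained, 2))
```

Working in coefficient space is what makes FPCA robust to missing days: the least-squares fit in step 1 can use whatever observations a city has.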
40 CFR 62.14505 - What are the principal components of this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR 62.14505, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY... What are the principal components of this subpart? This subpart contains the eleven major components listed in paragraphs (a...
Kebir, Sied; Khurshid, Zain; Gaertner, Florian C.; Essler, Markus; Hattingen, Elke; Fimmers, Rolf; Scheffler, Björn; Herrlinger, Ulrich; Bundschuh, Ralph A.; Glas, Martin
2017-01-01
Rationale Timely detection of pseudoprogression (PSP) is crucial for the management of patients with high-grade glioma (HGG) but remains difficult. Textural features of O-(2-[18F]fluoroethyl)-L-tyrosine positron emission tomography (FET-PET) mirror tumor uptake heterogeneity; some of them may be associated with tumor progression. Methods Fourteen patients with HGG and suspected of PSP underwent FET-PET imaging. A set of 19 conventional and textural FET-PET features were evaluated and subjected to unsupervised consensus clustering. The final diagnosis of true progression vs. PSP was based on follow-up MRI using RANO criteria. Results Three robust clusters have been identified based on 10 predominantly textural FET-PET features. None of the patients with PSP fell into cluster 2, which was associated with high values for textural FET-PET markers of uptake heterogeneity. Three out of 4 patients with PSP were assigned to cluster 3 that was largely associated with low values of textural FET-PET features. By comparison, tumor-to-normal brain ratio (TNRmax) at the optimal cutoff 2.1 was less predictive of PSP (negative predictive value 57% for detecting true progression, p=0.07 vs. 75% with cluster 3, p=0.04). Principal Conclusions Clustering based on textural O-(2-[18F]fluoroethyl)-L-tyrosine PET features may provide valuable information in assessing the elusive phenomenon of pseudoprogression. PMID:28030820
NASA Technical Reports Server (NTRS)
Brooks, Colin; Bourgeau-Chavez, Laura; Endres, Sarah; Battaglia, Michael; Shuchman, Robert
2015-01-01
Primary Goal: Assist with the evaluation and measurement of wetlands hydroperiod at the Plum Brook Station using multi-source remote sensing data as part of a larger effort on projecting climate change-related impacts on the station's wetland ecosystems. MTRI expanded on the multi-source remote sensing capabilities to help estimate and measure hydroperiod and the relative soil moisture of wetlands at NASA's Plum Brook Station. Multi-source remote sensing capabilities are useful in estimating and measuring hydroperiod and relative soil moisture of wetlands. This is important as a changing regional climate has several potential risks for wetland ecosystem function. The year two analysis built on the first year of the project by acquiring and analyzing remote sensing data for additional dates and types of imagery, combined with focused field work. Five deliverables were planned and completed: 1) Show the relative length of hydroperiod using available remote sensing datasets 2) A date-linked table of wetlands extent over time for all feasible non-forested wetlands 3) Utilize LIDAR data to measure topographic height above sea level of all wetlands, wetland to catchment area ratio, slope of wetlands, and other useful variables 4) A demonstration of how analyzed results from multiple remote sensing data sources can help with wetlands vulnerability assessment 5) An MTRI-style report summarizing year 2 results. This report serves as a descriptive summary of our completion of these deliverables. Additionally, two formal meetings were held with Larry Liou and Amanda Sprinzl to provide project updates and receive direction on outputs. These were held on 2/26/15 and 9/17/15 at the Plum Brook Station. Principal Component Analysis (PCA) is a multivariate statistical technique used to identify dominant spatial and temporal backscatter signatures. PCA reduces the information contained in the temporal dataset to the first few new Principal Component (PC) images.
Some advantages of PCA include the ability to filter out temporal autocorrelation and relegate speckle to the higher-order PC images. A PCA was performed using ERDAS Imagine on a time series of PALSAR dates. Hydroperiod maps were created by separating the PALSAR dates into two date ranges, 2006-2008 and 2010, and performing an unsupervised classification on the PCAs.
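The PCA-then-unsupervised-classification chain (performed in ERDAS Imagine in the report) can be sketched in numpy: stack co-registered dates, treat each pixel's temporal vector as an observation, keep the first PC images, and cluster them. The backscatter values, class structure, and the tiny k-means below are all illustrative assumptions, not the project's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stack standing in for co-registered PALSAR backscatter dates:
# shape (n_dates, rows, cols), two land classes with distinct temporal signatures.
n_dates, rows, cols = 6, 20, 20
wet = rng.normal(-12, 1, (n_dates, rows, cols // 2))
dry = rng.normal(-6, 1, (n_dates, rows, cols - cols // 2))
stack = np.concatenate([wet, dry], axis=2)

# PCA across dates: each pixel is an observation, each date a variable
pixels = stack.reshape(n_dates, -1).T             # (n_pixels, n_dates)
Xc = pixels - pixels.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                            # first two PC images, flattened

# Minimal k-means (k=2) on the PC scores = an unsupervised classification
k = 2
centers = scores[rng.choice(len(scores), k, replace=False)]
for _ in range(20):
    d = np.linalg.norm(scores[:, None, :] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([scores[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

class_image = labels.reshape(rows, cols)
print(class_image.shape)
```

Clustering the PC scores rather than the raw dates is what lets the classification ignore date-to-date speckle, which PCA has pushed into the discarded components.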
Fiori, Simone
2003-12-01
In recent work, we introduced nonlinear adaptive activation function (FAN) artificial neuron models, which learn their activation functions in an unsupervised way by information-theoretic adapting rules. We also applied networks of these neurons to some blind signal processing problems, such as independent component analysis and blind deconvolution. The aim of this letter is to study some fundamental aspects of FAN units' learning by investigating the properties of the associated learning differential equation systems.
Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki
2004-04-01
We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.
NASA Astrophysics Data System (ADS)
Andreon, S.; Gargiulo, G.; Longo, G.; Tagliaferri, R.; Capuano, N.
2000-12-01
Astronomical wide-field imaging performed with new large-format CCD detectors poses data reduction problems of unprecedented scale, which are difficult to deal with using traditional interactive tools. We present here NExt (Neural Extractor), a new neural network (NN) based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. Traditionally, in astronomical images, objects are first distinguished from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold; they are then classified as stars or as galaxies through diagnostic diagrams having variables chosen according to the astronomer's taste and experience. In the extraction step, assuming that images are well sampled, NExt requires only the simplest a priori definition of `what an object is' (i.e. it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised NN, approaching detection as a clustering problem that has been thoroughly studied in the artificial intelligence literature. The first part of the NExt procedure consists of an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace individualized through principal component analysis. At magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. We therefore adopted a supervised NN (i.e. a NN that first finds the rules to classify objects from examples and then applies them to the whole data set). In practice, each object is classified depending on its membership of the regions mapping the input feature space in the training set. 
In order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features we use a NN to select the most significant features among the large number of measured ones, and then we use these selected features to perform the classification task. In order to optimize the performance of the system, we implemented and tested several different models of NN. The comparison of the NExt performance with that of the best detection and classification package known to the authors (SExtractor) shows that NExt is at least as effective as the best traditional packages.
Laiakis, Evagelia C; Morris, Gerard A J; Fornace, Albert J; Howie, Stephen R C
2010-09-09
Pneumonia remains the leading cause of death in young children globally and improved diagnostics are needed to better identify cases and reduce case fatality. Metabolomics, a rapidly evolving field aimed at characterizing metabolites in biofluids, has the potential to improve diagnostics in a range of diseases. The objective of this pilot study is to apply metabolomic analysis to childhood pneumonia to explore its potential to improve pneumonia diagnosis in a high-burden setting. Eleven children with World Health Organization (WHO)-defined severe pneumonia of non-homogeneous aetiology were selected in The Gambia, West Africa, along with community controls. Metabolomic analysis of matched plasma and urine samples was undertaken using Ultra Performance Liquid Chromatography (UPLC) coupled to Time-of-Flight Mass Spectrometry (TOFMS). Biomarker extraction was done using SIMCA-P+ and Random Forests (RF). 'Unsupervised' (blinded) data were analyzed by Principal Component Analysis (PCA), while 'supervised' (unblinded) analysis was by Partial Least Squares-Discriminant Analysis (PLS-DA) and Orthogonal Projection to Latent Structures (OPLS). Potential markers were extracted from S-plots constructed following analysis with OPLS, and markers were chosen based on their contribution to the variation and correlation within the data set. The dataset was additionally analyzed with the machine-learning algorithm RF in order to address issues of model overfitting and markers were selected based on their variable importance ranking. Unsupervised PCA analysis revealed good separation of pneumonia and control groups, with even clearer separation of the groups with PLS-DA and OPLS analysis. 
Statistically significant differences (p<0.05) between groups were seen with the following metabolites: uric acid, hypoxanthine and glutamic acid were higher in plasma from cases, while L-tryptophan and adenosine-5'-diphosphate (ADP) were lower; uric acid and L-histidine were lower in urine from cases. The key limitation of this study is its small size. Metabolomic analysis clearly distinguished severe pneumonia patients from community controls. The metabolites identified are important for the host response to infection through antioxidant, inflammatory and antimicrobial pathways, and energy metabolism. Larger studies are needed to determine whether these findings are pneumonia-specific and to distinguish organism-specific responses. Metabolomics has considerable potential to improve diagnostics for childhood pneumonia.
Principals' Perceptions Regarding Their Supervision and Evaluation
ERIC Educational Resources Information Center
Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann
2015-01-01
This study examined the perceptions of principals concerning principal evaluation and supervisory feedback. Principals were asked two open-ended questions. Respondents included 82 principals in the Rocky Mountain region. The emerging themes were "Superintendent Performance," "Principal Evaluation Components," "Specific…
Post-processing interstitialcy diffusion from molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Bhardwaj, U.; Bukkuru, S.; Warrier, M.
2016-01-01
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization, which obviate the need to input extra domain-specific information depending on the crystal or the temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into the known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting-point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights into the interstitialcy diffusion mechanism. The algorithm, together with supporting visualizations and analysis, gives convincing details and a new approach to quantifying diffusion jumps, jump lengths and times between jumps, and to identifying interstitials among lattice atoms.
Nguyen, Phuong H
2007-05-15
Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from e.g. molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly due to the fact that the principal components are not independent of each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. © 2007 Wiley-Liss, Inc.
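The PCA-then-ICA scheme can be sketched with a minimal symmetric FastICA (tanh contrast) applied to PCA-whitened scores, after which states are read off as combinations of per-component states. The two bimodal "conformational" coordinates below are synthetic stand-ins chosen so the answer is known (four states); they are not the peptide trajectories of the paper, and the hand-rolled FastICA is a textbook sketch rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical MD snapshots: two independent bimodal coordinates, mixed
# linearly so that the PCA axes are correlated combinations of them.
n = 4000
s1 = rng.choice([-2.0, 2.0], n) + rng.normal(0, 0.3, n)
s2 = rng.choice([-1.5, 1.5], n) + rng.normal(0, 0.3, n)
S = np.column_stack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

# PCA whitening of the subspace
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc.T))
Z = Xc @ evecs / np.sqrt(evals)           # whitened PC scores

# Minimal symmetric FastICA with a tanh nonlinearity
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(Z @ W.T)                  # g(w_i . z) for each component
    W_new = (G.T @ Z) / n - np.diag((1 - G**2).mean(axis=0)) @ W
    U, s, Vt = np.linalg.svd(W_new)       # symmetric decorrelation
    W = U @ Vt
IC = Z @ W.T                              # independent components

# Because the ICs are independent, states are all observed combinations
# of the per-component states (here: the sign of each bimodal IC).
states = (IC > 0).astype(int)
n_states = len(np.unique(states, axis=0))
print(n_states)
```

The point of the transform is visible in the last three lines: per-component thresholds enumerate the joint states, which is exactly what correlated principal components do not allow.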
Liu, Hui-lin; Wan, Xia; Yang, Gong-huan
2013-02-01
To explore the relationship between the strength of tobacco control and the effectiveness of creating smoke-free hospitals, and to summarize the main factors that affect the program of creating smoke-free hospitals. A total of 210 hospitals from 7 provinces/municipalities directly under the central government were enrolled in this study using a stratified random sampling method. Principal component analysis and regression analysis were conducted to analyze the strength of tobacco control and the effectiveness of creating smoke-free hospitals. Two principal components were extracted from the strength-of-tobacco-control index, which respectively reflected the tobacco control policies and efforts, and the willingness and leadership of hospital managers regarding tobacco control. The regression analysis indicated that only the first principal component was significantly correlated with progression in creating smoke-free hospitals (P<0.001), i.e. hospitals with higher scores on the first principal component had better achievements in smoke-free environment creation. Tobacco control policies and efforts are critical in creating smoke-free hospitals. The principal component analysis provides a comprehensive and objective tool for evaluating the creation of smoke-free hospitals.
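The extract-components-then-regress design used here is principal component regression: compress the correlated survey items into PC scores, then regress the outcome on those scores. A minimal numpy sketch follows; the hospital count matches the study, but the item matrix, factor structure, and outcome are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical survey matrix: 210 hospitals x 6 tobacco-control items,
# all loading on one latent "policy strength" factor.
n, p = 210, 6
latent = rng.normal(size=n)
X = latent[:, None] * rng.uniform(0.5, 1.0, p) + rng.normal(0, 0.5, (n, p))
y = 2.0 * latent + rng.normal(0, 1.0, n)          # creation-progress score

# Extract principal component scores from the standardized items
Z = (X - X.mean(axis=0)) / X.std(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Z.T))
order = np.argsort(evals)[::-1]
scores = Z @ evecs[:, order[:2]]                  # first two PC scores

# Regress the outcome on the PC scores (principal component regression)
design = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
r = np.corrcoef(design @ beta, y)[0, 1]
print(round(r, 2))
```

In this simulation, as in the study, only the component carrying the shared "policy strength" signal contributes materially to the fit.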
Critical Factors Explaining the Leadership Performance of High-Performing Principals
ERIC Educational Resources Information Center
Hutton, Disraeli M.
2018-01-01
The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…
Molecular dynamics in principal component space.
Michielssens, Servaas; van Erp, Titus S; Kutzner, Carsten; Ceulemans, Arnout; de Groot, Bert L
2012-07-26
A molecular dynamics algorithm in principal component space is presented. It is demonstrated that sampling can be improved without changing the ensemble by assigning masses to the principal components proportional to the inverse square root of the eigenvalues. The setup of the simulation requires no prior knowledge of the system; a short initial MD simulation to extract the eigenvectors and eigenvalues suffices. Independent measures indicated a 6-7 times faster sampling compared to a regular molecular dynamics simulation.
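The mass assignment is the key device: giving each principal component a mass proportional to the inverse square root of its eigenvalue makes soft, large-amplitude modes light (hence fast to sample) without changing the ensemble. A small numpy sketch of that setup step, using a hypothetical short equilibration trajectory rather than any system from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical coordinate fluctuations from a short initial MD run
# (three modes with very different amplitudes).
traj = rng.normal(size=(1000, 3)) @ np.diag([3.0, 1.0, 0.2])
cov = np.cov(traj.T)
evals, evecs = np.linalg.eigh(cov)

# Assign each principal component a mass proportional to 1/sqrt(eigenvalue):
# soft (large-eigenvalue) modes get light masses and therefore sample faster.
masses = 1.0 / np.sqrt(evals)
masses /= masses.max()            # arbitrary normalization for readability

for lam, m in sorted(zip(evals, masses), reverse=True):
    print(f"eigenvalue {lam:6.2f}  ->  relative mass {m:.3f}")
```

As the abstract notes, only this short preliminary run is needed: the eigenvectors define the simulation coordinates and the eigenvalues define the masses.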
Optimized principal component analysis on coronagraphic images of the fomalhaut system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.
We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
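The core PSF-subtraction step, building a stellar PSF model from the leading principal components of a reference library and removing its projection from the science frame, can be sketched in a few lines of numpy. The Gaussian PSF, frame sizes, planet brightness, and choice of k below are illustrative assumptions, not the NaCo data or the authors' optimized code:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical reference PSF library (no planet) and one science frame.
n_ref, npix = 30, 400                     # 20x20 frames, flattened
yy, xx = np.mgrid[:20, :20]
psf = np.exp(-((xx - 10)**2 + (yy - 10)**2) / 8.0).ravel()
refs = psf[None] * rng.uniform(0.9, 1.1, (n_ref, 1)) + rng.normal(0, 0.01, (n_ref, npix))
planet = np.zeros(npix)
planet[5 * 20 + 15] = 0.3                 # faint point source at (row 5, col 15)
science = psf + planet + rng.normal(0, 0.01, npix)

# Model the stellar PSF with the first k principal components of the
# reference library and subtract its projection from the science frame.
mean_ref = refs.mean(axis=0)
U, s, Vt = np.linalg.svd(refs - mean_ref, full_matrices=False)
k = 5
basis = Vt[:k]                            # top-k principal components
resid = science - mean_ref
resid -= basis.T @ (basis @ resid)        # remove the modeled stellar PSF
residual_image = resid.reshape(20, 20)

# The planet survives the subtraction while the star is largely removed
print(np.unravel_index(np.abs(residual_image).argmax(), residual_image.shape))
```

The trade-off the abstract describes lives in k: a larger k models the star (and its speckles) better but begins to absorb planet flux and amplify noise, which is why the number of components is the quantity being optimized.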
[A study of Boletus bicolor from different areas using Fourier transform infrared spectrometry].
Zhou, Zai-Jin; Liu, Gang; Ren, Xian-Pei
2010-04-01
It is hard to differentiate the same species of wild-growing mushrooms from different areas by macromorphological features. In this paper, Fourier transform infrared (FTIR) spectroscopy combined with principal component analysis was used to identify 58 samples of Boletus bicolor from five different areas. Based on the fingerprint infrared spectrum of the Boletus bicolor samples, principal component analysis was conducted on the 58 spectra in the range of 1350-750 cm(-1) using the statistical software SPSS 13.0. According to the result, the accumulated contributing ratio of the first three principal components accounts for 88.87%; they include almost all the information in the samples. The two-dimensional projection plot using the first and second principal components shows a satisfactory clustering effect for the classification and discrimination of Boletus bicolor. All Boletus bicolor samples were divided into five groups with a classification accuracy of 98.3%. The study demonstrated that wild-growing Boletus bicolor from different areas can be identified at the species level by FTIR spectra combined with principal component analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, M; Wang, Xiliang
Melanoma is a malignant tumor of melanocytes with high capability of invasion and rapid metastasis to other organs. Malignant melanoma is the most common metastatic malignancy found in the gastrointestinal tract (GI). To the best of our knowledge, previous studies of melanoma in the gastrointestinal tract are all clinical case reports. In this work, a 1H NMR-based metabolomics approach is used to investigate the metabolite profile differences of stomach tissue extracts of metastatic B16-F10 melanoma in C57BL/6J mice and to search for specific metabolite biomarker candidates. Principal Component Analysis (PCA), an unsupervised multivariate data analysis method, is used to detect possible outliers, while Orthogonal Projection to Latent Structure (OPLS), a supervised multivariate data analysis method, is employed to evaluate important metabolites responsible for discriminating the control and the melanoma groups. Both PCA and OPLS results reveal that the melanoma group can be well separated from its control group. Among the 50 identified metabolites, it is found that the concentrations of 19 metabolites are statistically and significantly changed, with the levels of O-phosphocholine and hypoxanthine down-regulated while the levels of isoleucine, leucine, valine, isobutyrate, threonine, cadaverine, alanine, glutamate, glutamine, methionine, citrate, asparagine, tryptophan, glycine, serine, uracil, and formate are up-regulated in the melanoma group. These significantly changed metabolites are associated with multiple biological pathways and may be potential biomarkers for metastatic melanoma in the stomach.
Reese, H.M.; Lillesand, T.M.; Nagel, D.E.; Stewart, J.S.; Goldmann, R.A.; Simmons, T.E.; Chipman, J.W.; Tessar, P.A.
2002-01-01
Landsat Thematic Mapper (TM) data were the basis for production of a statewide land cover data set for Wisconsin, undertaken in partnership with the U.S. Geological Survey's (USGS) Gap Analysis Program (GAP). The data set contained seven classes comparable to Anderson Level I and 24 classes comparable to Anderson Level II/III. Twelve scenes of dual-date TM data were processed with methods that included principal components analysis, stratification into spectrally consistent units, separate classification of upland, wetland, and urban areas, and a hybrid supervised/unsupervised classification called "guided clustering." The final data had overall accuracies of 94% for Anderson Level I upland classes, 77% for Level II/III upland classes, and 84% for Level II/III wetland classes. Classification accuracies for deciduous and coniferous forest were 95% and 93%, respectively, and forest species' overall accuracies ranged from 70% to 84%. Limited availability of acceptable imagery necessitated use of an early May date in a majority of scene pairs, perhaps contributing to lower accuracy for upland deciduous forest species. The mixed deciduous/coniferous forest class had the lowest accuracy, most likely due to the difficulty of distinctly classifying an inherently mixed class. Mixed forest signatures containing oak were often confused with pure oak. Guided clustering was seen as an efficient classification method, especially at the tree species level, although its success relied in part on image dates, accurate ground truth, and some analyst intervention. © 2002 Elsevier Science Inc. All rights reserved.
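One common reading of "guided clustering" is: cluster the training pixels within each labeled class (so a spectrally multi-modal class like "deciduous forest" gets several spectral sub-signatures), then classify every pixel by its nearest cluster centroid, mapping each cluster back to its parent class. The sketch below implements that reading on invented 2-band data; the exact GAP/Wisconsin implementation may differ, and the class means, cluster counts, and helper `kmeans` are assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 2-band training data for two cover classes; class 0 is
# spectrally bimodal (e.g. a forest class containing two species signatures).
def make_class(modes, n_per):
    return np.vstack([rng.normal(m, 0.3, (n_per, 2)) for m in modes])

train = {0: make_class([(1, 1), (2, 3)], 50),      # class 0, two sub-signatures
         1: make_class([(4, 1)], 100)}             # class 1, one signature

def kmeans(X, k, iters=25):
    """Tiny Lloyd's algorithm; keeps the old center if a cluster empties."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

# "Guided clustering": cluster within each class, remember the class per cluster
centroids, classes = [], []
for cls, X in train.items():
    for c in kmeans(X, k=2 if cls == 0 else 1):
        centroids.append(c)
        classes.append(cls)
centroids, classes = np.array(centroids), np.array(classes)

# Classify new pixels by nearest guided-cluster centroid
pixels = np.array([[1.1, 0.9], [2.1, 2.9], [3.9, 1.2]])
pred = classes[np.linalg.norm(pixels[:, None] - centroids[None], axis=2).argmin(axis=1)]
print(pred)
```

The within-class clustering is what makes the method efficient at the species level: each sub-signature gets its own centroid, but the analyst only labels whole classes.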
Plasma metabolomic study in Chinese patients with wet age-related macular degeneration.
Luo, Dan; Deng, Tingting; Yuan, Wei; Deng, Hui; Jin, Ming
2017-09-06
Age-related macular degeneration (AMD) is a leading disease associated with blindness. It has a high incidence and complex pathogenesis. We aimed to study the metabolomic characteristics in Chinese patients with wet AMD by analyzing the morning plasma of 20 healthy controls and 20 wet AMD patients for metabolic differences. We used ultra-high-pressure liquid chromatography and quadrupole time-of-flight mass spectrometry for this analysis. The relationship of these differences with AMD pathophysiology was also assessed. The data were normalized using Pareto scaling, and valid data were then handled using multivariate data analysis with MetaboAnalyst software, including unsupervised principal component analysis and supervised partial least squares-discriminant analysis, with the aim of identifying significant metabolites. Hierarchical clustering was conducted to identify metabolites that differed between the two groups. Significant metabolites were then identified using the established database, and features were mapped on the Kyoto Encyclopedia of Genes and Genomes. A total of 5443 ion peaks were detected, all of them attributable to the same 10 metabolites. These included some amino acids, isomaltose, hydrocortisone, and biliverdin. The heights of these peaks differed significantly between the two groups. The biosynthesis-of-amino-acids pathways also differed profoundly between patients with wet AMD and controls. These findings suggested that metabolic profiles and pathways differ between wet AMD patients and controls and may provide promising new targets for AMD-directed therapeutics and diagnostics.
ROI-Based On-Board Compression for Hyperspectral Remote Sensing Images on GPU.
Giordano, Rossella; Guccione, Pietro
2017-05-19
In recent years, hyperspectral sensors for Earth remote sensing have become very popular. Such systems are able to provide the user with images having both spectral and spatial information. The current hyperspectral spaceborne sensors are able to capture large areas with increased spatial and spectral resolution. For this reason, the volume of acquired data needs to be reduced on board in order to avoid a low orbital duty cycle due to limited storage space. Recently, the literature has focused on efficient ways of performing on-board data compression. This topic is a challenging task due to the difficult environment (outer space) and the limited time, power and computing resources. Often, the hardware properties of Graphic Processing Units (GPU) have been adopted to reduce the processing time using parallel computing. The current work proposes a framework for on-board operation on a GPU, using NVIDIA's CUDA (Compute Unified Device Architecture) architecture. The algorithm aims at performing on-board compression using a target-related strategy. In detail, the main operations are: the automatic recognition of land cover types or detection of events in near real time in regions of interest (a user-related choice) with an unsupervised classifier; the compression of specific regions with space-variant bit rates, including Principal Component Analysis (PCA), wavelet and arithmetic coding; and data volume management to the Ground Station. Experiments are provided using a real dataset taken from an AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) airborne sensor in a harbor area.
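The spectral-decorrelation stage of such a pipeline can be sketched in numpy: project the spectra onto their leading principal components, quantize the scores (these would then feed the wavelet/arithmetic-coding stages, omitted here), and invert the projection on decompression. The cube below is a simulated three-endmember mixture; its size, the noise level, and the choice of k and 8-bit uniform quantization are assumptions of this sketch, not the paper's GPU implementation:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical hyperspectral cube: 32x32 pixels, 60 correlated bands
# generated as mixtures of three spectral signatures plus sensor noise.
h, w, bands = 32, 32, 60
endmembers = rng.uniform(0, 1, (3, bands))
abundances = rng.dirichlet(np.ones(3), h * w)
cube = abundances @ endmembers + rng.normal(0, 0.01, (h * w, bands))

# Compression: PCA spectral decorrelation, keep k components, 8-bit quantization
mean = cube.mean(axis=0)
U, s, Vt = np.linalg.svd(cube - mean, full_matrices=False)
k = 3
scores = (cube - mean) @ Vt[:k].T
lo, hi = scores.min(axis=0), scores.max(axis=0)
q = np.round((scores - lo) / (hi - lo) * 255).astype(np.uint8)  # to entropy coder

# Decompression: dequantize and invert the PCA projection
scores_hat = q / 255.0 * (hi - lo) + lo
cube_hat = scores_hat @ Vt[:k] + mean

rmse = np.sqrt(np.mean((cube - cube_hat) ** 2))
ratio = bands / k                     # per-pixel sample-count reduction
print(round(rmse, 4), round(ratio, 1))
```

A space-variant bit rate, as in the paper's ROI strategy, amounts to choosing k (and the quantizer depth) per region according to the classifier's output.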
El Senousy, Amira S; Farag, Mohamed A; Al-Mahdy, Dalia A; Wessjohann, Ludger A
2014-12-01
The metabolomic differences in phenolics from leaves of 3 artichoke cultivars (Cynara scolymus), American Green Globe, French Hyrious, and Egyptian Baladi, collected at different developmental stages, were assessed using UHPLC-MS coupled to chemometrics. Ontogenic changes were considered, as leaves were collected at four different time intervals and positions (top and basal) during artichoke development. Unsupervised principal component analysis (PCA) and supervised orthogonal projection to latent structures-discriminant analysis (O2PLS-DA) were used for the comparison and classification of samples harvested from different cultivars at different time points and positions. A clear separation among the three investigated cultivars was revealed, with the American Green Globe samples found most enriched in caffeic acid conjugates and flavonoids vs. other cultivars. Furthermore, these metabolites also showed a marked effect on the discrimination between leaf samples from cultivars harvested at different positions, regardless of the plant age. Absolute metabolite quantifications further confirmed that discrimination was mostly influenced by phenolic compounds, namely caffeoylquinic acids and flavonoids. This study demonstrates an effect of artichoke leaf position, regardless of plant age, on its secondary metabolite composition. To the best of our knowledge, this is the first report of compositional differences among artichoke leaves based on their positions via a metabolomic approach, suggesting that top-positioned artichoke leaves present a better source of caffeoylquinic acids compared to basal ones. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kaniu, M. I.; Angeyo, K. H.; Darby, I. G.
2018-05-01
Characterized by a variety of rock formations, namely alkaline, igneous and sedimentary that contain significant deposits of monazite and pyrochlore ores, the south coastal region of Kenya may be regarded as highly heterogeneous with regard to its geochemistry, mineralogy as well as geological morphology. The region is one of the several alkaline carbonatite complexes of Kenya that are associated with high natural background radiation and therefore radioactivity anomaly. However, this high background radiation (HBR) anomaly has hardly been systematically assessed and delineated with regard to the spatial, geological, geochemical as well as anthropogenic variability and co-dependencies. We conducted wide-ranging in-situ gamma-ray spectrometric measurements in this area. The goal of the study was to assess the radiation exposure as well as determine the underlying natural radioactivity levels in the region. In this paper we report the occurrence, exploratory analysis and modeling to assess the multivariate geo-dependence and spatial variability of the radioactivity and associated radiation exposure. Unsupervised principal component analysis and ternary plots were utilized in the study. It was observed that areas which exhibit HBR anomalies are located along the south coast paved road and in the Mrima-Kiruku complex. These areas showed a trend towards enhanced levels of 232Th and 238U and low 40K. The spatial variability of the radioactivity anomaly was found to be mainly constrained by anthropogenic activities, underlying geology and geochemical processes in the terrestrial environment.
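The ternary plots used in such surveys place each measurement point in barycentric coordinates of its 40K, 238U, and 232Th concentrations; a minimal sketch with made-up activity concentrations (not survey values):

```python
import numpy as np

# hypothetical activity concentrations (Bq/kg), for illustration only:
#                  40K     238U    232Th
conc = np.array([[850.0,   30.0,   45.0],    # typical soil
                 [120.0,  900.0, 1500.0],    # HBR anomaly: Th/U rich, K poor
                 [600.0,   60.0,   80.0]])
ternary = conc / conc.sum(axis=1, keepdims=True)   # barycentric coordinates
print(ternary.round(2))   # each row sums to 1 and plots as one ternary point
```

Points crowding toward the Th/U edge of the triangle, as in the second row, correspond to the enhanced-232Th/238U, low-40K signature the study reports for the anomalous areas.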
Application of global metabolomic profiling of synovial fluid for osteoarthritis biomarkers.
Carlson, Alyssa K; Rawle, Rachel A; Adams, Erik; Greenwood, Mark C; Bothner, Brian; June, Ronald K
2018-05-05
Osteoarthritis affects over 250 million individuals worldwide. Currently, there are no options for early diagnosis of osteoarthritis, demonstrating the need for biomarker discovery. To find biomarkers of osteoarthritis in human synovial fluid, we used high performance liquid-chromatography mass spectrometry for global metabolomic profiling. Metabolites were extracted from human osteoarthritic (n = 5), rheumatoid arthritic (n = 3), and healthy (n = 5) synovial fluid, and a total of 1233 metabolites were detected. Principal components analysis clearly distinguished the metabolomic profiles of diseased from healthy synovial fluid. Synovial fluid from rheumatoid arthritis patients contained expected metabolites consistent with the inflammatory nature of the disease. Similarly, unsupervised clustering analysis found that each disease state was associated with distinct metabolomic profiles and clusters of co-regulated metabolites. For osteoarthritis, co-regulated metabolites that were upregulated compared to healthy synovial fluid mapped to known disease processes including chondroitin sulfate degradation, arginine and proline metabolism, and nitric oxide metabolism. We utilized receiver operating characteristic analysis to determine the diagnostic value of each metabolite and identified 35 metabolites as potential biomarkers of osteoarthritis, with an area under the receiver operating characteristic curve >0.9. These metabolites included phosphatidylcholine, lysophosphatidylcholine, ceramides, myristate derivatives, and carnitine derivatives. This pilot study provides strong justification for a larger cohort-based study of human osteoarthritic synovial fluid using global metabolomics. The significance of these data is the demonstration that metabolomic profiling of synovial fluid can identify relevant biomarkers of joint disease. Copyright © 2018 Elsevier Inc. All rights reserved.
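For a single candidate marker, the receiver operating characteristic analysis described above reduces to the Mann-Whitney form of the AUC. A sketch with toy intensities (not the study's data; the function name is illustrative):

```python
import numpy as np

def auc_score(healthy, diseased):
    """Area under the ROC curve via the Mann-Whitney statistic: the chance
    that a randomly chosen diseased sample scores above a healthy one."""
    h = np.asarray(healthy)[:, None]
    d = np.asarray(diseased)[None, :]
    return ((d > h).sum() + 0.5 * (d == h).sum()) / (h.size * d.size)

# toy intensities for one hypothetical metabolite
healthy = [1.0, 1.2, 0.9, 1.1, 1.0]
oa      = [2.0, 1.9, 2.4, 2.2, 2.1]
print(auc_score(healthy, oa))   # 1.0: perfect separation for this toy marker
```

Applying this per metabolite and thresholding at AUC > 0.9 mirrors how the 35 candidate biomarkers were screened.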
Hu, M; Wang, Xiliang
2014-12-05
Melanoma is a malignant tumor of melanocytes with a high capability of invasion and rapid metastasis to other organs. Malignant melanoma is the most common metastatic malignancy found in the gastrointestinal (GI) tract. To the best of our knowledge, previous studies of melanoma in the gastrointestinal tract are all clinical case reports. In this work, a 1H NMR-based metabolomics approach is used to investigate differences in the metabolite profiles of stomach tissue extracts of metastatic B16-F10 melanoma in C57BL/6J mice and to search for specific metabolite biomarker candidates. Principal Component Analysis (PCA), an unsupervised multivariate data analysis method, is used to detect possible outliers, while Orthogonal Projection to Latent Structures (OPLS), a supervised multivariate data analysis method, is employed to evaluate important metabolites responsible for discriminating the control and melanoma groups. Both PCA and OPLS results reveal that the melanoma group can be well separated from its control group. Among the 50 identified metabolites, the concentrations of 19 are statistically significantly changed: the levels of O-phosphocholine and hypoxanthine are down-regulated, while the levels of isoleucine, leucine, valine, isobutyrate, threonine, cadaverine, alanine, glutamate, glutamine, methionine, citrate, asparagine, tryptophan, glycine, serine, uracil, and formate are up-regulated in the melanoma group. These significantly changed metabolites are associated with multiple biological pathways and may be potential biomarkers for metastatic melanoma in the stomach.
Stress transgenerationally programs metabolic pathways linked to altered mental health.
Kiss, Douglas; Ambeskovic, Mirela; Montina, Tony; Metz, Gerlinde A S
2016-12-01
Stress is among the primary causes of mental health disorders, which are the most common reason for disability worldwide. The ubiquity of these disorders, and the costs associated with them, lends a sense of urgency to efforts to improve prediction and prevention. Downstream metabolic changes are highly feasible and accessible indicators of the pathophysiological processes underlying mental health disorders. Here, we show that remote and cumulative ancestral stress programs central metabolic pathways linked to mental health disorders. The studies used a rat model consisting of a multigenerational stress lineage (the great-great-grandmother and each subsequent generation experienced stress during pregnancy) and a transgenerational stress lineage (only the great-great-grandmother was stressed during pregnancy). Urine samples were collected from adult male F4 offspring and analyzed using 1H NMR spectroscopy. The results of variable importance analysis based on random variable combination were used for unsupervised multivariate principal component analysis and hierarchical clustering analysis, as well as metabolite set enrichment analysis (MSEA) and pathway analysis. We identified distinct metabolic profiles associated with the multigenerational and transgenerational stress phenotypes, with consistent upregulation of hippurate and downregulation of tyrosine, threonine, and histamine. MSEA and pathway analysis showed that these metabolites are involved in catecholamine biosynthesis, immune responses, and microbial host interactions. The identification of metabolic signatures linked to ancestral programming assists in the discovery of gene targets for future studies of epigenetic regulation in pathogenic processes. Ultimately, this research can lead to biomarker discovery for better prediction and prevention of mental health disorders.
The Role of the Immune Response in the Pathogenesis of Thyroid Eye Disease: A Reassessment
Rosenbaum, James T.; Choi, Dongseok; Wong, Amanda; Wilson, David J.; Grossniklaus, Hans E.; Harrington, Christina A.; Dailey, Roger A.; Ng, John D.; Steele, Eric A.; Czyz, Craig N.; Foster, Jill A.; Tse, David; Alabiad, Chris; Dubovy, Sander; Parekh, Prashant K.; Harris, Gerald J.; Kazim, Michael; Patel, Payal J.; White, Valerie A.; Dolman, Peter J.; Edward, Deepak P.; Alkatan, Hind M.; al Hussain, Hailah; Selva, Dinesh; Yeatts, R. Patrick; Korn, Bobby S.; Kikkawa, Don O.; Stauffer, Patrick; Planck, Stephen R.
2015-01-01
Background Although thyroid eye disease is a common complication of Graves’ disease, the pathogenesis of the orbital disease is poorly understood. Most authorities implicate the immune response as an important causal factor. We sought to clarify pathogenesis by using gene expression microarray. Methods An international consortium of ocular pathologists and orbital surgeons contributed formalin fixed orbital biopsies. RNA was extracted from orbital tissue from 20 healthy controls, 25 patients with thyroid eye disease (TED), 25 patients with nonspecific orbital inflammation (NSOI), 7 patients with sarcoidosis and 6 patients with granulomatosis with polyangiitis (GPA). Tissue was divided into a discovery set and a validation set. Gene expression was quantified using Affymetrix U133 Plus 2.0 microarrays which include 54,000 probe sets. Results Principal component analysis showed that gene expression from tissue from patients with TED more closely resembled gene expression from healthy control tissue in comparison to gene expression characteristic of sarcoidosis, NSOI, or granulomatosis with polyangiitis. Unsupervised cluster dendrograms further indicated the similarity between TED and healthy controls. Heat maps based on gene expression for cytokines, chemokines, or their receptors showed that these inflammatory markers were associated with NSOI, sarcoidosis, or GPA much more frequently than with TED. Conclusion This is the first study to compare gene expression in TED to gene expression associated with other causes of exophthalmos. The juxtaposition shows that inflammatory markers are far less characteristic of TED relative to other orbital inflammatory diseases. PMID:26371757
How multi segmental patterns deviate in spastic diplegia from typical developed.
Zago, Matteo; Sforza, Chiarella; Bona, Alessia; Cimolin, Veronica; Costici, Pier Francesco; Condoluci, Claudia; Galli, Manuela
2017-10-01
The relationship between gait features and coordination in children with Cerebral Palsy is not sufficiently analyzed yet. Principal Component Analysis can help in understanding motion patterns decomposing movement into its fundamental components (Principal Movements). This study aims at quantitatively characterizing the functional connections between multi-joint gait patterns in Cerebral Palsy. 65 children with spastic diplegia aged 10.6 (SD 3.7) years participated in standardized gait analysis trials; 31 typically developing adolescents aged 13.6 (4.4) years were also tested. To determine if posture affects gait patterns, patients were split into Crouch and knee Hyperextension group according to knee flexion angle at standing. 3D coordinates of hips, knees, ankles, metatarsal joints, pelvis and shoulders were submitted to Principal Component Analysis. Four Principal Movements accounted for 99% of global variance; components 1-3 explained major sagittal patterns, components 4-5 referred to movements on frontal plane and component 6 to additional movement refinements. Dimensionality was higher in patients than in controls (p<0.01), and the Crouch group significantly differed from controls in the application of components 1 and 4-6 (p<0.05), while the knee Hyperextension group in components 1-2 and 5 (p<0.05). Compensatory strategies of children with Cerebral Palsy (interactions between main and secondary movement patterns), were objectively determined. Principal Movements can reduce the effort in interpreting gait reports, providing an immediate and quantitative picture of the connections between movement components. Copyright © 2017 Elsevier Ltd. All rights reserved.
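Decomposing movement into principal components ("Principal Movements") as described can be sketched on synthetic marker trajectories; the landmark count, the three underlying movement patterns, and the 99% variance threshold below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic gait data: 200 frames x 36 coordinates (12 landmarks in 3D),
# driven by 3 underlying "movements" plus a little measurement noise
t = np.linspace(0, 4 * np.pi, 200)
basis = np.c_[np.sin(t), np.cos(t), np.sin(2 * t)]
frames = basis @ rng.normal(size=(3, 36)) + 0.01 * rng.normal(size=(200, 36))

# PCA of the centered coordinate matrix; each right singular vector
# is one Principal Movement, each singular value its contribution
Xc = frames - frames.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var = s**2 / (s**2).sum()
k = int(np.searchsorted(np.cumsum(var), 0.99)) + 1  # components for 99% var.
print(k)   # a few Principal Movements capture 99% of the variance
```

Comparing how many components each group needs for a fixed variance threshold is one way to quantify the higher movement dimensionality reported for the patients.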
NASA Technical Reports Server (NTRS)
Williams, D. L.; Borden, F. Y.
1977-01-01
Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.
Constrained Principal Component Analysis: Various Applications.
ERIC Educational Resources Information Center
Hunter, Michael; Takane, Yoshio
2002-01-01
Provides example applications of constrained principal component analysis (CPCA) that illustrate the method on a variety of contexts common to psychological research. Two new analyses, decompositions into finer components and fitting higher order structures, are presented, followed by an illustration of CPCA on contingency tables and the CPCA of…
NASA Technical Reports Server (NTRS)
Brumfield, J. O.; Bloemer, H. H. L.; Campbell, W. J.
1981-01-01
Two unsupervised classification procedures for analyzing Landsat data used to monitor land reclamation in a surface mining area in east central Ohio are compared for agreement with data collected from the corresponding locations on the ground. One procedure is based on a traditional unsupervised-clustering/maximum-likelihood algorithm sequence that assumes spectral groupings in the Landsat data in n-dimensional space; the other is based on a nontraditional unsupervised-clustering/canonical-transformation/clustering algorithm sequence that not only assumes spectral groupings in n-dimensional space but also includes an additional feature-extraction technique. It is found that the nontraditional procedure provides an appreciable improvement in spectral groupings and apparently increases the level of accuracy in the classification of land cover categories.
NASA Astrophysics Data System (ADS)
Ginanjar, Irlandia; Pasaribu, Udjianna S.; Indratno, Sapto W.
2017-03-01
This article presents an application of the principal component analysis (PCA) biplot to data mining. It aims to simplify and objectify methods for clustering objects in a PCA biplot. The novelty of this paper is a measure that can be used to objectify object clustering in a PCA biplot. The orthonormal eigenvectors are the coefficients of the principal component model and represent the association between principal components and the initial variables. This association provides valid grounds for clustering objects based on their principal-axis values: if m principal axes are used in the PCA, the objects can be classified into 2^m clusters. Inter-city buses are clustered based on maintenance-cost data using a two-principal-axis PCA biplot. The buses fall into four groups. The first group comprises buses with high maintenance costs, especially for lube and brake canvas. The second group comprises buses with high maintenance costs, especially for tires and filters. The third group comprises buses with low maintenance costs, especially for lube and brake canvas. The fourth group comprises buses with low maintenance costs, especially for tires and filters.
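The sign-based clustering the article proposes (m principal axes giving 2^m clusters) can be sketched as follows, with made-up maintenance-cost data standing in for the bus dataset:

```python
import numpy as np

rng = np.random.default_rng(3)
# made-up maintenance-cost data: 30 buses x 6 cost items
X = rng.gamma(shape=2.0, scale=50.0, size=(30, 6))
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]                # m = 2 principal axes

# assign each bus to one of 2**m = 4 clusters by the signs of its scores
labels = (scores[:, 0] > 0).astype(int) * 2 + (scores[:, 1] > 0).astype(int)
print(sorted(set(labels.tolist())))      # up to four quadrant clusters
```

Each cluster is a quadrant of the biplot, so its interpretation follows directly from which variables load positively or negatively on each axis.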
Kakio, Tomoko; Nagase, Hitomi; Takaoka, Takashi; Yoshida, Naoko; Hirakawa, Junichi; Macha, Susan; Hiroshima, Takashi; Ikeda, Yukihiro; Tsuboi, Hirohito; Kimura, Kazuko
2018-06-01
The World Health Organization has warned that substandard and falsified medical products (SFs) can harm patients and fail to treat the diseases for which they were intended; they affect every region of the world, leading to loss of confidence in medicines, health-care providers, and health systems. The development of analytical procedures to detect SFs is therefore extremely important. In this study, we investigated the quality of pharmaceutical tablets containing the antihypertensive candesartan cilexetil, collected in China, Indonesia, Japan, and Myanmar, using the Japanese pharmacopeial analytical procedures for quality control, together with principal component analysis (PCA) of Raman spectra obtained with a handheld Raman spectrometer. Some samples showed delayed dissolution and failed to meet the pharmacopeial specification, whereas others failed the assay test; these products appeared to be substandard. Principal component analysis showed that all Raman spectra could be explained in terms of two components: the amount of the active pharmaceutical ingredient and the kind of excipients. The PCA score plot indicated that one substandard product and the falsified tablets have similar principal components in their Raman spectra, in contrast to authentic products. The locations of samples within the PCA score plot varied according to source country, suggesting that manufacturers in different countries use different excipients. Our results indicate that the handheld Raman device will be useful for detection of SFs in the field, and that principal component analysis of the Raman data clarifies the differences in chemical properties between good-quality products and the SFs that circulate in the Asian market.
Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.
Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko
2017-12-01
Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k + 1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
Meyer, Karin; Kirkpatrick, Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
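The parameter-count reduction quoted above, from k(k + 1)/2 to m(2k - m + 1)/2, is easy to check numerically. The choice m = 3 below is an assumed example for the eight-trait application, not a value from the paper:

```python
def n_params_full(k):
    """Free parameters in an unstructured k x k covariance matrix."""
    return k * (k + 1) // 2

def n_params_reduced(k, m):
    """Parameters when only m genetic principal components are fitted:
    m eigenvalues plus the free elements of m orthonormal eigenvectors."""
    return m * (2 * k - m + 1) // 2

# eight traits, as in the beef-cattle application; m = 3 is illustrative
print(n_params_full(8), n_params_reduced(8, 3))   # 36 21
```

Note that m = k recovers the full count, since k(2k - k + 1)/2 = k(k + 1)/2, so the reduced parameterisation nests the unstructured model.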
Morin, R.H.
1997-01-01
Returns from drilling in unconsolidated cobble and sand aquifers commonly do not identify lithologic changes that may be meaningful for hydrogeologic investigations. Vertical resolution of saturated, Quaternary, coarse braided-stream deposits is significantly improved by interpreting natural gamma (G), epithermal neutron (N), and electromagnetically induced resistivity (IR) logs obtained from wells at the Capital Station site in Boise, Idaho. Interpretation of these geophysical logs is simplified because these sediments are derived largely from high-gamma-producing source rocks (granitics of the Boise River drainage), contain few clays, and have undergone little diagenesis. Analysis of G, N, and IR data from these deposits with principal components analysis provides an objective means to determine whether units can be recognized within the braided-stream deposits. In particular, performing principal components analysis on G, N, and IR data from eight wells at Capital Station (1) allows the variable system dimensionality to be reduced from three to two by selecting the two eigenvectors with the greatest variance as axes for principal component scatterplots, (2) generates principal components with interpretable physical meanings, (3) distinguishes sand from cobble-dominated units, and (4) provides a means to distinguish between cobble-dominated units.
NASA Astrophysics Data System (ADS)
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida
2015-05-01
Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Given the need for prompt and accurate diagnosis of malaria, the current study proposes an unsupervised pixel-segmentation approach based on clustering algorithms to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin-blood-smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image to segment the infected cells from the blood-cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide-image segmentation. After that, a median filter is applied to smooth the image and to remove unwanted regions such as small background pixels. Finally, a seeded region-growing area-extraction algorithm is applied to remove large unwanted regions that still appear in the image because they are too large to be cleaned by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms is analyzed qualitatively and quantitatively by comparison with the MKM and FCM clustering algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy compared with the segmentation results provided by the MKM and FCM algorithms.
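A plain k-means on image intensities illustrates the clustering step; this is a simplified stand-in for the cascaded MKM/FCM pipeline above, run on a synthetic three-level image rather than real smear data.

```python
import numpy as np

def kmeans_1d(intensity, k=3, iters=20):
    """Plain k-means on a 1-D intensity image: a simplified stand-in for
    the cascaded moving k-means / fuzzy c-means pipeline described above."""
    x = intensity.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), k)       # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):                  # guard empty clusters
                centers[j] = x[labels == j].mean()
    return labels.reshape(intensity.shape), np.sort(centers)

rng = np.random.default_rng(4)
# synthetic blood-smear image: parasite (20), cell (120), background (230)
img = rng.choice([20.0, 120.0, 230.0], size=(64, 64), p=[0.1, 0.3, 0.6])
labels, centers = kmeans_1d(img)
print(labels.shape, centers)   # (64, 64) [ 20. 120. 230.]
```

The darkest cluster would then be passed to the median-filter and region-growing cleanup stages the abstract describes.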
One-Channel Surface Electromyography Decomposition for Muscle Force Estimation.
Sun, Wentao; Zhu, Jinying; Jiang, Yinlai; Yokoi, Hiroshi; Huang, Qiang
2018-01-01
Estimating muscle force by surface electromyography (sEMG) is a non-invasive and flexible way to diagnose biomechanical diseases and control assistive devices such as prosthetic hands. To estimate muscle force using sEMG, a supervised method is commonly adopted. This requires simultaneous recording of sEMG signals and muscle force measured by additional devices to tune the variables involved. However, recording the muscle force of the lost limb of an amputee is challenging, and the supervised method has limitations in this regard. Although the unsupervised method does not require muscle force recording, it suffers from low accuracy due to a lack of reference data. To achieve accurate and easy estimation of muscle force by the unsupervised method, we propose a decomposition of one-channel sEMG signals into constituent motor unit action potentials (MUAPs) in two steps: (1) learning an orthogonal basis of sEMG signals through reconstruction independent component analysis; (2) extracting spike-like MUAPs from the basis vectors. Nine healthy subjects were recruited to evaluate the accuracy of the proposed approach in estimating muscle force of the biceps brachii. The results demonstrated that the proposed approach based on decomposed MUAPs explains more than 80% of the muscle force variability recorded at an arbitrary force level, while the conventional amplitude-based approach explains only 62.3% of this variability. With the proposed approach, we were also able to achieve grip force control of a prosthetic hand, which is one of the most important clinical applications of the unsupervised method. Experiments on two trans-radial amputees indicated that the proposed approach improves the performance of the prosthetic hand in grasping everyday objects.
AHaH computing-from metastable switches to attractors to machine learning.
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well-known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing, where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high-level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation, and combinatorial optimization of procedures: all key capabilities of biological nervous systems and modern machine learning algorithms with real-world application.
Analysis and Evaluation of the Characteristic Taste Components in Portobello Mushroom.
Wang, Jinbin; Li, Wen; Li, Zhengpeng; Wu, Wenhui; Tang, Xueming
2018-05-10
To identify the characteristic taste components of the common cultivated mushroom (brown; Portobello), Agaricus bisporus, taste components in the stipe and pileus of Portobello mushrooms harvested at different growth stages were extracted and identified, and principal component analysis (PCA) and taste active value (TAV) were used to reveal the characteristic taste components during each of the growth stages of Portobello mushroom. In the stipe and pileus, 20 and 14 different principal taste components were identified, respectively, and these were considered the principal taste components of Portobello mushroom fruit bodies, comprising most amino acids and 5'-nucleotides. Some taste components found at high levels, such as lactic acid and citric acid, were not identified as principal taste components of Portobello mushroom through PCA; however, given their high content, Portobello mushroom could still be used as a source of organic acids. The PCA and TAV results revealed that 5'-GMP, glutamic acid, malic acid, alanine, proline, leucine, and aspartic acid were the characteristic taste components of Portobello mushroom fruit bodies. Portobello mushroom was also found to be rich in protein and amino acids, so it might also be useful in the formulation of nutraceuticals and functional food. The results in this article could provide a theoretical basis for understanding and regulating the synthesis of the characteristic flavor components of Portobello mushroom. © 2018 Institute of Food Technologists®.
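Taste active value (TAV) is conventionally the ratio of a component's concentration to its taste threshold, with TAV > 1 indicating a taste-active component. A sketch with hypothetical numbers, not measurements from this study:

```python
def taste_active_value(concentration, threshold):
    """TAV = concentration / taste threshold (same units for both);
    components with TAV > 1 are generally considered taste-active."""
    return concentration / threshold

# hypothetical figures for illustration only (mg/g over mg/g)
glutamic_acid_tav = taste_active_value(2.5, 0.35)
print(glutamic_acid_tav > 1)   # True: would count as taste-active
```

Ranking components by TAV complements the PCA loadings: PCA finds the components that drive variation between samples, while TAV flags which of them are concentrated enough to be tasted at all.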
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Y.
2015-12-01
The results of numerical simulations applying principal component analysis to absorption spectra of the exhaled breath of patients with pulmonary diseases are presented. Various methods of experimental data preprocessing are analyzed.
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on eigenvalue and eigenvector analysis, principal component analysis aims to identify a subspace of principal components from a set of parameters that is sufficient to characterize the whole set. Interpreting the data as a cloud of points, geometrical transformations find the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (found by defining an appropriate criterion function and minimizing it). This method can be used to simplify the statistical analysis of questionnaires, because it helps select from a set of items only the most relevant ones, which cover the variation of the whole dataset. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, identified 7 principal components (main items), which significantly simplifies the subsequent statistical analysis.
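The item-reduction idea described above can be illustrated with a minimal Python sketch. The questionnaire data are synthetic stand-ins (200 respondents, 28 items generated from 7 latent factors), and the 90% threshold is an assumed cutoff, not one taken from the study:

```python
# Sketch of PCA-based item reduction on hypothetical questionnaire data:
# keep only the components needed to cover most of the variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 respondents x 28 items; items built from 7 latent factors plus noise,
# mimicking the 28-item questionnaire described in the abstract.
latent = rng.normal(size=(200, 7))
mixing = rng.normal(size=(7, 28))
answers = latent @ mixing + 0.3 * rng.normal(size=(200, 28))

pca = PCA().fit(answers)
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cumvar, 0.90)) + 1  # fewest components covering 90%
print(n_keep)  # far fewer than the original 28 items
```

The cumulative explained-variance curve is the usual tool for deciding how many components suffice; the chosen cutoff (here 90%) is a judgment call.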
ERIC Educational Resources Information Center
Mugrage, Beverly; And Others
Three ridge regression solutions are compared with ordinary least squares regression and with principal components regression using all components. Ridge regression, particularly the Lawless-Wang solution, outperformed ordinary least squares regression and the principal components solution on the criteria of coefficient stability and closeness…
A Note on McDonald's Generalization of Principal Components Analysis
ERIC Educational Resources Information Center
Shine, Lester C., II
1972-01-01
It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables…
Supervised versus unsupervised categorization: two sides of the same coin?
Pothos, Emmanuel M; Edwards, Darren J; Perlman, Amotz
2011-09-01
Supervised and unsupervised categorization have been studied in separate research traditions. A handful of studies have attempted to explore a possible convergence between the two. The present research builds on these studies by comparing the unsupervised categorization results of Pothos et al. (2011; Pothos et al., 2008) with the results from two procedures of supervised categorization. In two experiments, we tested 375 participants with nine different stimulus sets and examined the relation between ease of learning of a classification, memory for a classification, and spontaneous preference for a classification. After taking into account the role of the number of category labels (clusters) in supervised learning, we found the three variables to be closely associated with each other. Our results provide encouragement for researchers seeking unified theoretical explanations for supervised and unsupervised categorization, but raise a range of challenging theoretical questions.
Peterson, Leif E
2002-01-01
CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816
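The clustering options CLUSFAVOR offers (profile standardization, UPGMA joining, correlation distance) can be sketched with standard scientific Python tools. The "expression profiles" below are synthetic stand-ins, not microarray data:

```python
# Minimal sketch of the CLUSFAVOR-style pipeline: standardize expression
# profiles, then UPGMA (average-linkage) clustering with correlation distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# 30 "genes" x 8 "conditions": two groups with anti-correlated patterns
base = np.sin(np.linspace(0, np.pi, 8))
genes = np.vstack([base + 0.1 * rng.normal(size=8) for _ in range(15)] +
                  [-base + 0.1 * rng.normal(size=8) for _ in range(15)])
# Standardize each profile (mean 0, sd 1), as CLUSFAVOR does to input data
profiles = (genes - genes.mean(axis=1, keepdims=True)) / genes.std(axis=1, keepdims=True)

dist = pdist(profiles, metric="correlation")  # 1 - Pearson correlation
tree = linkage(dist, method="average")        # UPGMA joining
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # the two anti-correlated groups fall into separate clusters
```

Swapping `method` to `"single"` or `"complete"` gives the nearest-neighbor and furthest-neighbor joining methods the abstract also mentions.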
The Complexity of Human Walking: A Knee Osteoarthritis Study
Kotti, Margarita; Duffell, Lynsey D.; Faisal, Aldo A.; McGregor, Alison H.
2014-01-01
This study proposes a framework for deconstructing complex walking patterns into a simple principal component space and then checking whether the projection to this space is suitable for identifying departures from normality. We focus on knee osteoarthritis, the most common knee joint disease and the second leading cause of disability. Knee osteoarthritis affects over 250 million people worldwide. The motivation for projecting the highly dimensional movements to a lower-dimensional and simpler space is our belief that motor behaviour can be understood by identifying a simplicity via projection to a low principal component space, which may reflect the underlying mechanism. To study this, we recruited 180 subjects, 47 of whom reported that they had knee osteoarthritis. They were asked to walk several times along a walkway equipped with two force plates that capture their ground reaction forces along 3 axes, namely vertical, anterior-posterior, and medio-lateral, at 1000 Hz. Trials in which the subject did not cleanly strike the force plate were excluded, leaving 1–3 gait cycles per subject. To examine the complexity of human walking, we applied dimensionality reduction via Probabilistic Principal Component Analysis. The first principal component explains 34% of the variance in the data, whereas 8 or more principal components are needed to explain over 80% of the variance. This demonstrates the complexity of the underlying structure of the ground reaction forces. To examine whether our musculoskeletal system generates movements that are distinguishable between normal and pathological subjects in a low-dimensional principal component space, we applied a Bayes classifier. For the tested cross-validated, subject-independent experimental protocol, the classification accuracy equals 82.62%. Also, a novel complexity measure is proposed, which can be used as an objective index to facilitate clinical decision making.
This measure shows that knee osteoarthritis subjects exhibit more variability in the two-dimensional principal component space. PMID:25232949
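The pipeline described above, projection to a low-dimensional principal component space followed by a Bayes classifier, can be sketched as follows. The gait features and group difference are synthetic, and scikit-learn's Gaussian naive Bayes stands in for the authors' classifier:

```python
# Sketch: PCA projection of high-dimensional "gait" features, then a
# cross-validated Bayes classifier in the low-dimensional PC space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Synthetic stand-in: 100 "normal" and 40 "pathological" 30-dimensional
# gait feature vectors, with a modest group difference along a few axes.
normal = rng.normal(size=(100, 30))
patho = rng.normal(size=(40, 30)) + np.r_[np.full(5, 1.2), np.zeros(25)]
X = np.vstack([normal, patho])
y = np.r_[np.zeros(100), np.ones(40)]

# PCA is unsupervised, so for this sketch it is fitted on all samples at once
scores = PCA(n_components=8).fit_transform(X)
acc = cross_val_score(GaussianNB(), scores, y, cv=5).mean()
print(round(acc, 2))
```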
Principal Components Analysis of a JWST NIRSpec Detector Subsystem
NASA Technical Reports Server (NTRS)
Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting;
2013-01-01
We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ≈ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enable the planned instrument to meet its performance requirements.
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability, and acidity of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut, and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
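The modelling step described above, regressing an overall quality score on principal component scores (principal components regression), can be sketched with synthetic sensory data; the factor structure and scores are invented for illustration:

```python
# Sketch of principal components regression: model an overall quality score
# as a least-squares function of PCA scores of the attribute ratings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# 60 "products" rated on 10 correlated sensory attributes driven by 3 factors
factors = rng.normal(size=(60, 3))
attributes = factors @ rng.normal(size=(3, 10)) + 0.2 * rng.normal(size=(60, 10))
quality = factors @ np.array([1.0, 0.5, -0.8]) + 0.1 * rng.normal(size=60)

pca = PCA(n_components=6)
scores = pca.fit_transform(attributes)          # uncorrelated predictors
model = LinearRegression().fit(scores, quality)  # multiple least squares
print(round(model.score(scores, quality), 2))    # R^2 of the PCR fit
```

Because the PC scores are uncorrelated by construction, the least-squares fit avoids the collinearity that raw, correlated attribute ratings would introduce.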
Unsupervised automated high throughput phenotyping of RNAi time-lapse movies.
Failmezger, Henrik; Fröhlich, Holger; Tresch, Achim
2013-10-04
Gene perturbation experiments in combination with fluorescence time-lapse cell imaging are a powerful tool in reverse genetics. High content applications require tools for the automated processing of the large amounts of data. These tools include in general several image processing steps, the extraction of morphological descriptors, and the grouping of cells into phenotype classes according to their descriptors. This phenotyping can be applied in a supervised or an unsupervised manner. Unsupervised methods are suitable for the discovery of formerly unknown phenotypes, which are expected to occur in high-throughput RNAi time-lapse screens. We developed an unsupervised phenotyping approach based on Hidden Markov Models (HMMs) with multivariate Gaussian emissions for the detection of knockdown-specific phenotypes in RNAi time-lapse movies. The automated detection of abnormal cell morphologies allows us to assign a phenotypic fingerprint to each gene knockdown. By applying our method to the Mitocheck database, we show that a phenotypic fingerprint is indicative of a gene's function. Our fully unsupervised HMM-based phenotyping is able to automatically identify cell morphologies that are specific for a certain knockdown. Beyond the identification of genes whose knockdown affects cell morphology, phenotypic fingerprints can be used to find modules of functionally related genes.
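The core decoding step of an HMM-based phenotyping approach, recovering a most-likely sequence of hidden states from a per-cell morphology descriptor over time, can be sketched with a hand-rolled two-state Gaussian-emission Viterbi pass. Parameters and the "cell track" are illustrative, not fitted to Mitocheck data:

```python
# Viterbi decoding for a two-state HMM with Gaussian emissions: a 1-D
# morphology descriptor is decoded into a sequence of "phenotype" states.
import numpy as np
from scipy.stats import norm

means, sds = np.array([0.0, 3.0]), np.array([1.0, 1.0])  # per-state emissions
log_pi = np.log([0.5, 0.5])                              # initial state probs
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])                 # sticky transitions

obs = np.r_[np.zeros(10), np.full(10, 3.0)]              # one cell "track"
log_b = norm.logpdf(obs[:, None], means, sds)            # emission log-likelihoods

# Viterbi recursion: best log-probability of each state at each time step
delta = log_pi + log_b[0]
back = np.zeros((len(obs), 2), dtype=int)
for t in range(1, len(obs)):
    cand = delta[:, None] + log_A    # cand[i, j]: come from state i into j
    back[t] = cand.argmax(axis=0)
    delta = cand.max(axis=0) + log_b[t]

# Backtrack the most likely state sequence
states = np.zeros(len(obs), dtype=int)
states[-1] = delta.argmax()
for t in range(len(obs) - 2, -1, -1):
    states[t] = back[t + 1, states[t + 1]]
print(states)  # switches from state 0 to state 1 mid-track
```

In the paper's setting the emissions are multivariate Gaussians over many morphology descriptors and the parameters are learned, but the decoding logic is the same.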
Unsupervised learning on scientific ocean drilling datasets from the South China Sea
NASA Astrophysics Data System (ADS)
Tse, Kevin C.; Chiu, Hon-Chim; Tsang, Man-Yin; Li, Yiliang; Lam, Edmund Y.
2018-06-01
Unsupervised learning methods were applied to explore data patterns in multivariate geophysical datasets collected from ocean floor sediment core samples coming from scientific ocean drilling in the South China Sea. Compared to studies on similar datasets, but using supervised learning methods which are designed to make predictions based on sample training data, unsupervised learning methods require no a priori information and focus only on the input data. In this study, popular unsupervised learning methods including K-means, self-organizing maps, hierarchical clustering and random forest were coupled with different distance metrics to form exploratory data clusters. The resulting data clusters were externally validated with lithologic units and geologic time scales assigned to the datasets by conventional methods. Compact and connected data clusters displayed varying degrees of correspondence with existing classification by lithologic units and geologic time scales. K-means and self-organizing maps were observed to perform better with lithologic units while random forest corresponded best with geologic time scales. This study sets a pioneering example of how unsupervised machine learning methods can be used as an automatic processing tool for the increasingly high volume of scientific ocean drilling data.
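The external-validation step described above, comparing unsupervised clusters against an existing classification, can be sketched with K-means and the adjusted Rand index. The three-unit dataset is a synthetic stand-in for the core-log measurements:

```python
# Sketch: cluster multivariate measurements with K-means, then validate
# externally against a known classification via the adjusted Rand index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Three "lithologic units", each with a distinct geophysical signature
units = np.repeat([0, 1, 2], 50)
centers = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 4]], dtype=float)
X = centers[units] + rng.normal(size=(150, 3))

X_std = StandardScaler().fit_transform(X)  # put features on comparable scales
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
ari = adjusted_rand_score(units, labels)   # 1.0 = perfect agreement
print(round(ari, 2))
```

The same external-validation call works unchanged for the labels produced by self-organizing maps or hierarchical clustering, which is what makes it useful for comparing methods.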
An Efficient Optimization Method for Solving Unsupervised Data Classification Problems.
Shabanzadeh, Parvaneh; Yusof, Rubiyah
2015-01-01
Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity; it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification remains active. In this paper, a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, which is inspired by the natural biogeographic distribution of species, was adapted for data clustering problems by modifying its main operators. Like other population-based algorithms, BBO starts with an initial population of candidate solutions to an optimization problem and an objective function that is evaluated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and it was compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.
Semi-supervised and unsupervised extreme learning machines.
Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng
2014-12-01
Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
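The basic ELM mechanism the paper builds on, a random untrained hidden layer with a least-squares readout, can be sketched in a few lines. This is the plain supervised ELM on a toy task, not the manifold-regularized SS-ELM or US-ELM variants the abstract introduces:

```python
# Minimal extreme learning machine sketch: random, fixed input weights and a
# tanh hidden layer; only the linear readout is solved (by least squares).
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # XOR-like labels

W = rng.normal(size=(2, 50))                  # random input weights (never trained)
b = rng.normal(size=50)
H = np.tanh(X @ W + b)                        # random feature map
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form readout

pred = (H @ beta > 0.5).astype(float)
print((pred == y).mean())                     # training accuracy
```

The appeal is that no iterative training is needed: the only learned parameters come from a single linear solve, which is why ELMs are computationally cheap.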
Unsupervised chunking based on graph propagation from bilingual corpus.
Zhu, Ling; Wong, Derek F; Chao, Lidia S
2014-01-01
This paper presents a novel approach for an unsupervised shallow parsing model trained on the unannotated Chinese text of a parallel Chinese-English corpus. In this approach, no annotated information from the Chinese side is used. The exploitation of graph-based label propagation for bilingual knowledge transfer, along with the use of the projected labels as features in the unsupervised model, contributes to a better performance. Experimental comparisons with state-of-the-art algorithms show that the proposed approach achieves markedly higher accuracy in terms of F-score.
An unsupervised classification technique for multispectral remote sensing data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Cummings, R. E.
1973-01-01
Description of a two-part clustering technique consisting of (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum-likelihood classification techniques.
Unsupervised classification of earth resources data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Jayroe, R. R., Jr.; Cummings, R. E.
1972-01-01
A new clustering technique is presented. It consists of two parts: (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy of the unsupervised technique is found to be comparable to that of existing supervised maximum-likelihood classification techniques.
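The two-part scheme described above, an inexpensive first pass whose output seeds an iterative K-means refinement, can be sketched as follows. The coarse quantile split below is only a stand-in for the sequential variance analysis of part (a):

```python
# Sketch of a composite clustering: a cheap first pass supplies initial
# cluster centers, which then seed K-means for iterative refinement.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(60, 2))
               for c in ([0, 0], [4, 0], [2, 4])])

# Part (a) stand-in: crude initial clusters from a coarse 1-D ordering
order = np.argsort(X[:, 0] + X[:, 1])
init = np.array([X[chunk].mean(axis=0) for chunk in np.array_split(order, 3)])

# Part (b): generalized K-means iteratively refines the initial clusters
km = KMeans(n_clusters=3, init=init, n_init=1).fit(X)
print(km.cluster_centers_.round(1))
```

Seeding K-means with informed centers (rather than random restarts) is what makes the composite scheme cheaper and less prone to poor local optima.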
NASA Astrophysics Data System (ADS)
Lim, Hoong-Ta; Murukeshan, Vadakke Matham
2017-06-01
Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions that conventional table-top platforms find difficult to access. A fiber bundle, which is made up of specially arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration must be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. A confidence ellipse was then applied to the principal components of each sample and used as the classification criterion. The results show that the applied analysis can be used to classify the spectral data acquired using the snapshot hyperspectral imaging probe.
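The classification rule described above, accepting a point if it falls inside a class's confidence ellipse in principal component space, can be sketched using the chi-squared bound on the Mahalanobis distance. The two colored "spectra" are synthetic stand-ins for the probe data:

```python
# Sketch: fit a Gaussian per class on its 2-D PC scores; a point is accepted
# if it lies inside the class's 95% confidence ellipse (chi-squared bound
# on the Mahalanobis distance).
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
wavelengths = np.linspace(0, 1, 50)
red = np.exp(-((wavelengths - 0.8) ** 2) / 0.01) + 0.05 * rng.normal(size=(40, 50))
green = np.exp(-((wavelengths - 0.5) ** 2) / 0.01) + 0.05 * rng.normal(size=(40, 50))

pca = PCA(n_components=2).fit(np.vstack([red, green]))
scores_r = pca.transform(red)                 # "red" class in PC space

mu = scores_r.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(scores_r, rowvar=False))
thresh = chi2.ppf(0.95, df=2)                 # 95% confidence ellipse boundary

def inside(points):
    d = points - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d) <= thresh

print(inside(pca.transform(red)).mean(), inside(pca.transform(green)).mean())
```

Most of the red samples fall inside their own ellipse while the green samples fall far outside it, which is the behavior the ellipse-based criterion relies on.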
Pepper seed variety identification based on visible/near-infrared spectral technology
NASA Astrophysics Data System (ADS)
Li, Cuiling; Wang, Xiu; Meng, Zhijun; Fan, Pengfei; Cai, Jichen
2016-11-01
Pepper is an important fruit vegetable; with the expansion of hybrid pepper planting areas, detection of pepper seed purity is especially important. This research used visible/near-infrared (VIS/NIR) spectral technology to identify the variety of single pepper seeds, choosing the hybrid pepper seeds "Zhuo Jiao NO.3", "Zhuo Jiao NO.4", and "Zhuo Jiao NO.5" as research samples. VIS/NIR spectral data of 80 "Zhuo Jiao NO.3", 80 "Zhuo Jiao NO.4", and 80 "Zhuo Jiao NO.5" pepper seeds were collected, and the original spectral data were pretreated with standard normal variate (SNV) transform, first derivative (FD), and Savitzky-Golay (SG) convolution smoothing methods. Principal component analysis (PCA) was adopted to reduce the dimension of the spectral data and extract principal components. According to the distributions of the first principal component (PC1) with the second principal component (PC2), of PC1 with the third principal component (PC3), and of PC2 with PC3 in their two-dimensional planes, distribution areas of the three varieties of pepper seeds were delineated in each plane, and the discriminant accuracy of PCA was tested by observing the distribution areas of the validation-set samples' principal components. This study combined PCA and linear discriminant analysis (LDA) to identify single pepper seed varieties; results showed that with the FD preprocessing method, the discriminant accuracy for the validation set was 98%, indicating that VIS/NIR spectral technology is feasible for identification of single pepper seed varieties.
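The reported pipeline, smoothing pretreatment followed by PCA and LDA, can be sketched as follows. The three "varieties" are synthetic spectra, not the Zhuo Jiao seeds, so the accuracy will differ from the paper's 98%:

```python
# Sketch of the reported pipeline: Savitzky-Golay smoothing of spectra,
# PCA for dimension reduction, then LDA for variety discrimination.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
x = np.linspace(0, 1, 100)
def variety(center, n):  # peaked synthetic spectra plus noise
    return np.exp(-((x - center) ** 2) / 0.02) + 0.1 * rng.normal(size=(n, 100))

X = np.vstack([variety(c, 80) for c in (0.3, 0.5, 0.7)])  # 3 "varieties" x 80 seeds
y = np.repeat([0, 1, 2], 80)

X = savgol_filter(X, window_length=11, polyorder=2, axis=1)  # SG smoothing
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis()).fit(Xtr, ytr)
print(round(clf.score(Xte, yte), 2))  # held-out discriminant accuracy
```

An SNV transform or first derivative could be swapped in before the PCA step, mirroring the pretreatment comparison in the abstract.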
Long, J.M.; Fisher, W.L.
2006-01-01
We present a method for spatial interpretation of environmental variation in a reservoir that integrates principal components analysis (PCA) of environmental data with geographic information systems (GIS). To illustrate our method, we used data from a Great Plains reservoir (Skiatook Lake, Oklahoma) with longitudinal variation in physicochemical conditions. We measured 18 physicochemical features, mapped them using GIS, and then calculated and interpreted four principal components. Principal component 1 (PC1) was readily interpreted as longitudinal variation in water chemistry, but the other principal components (PC2-4) were difficult to interpret. Site scores for PC1-4 were calculated in GIS by summing weighted overlays of the 18 measured environmental variables, with the factor loadings from the PCA as the weights. PC1-4 were then ordered into a landscape hierarchy, an emergent property of this technique, which enabled their interpretation. PC1 was interpreted as a reservoir-scale change in water chemistry, PC2 was a microhabitat variable of rip-rap substrate, PC3 identified coves/embayments, and PC4 consisted of shoreline microhabitats related to slope. The use of GIS improved our ability to interpret the more obscure principal components (PC2-4), which made the spatial variability of the reservoir environment more apparent. This method is applicable to a variety of aquatic systems, can be accomplished using commercially available software programs, and allows for improved interpretation of the geographic environmental variability of a system compared to using typical PCA plots. © Copyright by the North American Lake Management Society 2006.
Hand classification of fMRI ICA noise components.
Griffanti, Ludovica; Douaud, Gwenaëlle; Bijsterbosch, Janine; Evangelisti, Stefania; Alfaro-Almagro, Fidel; Glasser, Matthew F; Duff, Eugene P; Fitzgibbon, Sean; Westphal, Robert; Carone, Davide; Beckmann, Christian F; Smith, Stephen M
2017-07-01
We present a practical "how-to" guide to help determine whether single-subject fMRI independent components (ICs) characterise structured noise or not. Manual identification of signal and noise after ICA decomposition is required for efficient data denoising: to train supervised algorithms, to check the results of unsupervised ones or to manually clean the data. In this paper we describe the main spatial and temporal features of ICs and provide general guidelines on how to evaluate these. Examples of signal and noise components are provided from a wide range of datasets (3T data, including examples from the UK Biobank and the Human Connectome Project, and 7T data), together with practical guidelines for their identification. Finally, we discuss how the data quality, data type and preprocessing can influence the characteristics of the ICs and present examples of particularly challenging datasets. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J
2003-09-01
As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated them with the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle between the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components describing the morphology. This set was entered into linear regression analyses to explain the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. Together they accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. Trabecular orientation was the most determinative factor (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.
Leslie, Toby; Rab, Mohammad Abdur; Ahmadzai, Hayat; Durrani, Naeem; Fayaz, Mohammad; Kolaczinski, Jan; Rowland, Mark
2004-03-01
The only available treatment that can eliminate the latent hypnozoite reservoir of vivax malaria is a 14 d course of primaquine (PQ). A potential problem with long-course chemotherapy is compliance after clinical symptoms have subsided. The present study, carried out at an Afghan refugee camp in Pakistan between June 2000 and August 2001, compared 14 d treatment in supervised and unsupervised groups in which compliance was monitored by comparison of relapse rates. Clinical cases recruited by passive case detection were randomised by family to placebo, supervised, or unsupervised groups, and treated with chloroquine (25 mg/kg) over 3 days to eliminate erythrocytic stages. Individuals with glucose-6-phosphate dehydrogenase (G6PD) deficiency were excluded from the trial. Cases allocated to supervision were given directly observed treatment (0.25 mg PQ/kg body weight) once per day for 14 days. Cases allocated to the unsupervised group were provided with 14 PQ doses upon enrollment and strongly advised to complete the course. A total of 595 cases were enrolled. After 9 months of follow-up, PQ proved equally protective against further episodes of P. vivax in the supervised (odds ratio 0.35, 95% CI 0.21-0.57) and unsupervised (odds ratio 0.37, 95% CI 0.23-0.59) groups compared with placebo. All age groups on supervised or unsupervised treatment showed a similar degree of protection, even though the risk of relapse decreased with age. The study showed that a presumed problem of poor compliance may be overcome with simple health messages, even when the majority of individuals are illiterate and without formal education. Unsupervised treatment with 14-day PQ, when combined with simple instruction, can avert a significant amount of the morbidity associated with relapse in populations where G6PD deficiency is either absent or readily diagnosable.
True Zero-Training Brain-Computer Interfacing – An Online Study
Kindermans, Pieter-Jan; Schreuder, Martijn; Schrauwen, Benjamin; Müller, Klaus-Robert; Tangermann, Michael
2014-01-01
Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from that user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data are collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. Therefore, the calibration period must be reduced to a minimum, which is especially important for patients with a limited ability to concentrate. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by using an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it may not have seen enough data to build a reliable model. Using a constant re-analysis of the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach and of the unsupervised post hoc approach to the standard supervised calibration-based dogma for n = 10 healthy users. To assess the learning behavior of our approach, it was trained, without supervision, from scratch three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
PMID:25068464
2017-01-01
Introduction This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements who have undergone a transition between school environments, from 8 European Union member states. Methods Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire consisting of 41 questions. Information was collected on: parental involvement in their child's transition, child involvement in transition, child autonomy, school ethos, professionals' involvement in transition, and integrated working, such as joint assessment, cooperation and coordination between agencies. Survey questions designed on a Likert scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Results Four principal components were identified, accounting for 48.86% of the variability in the data. Principal component 1 (PC1), 'child inclusive ethos,' contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as whether other factors may have influenced transition.
All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43–7.18, p<0.0001). Discussion To support a child with complex additional support requirements through the transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their family, which will provide a holistic approach and remove barriers to learning. PMID:28636649
Ravenscroft, John; Wazny, Kerri; Davis, John M
2017-01-01
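The pipeline described above (PCA over Likert-scale survey items, then logistic regression on the component scores) can be sketched on synthetic data. This is a hedged illustration only: the sample sizes, item counts, and outcome mechanism below are invented and do not reproduce the study's data.

```python
# Minimal sketch of a PCA-then-logistic-regression pipeline on synthetic
# Likert-scale survey data. All names and sizes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_parents, n_items = 306, 20              # 306 respondents, 20 Likert items
latent = rng.normal(size=(n_parents, 4))  # 4 underlying factors
loadings = rng.normal(size=(4, n_items))
likert = np.clip(np.round(3 + latent @ loadings), 1, 5)  # 1-5 responses

pca = PCA(n_components=4)
scores = pca.fit_transform(likert)        # component scores per respondent

# Binary outcome ("successful transition"), driven here by the first component
p = 1 / (1 + np.exp(-scores[:, 0]))
success = rng.binomial(1, p)

model = LogisticRegression(max_iter=1000).fit(scores, success)
odds_ratios = np.exp(model.coef_[0])      # per-component odds ratios
```

Exponentiating the fitted coefficients yields per-component odds ratios, the form in which the study reports its effects (e.g. OR: 4.04 for PC1).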
Unsupervised Biomedical Named Entity Recognition: Experiments with Clinical and Biological Texts
Zhang, Shaodian; Elhadad, Noémie
2013-01-01
Named entity recognition is a crucial component of biomedical natural language processing, enabling information extraction and ultimately reasoning over and knowledge discovery from text. Much progress has been made in the design of rule-based and supervised tools, but they are often genre and task dependent. As such, adapting them to different genres of text or identifying new types of entities requires major effort in re-annotation or rule development. In this paper, we propose an unsupervised approach to extracting named entities from biomedical text. We describe a stepwise solution to tackle the challenges of entity boundary detection and entity type classification without relying on any handcrafted rules, heuristics, or annotated data. A noun phrase chunker followed by a filter based on inverse document frequency extracts candidate entities from free text. Classification of candidate entities into categories of interest is carried out by leveraging principles from distributional semantics. Experiments show that our system, especially the entity classification step, yields competitive results on two popular biomedical datasets of clinical notes and biological literature, and outperforms a baseline dictionary match approach. Detailed error analysis provides a road map for future work. PMID:23954592
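The inverse-document-frequency filter described above can be sketched as follows. A toy corpus and hand-picked phrases stand in for the noun-phrase chunker, so everything here is illustrative rather than the authors' actual system.

```python
# Minimal sketch of an IDF-based candidate-entity filter. A real system
# would use a noun-phrase chunker; here the candidate phrases are
# hand-picked and the corpus is a toy example.
import math

corpus = [
    "the patient was given aspirin for chest pain",
    "chest pain resolved after aspirin was given to the patient",
    "the patient denied chest pain and shortness of breath",
    "no shortness of breath was reported by the patient",
]

def idf(term, docs):
    """Inverse document frequency of a phrase over the corpus."""
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df) if df else float("inf")

candidates = ["the patient", "chest pain", "aspirin", "shortness of breath"]

# Keep only candidates whose IDF exceeds a threshold, i.e. drop phrases
# that occur in (nearly) every document and are unlikely to be entities.
threshold = 0.1
entities = [c for c in candidates if idf(c, corpus) > threshold]
```

Here "the patient" appears in every document, gets an IDF of zero, and is filtered out, while the rarer clinical phrases survive as candidate entities.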
NASA Astrophysics Data System (ADS)
Cooper, L. A.; Ballantyne, A.
2017-12-01
Forest disturbances are critical components of ecosystems. Knowledge of their prevalence and impacts is necessary to accurately describe forest health and ecosystem services through time. While there are currently several methods available to identify and describe forest disturbances, especially those which occur in North America, the process remains inefficient and inaccessible in many parts of the world. Here, we introduce a preliminary approach to streamline and automate both the detection and attribution of forest disturbances. We use a combination of the Breaks for Additive Season and Trend (BFAST) detection algorithm to detect disturbances in combination with supervised and unsupervised classification algorithms to attribute the detections to disturbance classes. Both spatial and temporal disturbance characteristics are derived and utilized for the goal of automating the disturbance attribution process. The resulting preliminary algorithm is applied to up-scaled (100m) Landsat data for several different ecosystems in North America, with varying success. Our results indicate that supervised classification is more reliable than unsupervised classification, but that limited training data are required for a region. Future work will improve the algorithm through refining and validating at sites within North America before applying this approach globally.
Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch
2014-03-01
Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.
Principal components of wrist circumduction from electromagnetic surgical tracking.
Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E
2017-02-01
An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.
Hall, L O; Bensaid, A M; Clarke, L P; Velthuizen, R P; Silbiger, M S; Bezdek, J C
1992-01-01
Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms, and a supervised computational neural network. Initial clinical results are presented on normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. For a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundary, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed, with fuzzy c-means approaches being slightly preferred over feedforward cascade correlation results. Various facets of both approaches, such as supervised versus unsupervised learning, time complexity, and utility for the diagnostic process, are compared.
Nicholson, Vaughan Patrick; McKean, Mark; Lowe, John; Fawcett, Christine; Burkett, Brendan
2015-01-01
To determine the effectiveness of unsupervised Nintendo Wii Fit balance training in older adults. Forty-one older adults were recruited from local retirement villages and educational settings to participate in a six-week two-group repeated measures study. The Wii group (n = 19, 75 ± 6 years) undertook 30 min of unsupervised Wii balance gaming three times per week in their retirement village while the comparison group (n = 22, 74 ± 5 years) continued with their usual exercise program. Participants' balance abilities were assessed pre- and postintervention. The Wii Fit group demonstrated significant improvements (P < .05) in timed up-and-go, left single-leg balance, lateral reach (left and right), and gait speed compared with the comparison group. Reported levels of enjoyment following game play increased during the study. Six weeks of unsupervised Wii balance training is an effective modality for improving balance in independent older adults.
Assessing the Linguistic Productivity of Unsupervised Deep Neural Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Lawrence A.; Hodas, Nathan O.
Increasingly, cognitive scientists have demonstrated interest in applying tools from deep learning. One use for deep learning is in language acquisition, where it is useful to know if a linguistic phenomenon can be learned through domain-general means. To assess whether unsupervised deep learning is appropriate, we first pose a smaller question: Can unsupervised neural networks apply linguistic rules productively, using them in novel situations? We draw from the literature on determiner/noun productivity by training an unsupervised autoencoder network and measuring its ability to combine nouns with determiners. Our simple autoencoder creates combinations it has not previously encountered, displaying a degree of overlap similar to actual children. While this preliminary work does not provide conclusive evidence for productivity, it warrants further investigation with more complex models. Further, this work helps lay the foundations for future collaboration between the deep learning and cognitive science communities.
NASA Astrophysics Data System (ADS)
Brolly, M.; Iro, S.
2016-12-01
This study presents novel low-budget methodologies for mapping and monitoring gully erosion development in South-East Nigeria. The unabated way gullies develop, and the lack of control measures in the SE Nigeria study area, motivate this work. The Landsat archive is used to determine change in land-use/cover classification over a 30-year period (1986-2015) in a region measuring 70 km × 70 km. Multi-resolution segmentation is enabled through Object Based Image Analysis (OBIA) and pixel-based classification techniques (supervised/unsupervised) using an initial dataset including 40 ground-validated gully sites within the region. Detected increases in gully area are positively correlated with land clearance, manifested by associated vegetation reduction and anthropogenic encroachment, with r values of -0.94 (p<0.05) and -0.97 (p<0.05) for the pixel and OBIA classification approaches respectively. Within the study region 14 specific gullies are further vectorised and quantified in terms of extent and rates of change. Local and regional results are then examined in regard to land-use and environmental variables, such as meteorology, soil and rock geology, and topographical/landscape parameters. Of the 14 specific sites, the maximum reported erosion rates are 232,010 m² per year for the largest gully (4,123,765 m²) and -501 m² per year for the smallest (2,749 m²), representing year-on-year increases of 9% and -0.15% respectively. These erosion rates were exhibited in 1988 and 2007. Analysis of topography across the region at 30 m resolution reveals that 90% of the 40 observed gullies develop on concave slopes with high plan-curvature values and inclines greater than 15°, with the highest erosion rates exhibited on the ferralsol soil type.
Principal Component Analysis reveals inter-variable similarities via component 1 between Slope (58%), Elevation (50%) and Gully Area (62%), while Vegetation loss (14%), Soil structure (8%) and Rate of gully change (3%) are better defined by the second component, showing their similarities.
Introduction to uses and interpretation of principal component analyses in forest biology.
J. G. Isebrands; Thomas R. Crow
1975-01-01
The application of principal component analysis for interpretation of multivariate data sets is reviewed with emphasis on (1) reduction of the number of variables, (2) ordination of variables, and (3) applications in conjunction with multiple regression.
Principal component analysis of phenolic acid spectra
USDA-ARS?s Scientific Manuscript database
Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
The algorithm for building an optimal pattern for the purpose of automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern forming is based on the decomposition of an initial pattern to principal components, which enables to reduce the dimension of multi-parameter optimization problem. At the next step the training samples are introduced and the optimal estimates for principal components decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we consider the experiment results that show the improvement in speech recognition introduced by the proposed optimization algorithm.
NASA Astrophysics Data System (ADS)
Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen
2017-09-01
Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied to the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, and the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 map of the tumor-bearing mice were in good agreement with the actual tumor location. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from the nearby fluorescence noise of the liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
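The general idea above, running PCA over per-pixel time courses of a dynamic image sequence and reshaping the scores into PC maps, can be sketched as follows. The synthetic "liver" and "tumor" kinetics are invented for illustration and do not reproduce the study's data; which component captures which region depends on their relative variance.

```python
# Hedged sketch of PCA on a dynamic image sequence: each pixel's time
# course is a sample, and component scores are reshaped into "PC maps".
# The kinetic curves and geometry below are purely illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 40)                  # 40 time frames
h = w = 16
liver_curve = np.exp(-0.2 * t)              # fast washout (dominant signal)
tumor_curve = 1 - np.exp(-0.5 * t)          # slow uptake (target signal)

frames = np.zeros((len(t), h, w))
frames[:, :8, :] += liver_curve[:, None, None]      # "liver" in top half
frames[:, 12:, 12:] += tumor_curve[:, None, None]   # small "tumor" corner
frames += 0.01 * rng.normal(size=frames.shape)      # acquisition noise

X = frames.reshape(len(t), -1).T            # pixels x time
pca = PCA(n_components=2)
scores = pca.fit_transform(X)               # per-pixel component scores
pc1_map = scores[:, 0].reshape(h, w)        # maps separating the two kinetics
pc2_map = scores[:, 1].reshape(h, w)
```

Because the scene contains only two distinct temporal patterns plus small noise, the first two components capture nearly all of the variance, and the two maps separate the regions by their kinetics.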
NASA Astrophysics Data System (ADS)
Ueki, Kenta; Iwamori, Hikaru
2017-10-01
In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks sampled from 17 different volcanoes in a volcanic cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These represented 59%, 20%, and 6%, respectively, of the variance in the entire compositional range, indicating that magma mixing accounted for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.
Sayago, Ana; González-Domínguez, Raúl; Beltrán, Rafael; Fernández-Recamales, Ángeles
2018-09-30
This work explores the potential of multi-element fingerprinting in combination with advanced data mining strategies to assess the geographical origin of extra virgin olive oil samples. For this purpose, the concentrations of 55 elements were determined in 125 oil samples from multiple Spanish geographic areas. Several unsupervised and supervised multivariate statistical techniques were used to build classification models and investigate the relationship between mineral composition of olive oils and their provenance. Results showed that Spanish extra virgin olive oils exhibit characteristic element profiles, which can be differentiated on the basis of their origin in accordance with three geographical areas: Atlantic coast (Huelva province), Mediterranean coast and inland regions. Furthermore, statistical modelling yielded high sensitivity and specificity, principally when random forest and support vector machines were employed, thus demonstrating the utility of these techniques in food traceability and authenticity research. Copyright © 2018 Elsevier Ltd. All rights reserved.
Zhang, Zhao; Yan, Shuicheng; Zhao, Mingbo
2014-05-01
Latent Low-Rank Representation (LatLRR) delivers robust and promising results for subspace recovery and feature extraction through mining the so-called hidden effects, but the locality of both similar principal and salient features cannot be preserved in the optimizations. To solve this issue for achieving enhanced performance, a boosted version of LatLRR, referred to as Regularized Low-Rank Representation (rLRR), is proposed through explicitly including an appropriate Laplacian regularization that can maximally preserve the similarity among local features. Resembling LatLRR, rLRR decomposes given data matrix from two directions by seeking a pair of low-rank matrices. But the similarities of principal and salient features can be effectively preserved by rLRR. As a result, the correlated features are well grouped and the robustness of representations is also enhanced. Based on the outputted bi-directional low-rank codes by rLRR, an unsupervised subspace learning framework termed Low-rank Similarity Preserving Projections (LSPP) is also derived for feature learning. The supervised extension of LSPP is also discussed for discriminant subspace learning. The validity of rLRR is examined by robust representation and decomposition of real images. Results demonstrated the superiority of our rLRR and LSPP in comparison to other related state-of-the-art algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.
Remote Sensing for Mapping Soybean Crop in the Brazilian Cerrado
NASA Astrophysics Data System (ADS)
Trabaquini, K.; Bernardes, T.; Mello, M. P.; Formaggio, A.; Rosa, V. G.
2011-12-01
The soybean expansion in the Brazilian Cerrado has been strongly affected by internal and external markets. The main factors driving that expansion are the climatic conditions, the development of technologies and genetic improvement. Recent studies have shown that soybean expansion has become a major cause of reduction of native vegetation in Mato Grosso State, Brazil, accounting for 17% of deforestation from 2000 to 2004. This work aims to map soybean areas in the Brazilian Cerrado in Mato Grosso State using MODIS data. Thirteen MODIS images (MOD13, 16-day composites), acquired from September 2005 to March 2006, were used to run principal component analysis (PCA) in order to reduce the dimensionality of the data. The first three components (PC1, PC2 and PC3), which contained about 90% of the data variability, were segmented and used as input for an unsupervised classification using the ISOSEG classifier implemented in the SPRING software. Eighty field-work points were randomly selected for the accuracy assessment. An intersection between the soybean map and a map generated by the "Project Monitoring Deforestation of Brazilian Biomes Satellite - PMDBBS", which aimed at identifying anthropic areas, was conducted in order to evaluate the distribution of soybean within those areas. Moreover, a soil map was used in order to evaluate the soybean distribution over the soil classes. The classification result presented an overall accuracy of 83% and a kappa coefficient of 0.64 for the soybean map, which presented a total soybean area of about 42,317 square kilometers. Furthermore, it was verified that 27% of the anthropic area was covered by soybean. In relation to the soil analysis, 87% of the total soybean area was planted in Oxisols. Despite the economic gain related to soybean production, adequate management is needed to avoid soil acidification, soil erosion and pollution, aiming at providing a sustainable environment.
ERIC Educational Resources Information Center
Kronenberger, William G.; Thompson, Robert J., Jr.; Morrow, Catherine
1997-01-01
A principal components analysis of the Family Environment Scale (FES) (R. Moos and B. Moos, 1994) was performed using 113 undergraduates. Research supported 3 broad components encompassing the 10 FES subscales. These results supported previous research and the generalization of the FES to college samples. (SLD)
Time series analysis of collective motions in proteins
NASA Astrophysics Data System (ADS)
Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.
2004-01-01
The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm⁻¹ range, which are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions for two successive sampling times, showing the mode's tendency to stay close to a minimum. All these four parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but takes place between energy barriers.
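The intra-minimum model above, a damped oscillation captured by an autoregressive process, can be illustrated with a toy AR(2) fit: the angle and radius of the fitted AR poles recover the oscillation frequency and damping factor. All parameter values below are invented for illustration; this is not the paper's ARMA estimation procedure.

```python
# Toy illustration: simulate an AR(2) process with known pole radius
# (damping) and pole angle (frequency), refit it by least squares, and
# recover frequency and damping from the roots of the AR polynomial.
import numpy as np

rng = np.random.default_rng(2)
r, theta = 0.95, 0.3                 # pole radius (damping), angle (frequency)
a1, a2 = 2 * r * np.cos(theta), -r**2

n = 5000
x = np.zeros(n)
for t in range(2, n):                # x_t = a1*x_{t-1} + a2*x_{t-2} + shock
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()

# Fit AR(2) coefficients by ordinary least squares
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Poles of z^2 - a1*z - a2 = 0 encode frequency (angle) and damping (radius)
poles = np.roots([1.0, -coef[0], -coef[1]])
theta_hat = abs(np.angle(poles[0]))  # estimated frequency (rad/step)
r_hat = abs(poles[0])                # estimated damping factor
```

With complex-conjugate poles, the process oscillates at the pole angle while its envelope decays at a rate set by the pole radius, which is the sense in which an AR model captures a damped oscillation.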
Burst and Principal Components Analyses of MEA Data Separates Chemicals by Class
Microelectrode arrays (MEAs) detect drug and chemical induced changes in action potential "spikes" in neuronal networks and can be used to screen chemicals for neurotoxicity. Analytical "fingerprinting," using Principal Components Analysis (PCA) on spike trains recorded from prim...
EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES
An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization. PMID:28786986
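The abstract's point that traditional clustering misclassifies as SNR falls can be illustrated with a toy K-means example (K-means is one of the traditional algorithms it names). This is not cryo-EM code: the "class templates", dimensions, and noise levels below are invented.

```python
# Toy illustration: K-means separates two synthetic class templates
# almost perfectly at high SNR, but accuracy degrades as noise grows.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
dim, n_per_class = 64, 100
t1 = np.r_[np.ones(dim // 2), np.zeros(dim // 2)]   # template for class 0
t2 = np.r_[np.zeros(dim // 2), np.ones(dim // 2)]   # template for class 1
labels = np.r_[np.zeros(n_per_class), np.ones(n_per_class)].astype(int)

def kmeans_accuracy(noise_sd):
    data = np.vstack([np.tile(t1, (n_per_class, 1)),
                      np.tile(t2, (n_per_class, 1))])
    data += noise_sd * rng.normal(size=data.shape)
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    # Cluster numbering is arbitrary, so score both label assignments
    return max(np.mean(pred == labels), np.mean(pred != labels))

acc_high_snr = kmeans_accuracy(0.3)   # noise much smaller than the signal
acc_low_snr = kmeans_accuracy(6.0)    # noise swamps the signal
```

At low noise the clusters are well separated and accuracy is near perfect; at high noise the templates are buried and accuracy falls toward chance, which is the failure mode motivating the manifold-learning approach above.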
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity in regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
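The tutorial above works in R; an analogous sketch in Python shows the core point, that PCA replaces correlated predictors with linearly uncorrelated component scores. The simulated variables below are illustrative.

```python
# Sketch: two strongly collinear predictors become uncorrelated
# principal-component scores after PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)        # strongly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

pcs = PCA().fit_transform(X)              # component scores

corr_original = np.corrcoef(X, rowvar=False)    # large off-diagonal entry
corr_pcs = np.corrcoef(pcs, rowvar=False)       # off-diagonals ~0
```

Regressing an outcome on `pcs` instead of `X` therefore avoids the multicollinearity that the correlated originals would introduce.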
Complexity of free energy landscapes of peptides revealed by nonlinear principal component analysis.
Nguyen, Phuong H
2006-12-01
Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with that of standard linear principal component analysis (PCA). In particular, the two methods were compared according to (1) their ability to reduce dimensionality and (2) the efficiency with which they represent peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA does. For example, to achieve a similar error in representing the original beta-hairpin data in a low-dimensional space, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as functions of the first two principal components obtained from PCA, we obtained relatively well-structured free energy landscapes. In contrast, the free energy landscapes from NLPCA are much more complicated, exhibiting many states that are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study also showed that many states in the PCA maps mix several peptide conformations, whereas those in the NLPCA maps are purer. This finding suggests that NLPCA should be used to capture the essential features of these systems. (c) 2006 Wiley-Liss, Inc.
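The gap the authors report between NLPCA and PCA stems from nonlinearity in the data. A minimal illustration (a noisy circle, not the peptide trajectories) shows why a linear method can need more components than the intrinsic dimensionality: the circle is intrinsically one-dimensional, yet a rank-1 linear reconstruction leaves a large residual.

```python
import numpy as np

rng = np.random.default_rng(1)
# Points on a noisy circle: intrinsically 1-D, but the manifold is nonlinear
theta = rng.uniform(0, 2 * np.pi, size=500)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.01 * rng.normal(size=(500, 2))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
errs = {}
for k in (1, 2):
    recon = (U[:, :k] * s[:k]) @ Vt[:k]                  # rank-k linear reconstruction
    errs[k] = np.linalg.norm(Xc - recon) / np.linalg.norm(Xc)
```

With one component the relative reconstruction error stays near 0.7 (half of the variance is unexplained), while two components reproduce the data almost exactly; a nonlinear method parameterizing the angle would need only one.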
Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica
2016-04-19
The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm(-1)). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R(2) > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.
NASA Astrophysics Data System (ADS)
Li, Jiangtong; Luo, Yongdao; Dai, Honglin
2018-01-01
Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution is becoming more and more frequent, which directly affects human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which the partial least squares regression (PLSR) analysis method is becoming the predominant technology; however, in some special cases, PLSR analysis produces considerable errors. To solve this problem, the traditional principal component regression (PCR) analysis method was improved in this paper by using the principle of PLSR. The experimental results show that for some special experimental datasets, the improved PCR analysis method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted using the principle of PLSR. Secondly, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral dataset is calculated by PLSR and improved PCR, and the two results are analyzed and compared: improved PCR and PLSR are similar for most data, but improved PCR is better than PLSR for data near the detection limit. Both PLSR and improved PCR can be used in UV spectral analysis of water, but for data near the detection limit, improved PCR gives better results than PLSR.
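The two-step scheme described here (PCA to reduce the spectra, then linear regression on the retained components) amounts to principal component regression. A self-contained NumPy sketch follows; the simulated "spectra", their dimensions, and the noise level are illustrative assumptions, not the paper's UV data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stand-in for UV spectra: 80 samples x 50 wavelengths,
# absorbance proportional to concentration y plus noise
y = rng.uniform(0.0, 1.0, size=80)
pure = np.abs(rng.normal(size=50))            # hypothetical pure-component spectrum
X = np.outer(y, pure) + 0.05 * rng.normal(size=(80, 50))

# Step 1: PCA to reduce the dimensionality of the spectra
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:3].T                             # scores on the 3 leading components

# Step 2: ordinary least-squares regression of y on the scores (with intercept)
A = np.column_stack([np.ones(len(y)), T])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The paper's improvement, selecting which components to keep using the PLSR principle rather than by variance alone, would replace the fixed choice of the 3 leading components in Step 1.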
Vargas-Bello-Pérez, Einar; Toro-Mujica, Paula; Enriquez-Hidalgo, Daniel; Fellenberg, María Angélica; Gómez-Cortés, Pilar
2017-06-01
We used a multivariate chemometric approach to differentiate or associate retail bovine milks with different fat contents and non-dairy beverages, using fatty acid profiles and statistical analysis. We collected samples of bovine milk (whole, semi-skim, and skim; n = 62) and non-dairy beverages (n = 27), and we analyzed them using gas-liquid chromatography. Principal component analysis of the fatty acid data yielded 3 significant principal components, which accounted for 72% of the total variance in the data set. Principal component 1 was related to saturated fatty acids (C4:0, C6:0, C8:0, C12:0, C14:0, C17:0, and C18:0) and monounsaturated fatty acids (C14:1 cis-9, C16:1 cis-9, C17:1 cis-9, and C18:1 trans-11); whole milk samples were clearly differentiated from the rest using this principal component. Principal component 2 differentiated semi-skim milk samples by n-3 fatty acid content (C20:3n-3, C20:5n-3, and C22:6n-3). Principal component 3 was related to C18:2 trans-9,trans-12 and C20:4n-6, and its lower scores were observed in skim milk and non-dairy beverages. A cluster analysis yielded 3 groups: group 1 consisted of only whole milk samples, group 2 was represented mainly by semi-skim milks, and group 3 included skim milk and non-dairy beverages. Overall, the present study showed that a multivariate chemometric approach is a useful tool for differentiating or associating retail bovine milks and non-dairy beverages using their fatty acid profile. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Use of multivariate statistics to identify unreliable data obtained using CASA.
Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón
2013-06-01
In order to identify unreliable data in a dataset of motility parameters obtained from a pilot study acquired by a veterinarian with experience in boar semen handling, but without experience in the operation of a computer assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted and then incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml and analyzed in a CASA system. After standardization of the data, Chernoff faces were drawn for each measurement, and a principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of principal components for each individual measurement of semen samples were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects for treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained evaluations of the first two samples in each treatment, each from a different boar. With the exception of one individual measurement, all other measurements in cluster 1 were the same as observed in abnormal Chernoff faces. Unreliable data in cluster 1 are probably related to the operator's inexperience with a CASA system. These findings could be used to objectively evaluate the skill level of an operator of a CASA system. This may be particularly useful in the quality control of semen analysis using CASA systems.
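The pipeline used in this study (standardize, PCA as pre-processing, then hierarchical clustering on the leading components) can be sketched with NumPy on synthetic data. The three groups, their separation, and the naive average-linkage loop below are illustrative assumptions, not the study's CASA measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical motility-style measurements: three groups of 10, five parameters each
X = np.vstack([rng.normal(m, 0.3, size=(10, 5)) for m in (0.0, 2.0, 4.0)])

Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize, as in the study
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                           # first two principal components

# Naive average-linkage agglomerative clustering on the PC scores, down to 3 clusters
clusters = [[i] for i in range(len(scores))]

def linkage(a, b):
    return np.mean([np.linalg.norm(scores[i] - scores[j]) for i in a for j in b])

while len(clusters) > 3:
    _, i, j = min((linkage(clusters[i], clusters[j]), i, j)
                  for i in range(len(clusters))
                  for j in range(i + 1, len(clusters)))
    clusters[i].extend(clusters.pop(j))
```

On well-separated synthetic groups the three recovered clusters coincide with the three generating groups; in the study, the analogous cluster isolated the measurements made while the operator was still inexperienced.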
Liu, Xiang; Guo, Ling-Peng; Zhang, Fei-Yun; Ma, Jie; Mu, Shu-Yong; Zhao, Xin; Li, Lan-Hai
2015-02-01
Eight physical and chemical indicators related to water quality were monitored at nineteen sampling sites along the Kunes River at the end of the snowmelt season in spring. To investigate the spatial distribution characteristics of water physical and chemical properties, cluster analysis (CA), discriminant analysis (DA), and principal component analysis (PCA) were employed. The result of cluster analysis showed that the Kunes River could be divided into three reaches according to the similarities of water physical and chemical properties among sampling sites, representing the upstream, midstream, and downstream of the river, respectively. The result of discriminant analysis demonstrated that the reliability of this classification was high, and that DO, Cl- and BOD5 were the significant indexes leading to it. Three principal components were extracted on the basis of the principal component analysis, with a cumulative variance contribution of 86.90%. The result of principal component analysis also indicated that water physical and chemical properties were mostly affected by EC, ORP, NO3(-) -N, NH4(+) -N, Cl- and BOD5. The sorted results of principal component scores at each sampling site showed that the water quality was mainly influenced by DO in the upstream, by pH in the midstream, and by the rest of the indicators in the downstream. The order of comprehensive scores for the principal components revealed that the water quality degraded from upstream to downstream, i.e., the upstream had the best water quality, followed by the midstream, while the downstream had the worst. This result corresponded exactly to the three reaches classified using cluster analysis. Anthropogenic activity and the accumulation of pollutants along the river were probably the main reasons for this spatial difference.
Putilov, Arcady A; Donskaya, Olga G
2016-01-01
Age-associated changes in different bandwidths of the human electroencephalographic (EEG) spectrum are well documented, but their functional significance is poorly understood. This spectrum seems to represent summation of simultaneous influences of several sleep-wake regulatory processes. Scoring of its orthogonal (uncorrelated) principal components can help in separation of the brain signatures of these processes. In particular, the opposite age-associated changes were documented for scores on the two largest (1st and 2nd) principal components of the sleep EEG spectrum. A decrease of the first score and an increase of the second score can reflect, respectively, the weakening of the sleep drive and disinhibition of the opposing wake drive with age. In order to support the suggestion of age-associated disinhibition of the wake drive from the antagonistic influence of the sleep drive, we analyzed principal component scores of the resting EEG spectra obtained in sleep deprivation experiments with 81 healthy young adults aged between 19 and 26 and 40 healthy older adults aged between 45 and 66 years. At the second day of the sleep deprivation experiments, frontal scores on the 1st principal component of the EEG spectrum demonstrated an age-associated reduction of response to eyes closed relaxation. Scores on the 2nd principal component were either initially increased during wakefulness or less responsive to such sleep-provoking conditions (frontal and occipital scores, respectively). These results are in line with the suggestion of disinhibition of the wake drive with age. They provide an explanation of why older adults are less vulnerable to sleep deprivation than young adults.
NASA Astrophysics Data System (ADS)
Wojciechowski, Adam
2017-04-01
In order to assess ecodiversity understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which recognize the environment in a holistic way. Principal component analysis may be considered one such method, as it makes it possible to distinguish the main factors determining landscape diversity on the one hand, and to discover regularities shaping the relationships between various elements of the environment under study on the other. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal components analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25000 and maps of forest habitats. Consequently, nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), the length of watercourses (km), the area of water reservoirs (m2), total forest area (ha), coniferous forest habitats area (ha), deciduous forest habitats area (ha), and alder habitats area (ha). The values for individual factors were analysed for 358 grid cells of 1 km2.
Based on the principal components analysis, four major factors affecting commune ecodiversity were distinguished: hypsometric component (PC1), deciduous forest habitats component (PC2), river valleys and alder habitats component (PC3), and lakes component (PC4). The distinguished factors characterise natural qualities of postglacial area and reflect well the role of the four most important groups of environment components in shaping ecodiversity of the area under study. The map of ecodiversity of Debnica Kaszubska commune was created on the basis of the first four principal component scores and then five classes of diversity were isolated: very low, low, average, high and very high. As a result of the assessment, five commune regions of very high ecodiversity were separated. These regions are also very attractive for tourists and valuable in terms of their rich nature which include protected areas such as Slupia Valley Landscape Park. The suggested method of ecodiversity assessment with the use of principal component analysis may constitute an alternative methodological proposition to other research methods used so far. Literature Jedicke E., 2001. Biodiversität, Geodiversität, Ökodiversität. Kriterien zur Analyse der Landschaftsstruktur - ein konzeptioneller Diskussionsbeitrag. Naturschutz und Landschaftsplanung, 33(2/3), 59-68.
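One way to sketch steps c) and d) numerically: compute PC scores for the grid cells, combine the first four into a composite index, and cut it into five quantile classes. The random stand-in data and the singular-value weighting of components are illustrative assumptions; the authors built their actual map by combining the four component score maps cartographically:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical grid: 358 cells x 9 environmental factors (heights, water, forest areas)
X = rng.normal(size=(358, 9))

Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:4].T                          # scores on the first four components

# Composite ecodiversity index from the four PC scores, then five quantile classes
index = scores @ (s[:4] / s[:4].sum())         # weight PCs by their singular values
edges = np.quantile(index, [0.2, 0.4, 0.6, 0.8])
classes = np.digitize(index, edges)            # 0 = very low ... 4 = very high
```

Quantile cut points guarantee roughly equal class sizes; an equal-interval cut, which the paper may have used instead, would let class sizes follow the shape of the index distribution.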
A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...
Rosacea assessment by erythema index and principal component analysis segmentation maps
NASA Astrophysics Data System (ADS)
Kuzmina, Ilona; Rubins, Uldis; Saknite, Inga; Spigulis, Janis
2017-12-01
RGB images of rosacea were analyzed using segmentation maps of principal component analysis (PCA) and the erythema index (EI). Areas of segmented clusters were compared to Clinician's Erythema Assessment (CEA) values given by two dermatologists. The results show that visible blood vessels are segmented more precisely on maps of the erythema index and the third principal component (PC3). In many cases, the distributions of clusters on EI and PC3 maps are very similar. Mean values of the clusters' areas on these maps show a decrease in the area of blood vessels and erythema and an increase in lighter skin area after therapy for patients with a diagnosis of CEA = 2 on the first visit and CEA = 1 on the second visit. This study shows that EI and PC3 maps are more useful than the maps of the first (PC1) and second (PC2) principal components for indicating vascular structures and erythema on the skin of rosacea patients and for therapy monitoring.
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang
2018-04-01
A novel technique is developed for levelling airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight-line difference data rather than to the original AEM data directly. Pseudo tie lines are selected distributively across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines show high correlations, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, we can obtain the levelling errors of the original AEM data through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. The effectiveness of the method is demonstrated by the levelling results of survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
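The core idea, extracting line-correlated levelling error by low-order principal component reconstruction, can be sketched on a toy grid. This omits the paper's flight-line differencing and spatial interpolation steps, and the synthetic geology and per-line offsets are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical survey: 40 flight lines x 60 stations; geology varies along lines,
# while each line carries a constant levelling offset (highly line-correlated error)
geology = np.sin(np.linspace(0, 3 * np.pi, 60))
offsets = rng.normal(size=(40, 1))
data = geology[None, :] + offsets * np.ones((1, 60)) + 0.01 * rng.normal(size=(40, 60))

# Remove the mean profile (shared geology), then reconstruct the residual from its
# leading principal component to estimate the correlated levelling error
resid = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
est_err = (U[:, :1] * s[:1]) @ Vt[:1]          # rank-1 (low-order) reconstruction
levelled = data - est_err

# After levelling, the line-to-line spread of the means collapses
spread_before = float(np.std(data.mean(axis=1)))
spread_after = float(np.std(levelled.mean(axis=1)))
```

Because the levelling error is constant along each line, it is essentially rank-1 across the grid, so a single low-order component captures it while the geology, shared by all lines, is untouched.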
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
[Content of mineral elements of Gastrodia elata by principal components analysis].
Li, Jin-ling; Zhao, Zhi; Liu, Hong-chang; Luo, Chun-li; Huang, Ming-jin; Luo, Fu-lai; Wang, Hua-lei
2015-03-01
To study the content of mineral elements and the principal components in Gastrodia elata. Mineral elements were determined by ICP and the data were analyzed with SPSS. K had the highest content, with an average of 15.31 g x kg(-1). N was second, with an average content of 8.99 g x kg(-1). The coefficients of variation of K and N were small, but that of Mn was the largest, at 51.39%. A highly significant positive correlation was found among N, P, and K. Three principal components were selected by principal components analysis to evaluate the quality of G. elata. P, B, N, K, Cu, Mn, Fe, and Mg were the characteristic elements of G. elata. The contents of K and N were higher and relatively stable. The variation of Mn content was the largest. From the perspective of mineral elements, the quality of G. elata from Guizhou and Yunnan was better.
Visualizing Hyolaryngeal Mechanics in Swallowing Using Dynamic MRI
Pearson, William G.; Zumwalt, Ann C.
2013-01-01
Introduction Coordinates of anatomical landmarks are captured using dynamic MRI to explore whether a proposed two-sling mechanism underlies hyolaryngeal elevation in pharyngeal swallowing. A principal components analysis (PCA) is applied to coordinates to determine the covariant function of the proposed mechanism. Methods Dynamic MRI (dMRI) data were acquired from eleven healthy subjects during a repeated swallows task. Coordinates mapping the proposed mechanism are collected from each dynamic (frame) of a dynamic MRI swallowing series of a randomly selected subject in order to demonstrate shape changes in a single subject. Coordinates representing minimum and maximum hyolaryngeal elevation of all 11 subjects were also mapped to demonstrate shape changes of the system among all subjects. MorphoJ software was used to perform PCA and determine vectors of shape change (eigenvectors) for elements of the two-sling mechanism of hyolaryngeal elevation. Results For both single subject and group PCAs, hyolaryngeal elevation accounted for the first principal component of variation. For the single subject PCA, the first principal component accounted for 81.5% of the variance. For the between subjects PCA, the first principal component accounted for 58.5% of the variance. Eigenvectors and shape changes associated with this first principal component are reported. Discussion Eigenvectors indicate that two-muscle slings and associated skeletal elements function as components of a covariant mechanism to elevate the hyolaryngeal complex. Morphological analysis is useful to model shape changes in the two-sling mechanism of hyolaryngeal elevation. PMID:25090608
V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis
NASA Astrophysics Data System (ADS)
Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.
2011-09-01
In this paper a virtual laboratory for the Satellite Image Processing and Analysis (v-SIPAL) being developed at the Indian Institute of Technology Bombay is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, bibliography, links to useful internet resources and user-feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. V-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.
NASA Astrophysics Data System (ADS)
Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul
2018-04-01
Chicken eggs are a food in high demand by humans. Human operators cannot work perfectly and continuously when grading eggs. Instead of grading eggs by weight, an automatic grading system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that the assignment of eggs to classes will change when egg shape parameters are used instead of weight. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques such as cropping, smoothing, and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter, and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed with a k-nearest neighbour classifier in the classification process. Two methods, namely supervised learning (using weight-based grades assigned by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), are used in the experiment. The clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision performs better with shape-based features, since it works on images, whereas the weight parameter is more suitable for a weight-based grading system.
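The classification stage (PCA feature extraction followed by a k-nearest-neighbour classifier) can be sketched as follows. The synthetic six-feature "egg" data, the three grades, and their means are hypothetical, not the paper's 120 images:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical shape features (area, axis lengths, volume, ...) for 3 egg grades
grade_means = np.array([[1.0] * 6, [1.5] * 6, [2.0] * 6])
X = np.vstack([m + 0.1 * rng.normal(size=(30, 6)) for m in grade_means])
y = np.repeat([0, 1, 2], 30)

# Feature extraction: standardize, then project onto the two leading PCs
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Z @ Vt[:2].T

def knn_predict(train_P, train_y, query, k=3):
    """Majority vote among the k nearest training points."""
    nearest = np.argsort(np.linalg.norm(train_P - query, axis=1))[:k]
    return int(np.bincount(train_y[nearest]).argmax())

# Leave-one-out accuracy on the PC features
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    correct += knn_predict(P[mask], y[mask], P[i]) == y[i]
accuracy = correct / len(y)
```

On well-separated synthetic grades the leave-one-out accuracy is near 1.0; the paper's 94.16% versus 44.17% comparison instead reflects how well the shape-based and weight-based label sets agree with the learned clusters.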
Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O
2015-01-01
To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
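Choosing the number of clusters by cluster validation, as done here to arrive at k = 4, can be sketched with a plain k-means and a silhouette criterion. The abstract does not name the validation index used, so silhouette is an assumption, and the four synthetic 2-D groups below stand in for the multi-parametric voxel data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Four well-separated synthetic "functional" groups standing in for voxel features
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
X = np.vstack([c + 0.4 * rng.normal(size=(50, 2)) for c in centers])

def kmeans(X, k, iters=30, restarts=10):
    """Plain Lloyd's algorithm, keeping the best of several random restarts."""
    best_lab, best_ss = None, np.inf
    for _ in range(restarts):
        C = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
            C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                          for j in range(k)])
        ss = float(((X - C[lab]) ** 2).sum())
        if ss < best_ss:
            best_lab, best_ss = lab, ss
    return best_lab

def silhouette(X, lab):
    """Mean silhouette coefficient: (b - a) / max(a, b) averaged over points."""
    D = np.linalg.norm(X[:, None] - X[None], axis=-1)
    vals = []
    for i in range(len(X)):
        same = lab == lab[i]
        a = D[i, same].sum() / max(same.sum() - 1, 1)
        b = min(D[i, lab == j].mean() for j in np.unique(lab) if j != lab[i])
        vals.append((b - a) / max(a, b))
    return float(np.mean(vals))

sil = {k: silhouette(X, kmeans(X, k)) for k in (2, 3, 4, 5)}
best_k = max(sil, key=sil.get)
```

On data generated from four groups, the silhouette score peaks at k = 4, mirroring how the study's validation step selected its optimal partition independently of the treatment-response analysis.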
Characterization of human breast cancer tissues by infrared imaging.
Verdonck, M; Denayer, A; Delvaux, B; Garaud, S; De Wind, R; Desmedt, C; Sotiriou, C; Willard-Gallo, K; Goormaghtigh, E
2016-01-21
Fourier Transform InfraRed (FTIR) spectroscopy coupled to microscopy (IR imaging) has shown unique advantages in detecting morphological and molecular pathologic alterations in biological tissues. The aim of this study was to evaluate the potential of IR imaging as a diagnostic tool to identify characteristics of breast epithelial cells and the stroma. In this study a total of 19 breast tissue samples were obtained from 13 patients. For 6 of the patients, we also obtained Non-Adjacent Non-Tumor tissue samples. Infrared images were recorded on the main cell/tissue types identified in all breast tissue samples. Unsupervised Principal Component Analysis and supervised Partial Least Squares Discriminant Analysis (PLS-DA) were used to discriminate spectra. Leave-one-out cross-validation was used to evaluate the performance of PLS-DA models. Our results show that IR imaging coupled with PLS-DA can efficiently identify the main cell types present in FFPE breast tissue sections, i.e. epithelial cells, lymphocytes, connective tissue, vascular tissue and erythrocytes. A second PLS-DA model could distinguish normal and tumor breast epithelial cells in the breast tissue sections. A patient-specific model reached particularly high sensitivity, specificity and MCC rates. Finally, we showed that the stroma located close to or at a distance from the tumor exhibits distinct spectral characteristics. In conclusion, FTIR imaging combined with computational algorithms could be an accurate, rapid and objective tool to identify/quantify breast epithelial cells and differentiate tumor from normal breast tissue as well as normal from tumor-associated stroma, paving the way to the establishment of a potential complementary tool to ensure safe tumor margins.
How specific Raman spectroscopic models are: a comparative study between different cancers
NASA Astrophysics Data System (ADS)
Singh, S. P.; Kumar, K. Kalyan; Chowdary, M. V. P.; Maheedhar, K.; Krishna, C. Murali
2010-02-01
Optical spectroscopic methods are being contemplated as adjuncts/alternatives to the existing 'gold standard' of cancer diagnosis, histopathological examination. Several groups are actively pursuing diagnostic applications of Raman spectroscopy in cancers. We have developed Raman spectroscopic models for the diagnosis of breast, oral, stomach, colon and larynx cancers. So far, the specificity and applicability of spectral models have been limited to a particular tissue of origin. In this study we explicitly evaluated the specificity of spectroscopic models by analyzing spectra from already developed spectral models representing normal and malignant tissues of breast (46), cervix (52), colon (25), larynx (53), and oral (47). Spectral data were analyzed by Principal Component Analysis (PCA) using factor scores, Mahalanobis distance, and spectral residuals as discriminating parameters. A multiparametric limit-test approach was also explored. Preliminary unsupervised PCA of the pooled data indicates that normal tissue types were always exclusive from their malignant counterparts. But when tissues of different origins are considered, large overlap among clusters was found. Supervised analysis by Mahalanobis distance and spectral residuals gave similar results. The 'limit test' approach, where classification is based on match/mismatch of a given spectrum against all available spectra, revealed that the spectral models are very exclusive and specific. For example, the breast normal spectral model shows matches only with breast normal spectra and mismatches with the rest of the spectra. The same pattern was seen for most spectral models. Therefore, the results of the study indicate the exclusiveness and efficacy of Raman spectroscopic models. Prospectively, these findings might open new applications of Raman spectroscopic models in identifying a tumor as primary or metastatic.
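Classification by Mahalanobis distance on principal component scores, one of the discriminating parameters used above, can be sketched on synthetic two-class "spectra". The classes, the mean shift between them, and the dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
# Two hypothetical tissue classes in a 20-band "spectral" space
shift = np.linspace(0, 2, 20)
A = rng.normal(size=(60, 20)) + shift
B = rng.normal(size=(60, 20)) - shift
X = np.vstack([A, B])
y = np.repeat([0, 1], 60)

# Factor scores on the three leading principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:3].T

def mahalanobis_sq(t, group):
    """Squared Mahalanobis distance from score vector t to a group of scores."""
    mu = group.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(group.T))
    d = t - mu
    return float(d @ cov_inv @ d)

# Assign each score vector to the class with the nearer (Mahalanobis) model
pred = np.array([min((0, 1), key=lambda c: mahalanobis_sq(t, T[y == c])) for t in T])
accuracy = float((pred == y).mean())
```

Unlike plain Euclidean distance, the Mahalanobis form accounts for the covariance of each class's scores, which is why it pairs naturally with PCA-based spectral models.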
Dyslexic Participants Show Intact Spontaneous Categorization Processes
ERIC Educational Resources Information Center
Nikolopoulos, Dimitris S.; Pothos, Emmanuel M.
2009-01-01
We examine the performance of dyslexic participants on an unsupervised categorization task against that of matched non-dyslexic control participants. Unsupervised categorization is a cognitive process critical for conceptual development. Existing research in dyslexia has emphasized perceptual tasks and supervised categorization tasks (for which…
Panazzolo, Diogo G; Sicuro, Fernando L; Clapauch, Ruth; Maranhão, Priscila A; Bouskela, Eliete; Kraemer-Aguiar, Luiz G
2012-11-13
We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Data from 189 female subjects (34.0 ± 15.5 years, 30.5 ± 7.1 kg/m²), who were non-smokers, non-regular drug users, and without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool: it highlights common variation between variables, allowing inferences about the possible biological meaning of their associations without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and at peak after 1 min of arterial occlusion (RBCV(max)), and the time taken to reach RBCV(max) (TRBCV(max)). A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements shared a similar pattern of variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCV(max), all varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCV(max), but in the opposite way.
Principal component 3 was associated only with microvascular variables in the same way (functional capillary density, RBCV and RBCV(max)). Fasting plasma glucose appeared to be related to principal component 4 and did not show any association with microvascular reactivity. In non-diabetic female subjects, a multivariate scenario of associations between classic clinical variables strictly related to obesity and metabolic syndrome suggests a significant relationship between these diseases and microvascular reactivity.
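The same-way/opposite-way loading pattern described above can be reproduced on synthetic data with scikit-learn's PCA; the variable choices, coefficients, and the latent "adiposity" factor below are invented for illustration, not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 189  # same sample size as the study; the data themselves are synthetic

# A hypothetical "adiposity" latent factor drives BMI, waist circumference
# and triglycerides up and HDL-c down (all coefficients invented).
adiposity = rng.normal(size=n)
X = np.column_stack([
    25 + 4 * adiposity + rng.normal(0, 1, n),     # BMI
    85 + 10 * adiposity + rng.normal(0, 3, n),    # waist circumference
    120 + 30 * adiposity + rng.normal(0, 15, n),  # triglycerides (TG)
    55 - 8 * adiposity + rng.normal(0, 3, n),     # HDL-c
])

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
pc1 = pca.components_[0]
# BMI, waist and TG load on PC1 with one sign and HDL-c with the opposite
# sign, mirroring the "same way ... opposite way" pattern in the abstract.
same = pc1[0] * pc1[1] > 0 and pc1[1] * pc1[2] > 0 and pc1[2] * pc1[3] < 0
print(same)  # True
```

Standardizing first matters: PCA on raw units would let the largest-variance variable (here TG) dominate the first component.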
The factorial reliability of the Middlesex Hospital Questionnaire in normal subjects.
Bagley, C
1980-03-01
The internal reliability of the Middlesex Hospital Questionnaire and its component subscales has been checked by means of principal components analyses of data on 256 normal subjects. The subscales (with the possible exception of Hysteria) were found to contribute to the general underlying factor of psychoneurosis. In general, the principal components analysis points to the reliability of the subscales, despite some item overlap.
ERIC Educational Resources Information Center
McCormick, Ernest J.; And Others
The study deals with the job component method of establishing compensation rates. The basic job analysis questionnaire used in the study was the Position Analysis Questionnaire (PAQ) (Form B). On the basis of a principal components analysis of PAQ data for a large sample (2,688) of jobs, a number of principal components (job dimensions) were…
ERIC Educational Resources Information Center
Faginski-Stark, Erica; Casavant, Christopher; Collins, William; McCandless, Jason; Tencza, Marilyn
2012-01-01
Recent federal and state mandates have tasked school systems to move beyond principal evaluation as a bureaucratic function and to re-imagine it as a critical component to improve principal performance and compel school renewal. This qualitative study investigated the district leaders' and principals' perceptions of the performance evaluation…
Housing and sexual health among street-involved youth.
Kumar, Maya M; Nisenbaum, Rosane; Barozzino, Tony; Sgro, Michael; Bonifacio, Herbert J; Maguire, Jonathon L
2015-10-01
Street-involved youth (SIY) carry a disproportionate burden of sexually transmitted diseases (STD). Studies among adults suggest that improving housing stability may be an effective primary prevention strategy for improving sexual health. Housing options available to SIY offer varying degrees of stability and adult supervision. This study investigated whether housing options offering more stability and adult supervision are associated with fewer STD and related risk behaviors among SIY. A cross-sectional study was performed using public health survey and laboratory data collected from Toronto SIY in 2010. Three exposure categories were defined a priori based on housing situation: (1) stable and supervised housing, (2) stable and unsupervised housing, and (3) unstable and unsupervised housing. Multivariate logistic regression was used to test the association between housing category and current or recent STD. Secondary analyses were performed using the following secondary outcomes: blood-borne infection, recent binge-drinking, and recent high-risk sexual behavior. The final analysis included 184 SIY. Of these, 28.8 % had a current or recent STD. Housing situation was stable and supervised for 12.5 %, stable and unsupervised for 46.2 %, and unstable and unsupervised for 41.3 %. Compared to stable and supervised housing, there was no significant association between current or recent STD among stable and unsupervised housing or unstable and unsupervised housing. There was no significant association between housing category and risk of blood-borne infection, binge-drinking, or high-risk sexual behavior. Although we did not demonstrate a significant association between stable and supervised housing and lower STD risk, our incorporation of both housing stability and adult supervision into a priori defined exposure groups may inform future studies of housing-related prevention strategies among SIY. 
Multi-modal interventions beyond housing alone may also be required to prevent sexual morbidity among these vulnerable youth.
Out-of-School Time and Adolescent Substance Use.
Lee, Kenneth T H; Vandell, Deborah Lowe
2015-11-01
High levels of adolescent substance use are linked to lower academic achievement, reduced schooling, and delinquency. We assess four types of out-of-school time (OST) contexts--unsupervised time with peers, sports, organized activities, and paid employment--in relation to tobacco, alcohol, and marijuana use at the end of high school. Other research has examined these OST contexts in isolation, limiting efforts to disentangle potentially confounded relations. Longitudinal data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development (N = 766) examined associations between different OST contexts during high school and substance use at the end of high school. Unsupervised time with peers increased the odds of tobacco, alcohol, and marijuana use, whereas sports increased the odds of alcohol use and decreased the odds of marijuana use. Paid employment increased the odds of tobacco and alcohol use. Unsupervised time with peers predicted increased amounts of tobacco, alcohol, and marijuana use, whereas sports predicted decreased amounts of tobacco and marijuana use and increased amounts of alcohol use at the end of high school. Although unsupervised time with peers, sports, and paid employment were differentially linked to the odds of substance use, only unsupervised time with peers and sports were significantly associated with the amounts of tobacco, alcohol, and marijuana use at the end of high school. These findings underscore the value of considering OST contexts in relation to strategies to promote adolescent health. Reducing unsupervised time with peers and increasing sports participation may have positive impacts on reducing substance use. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Hübner, David; Verhoeven, Thibault; Schmid, Konstantin; Müller, Klaus-Robert; Tangermann, Michael; Kindermans, Pieter-Jan
2017-01-01
Using traditional approaches, a brain-computer interface (BCI) requires the collection of calibration data for new subjects prior to online use. Calibration time can be reduced or eliminated e.g., by subject-to-subject transfer of a pre-trained classifier or unsupervised adaptive classification methods which learn from scratch and adapt over time. While such heuristics work well in practice, none of them can provide theoretical guarantees. Our objective is to modify an event-related potential (ERP) paradigm to work in unison with the machine learning decoder, and thus to achieve a reliable unsupervised calibrationless decoding with a guarantee to recover the true class means. We introduce learning from label proportions (LLP) to the BCI community as a new unsupervised, and easy-to-implement classification approach for ERP-based BCIs. The LLP estimates the mean target and non-target responses based on known proportions of these two classes in different groups of the data. We present a visual ERP speller to meet the requirements of LLP. For evaluation, we ran simulations on artificially created data sets and conducted an online BCI study with 13 subjects performing a copy-spelling task. Theoretical considerations show that LLP is guaranteed to minimize the loss function similar to a corresponding supervised classifier. LLP performed well in simulations and in the online application, where 84.5% of characters were spelled correctly on average without prior calibration. The continuously adapting LLP classifier is the first unsupervised decoder for ERP BCIs guaranteed to find the optimal decoder. This makes it an ideal solution to avoid tedious calibration sessions. Additionally, LLP works on complementary principles compared to existing unsupervised methods, opening the door for their further enhancement when combined with LLP.
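The core LLP step, recovering class means from unlabeled group averages when the class proportions in each group are known, can be sketched in a few lines; the feature dimension, proportions, and Gaussian responses below are illustrative assumptions, not the speller's ERP data:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_t = np.array([1.0, 2.0])        # hypothetical target ERP mean (2-D features)
mu_n = np.array([-0.5, 0.0])       # hypothetical non-target mean

# Known class proportions in two stimulus groups (the LLP assumption):
P = np.array([[0.7, 0.3],          # group 1: 70% targets, 30% non-targets
              [0.1, 0.9]])         # group 2: 10% targets, 90% non-targets

def group_mean(p_target, n=200_000):
    """Average of an unlabeled group sampled with the given target share."""
    k = int(n * p_target)
    x = np.vstack([rng.normal(mu_t, 1.0, (k, 2)),
                   rng.normal(mu_n, 1.0, (n - k, 2))])
    return x.mean(axis=0)

G = np.vstack([group_mean(0.7), group_mean(0.1)])
# Because G = P @ M, the known proportions recover both class means
# without a single label:
M = np.linalg.solve(P, G)
print(np.allclose(M, np.vstack([mu_t, mu_n]), atol=0.05))  # True
```

The estimate is consistent as long as the proportion matrix P is invertible, i.e., the groups genuinely differ in their target share; this is the property the modified speller layout is designed to guarantee.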
BORAWSKI, ELAINE A.; IEVERS-LANDIS, CAROLYN E.; LOVEGREEN, LOREN D.; TRAPL, ERIKA S.
2010-01-01
Purpose To compare two different parenting practices (parental monitoring and negotiated unsupervised time) and perceived parental trust in the reporting of health risk behaviors among adolescents. Methods Data were derived from 692 adolescents in 9th and 10th grades (X̄ = 15.7 years) enrolled in health education classes in six urban high schools. Students completed a self-administered paper-based survey that assessed adolescents' perceptions of the degree to which their parents monitor their whereabouts, permit them to negotiate unsupervised time with their friends, and trust them to make decisions. Using gender-specific multivariate logistic regression analyses, we examined the relative importance of parental monitoring, negotiated unsupervised time with peers, and parental trust in predicting reported sexual activity, sex-related protective actions (e.g., condom use, carrying protection) and substance use (alcohol, tobacco, and marijuana). Results For males and females, increased negotiated unsupervised time was strongly associated with increased risk behavior (e.g., sexual activity, alcohol and marijuana use) but also with sex-related protective actions. In males, high parental monitoring was associated with less alcohol use and consistent condom use. Parental monitoring had no effect on female behavior. Perceived parental trust served as a protective factor against sexual activity, tobacco, and marijuana use in females, and alcohol use in males. Conclusions Although monitoring is an important practice for parents of older adolescents, managing their behavior through negotiation of unsupervised time may have mixed results, leading to increased experimentation with sexuality and substances, but perhaps in a more responsible way. Trust established between an adolescent female and her parents continues to be a strong deterrent for risky behaviors but appears to have little effect on behaviors of adolescent males. PMID:12890596
2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.
Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen
2017-09-19
A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides with respect to their functions or their potential to become useful drugs. One level deals with the physicochemical properties of drug molecules, while the other level deals with their structural fragments. The predictor has self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for providing timely and useful clues during the process of drug development.
Effect of noise in principal component analysis with an application to ozone pollution
NASA Astrophysics Data System (ADS)
Tsakiri, Katerina G.
This thesis analyzes the effect of independent noise on the principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can be essentially different from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results about the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field for the decomposition of the components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor of ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed, and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global term component, which describes the long-term trend and the seasonal variations, and a synoptic scale component, which describes the short-term variations. By using Canonical Correlation Analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the global and synoptic scale components of ozone. The global term components are modeled by a linear regression model, while the synoptic scale components are modeled by a vector autoregressive model and the Kalman filter.
The coefficient of determination, R2, for the prediction of the synoptic scale ozone component was found to be the highest when we consider the synoptic scale component of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
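The eigen-gap condition stated above can be illustrated numerically: independent noise with unequal variances barely rotates the principal axis when the eigenvalues are well separated, but can rotate it substantially when they are nearly equal. The 2x2 covariance matrices and noise variances below are hypothetical:

```python
import numpy as np

def top_evec(C):
    """Unit eigenvector for the largest eigenvalue of a symmetric matrix."""
    w, V = np.linalg.eigh(C)
    return V[:, np.argmax(w)]

noise = np.diag([0.5, 0.1])   # independent noise with unequal variances

def rotation_deg(C):
    """Angle (degrees) between the top eigenvectors of C and C + noise."""
    a, b = top_evec(C), top_evec(C + noise)
    return np.degrees(np.arccos(min(1.0, abs(float(a @ b)))))

C_big_gap = np.array([[10.0, 3.0], [3.0, 1.0]])    # well-separated eigenvalues
C_small_gap = np.array([[1.2, 0.1], [0.1, 1.0]])   # nearly equal eigenvalues

print(rotation_deg(C_big_gap) < 2.0)     # True: principal axis barely moves
print(rotation_deg(C_small_gap) > 10.0)  # True: principal axis rotates a lot
```

The absolute-value guard handles the sign ambiguity of eigenvectors, so the angle measures only the rotation of the axis itself.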
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
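The size-balancing idea can be caricatured in a few lines of Python. The penalty below, assignment cost plus lam times the running cluster size, is a deliberately simplified stand-in for the paper's adaptive constraint term, and the two "projection classes" are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
# Two synthetic "projection classes" of very unequal size (invented data).
X = np.vstack([rng.normal(0, 1, (180, 2)), rng.normal(6, 1, (20, 2))])

def kmeans_size_penalty(data, k, lam, iters=10):
    """K-means whose assignment cost adds lam * (running cluster size):
    a simplified sketch of a size-balancing constraint, not the paper's
    exact adaptive objective. Seeds assume k == 2."""
    centers = data[[0, len(data) - 1]].astype(float).copy()
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        counts = np.zeros(k)
        for i, xi in enumerate(data):          # greedy sequential assignment
            d2 = ((centers - xi) ** 2).sum(axis=1)
            labels[i] = int(np.argmin(d2 + lam * counts))
            counts[labels[i]] += 1
        for j in range(k):                     # standard centroid update
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return labels

def cluster_gap(lam):
    sizes = np.bincount(kmeans_size_penalty(X, 2, lam), minlength=2)
    return abs(int(sizes[0]) - int(sizes[1]))

gap_plain, gap_bal = cluster_gap(0.0), cluster_gap(5.0)
print(gap_bal < gap_plain)  # True: the penalty evens out the class sizes
```

With lam = 0 the loop reduces to plain K-means and reproduces the 180/20 imbalance; the penalty trades a little assignment accuracy for much more even class sizes, which is the trade-off the adaptive constraint is designed to manage.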
NASA Astrophysics Data System (ADS)
Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.
2017-06-01
This work examined the classification of combed wool fabrics intended for the manufacture of outerwear in terms of the values of their durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method for multivariate/multi-dimensional data analysis, and aims to reduce, in a controlled way, the number of variables (columns) of the data matrix as far as possible, ideally to two or three. Therefore, based on the information about each group/assortment of fabrics, the goal is to replace the nine inter-correlated variables with only two or three new variables, called components. The target of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.
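The reduction from nine inter-correlated indices to two or three components can be sketched with scikit-learn; the two latent factors ("durability" and "comfort") and every loading value below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n = 300
# Two invented latent factors drive nine hypothetical fabric indices,
# echoing the nine-variable reduction described above.
durability, comfort = rng.normal(size=(2, n))
load_d = np.array([0.9, 0.8, 0.85, 0.7, 0.1, 0.0, 0.1, 0.0, 0.2])
load_c = np.array([0.0, 0.1, 0.0, 0.2, 0.9, 0.85, 0.8, 0.9, 0.7])
X = (np.outer(durability, load_d) + np.outer(comfort, load_c)
     + rng.normal(0, 0.3, (n, 9)))           # nine noisy observed indices

pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.80) + 1)      # smallest k explaining >= 80%
print(k)  # 2 with this seed: two components recover most of the information
```

The cumulative explained-variance curve is the usual yardstick for the "smallest number of components" criterion quoted in the abstract.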
Information extraction from multivariate images
NASA Technical Reports Server (NTRS)
Park, S. K.; Kegley, K. A.; Schiess, J. R.
1986-01-01
An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage, in any of various formats, associates with each pixel location a multivariate pixel value that has been scaled and quantized into a gray-level vector, and the bivariate correlation indicates the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and reveals intercomponent dependencies when some off-diagonal covariance elements are not small; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
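The PCT described above can be sketched on a synthetic 3-band multiimage; the band count, noise level, and scene are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
# A synthetic 3-band "multiimage": the bands share one underlying scene,
# so the per-pixel gray-level vectors are strongly correlated across bands.
scene = rng.normal(size=(64, 64))
bands = np.stack([scene + rng.normal(0, 0.3, scene.shape) for _ in range(3)])
pixels = bands.reshape(3, -1).T              # one 3-vector per pixel location

# Principal component transform: rotate the pixel vectors onto the
# eigenvectors of the band-to-band covariance matrix.
cov = np.cov(pixels, rowvar=False)
w, V = np.linalg.eigh(cov)                   # eigenvalues in ascending order
pct = (pixels - pixels.mean(axis=0)) @ V

# The transformed components are uncorrelated, and one component carries
# almost all the variance, exposing the low intrinsic dimensionality.
cov_pct = np.cov(pct, rowvar=False)
off_diag = cov_pct - np.diag(np.diag(cov_pct))
print(bool(np.allclose(off_diag, 0, atol=1e-9)), bool(w[-1] / w.sum() > 0.9))
```

Here three correlated bands collapse to essentially one informative component, which is exactly the dimensionality-reduction use of the PCT discussed in the abstract.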
Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid
2016-10-01
In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.
Unsupervised iterative detection of land mines in highly cluttered environments.
Batman, Sinan; Goutsias, John
2003-01-01
An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.
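The morphological half of such a hybrid filter, extracting small bright anomalies from smooth clutter on a single (already decorrelated) band, can be sketched with SciPy's grayscale top-hat; the scene, target size, and threshold below are invented, not the paper's detector:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(11)
# Smooth background clutter plus two small bright "mine-like" targets
# (all sizes and amplitudes are illustrative).
scene = 2.0 * ndimage.gaussian_filter(rng.normal(0, 1, (128, 128)), sigma=8)
targets = [(40, 40), (90, 100)]
for r, c in targets:
    scene[r - 2:r + 3, c - 2:c + 3] += 4.0   # 5x5 bright patches

# White top-hat = image minus its morphological opening: structures too
# small to fit the 7x7 element survive, while the smooth, larger-scale
# clutter is suppressed.
tophat = ndimage.white_tophat(scene, size=(7, 7))
hits = np.argwhere(tophat > 2.0)
found = all(any(abs(r - tr) <= 3 and abs(c - tc) <= 3 for r, c in hits)
            for tr, tc in targets)
print(found)  # True: both targets detected after clutter suppression
```

In the iterative scheme above, detections like these would be fed back to refine the decorrelating transform on the next pass.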
Principal Component Clustering Approach to Teaching Quality Discriminant Analysis
ERIC Educational Resources Information Center
Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan
2016-01-01
Teaching quality is the lifeline of the higher education. Many universities have made some effective achievement about evaluating the teaching quality. In this paper, we establish the Students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…
Analysis of the principal component algorithm in phase-shifting interferometry.
Vargas, J; Quiroga, J Antonio; Belenguer, T
2011-06-15
We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In the former work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
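The PCA demodulation idea can be sketched end-to-end on synthetic fringes: after removing the temporal mean, the frame stack is spanned by the cos and sin of the phase map, so the first two principal components recover the phase up to a sign and a global offset. The fringe period, frame count, and random phase steps below are invented:

```python
import numpy as np

# Synthetic phase-shifted interferograms I_k = a + b*cos(phi + delta_k),
# with unknown, non-uniform phase steps (the asynchronous setting).
h, w = 64, 64
x = np.tile(np.arange(w), (h, 1))
phi = 2 * np.pi * x / 8.0                    # true phase: vertical fringes
deltas = np.random.default_rng(2).uniform(0, 2 * np.pi, 8)
frames = np.stack([5.0 + 3.0 * np.cos(phi + d) for d in deltas])

# PCA demodulation: the mean-removed stack has rank 2, spanned by the
# cos(phi) and sin(phi) maps, so the two leading right singular vectors
# give the phase in quadrature.
X = frames.reshape(len(deltas), -1)
X = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
phase = np.arctan2(Vt[1], Vt[0]).reshape(h, w)

def circ_spread(d):
    """0 when the angle map d is constant modulo 2*pi."""
    return 1.0 - abs(np.mean(np.exp(1j * d)))

ok = min(circ_spread(phase - phi), circ_spread(phase + phi)) < 1e-6
print(ok)  # True: recovered phase equals phi up to sign and offset
```

The sign/offset ambiguity checked at the end is intrinsic to the method and is normally resolved with prior knowledge of the fringe direction.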
Psychometric Measurement Models and Artificial Neural Networks
ERIC Educational Resources Information Center
Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.
2004-01-01
The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…
Microelectrode arrays (MEAs) detect drug and chemical induced changes in neuronal network function and have been used for neurotoxicity screening. As a proof-of-concept, the current study assessed the utility of analytical "fingerprinting" using Principal Components Analysis (P...
Incremental principal component pursuit for video background modeling
Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt
2017-03-14
An incremental Principal Component Pursuit (PCP) algorithm for video background modeling is presented that processes one frame at a time while adapting to changes in the background. Its computational complexity allows for real-time processing, and it has a low memory footprint and is robust to translational and rotational jitter.
Dynamic competitive probabilistic principal components analysis.
López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel
2009-04-01
We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.
Rathleff, C R; Bandholm, T; Spaich, E G; Jorgensen, M; Andreasen, J
2017-01-01
Frailty is a serious condition frequently present in geriatric inpatients that potentially causes serious adverse events. Strength training is acknowledged as a means of preventing or delaying frailty and loss of function in these patients. However, limited hospital resources challenge the amount of supervised training, and unsupervised training could possibly supplement supervised training, thereby increasing the total exercise dose during admission. A new valid and reliable technology, the BandCizer, objectively measures the exact training dosage performed. The purpose was to investigate the feasibility and acceptability of an unsupervised progressive strength training intervention monitored by BandCizer for frail geriatric inpatients. This feasibility trial included 15 frail inpatients at a geriatric ward. At hospitalization, the patients were prescribed two elastic band exercises to be performed unsupervised once daily. A BandCizer Datalogger enabling measurement of the number of sets, repetitions, and time-under-tension was attached to the elastic band. The patients were instructed in performing strength training: 3 sets of 10 repetitions (10-12 repetition maximum (RM)) separated by 2-min pauses and with a time-under-tension of 8 s. The feasibility criterion for the unsupervised progressive exercises was that 33% of the recommended number of sets would be performed by at least 30% of patients. In addition, patients and staff were interviewed about their experiences with the intervention. Four (27%) out of 15 patients completed 33% of the recommended number of sets. For the total sample, the average percentage of performed sets was 23%, and for those who actually trained (n = 12), 26%. Patients and staff expressed a generally positive attitude towards the unsupervised training as an addition to the supervised training sessions. However, barriers were also described, especially constant interruptions.
Based on the predefined criterion for feasibility, the unsupervised training was not feasible, although the criterion was almost met. The patients and staff mainly expressed positive attitudes towards the unsupervised training. As even a small training dosage has been shown to improve the physical performance of geriatric inpatients, the proposed intervention might be relevant if the interruptions are decreased in future large-scale trials and if the adherence is increased. ClinicalTrials.gov: NCT02702557, February 29, 2016. Data Protection Agency: 2016-42, February 25, 2016. Ethics Committee: No registration needed, December 8, 2015 (e-mail correspondence).
NASA Astrophysics Data System (ADS)
Keyport, Ren N.; Oommen, Thomas; Martha, Tapas R.; Sajinkumar, K. S.; Gierke, John S.
2018-02-01
A comparative analysis of landslides detected by pixel-based and object-oriented analysis (OOA) methods was performed using very high-resolution (VHR) remotely sensed aerial images for San Juan La Laguna, Guatemala, which witnessed widespread devastation during the 2005 Hurricane Stan. A 3-band orthophoto of 0.5 m spatial resolution together with a field-based inventory of 115 landslides were used for the analysis. A binary reference was assigned, with a value of zero for landslide and unity for non-landslide pixels. The pixel-based analysis was performed using unsupervised classification, which resulted in 11 different trial classes. Detection of landslides using OOA included 2-step K-means clustering to eliminate regions based on brightness, and elimination of false positives using object properties such as rectangular fit, compactness, length/width ratio, mean difference of objects, and slope angle. Both overall accuracy and F-score for the OOA method outperformed pixel-based unsupervised classification in both the landslide and non-landslide classes. The overall accuracy for OOA and pixel-based unsupervised classification was 96.5% and 94.3%, respectively, whereas the best F-scores for landslide identification for the OOA and pixel-based unsupervised methods were 84.3% and 77.9%, respectively. Results indicate that OOA is able to identify the majority of landslides with few false positives when compared to pixel-based unsupervised classification.
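For reference, overall accuracy and the landslide-class F-score follow directly from the confusion counts of a binary map. The pixel counts below are invented, chosen only so that the F-score lands on the 84.3% figure reported for OOA above:

```python
def evaluate(tp, fp, fn, tn):
    """Overall accuracy and F-score for the positive (landslide) class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, f_score

# Hypothetical confusion counts for a 100x100-pixel scene.
acc, f = evaluate(tp=430, fp=80, fn=80, tn=9410)
print(round(acc, 3), round(f, 3))  # 0.984 0.843
```

The example also shows why both metrics are reported: with landslides covering only ~5% of pixels, accuracy stays high even for weak detectors, while the F-score is sensitive to missed and spurious landslide pixels.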
A principal components model of soundscape perception.
Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta
2010-11-01
There is a need for a model that identifies the underlying dimensions of soundscape perception and that may guide the measurement and improvement of soundscape quality. To develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute-scale values were subjected to principal components analysis, resulting in three components: pleasantness, eventfulness, and familiarity, explaining 50%, 18%, and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
Feature Extraction Using an Unsupervised Neural Network
1991-05-03
A description of feature extraction with this neural network is given, and its connection to exploratory projection pursuit methods is established.
An Unsupervised Method for Uncovering Morphological Chains (Open Access, Publisher’s Version)
2015-03-08
Das, Atanu; Mukhopadhyay, Chaitali
2007-10-28
We have performed molecular dynamics (MD) simulations of the thermal denaturation of one protein and one peptide: ubiquitin and melittin. To identify correlations in the dynamics of the various secondary structural fragments, as well as the individual contributions of different residues to thermal unfolding, principal component analysis was applied, giving new insight into protein dynamics through the contributions of the principal-component coefficients. The cross-correlation matrix obtained from the MD simulation trajectory provided important information about the anisotropy of the backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, whereas that of melittin, though smaller and mostly helical, is more complicated.
SAS program for quantitative stratigraphic correlation by principal components
Hohn, M.E.
1985-01-01
A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections, and the observational units are range limits of taxa. The program standardizes the data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from the scores of events on the first principal component. Several types of diagnostic plots are provided as options; these help determine conservative range limits or flag unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allows one to evaluate the goodness of fit between the composite and the measured data. The program is easily extended to the creation of a rank-order composite. © 1985.
NASA Astrophysics Data System (ADS)
Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.
2018-05-01
An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy. Using a smaller TE-cooled detector reduces the form factor, creating a mobile sensor. Principal component analysis of spectra taken from human subjects has yielded principal components that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.
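The scores-as-regressors workflow described above is principal component regression. A minimal sketch with scikit-learn follows; the spectra here are synthetic (a single Gaussian "absorption peak" scaled by concentration), so the dimensions, peak position, and noise level are assumptions, not the study's calibration data.

```python
# Principal component regression sketch: PCA scores of spectra serve as
# regressors for the target concentration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 40, 200
conc = rng.uniform(70, 180, n_samples)  # hypothetical glucose levels, mg/dL
peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 120) / 8) ** 2)
spectra = conc[:, None] * peak + rng.normal(0, 1.0, (n_samples, n_wavelengths))

pca = PCA(n_components=3).fit(spectra)
scores = pca.transform(spectra)          # principal-component scores
model = LinearRegression().fit(scores, conc)
r2 = model.score(scores, conc)           # in-sample fit quality
```

Because the first component captures the concentration-scaled peak, a linear model on the scores recovers the concentration almost exactly in this toy setting; real spectra would of course require cross-validated evaluation.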
A novel principal component analysis for spatially misaligned multivariate air pollution data.
Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A
2017-01-01
We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.
Principals' Perceptions of Collegial Support as a Component of Administrative Inservice.
ERIC Educational Resources Information Center
Daresh, John C.
To address the problem of increasing professional isolation of building administrators, the Principals' Inservice Project helps establish principals' collegial support groups across the nation. The groups are typically composed of 6 to 10 principals who meet at least once each month over a 2-year period. One collegial support group of seven…
Training the Trainers: Learning to Be a Principal Supervisor
ERIC Educational Resources Information Center
Saltzman, Amy
2017-01-01
While most principal supervisors are former principals themselves, few come to the role with specific training in how to do the job effectively. For this reason, both the Washington, D.C., and Tulsa, Oklahoma, principal supervisor programs include a strong professional development component. In this article, the author takes a look inside these…
ERIC Educational Resources Information Center
Rodrigue, Christine M.
2011-01-01
This paper presents a laboratory exercise used to teach principal components analysis (PCA) as a means of surface zonation. The lab was built around abundance data for 16 oxides and elements collected by the Mars Exploration Rover Spirit in Gusev Crater between Sol 14 and Sol 470. Students used PCA to reduce 15 of these into 3 components, which,…
Backhausen, Mette G; Katballe, Malene; Hansson, Helena; Tabor, Ann; Damm, Peter; Hegaard, Hanne K
2014-12-01
Low back pain during pregnancy is common and associated with sick leave. Studies suggest that exercise may reduce low back pain during pregnancy. Before carrying out a randomised controlled trial with individual water exercise as the intervention, a qualitative feasibility study was done to explore women's views and experiences of the acceptability and benefits of, and possible barriers to, the standardised individual unsupervised water exercise intervention. Eleven women were interviewed after participating in a water exercise intervention. Content analysis was used. Four main categories emerged: motivation to participate, attitudes towards the exercise programme, perception of benefits, and acceptability of supportive components. The women had a desire to stay physically active during pregnancy and found water exercise a suitable type of exercise to perform during pregnancy. The intervention was experienced as beneficial to both their physical health and their mental well-being. Crowded swimming pools were perceived as the greatest barrier. It is feasible to perform an RCT using the described intervention. The intervention was accepted by the participants because it supported their desire to be physically active during pregnancy. The main barrier was crowded swimming pools, and this issue must be addressed in a future RCT. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Ackermann, Margot Elise; Morrow, Jennifer Ann
2008-01-01
The present study describes the development and initial validation of the Coping with the College Environment Scale (CWCES). Participants included 433 college students who took an online survey. Principal Components Analysis (PCA) revealed six coping strategies: planning and self-management, seeking support from institutional resources, escaping…
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.
2015-11-01
Results are presented comparing different mother wavelets used for de-noising model and experimental data consisting of absorption-spectra profiles of exhaled air. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.
Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis
NASA Astrophysics Data System (ADS)
Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.
2013-06-01
Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450- 950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.
2007-01-01
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2012 CFR
2012-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2014 CFR
2014-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2011 CFR
2011-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
40 CFR 60.1580 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the model rule? 60.1580 Section 60.1580 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines..., 1999 Use of Model Rule § 60.1580 What are the principal components of the model rule? The model rule...
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2013 CFR
2013-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
Students' Perceptions of Teaching and Learning Practices: A Principal Component Approach
ERIC Educational Resources Information Center
Mukorera, Sophia; Nyatanga, Phocenah
2017-01-01
Students' attendance and engagement with teaching and learning practices is perceived as a critical element for academic performance. Even with stipulated attendance policies, students still choose not to engage. The study employed a principal component analysis to analyze first- and second-year students' perceptions of the importance of the 12…
ERIC Educational Resources Information Center
Hunley-Jenkins, Keisha Janine
2012-01-01
This qualitative study explores large, urban, mid-western principal perspectives about cyberbullying and the policy components and practices that they have found effective and ineffective at reducing its occurrence and/or negative effect on their schools' learning environments. More specifically, the researcher was interested in learning more…
Principal Component Analysis: Resources for an Essential Application of Linear Algebra
ERIC Educational Resources Information Center
Pankavich, Stephen; Swanson, Rebecca
2015-01-01
Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
Learning Principal Component Analysis by Using Data from Air Quality Networks
ERIC Educational Resources Information Center
Perez-Arribas, Luis Vicente; Leon-González, María Eugenia; Rosales-Conrado, Noelia
2017-01-01
With the final objective of using computational and chemometrics tools in the chemistry studies, this paper shows the methodology and interpretation of the Principal Component Analysis (PCA) using pollution data from different cities. This paper describes how students can obtain data on air quality and process such data for additional information…
Applications of Nonlinear Principal Components Analysis to Behavioral Data.
ERIC Educational Resources Information Center
Hicks, Marilyn Maginley
1981-01-01
An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)
ERIC Educational Resources Information Center
Hendrix, Dean
2010-01-01
This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…
Principal component analysis for protein folding dynamics.
Maisuradze, Gia G; Liwo, Adam; Scheraga, Harold A
2009-01-09
Protein folding is considered here by studying the dynamics of the folding of the triple beta-strand WW domain from the Formin-binding protein 28. Starting from the unfolded state and ending either in the native or nonnative conformational states, trajectories are generated with the coarse-grained united residue (UNRES) force field. The effectiveness of principal components analysis (PCA), an already established mathematical technique for finding global, correlated motions in atomic simulations of proteins, is evaluated here for coarse-grained trajectories. The problems related to PCA and their solutions are discussed. The folding and nonfolding of proteins are examined with free-energy landscapes. Detailed analyses of many folding and nonfolding trajectories at different temperatures show that PCA is very efficient for characterizing the general folding and nonfolding features of proteins. It is shown that the first principal component captures and describes in detail the dynamics of a system. Anomalous diffusion in the folding/nonfolding dynamics is examined by the mean-square displacement (MSD) and the fractional diffusion and fractional kinetic equations. The collisionless (or ballistic) behavior of a polypeptide undergoing Brownian motion along the first few principal components is accounted for.
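The core PCA machinery referenced above, diagonalizing the covariance of trajectory coordinates and projecting frames onto the leading eigenvector, can be sketched in a few lines of numpy. The trajectory here is synthetic (random drift plus noise standing in for a folding motion), not UNRES output, and the dimensions are illustrative.

```python
# PCA of a coarse-grained trajectory: eigendecomposition of the coordinate
# covariance gives the principal modes; projecting each frame onto the first
# eigenvector yields the PC1 time series whose mean-square displacement can
# then be examined for anomalous diffusion.
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_coords = 500, 30  # e.g. 10 residues x 3 Cartesian coordinates
drift = np.linspace(0, 5, n_frames)[:, None] * rng.normal(size=(1, n_coords))
traj = drift + rng.normal(0, 0.3, (n_frames, n_coords))

centered = traj - traj.mean(axis=0)
cov = centered.T @ centered / (n_frames - 1)
evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
order = evals.argsort()[::-1]            # reorder to descending
evals, evecs = evals[order], evecs[:, order]

pc1 = centered @ evecs[:, 0]             # projection onto the first PC
frac1 = evals[0] / evals.sum()           # variance captured by PC1
```

In this toy case the slow drift dominates, so the first principal component captures most of the variance, mirroring the observation that PC1 describes the system's dynamics in detail.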
Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters.
Tao, Dapeng; Lin, Xu; Jin, Lianwen; Li, Xuelong
2016-03-01
Chinese character font recognition (CCFR) has received increasing attention as intelligent applications based on optical character recognition become popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing in detail the basic strokes of Chinese characters, we propose that font recognition on a single Chinese character is a sequence classification problem that can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with 2-D long short-term memory (2DLSTM) and develop the principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component convolution layer helps remove noise and obtain rational, complete font information, and 2) 2DLSTM handles long-range contextual processing along the scan directions, helping to capture the contrast between character trajectory and background. Experiments using the frequently used CCFR dataset suggest the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.
Dynamic of consumer groups and response of commodity markets by principal component analysis
NASA Astrophysics Data System (ADS)
Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo
2017-09-01
This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in the 6-month timeframe display similar values during 2010-2011 but decline after 2012. A sharp drop in an eigenvalue implies a significant change in the market state. Three commodity sectors, energy, metals, and agriculture, are projected into a two-dimensional space spanned by the first two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled with one another and scattered during severe crises, such as the European sovereign debt crisis. We observe notable changes in the positions of the groups in this two-dimensional space during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and the change of state of one group can be used as a warning for the other groups.
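The eigenvalue diagnostic used above can be sketched with numpy on synthetic returns. The data here are fabricated with an engineered regime change (a strong common factor that weakens halfway through), standing in for the commodity futures returns; window lengths and factor loadings are assumptions.

```python
# Track the largest eigenvalue of the cross-correlation matrix of daily
# returns over two windows; a drop in the top eigenvalue signals weakening
# co-movement across the assets.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_assets = 252, 10
common = rng.normal(size=(n_days, 1))
# First half: strong common factor; second half: mostly idiosyncratic noise.
returns = np.vstack([
    0.9 * common[:126] + 0.3 * rng.normal(size=(126, n_assets)),
    0.2 * common[126:] + 1.0 * rng.normal(size=(126, n_assets)),
])

def top_eigenvalue(window):
    corr = np.corrcoef(window, rowvar=False)   # cross-correlation matrix
    return np.linalg.eigvalsh(corr)[-1]        # largest eigenvalue

lam_early = top_eigenvalue(returns[:126])
lam_late = top_eigenvalue(returns[126:])
```

In the strongly correlated window the top eigenvalue approaches the number of assets, while in the weakly correlated window it falls toward 1, reproducing the "sharp drop" signature described in the abstract.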
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were determined in N. roborowskii, of which V could not be detected. Na, K, and Ca were present at high concentrations. Ti showed the largest variance in content, while K showed the smallest. Four principal components were extracted from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. These results will provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
Lü, Gui-Cai; Zhao, Wei-Hong; Wang, Jiang-Tao
2011-01-01
Identification techniques for 10 species of red tide algae often found in the coastal areas of China were developed by combining the three-dimensional fluorescence spectra of fluorescent dissolved organic matter (FDOM) from cultured red tide algae with principal component analysis. Based on the results of the principal component analysis, the first principal component loading spectrum of the three-dimensional fluorescence spectrum was chosen as the identification characteristic spectrum for red tide algae, and the phytoplankton fluorescence characteristic spectrum band was established. The 10 algae species were then tested using Bayesian discriminant analysis, with a correct identification rate of more than 92% for Pyrrophyta at the species level and more than 75% for Bacillariophyta at the genus level, within which the correct identification rates were more than 90% for Phaeodactylum and Chaetoceros. The results showed that identification techniques for red tide algae based on the three-dimensional fluorescence spectra of FDOM from cultured algae combined with principal component analysis work well.
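The compress-then-classify scheme above can be sketched on toy data. The "spectra" below are synthetic random templates plus noise, not FDOM measurements, and scikit-learn's LinearDiscriminantAnalysis stands in for the Bayesian discriminant analysis used in the study; the dimensions and noise level are assumptions.

```python
# PCA compresses each "fluorescence spectrum" to a few scores, then a
# discriminant classifier assigns species labels from the scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n_per_class, n_features = 30, 60
templates = rng.normal(size=(3, n_features))   # 3 hypothetical "species" signatures
X = np.vstack([tpl + rng.normal(0, 0.4, (n_per_class, n_features))
               for tpl in templates])
y = np.repeat([0, 1, 2], n_per_class)

scores = PCA(n_components=2).fit_transform(X)  # low-dimensional features
clf = LinearDiscriminantAnalysis().fit(scores, y)
accuracy = clf.score(scores, y)                # in-sample identification rate
```

With well-separated templates the two principal components retain the between-class structure, so the discriminant step identifies essentially all samples; real spectra overlap more, which is why per-genus rates in the study vary.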
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
Hyperspectral optical imaging of human iris in vivo: characteristics of reflectance spectra
NASA Astrophysics Data System (ADS)
Medina, José M.; Pereira, Luís M.; Correia, Hélder T.; Nascimento, Sérgio M. C.
2011-07-01
We report a hyperspectral imaging system to measure the reflectance spectra of real human irises with high spatial resolution. A set of ocular prostheses was used as the control condition. Reflectance data were decorrelated by principal-component analysis. The main conclusion is that the spectral complexity of the human iris is considerable: between 9 and 11 principal components are necessary to account for 99% of the cumulative variance in human irises. Correcting image misalignments associated with spontaneous ocular movements did not influence this result. The data also suggest a correlation between the first principal component and the different levels of melanin present in the irises. It was also found that although the spectral characteristics of the first five principal components were not affected by the radial and angular position of the selected iridal areas, the higher-order components were, suggesting a possible influence of iris texture. The results show that hyperspectral imaging of the iris, together with adequate spectroscopic analysis, provides more information than conventional colorimetric methods, making hyperspectral imaging suitable for the characterization of melanin and the noninvasive diagnosis of ocular diseases and iris color.
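The "components needed for 99% cumulative variance" criterion used above is easy to compute from a fitted PCA. The sketch below uses synthetic low-rank spectra (5 latent sources plus small noise, with assumed dimensions), not iris reflectance data.

```python
# Count how many principal components are needed to reach 99% cumulative
# explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_pixels, n_bands = 300, 50
# Low-rank spectral structure (5 latent sources) plus small measurement noise.
sources = rng.normal(size=(n_pixels, 5)) @ rng.normal(size=(5, n_bands))
spectra = sources + rng.normal(0, 0.05, (n_pixels, n_bands))

pca = PCA().fit(spectra)
cumvar = np.cumsum(pca.explained_variance_ratio_)
# First index at which cumulative variance reaches 0.99, converted to a count.
n_components_99 = int(np.searchsorted(cumvar, 0.99) + 1)
```

Here the count recovers roughly the number of latent sources; for the iris data the same criterion yields 9 to 11 components, reflecting the greater spectral complexity of real tissue.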
Seeing wholes: The concept of systems thinking and its implementation in school leadership
NASA Astrophysics Data System (ADS)
Shaked, Haim; Schechter, Chen
2013-12-01
Systems thinking (ST) is an approach advocating thinking about any given issue as a whole, emphasising the interrelationships between its components rather than the components themselves. This article aims to link ST and school leadership, claiming that ST may enable school principals to develop highly performing schools that can cope successfully with current challenges, which are more complex than ever before in today's era of accountability and high expectations. The article presents the concept of ST - its definition, components, history and applications. Thereafter, its connection to education and its contribution to school management are described. The article concludes by discussing practical processes including screening for ST-skilled principal candidates and developing ST skills among prospective and currently performing school principals, pinpointing three opportunities for skills acquisition: during preparatory programmes; during their first years on the job, supported by veteran school principals as mentors; and throughout their entire career. Such opportunities may not only provide school principals with ST skills but also improve their functioning throughout the aforementioned stages of professional development.
A modified procedure for mixture-model clustering of regional geochemical data
Ellefsen, Karl J.; Smith, David B.; Horton, John D.
2014-01-01
A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
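The pipeline above (log-ratio transform, principal component reduction, then mixture-model clustering) can be sketched on toy compositional data. This is a hedged simplification: ordinary PCA is substituted for the robust principal component transformation, scikit-learn's GaussianMixture stands in for the paper's mixture model, and the two Dirichlet "geochemical groups" are fabricated.

```python
# Isometric log-ratio (ilr) transform -> PCA dimension reduction -> Gaussian
# mixture-model clustering of compositional (parts-sum-to-one) data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def ilr(x):
    """Isometric log-ratio transform of compositions (rows sum to 1)."""
    d = x.shape[1]
    logx = np.log(x)
    out = np.empty((x.shape[0], d - 1))
    for j in range(1, d):
        gm = logx[:, :j].mean(axis=1)  # log geometric mean of the first j parts
        out[:, j - 1] = np.sqrt(j / (j + 1)) * (gm - logx[:, j])
    return out

rng = np.random.default_rng(5)
# Two toy "geochemical" groups with different element proportions.
a = rng.dirichlet([8, 2, 1, 1], size=100)
b = rng.dirichlet([1, 1, 2, 8], size=100)
comp = np.vstack([a, b])

z = PCA(n_components=2).fit_transform(ilr(comp))           # reduced coordinates
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(z)
```

Transforming before clustering avoids the spurious correlations of raw closed compositions, which is the stability benefit the modified procedure targets; the subjective choices (number of clusters, number of components) remain, as the abstract notes.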
Temporal evolution of financial-market correlations.
Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S
2011-08-01
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.
2003-04-01
A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. NLPCA allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. NLPCA is first applied to a data set sampled from the Lorenz attractor (1963). It is found that the NLPCA approximations are more representative of the data than the corresponding PCA approximations. The same methodology was applied to the less well-known Lorenz attractor (1984); however, the results obtained were not as good as those attained with the famous 'butterfly' attractor. Further work with this model is underway to assess whether NLPCA techniques can represent the data characteristics better than the corresponding PCA approximations. The application of NLPCA to relatively simple dynamical systems, such as those proposed by Lorenz, is well understood; its application to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the explained variance. Finally, directions for future work are presented.
Xiao, Keke; Chen, Yun; Jiang, Xie; Zhou, Yan
2017-03-01
An investigation was conducted for 20 different types of sludge in order to identify the key organic compounds in extracellular polymeric substances (EPS) that are important in assessing variations of sludge filterability. The different types of sludge varied in initial total solids (TS) content, organic composition and pre-treatment methods. For instance, some of the sludges were pre-treated by acid, ultrasonic, thermal, alkaline, or advanced oxidation techniques. The Pearson's correlation results showed significant correlations between sludge filterability and zeta potential, pH, dissolved organic carbon, protein and polysaccharide in soluble EPS (SB EPS), loosely bound EPS (LB EPS) and tightly bound EPS (TB EPS). The principal component analysis (PCA) method was used to further explore correlations between variables and similarities among EPS fractions of different types of sludge. Two principal components were extracted: principal component 1 accounted for 59.24% of total EPS variations, while principal component 2 accounted for 25.46% of total EPS variations. Dissolved organic carbon, protein and polysaccharide in LB EPS showed higher eigenvector projection values than the corresponding compounds in SB EPS and TB EPS in principal component 1. Further characterization of fractionized key organic compounds in LB EPS was conducted with size-exclusion chromatography-organic carbon detection-organic nitrogen detection (LC-OCD-OND). A numerical multiple linear regression model was established to describe the relationship between organic compounds in LB EPS and sludge filterability. Copyright © 2016 Elsevier Ltd. All rights reserved.
QSAR modeling of flotation collectors using principal components extracted from topological indices.
Natarajan, R; Nirdosh, Inderjit; Basak, Subhash C; Mills, Denise R
2002-01-01
Several topological indices were calculated for substituted cupferrons that were tested as collectors for the froth flotation of uranium. Principal component analysis (PCA) was used for data reduction. Seven principal components (PCs) were found to account for 98.6% of the variance among the computed indices. The principal components thus extracted were used in stepwise regression analyses to construct regression models for the prediction of separation efficiencies (Es) of the collectors. A two-parameter model with a correlation coefficient of 0.889 and a three-parameter model with a correlation coefficient of 0.913 were formed. PCs were found to be better than the partition coefficient for forming regression equations, and the inclusion of an electronic parameter such as Hammett sigma or quantum mechanically derived electronic charges on the chelating atoms did not improve the correlation coefficient significantly. The method was extended to model the separation efficiencies of mercaptobenzothiazoles (MBT) and aminothiophenols (ATP) used in the flotation of lead and zinc ores, respectively. Five principal components were found to explain 99% of the data variability in each series. A three-parameter equation with a correlation coefficient of 0.985 and a two-parameter equation with a correlation coefficient of 0.926 were obtained for MBT and ATP, respectively. The amenability of separation efficiencies of chelating collectors to QSAR modeling using PCs based on topological indices might lead to the selection of collectors for synthesis and testing from a virtual database.
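The PCA-then-regression workflow described above can be sketched as follows. The descriptor matrix and response are synthetic stand-ins for the topological indices and separation efficiencies; only the component counts (7 retained PCs, a three-parameter model) echo the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 30 "collectors" described by 20 correlated "indices"
# driven by 3 latent factors, with a response depending on two of them.
rng = np.random.default_rng(2)
n_mols, n_indices = 30, 20
latent = rng.standard_normal((n_mols, 3))
mixing = rng.standard_normal((3, n_indices))
X = latent @ mixing + 0.1 * rng.standard_normal((n_mols, n_indices))
y = latent[:, 0] - 0.5 * latent[:, 1] + 0.05 * rng.standard_normal(n_mols)

# Standardize the indices, compress to 7 PCs (as in the paper), then fit
# a three-parameter regression model on the leading scores.
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=7).fit(Xs)
scores = pca.transform(Xs)

model = LinearRegression().fit(scores[:, :3], y)
r2 = model.score(scores[:, :3], y)
```

A stepwise procedure, as used in the paper, would instead search over which PCs to enter; the sketch simply takes the leading three.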
Akbari, Hamed; Macyszyn, Luke; Da, Xiao; Wolf, Ronald L.; Bilello, Michel; Verma, Ragini; O’Rourke, Donald M.
2014-01-01
Purpose To augment the analysis of dynamic susceptibility contrast material–enhanced magnetic resonance (MR) images to uncover unique tissue characteristics that could potentially facilitate treatment planning through a better understanding of the peritumoral region in patients with glioblastoma. Materials and Methods Institutional review board approval was obtained for this study, with waiver of informed consent for retrospective review of medical records. Dynamic susceptibility contrast-enhanced MR imaging data were obtained for 79 patients, and principal component analysis was applied to the perfusion signal intensity. The first six principal components were sufficient to characterize more than 99% of variance in the temporal dynamics of blood perfusion in all regions of interest. The principal components were subsequently used in conjunction with a support vector machine classifier to create a map of heterogeneity within the peritumoral region, and the variance of this map served as the heterogeneity score. Results The calculated principal components allowed near-perfect separability of tissue that was likely highly infiltrated with tumor and tissue that was unlikely infiltrated with tumor. The heterogeneity map created by using the principal components showed a clear relationship between voxels judged by the support vector machine to be highly infiltrated and subsequent recurrence. The results demonstrated a significant correlation (r = 0.46, P < .0001) between the heterogeneity score and patient survival. The hazard ratio was 2.23 (95% confidence interval: 1.4, 3.6; P < .01) between patients with high and low heterogeneity scores on the basis of the median heterogeneity score. Conclusion Analysis of dynamic susceptibility contrast-enhanced MR imaging data by using principal component analysis can help identify imaging variables that can be subsequently used to evaluate the peritumoral region in glioblastoma. 
These variables are potentially indicative of tumor infiltration and may become useful tools in guiding therapy, as well as individualized prognostication. © RSNA, 2014 PMID:24955928
Autonomous learning in gesture recognition by using lobe component analysis
NASA Astrophysics Data System (ADS)
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, the performance of gesture recognition is largely affected by 1) feature selection (or model establishment) and 2) training from samples. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Owing to the LCA method's balance between learning global and local features, a large number of samples can be used in learning efficiently.
Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R
2010-01-01
The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR(msr)) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads. 
The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
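The idea of reconstructing spectra from an increasing number of principal components and tracking the mean SNR over the spectral range can be illustrated on synthetic data. The "spectra" below (a single Gaussian band with per-spectrum amplitude variation plus white noise) and all parameters are assumptions for the sketch, not Raman measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
wavenumber = np.linspace(0.0, 1.0, 200)
band = np.exp(-((wavenumber - 0.5) ** 2) / 0.01)   # one spectral band
amps = 1.0 + 0.5 * rng.standard_normal(100)        # per-spectrum amplitude
signal = np.outer(amps, band)
spectra = signal + 0.1 * rng.standard_normal((100, 200))

pca = PCA().fit(spectra)
scores = (spectra - pca.mean_) @ pca.components_.T

def reconstruct(k):
    """Rebuild the spectra from the first k principal components."""
    return pca.mean_ + scores[:, :k] @ pca.components_[:k]

def mean_snr(recon):
    """Mean SNR over the whole spectral range, against the known clean signal."""
    return float(np.mean(np.abs(signal)) / np.std(recon - signal))

snr_1 = mean_snr(reconstruct(1))    # the first PC carries most of the band
snr_50 = mean_snr(reconstruct(50))  # later PCs mostly re-inject noise
```

With the clean signal known, the SNR of the reconstruction degrades as noise-dominated PCs are added, which is the behaviour the paper exploits to choose a cutoff for discriminant analysis.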
Principal component reconstruction (PCR) for cine CBCT with motion learning from 2D fluoroscopy.
Gao, Hao; Zhang, Yawei; Ren, Lei; Yin, Fang-Fang
2018-01-01
This work aims to generate cine CT images (i.e., 4D images with high-temporal resolution) based on a novel principal component reconstruction (PCR) technique with motion learning from 2D fluoroscopic training images. In the proposed PCR method, the matrix factorization is utilized as an explicit low-rank regularization of 4D images that are represented as a product of spatial principal components and temporal motion coefficients. The key hypothesis of PCR is that temporal coefficients from 4D images can be reasonably approximated by temporal coefficients learned from 2D fluoroscopic training projections. For this purpose, we can acquire fluoroscopic training projections for a few breathing periods at fixed gantry angles that are free from geometric distortion due to gantry rotation, that is, fluoroscopy-based motion learning. Such training projections can provide an effective characterization of the breathing motion. The temporal coefficients can be extracted from these training projections and used as priors for PCR, even though principal components from training projections are certainly not the same for these 4D images to be reconstructed. To this end, training data are synchronized with reconstruction data using identical real-time breathing position intervals for projection binning. In terms of image reconstruction, with a priori temporal coefficients, the data fidelity for PCR changes from nonlinear to linear, and consequently, the PCR method is robust and can be solved efficiently. PCR is formulated as a convex optimization problem with the sum of linear data fidelity with respect to spatial principal components and spatiotemporal total variation regularization imposed on 4D image phases. The solution algorithm of PCR is developed based on the alternating direction method of multipliers. The implementation is fully parallelized on GPU with the NVIDIA CUDA toolbox, and each reconstruction takes a few minutes.
The proposed PCR method is validated and compared with a state-of-the-art method, namely PICCS, using both simulation and experimental data with the on-board cone-beam CT setting. The results demonstrated the feasibility of PCR for cine CBCT and significantly improved reconstruction quality of PCR over PICCS for cine CBCT. With a priori estimated temporal motion coefficients using fluoroscopic training projections, the PCR method can accurately reconstruct spatial principal components, and then generate cine CT images as a product of temporal motion coefficients and spatial principal components. © 2017 American Association of Physicists in Medicine.
ERIC Educational Resources Information Center
Butz, Martin V.; Herbort, Oliver; Hoffmann, Joachim
2007-01-01
Autonomously developing organisms face several challenges when learning reaching movements. First, motor control is learned unsupervised or self-supervised. Second, knowledge of sensorimotor contingencies is acquired in contexts in which action consequences unfold in time. Third, motor redundancies must be resolved. To solve all 3 of these…
Bilingual Lexical Interactions in an Unsupervised Neural Network Model
ERIC Educational Resources Information Center
Zhao, Xiaowei; Li, Ping
2010-01-01
In this paper we present an unsupervised neural network model of bilingual lexical development and interaction. We focus on how the representational structures of the bilingual lexicons can emerge, develop, and interact with each other as a function of the learning history. The results show that: (1) distinct representations for the two lexicons…
Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi
2017-01-01
In the long term, performance degradation of pattern recognition-based myoelectric control methods is caused by a variety of interfering factors. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least-squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results of realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824
ERIC Educational Resources Information Center
Lin, Mind-Dih
2012-01-01
Improving principal leadership is a vital component to the success of educational reform initiatives that seek to improve whole-school performance, as principal leadership often exercises positive but indirect effects on student learning. Because of the importance of principals within the field of school improvement, this article focuses on…
ERIC Educational Resources Information Center
Herrmann, Mariesa; Ross, Christine
2016-01-01
States and districts across the country are implementing new principal evaluation systems that include measures of the quality of principals' school leadership practices and measures of student achievement growth. Because these evaluation systems will be used for high-stakes decisions, it is important that the component measures of the evaluation…
ERIC Educational Resources Information Center
Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann; Mette, Ian M.
2015-01-01
This study examined the perspectives of novice and late career principals concerning instructional and organizational leadership within their performance evaluations. An online survey was sent to 251 principals with a return rate of 49%. Instructional leadership components of the evaluation that were most important to all principals were:…
Hadley, Wendy; Houck, Christopher D; Barker, David; Senocak, Natali
2015-06-01
The purpose of this study was to examine the moderating influence of parental monitoring (e.g., unsupervised time with opposite sex peers) and adolescent emotional competence on sexual behaviors, among a sample of at-risk early adolescents. This study included 376 seventh-grade adolescents (age, 12-14 years) with behavioral or emotional difficulties. Questionnaires were completed on private laptop computers and assessed adolescent Emotional Competence (including Regulation and Negativity/Lability), Unsupervised Time, and a range of Sexual Behaviors. Generalized linear models were used to evaluate the independent and combined influence of Emotional Competency and Unsupervised Time on adolescent report of Sexual Behaviors. Analyses were stratified by gender to account for the notable gender differences in the targeted moderators and outcome variables. Findings indicated that more unsupervised time was a risk factor for all youth but was influenced by an adolescent's ability to regulate their emotions. Specifically, for males and females, poorer Emotion Regulation was associated with having engaged in a greater variety of Sexual Behaviors. However, lower Negativity/Lability and >1× per week Unsupervised Time were associated with a higher number of sexual behaviors among females only. Based on the findings of this study, a lack of parental supervision seems to be particularly problematic for both male and female adolescents with poor emotion regulation abilities. It may be important to improve both emotion regulation abilities and parental knowledge and skills associated with effective monitoring to reduce risk-taking for these youth.
A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images
Tang, Yunwei; Jing, Linhai; Ding, Haifeng
2017-01-01
The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416
ERIC Educational Resources Information Center
Chou, Yeh-Tai; Wang, Wen-Chung
2010-01-01
Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that an eigenvalue greater than 1.5 for the first eigenvalue signifies a violation of unidimensionality when there…
ERIC Educational Resources Information Center
Brusco, Michael J.; Singh, Renu; Steinley, Douglas
2009-01-01
The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…
Relaxation mode analysis of a peptide system: comparison with principal component analysis.
Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi
2011-10-28
This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, principal component analysis is a well-known method for analyzing the static properties of fluctuations of structures obtained by a simulation and classifying the structures into groups. On the other hand, relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in the gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.
1984-01-01
A field of measured anomalies of some physical variable relative to their time averages is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
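A plain-SVD sketch of PCA-weighted stacking (the paper's contribution is a fast PCA variant, which this toy version does not attempt): the leading left singular vector supplies per-trace weights, so coherent traces dominate the stack. The synthetic gather below is an illustrative assumption.

```python
import numpy as np

def pca_stack(gather):
    """Weighted stack of a gather (n_traces x n_samples) using the first
    principal component: the leading left singular vector gives per-trace
    weights, so coherent traces count more than noise-dominated ones."""
    u, s, vt = np.linalg.svd(gather, full_matrices=False)
    w = np.abs(u[:, 0])          # abs removes the SVD sign ambiguity
    w /= w.sum()
    return w @ gather

# Synthetic gather: one windowed sinusoid repeated on 40 traces plus noise.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 500)
wavelet = np.sin(2.0 * np.pi * 25.0 * t) * np.exp(-((t - 0.3) ** 2) / 0.002)
gather = wavelet + 0.3 * rng.standard_normal((40, 500))

stacked = pca_stack(gather)
plain = gather.mean(axis=0)      # the traditional average stack, for comparison
```

With equal-quality traces the two stacks are similar; the PCA weights matter when some traces are much noisier than others, since those traces receive smaller weights.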
Color normalization of histology slides using graph regularized sparse NMF
NASA Astrophysics Data System (ADS)
Sha, Lingdao; Schonfeld, Dan; Sethi, Amit
2017-03-01
Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Given that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods, such as PCA, ICA, NMF and SNMF, fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distance of the pixel to pixels in the neighborhood graph. Using the color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
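The stain-separation core of such methods can be sketched with plain NMF on synthetic optical-density data; the paper's GSNMF adds a sparsity prior and a graph-Laplacian regularizer on top of this basic factorization. The stain vectors and concentrations below are made-up illustrative values.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical stain color vectors (RGB optical density) for two stains.
rng = np.random.default_rng(5)
stains_true = np.array([[0.65, 0.70, 0.29],     # haematoxylin-like OD triple
                        [0.07, 0.99, 0.11]])    # eosin-like OD triple
conc_true = rng.random((1000, 2))               # per-pixel concentrations
od = conc_true @ stains_true                    # Beer-Lambert: OD = C . S

# Nonnegativity matches the physics: concentrations and stain ODs are >= 0.
model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
conc = model.fit_transform(od)                  # estimated concentrations
stains = model.components_                      # estimated stain OD vectors
rel_err = float(np.linalg.norm(od - conc @ stains) / np.linalg.norm(od))
```

Color normalization then replaces the estimated stain matrix with a target slide's stain matrix while keeping the per-pixel concentrations.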
Unsupervised discovery of information structure in biomedical documents.
Kiela, Douwe; Guo, Yufan; Stenius, Ulla; Korhonen, Anna
2015-04-01
Information structure (IS) analysis is a text mining technique, which classifies text in biomedical articles into categories that capture different types of information, such as objectives, methods, results and conclusions of research. It is a highly useful technique that can support a range of Biomedical Text Mining tasks and can help readers of biomedical literature find information of interest faster, accelerating the highly time-consuming process of literature review. Several approaches to IS analysis have been presented in the past, with promising results in real-world biomedical tasks. However, all existing approaches, even weakly supervised ones, require several hundreds of hand-annotated training sentences specific to the domain in question. Because biomedicine is subject to considerable domain variation, such annotations are expensive to obtain. This makes the application of IS analysis across biomedical domains difficult. In this article, we investigate an unsupervised approach to IS analysis and evaluate the performance of several unsupervised methods on a large corpus of biomedical abstracts collected from PubMed. Our best unsupervised algorithm (multilevel-weighted graph clustering algorithm) performs very well on the task, obtaining over 0.70 F scores for most IS categories when applied to well-known IS schemes. This level of performance is close to that of lightly supervised IS methods and has proven sufficient to aid a range of practical tasks. Thus, using an unsupervised approach, IS could be applied to support a wide range of tasks across sub-domains of biomedicine. We also demonstrate that unsupervised learning brings novel insights into IS of biomedical literature and discovers information categories that are not present in any of the existing IS schemes. The annotated corpus and software are available at http://www.cl.cam.ac.uk/∼dk427/bio14info.html. © The Author 2014. Published by Oxford University Press. All rights reserved. 
For Permissions, please e-mail: journals.permissions@oup.com.
Smart, Daniel J; Gill, Nicholas D
2013-03-01
The aims of the study were to determine if a supervised off-season conditioning program enhanced gains in physical characteristics compared with the same program performed in an unsupervised manner and to establish the persistence of the physical changes after a 6-month unsupervised competition period. Forty-four provincial representative adolescent rugby union players (age, mean ± SD, 15.3 ± 1.3 years) participated in a 15-week off-season conditioning program either under supervision from an experienced strength and conditioning coach or unsupervised. Measures of body composition, strength, vertical jump, speed, and anaerobic and aerobic running performance were taken before, immediately after, and 6 months after the conditioning. After the conditioning program, the supervised group had greater improvements in all strength measures than the unsupervised group, with small, moderate and large differences between the groups' changes for chin-ups (9.1%; ± 11.6%), bench-press (16.9%; ± 11.7%) and box-squat (50.4%; ± 20.9%) estimated 1RM, respectively. Both groups showed trivial increases in mass; however, increases in fat-free mass were small and trivial for supervised and unsupervised players, respectively. Strength declined in the supervised group while the unsupervised group had small increases during the competition phase, resulting in only a small difference between the long-term changes in box-squat 1RM (15.9%; ± 13.2%). The supervised group had further small increases in fat-free mass, resulting in a small difference (2.4%; ± 2.7%) in the long-term changes. The postconditioning differences between the 2 groups may have been a result of increased adherence and the attainment of higher training loads during supervised training. The lack of differences in strength after the competition period indicates that supervision should be maintained to reduce substantial decrements in performance.
Wongchai, C; Chaidee, A; Pfeiffer, W
2012-01-01
Global warming increases plant salt stress via evaporation after irrigation, but how plant cells sense salt stress remains unknown. Here, we searched for correlation-based targets of salt stress sensing in Chenopodium rubrum cell suspension cultures. We proposed a linkage between the sensing of salt stress and the sensing of distinct metabolites. Consequently, we analysed various extracellular pH signals in autotroph and heterotroph cell suspensions. Our search included signals after 52 treatments: salt and osmotic stress, ion channel inhibitors (amiloride, quinidine), salt-sensing modulators (proline), amino acids, carboxylic acids and regulators (salicylic acid, 2,4-dichlorophenoxyacetic acid). Multivariate analyses revealed hierarchical clusters of signals and five principal components of extracellular proton flux. The principal component correlated with salt stress was an antagonism of γ-aminobutyric acid and salicylic acid, confirming involvement of acid-sensing ion channels (ASICs) in salt stress sensing. Proline, short non-substituted mono-carboxylic acids (C2-C6), lactic acid and amiloride characterised the four uncorrelated principal components of proton flux. The proline-associated principal component included an antagonism of 2,4-dichlorophenoxyacetic acid and a set of amino acids (hydrophobic, polar, acidic, basic). The five principal components captured 100% of the variance of extracellular proton flux. Thus, a bias-free, functional high-throughput screening was established to extract new clusters of response elements and potential signalling pathways, and to serve as a core for quantitative meta-analysis in plant biology. The eigenvectors reorient research, associating proline with development instead of salt stress, and the proof of existence of multiple components of proton flux can help to resolve controversy about the acid growth theory. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
Surzhikov, V D; Surzhikov, D V
2014-01-01
The search for and measurement of causal relationships between exposure to air pollution and the health state of the population is based on system analysis and risk assessment to improve the quality of research. For this purpose, modern statistical analysis is applied, using criteria of independence, principal component analysis and discriminant function analysis. As a result of the analysis, four main components were separated from all atmospheric pollutants: for diseases of the circulatory system, the main principal component is associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component is closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and carbon black. The discriminant function was shown to be usable as a measure of the level of air pollution.
Priority of VHS Development Based in Potential Area using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Meirawan, D.; Ana, A.; Saripudin, S.
2018-02-01
The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on the development of regional potential by using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative data analysis based on secondary data reduction with principal components. The method used is PCA, with the Minitab statistics software as the analysis tool. The results indicate that the areas with the lowest scores are the priority for construction of new VHS, with programs of majors matched to the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value (416.92) in area 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value.
Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S
2015-10-09
A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. With this approach, genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to propose an application of these dimensionality-reduction methods to GWS of carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal and independent component methods, which provided the most accurate genomic breeding estimates for most carcass traits in pigs.
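Principal component regression, one of the dimensionality-reduction methods the abstract lists, can be sketched as: project the many correlated markers onto a few leading components, then regress the phenotype on the component scores. Everything below is a synthetic toy; the marker counts, retained-component count and effect sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical marker matrix: 100 genotyped animals x 500 SNP markers
# (many more markers than individuals, as in genome-wide selection),
# with an invented phenotype driven by the first 10 markers.
X = rng.integers(0, 3, size=(100, 500)).astype(float)
beta = np.zeros(500)
beta[:10] = 1.0
y = X @ beta + rng.normal(size=100)

# Principal component regression: regress the phenotype on the scores
# of the leading principal components instead of on the raw markers.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 20                             # retained components (arbitrary choice)
T = Xc @ Vt[:k].T                  # 100 x 20 score matrix
coef, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
y_hat = T @ coef + y.mean()
```

The regression is now on 20 orthogonal predictors instead of 500 collinear ones, which is exactly the trade-off that makes these methods attractive when markers outnumber individuals.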
ERIC Educational Resources Information Center
National Association of Secondary School Principals, Reston, VA.
Preparation programs for principals should have excellent academic and performance-based components. In examining the nature of performance-based principal preparation, this report finds that school administration programs must bridge the gap between conceptual learning in the classroom and the requirements of professional practice. A number of…
Principal component greenness transformation in multitemporal agricultural Landsat data
NASA Technical Reports Server (NTRS)
Abotteen, R. A.
1978-01-01
A data compression technique for multitemporal Landsat imagery which extracts phenological growth pattern information for agricultural crops is described. The principal component greenness transformation was applied to multitemporal agricultural Landsat data for information retrieval. The transformation was favorable for applications in agricultural Landsat data analysis because of its physical interpretability and its relation to the phenological growth of crops. It was also found that the first and second greenness eigenvector components define a temporal small-grain trajectory and nonsmall-grain trajectory, respectively.
ERIC Educational Resources Information Center
Amershi, Saleema; Conati, Cristina
2009-01-01
In this paper, we present a data-based user modeling framework that uses both unsupervised and supervised classification to build student models for exploratory learning environments. We apply the framework to build student models for two different learning environments and using two different data sources (logged interface and eye-tracking data).…
Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation
ERIC Educational Resources Information Center
Hinton, Geoffrey; Osindero, Simon; Welling, Max; Teh, Yee-Whye
2006-01-01
We describe a way of modeling high-dimensional data vectors by using an unsupervised, nonlinear, multilayer neural network in which the activity of each neuron-like unit makes an additive contribution to a global energy score that indicates how surprised the network is by the data vector. The connection weights that determine how the activity of…
ERIC Educational Resources Information Center
Protopapas, Athanassios; Skaloumbakas, Christos; Bali, Persefoni
2008-01-01
After reviewing past efforts related to computer-based reading disability (RD) assessment, we present a fully automated screening battery that evaluates critical skills relevant for RD diagnosis designed for unsupervised application in the Greek educational system. Psychometric validation in 301 children, 8-10 years old (grades 3 and 4; including…
Unsupervised classification of remote multispectral sensing data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The new unsupervised classification technique for classifying multispectral remote sensing data, which can come either from the multispectral scanner or from digitized color-separation aerial photographs, consists of two parts: (a) a sequential statistical clustering, which is a one-pass sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. Applications of the technique using an IBM-7094 computer on multispectral data sets over Purdue's Flight Line C-1 and the Yellowstone National Park test site have been accomplished. Comparisons between the classification maps produced by the unsupervised technique and by the supervised maximum likelihood technique indicate that the classification accuracies are in agreement.
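The two-stage scheme the abstract describes, initial clusters from a first pass refined by generalized K-means, can be sketched in numpy. The pixel data, the crude initialisation, and the iteration count below are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical two-band multispectral pixels from three ground covers
# (60 pixels each) -- synthetic stand-ins for scanner data.
X = np.vstack([rng.normal(m, 0.3, size=(60, 2)) for m in (0.0, 2.0, 4.0)])

def kmeans(X, centers, iters=20):
    """Part (b): generalized K-means refinement of initial clusters."""
    for _ in range(iters):
        d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)          # nearest-centre assignment
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Part (a) stand-in: a crude first pass supplies the initial clusters;
# here simply one seed pixel drawn from each region.
init = X[[0, 60, 120]]
labels, centers = kmeans(X, init)
```

The point of the composite design is that part (a) only has to land one seed per spectral class; the iterative refinement in part (b) then pulls each centre onto its class mean.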
NASA Astrophysics Data System (ADS)
Serb, Alexander; Bill, Johannes; Khiat, Ali; Berdan, Radu; Legenstein, Robert; Prodromakis, Themis
2016-09-01
In an increasingly data-rich world, the need for developing computing systems that can not only process but ideally also interpret big data is becoming continuously more pressing. Brain-inspired concepts have shown great promise towards addressing this need. Here we demonstrate unsupervised learning in a probabilistic neural network that utilizes metal-oxide memristive devices as multi-state synapses. Our approach can be exploited for processing unlabelled data and can adapt to time-varying clusters that underlie incoming data by supporting the capability of reversible unsupervised learning. The potential of this work is showcased through the demonstration of successful learning in the presence of corrupted input data and probabilistic neurons, thus paving the way towards robust big-data processors.
Pintus, M A; Gaspa, G; Nicolazzi, E L; Vicario, D; Rossoni, A; Ajmone-Marsan, P; Nardone, A; Dimauro, C; Macciotta, N P P
2012-06-01
The large number of markers available compared with phenotypes represents one of the main issues in genomic selection. In this work, principal component analysis was used to reduce the number of predictors for calculating genomic breeding values (GEBV). Bulls of 2 cattle breeds farmed in Italy (634 Brown and 469 Simmental) were genotyped with the 54K Illumina beadchip (Illumina Inc., San Diego, CA). After data editing, 37,254 and 40,179 single nucleotide polymorphisms (SNP) were retained for Brown and Simmental, respectively. Principal component analysis carried out on the SNP genotype matrix extracted 2,257 and 3,596 new variables in the 2 breeds, respectively. Bulls were sorted by birth year to create reference and prediction populations. The effect of principal components on deregressed proofs in reference animals was estimated with a BLUP model. Results were compared with those obtained by using SNP genotypes as predictors with either the BLUP or Bayes_A method. Traits considered were milk, fat, and protein yields, fat and protein percentages, and somatic cell score. The GEBV were obtained for prediction population by blending direct genomic prediction and pedigree indexes. No substantial differences were observed in squared correlations between GEBV and EBV in prediction animals between the 3 methods in the 2 breeds. The principal component analysis method allowed for a reduction of about 90% in the number of independent variables when predicting direct genomic values, with a substantial decrease in calculation time and without loss of accuracy. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Karpuzcu, M Ekrem; Fairbairn, David; Arnold, William A; Barber, Brian L; Kaufenberg, Elizabeth; Koskinen, William C; Novak, Paige J; Rice, Pamela J; Swackhamer, Deborah L
2014-01-01
Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in Southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (PC1) was attributed to urban wastewater-derived sources, including municipal wastewater and residential septic tank effluents, while Principal Component 2 (PC2) was attributed to agricultural sources. The variances of the concentrations of cotinine, DEET and the prescription drugs carbamazepine, erythromycin and sulfamethoxazole were best explained by PC1, while the variances of the concentrations of the agricultural pesticides atrazine, metolachlor and acetochlor were best explained by PC2. Mixed use compounds carbaryl, iprodione and daidzein did not specifically group with either PC1 or PC2. Furthermore, despite the fact that caffeine and acetaminophen have been historically associated with human use, they could not be attributed to a single dominant land use category (e.g., urban/residential or agricultural). Contributions from septic systems did not clarify the source for these two compounds, suggesting that additional sources, such as runoff from biosolid-amended soils, may exist. Based on these results, PCA may be a useful way to broadly categorize the sources of new and previously uncharacterized emerging contaminants or may help to clarify transport pathways in a given area. Acetaminophen and caffeine were not ideal markers for urban/residential contamination sources in the study area and may need to be reconsidered as such in other areas as well.
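The source-attribution step described above, assigning each contaminant to the principal component that best explains its variance, amounts to inspecting loadings. A minimal numpy sketch, using an invented concentration matrix with two built-in factors rather than the watershed data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical concentration matrix: 40 water samples x 6 contaminants.
# Columns 0-2 share an "urban" factor, columns 3-5 a weaker
# "agricultural" factor -- invented structure for illustration only.
urban = rng.normal(size=(40, 1))
agri = 0.6 * rng.normal(size=(40, 1))
X = np.hstack([urban + 0.1 * rng.normal(size=(40, 3)),
               agri + 0.1 * rng.normal(size=(40, 3))])

# PCA on centred data; attribute each contaminant to whichever of the
# first two principal components carries its largest absolute loading.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
source = np.argmax(np.abs(Vt[:2]), axis=0)  # 0 -> PC1, 1 -> PC2
```

A compound like the study's carbaryl, loading comparably on both components, would show near-equal magnitudes in the two rows of `Vt[:2]` and resist this hard assignment.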
Sparse modeling of spatial environmental variables associated with asthma
Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.
2014-01-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437
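Sparse PCA, the first stage of the SASEA workflow above, comes in several formulations; a crude stand-in that conveys the idea, power iteration on the covariance with soft-thresholding so most loadings are driven exactly to zero, can be written as follows (the matrix, threshold and dimensions are all hypothetical, not the block-group data):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical block-group matrix: 200 areas x 30 environmental
# variables; only the first 5 variables share a common signal.
X = rng.normal(size=(200, 30))
X[:, :5] += rng.normal(size=(200, 1))

# Crude sparse-PCA stand-in: power iteration on the covariance matrix,
# soft-thresholding the loading vector at each step to zero small entries.
C = np.cov(X, rowvar=False)
v = np.ones(30) / np.sqrt(30)
for _ in range(50):
    v = C @ v
    v /= np.linalg.norm(v)
    v = np.sign(v) * np.maximum(np.abs(v) - 0.05, 0.0)  # soft threshold
    v /= np.linalg.norm(v)
```

The payoff is interpretability: a sparse component names a handful of variables (the study's "dog ownership" or "household size" groupings) instead of a dense mixture of a thousand.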
Hua, Yang; Liu, Zhanqiang
2018-05-24
Residual stresses of a turned Inconel 718 surface along its axial and circumferential directions affect the fatigue performance of machined components. However, it has not been clear whether the axial and circumferential directions coincide with the principal residual stress directions. The direction of the maximum principal residual stress is crucial for the machined component's service life. The present work focuses on determining the direction and magnitude of the principal residual stress and investigating its influence on the fatigue performance of turned Inconel 718. The turning experimental results show that the principal residual stress magnitude is much higher than the surface residual stress. In addition, both the principal residual stress and the surface residual stress increase significantly as the feed rate increases. The fatigue test results show that when the direction of the maximum principal residual stress increased by 7.4%, the fatigue life decreased by 39.4%, whereas when the maximum principal residual stress magnitude diminished by 17.9%, the fatigue life increased by 83.6%. The maximum principal residual stress has a preponderant influence on fatigue performance as compared to the surface residual stress, and can be considered a prime indicator for evaluating the influence of residual stress on the fatigue performance of turned Inconel 718.
Principal component analysis for designed experiments.
Konishi, Tomokazu
2015-01-01
Principal component analysis is used to summarize matrix data, such as is found in transcriptome, proteome or metabolome studies and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within the data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components beyond experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. 
Together, these introduced options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models that specify each of the axes.
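The abstract's central option, identifying principal axes on a training set and sharing them so that unknown samples can be scored against pre-arranged axes, can be sketched as follows (the groups, sizes and separation below are invented, not the microarray data):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical training set with two designed groups (say control vs
# treated), 10 samples x 5 measured items each; values are invented.
control = rng.normal(0.0, 1.0, size=(10, 5))
treated = rng.normal(3.0, 1.0, size=(10, 5))
train = np.vstack([control, treated])

# Identify the principal axes on the training set only...
center = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - center, full_matrices=False)
axes = Vt[:2]                      # shared, pre-arranged axes

# ...then score an unknown sample against those fixed axes.
unknown = rng.normal(3.0, 1.0, size=(1, 5))
score = (unknown - center) @ axes.T
```

Because `center` and `axes` are frozen from the training design rather than re-estimated per experiment, scores from different experiments land in the same coordinate system and can be compared directly, which is the generality the authors are after.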
B. Desta Fekedulegn; J.J. Colbert; R.R., Jr. Hicks; Michael E. Schuckers
2002-01-01
The theory and application of principal components regression, a method for coping with multicollinearity among independent variables in analyzing ecological data, is exhibited in detail. A concrete example of the complex procedures that must be carried out in developing a diagnostic growth-climate model is provided. We use tree radial increment data taken from breast...
ERIC Educational Resources Information Center
Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad
2017-01-01
This study aims to apply principal component analysis to reduce multicollinearity among the exchange rates of eight Asian currencies against the US Dollar: the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia) and Dollar (Singapore). It looks at yield…
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
Principal component analysis of Raman spectra for TiO2 nanoparticle characterization
NASA Astrophysics Data System (ADS)
Ilie, Alina Georgiana; Scarisoareanu, Monica; Morjan, Ion; Dutu, Elena; Badiceanu, Maria; Mihailescu, Ion
2017-09-01
The Raman spectra of anatase/rutile mixed phases of Sn-doped TiO2 nanoparticles and undoped TiO2 nanoparticles, synthesised by laser pyrolysis, with nanocrystallite dimensions varying from 8 to 28 nm, were simultaneously processed with self-written software that applies Principal Component Analysis (PCA) to the measured spectra to verify the possibility of objective auto-characterization of nanoparticles from their vibrational modes. The photo-excited process of Raman scattering is highly sensitive to the material characteristics, especially in the case of nanomaterials, where more properties become relevant for the vibrational behaviour. We used PCA, a statistical procedure that performs eigenvalue decomposition of the data covariance, to automatically analyse each sample's measured Raman spectrum and to infer the correlation between nanoparticle dimensions, tin and carbon concentration, and their Principal Component values (PCs). This type of application can allow an approximation of the crystallite size, or tin concentration, only by measuring the Raman spectrum of the sample. The study of the loadings of the principal components provides information on the way the vibrational modes are affected by the nanoparticle features and on the spectral area relevant for the classification.
Sebro, Ronnie; Hoffman, Thomas J.; Lange, Christoph; Rogus, John J.; Risch, Neil J.
2013-01-01
Population stratification leads to a predictable phenomenon—a reduction in the number of heterozygotes compared to that calculated assuming Hardy-Weinberg Equilibrium (HWE). We show that population stratification results in another phenomenon—an excess in the proportion of spouse-pairs with the same genotypes at all ancestrally informative markers, resulting in ancestrally related positive assortative mating. We use principal components analysis to show that there is evidence of population stratification within the Framingham Heart Study, and show that the first principal component correlates with a North-South European cline. We then show that the first principal component is highly correlated between spouses (r=0.58, p=0.0013), demonstrating that there is ancestrally related positive assortative mating among the Framingham Caucasian population. We also show that the single nucleotide polymorphisms loading most heavily on the first principal component show an excess of homozygotes within the spouses, consistent with similar ancestry-related assortative mating in the previous generation. This nonrandom mating likely affects genetic structure seen more generally in the North American population of European descent today, and decreases the rate of decay of linkage disequilibrium for ancestrally informative markers. PMID:20842694
Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C
2016-02-01
Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming future with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production, and to find out the attributes that govern much of the variation in sensory scores of this product, using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant (p < 0.05) differences in sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4 % of the variation in the sensory data. Factor scores on each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the attributes of cham-cham that contribute most to its sensory acceptability.
Mahler, Barbara J.
2008-01-01
The statistical analyses taken together indicate that the geochemistry at the freshwater-zone wells is more variable than that at the transition-zone wells. The geochemical variability at the freshwater-zone wells might result from dilution of ground water by meteoric water. This is indicated by relatively constant major ion molar ratios; a preponderance of positive correlations between SC, major ions, and trace elements; and a principal components analysis in which the major ions are strongly loaded on the first principal component. Much of the variability at three of the four transition-zone wells might result from the use of different laboratory analytical methods or reporting procedures during the period of sampling. This is reflected by a lack of correlation between SC and major ion concentrations at the transition-zone wells and by a principal components analysis in which the variability is fairly evenly distributed across several principal components. The statistical analyses further indicate that, although the transition-zone wells are less well connected to surficial hydrologic conditions than the freshwater-zone wells, there is some connection but the response time is longer.
Matsen IV, Frederick A.; Evans, Steven N.
2013-01-01
Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415
Time Management Ideas for Assistant Principals.
ERIC Educational Resources Information Center
Cronk, Jerry
1987-01-01
Prioritizing the use of time, effective communication, delegating authority, having detailed job descriptions, and good secretarial assistance are important components of time management for assistant principals. (MD)
McSherry, Wilfred
2006-07-01
The aim of this study was to generate a deeper understanding of the factors and forces that may inhibit or advance the concepts of spirituality and spiritual care within both nursing and health care. This manuscript presents a model that emerged from a qualitative study using grounded theory. Implementation and use of this model may assist all health care practitioners and organizations to advance the concepts of spirituality and spiritual care within their own sphere of practice. The model has been termed the principal components model because participants identified six components as being crucial to the advancement of spiritual health care. Grounded theory was used meaning that there was concurrent data collection and analysis. Theoretical sampling was used to develop the emerging theory. These processes, along with data analysis, open, axial and theoretical coding led to the identification of a core category and the construction of the principal components model. Fifty-three participants (24 men and 29 women) were recruited and all consented to be interviewed. The sample included nurses (n=24), chaplains (n=7), a social worker (n=1), an occupational therapist (n=1), physiotherapists (n=2), patients (n=14) and the public (n=4). The investigation was conducted in three phases to substantiate the emerging theory and the development of the model. The principal components model contained six components: individuality, inclusivity, integrated, inter/intra-disciplinary, innate and institution. A great deal has been written on the concepts of spirituality and spiritual care. However, rhetoric alone will not remove some of the intrinsic and extrinsic barriers that are inhibiting the advancement of the spiritual dimension in terms of theory and practice. 
An awareness of and adherence to the principal components model may assist nurses and health care professionals to engage with and overcome some of the structural, organizational, political and social variables that are impacting upon spiritual care.
Purcaro, Giorgia; Cordero, Chiara; Liberto, Erica; Bicchi, Carlo; Conte, Lanfranco S
2014-03-21
This study investigates the applicability of an iterative approach aimed at defining a chemical blueprint of virgin olive oil volatiles to be correlated with the product's sensory quality. The proposed investigation strategy allows full exploitation of the informative content of a comprehensive two-dimensional gas chromatography (GC×GC) coupled to mass spectrometry (MS) data set. Olive oil samples (19), including 5 reference standards obtained from the International Olive Oil Council, and commercial samples, were submitted to a sensory evaluation by a Panel test before being analyzed in two laboratories using different instrumentation, column sets, and software elaboration packages in view of a cross-validation of the entire methodology. A first classification of samples, based on untargeted peak-feature information, was obtained on raw data from two different column combinations (apolar×polar and polar×apolar) by applying unsupervised multivariate analysis (i.e., principal component analysis, PCA). However, to improve the effectiveness and specificity of this classification, peak features were reliably identified (261 compounds) on the basis of MS spectrum and linear retention index matching, and subjected to successive pair-wise comparisons based on 2D patterns, which revealed peculiar distributions of chemicals correlated with the samples' sensory classification. The most informative compounds were thus identified and collected in a "blueprint" of specific defects (or combinations of defects) successively adopted to discriminate extra virgin from defective oils (i.e., lampante oil) with the aid of a supervised approach, i.e., partial least squares-discriminant analysis (PLS-DA). In this last step, the principles of sensomics, which assign higher information potential to analytes with lower odor thresholds, proved successful, and a much more powerful discrimination of samples was obtained in view of a sensory quality assessment. Copyright © 2014 Elsevier B.V. 
All rights reserved.
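The unsupervised PCA step described above — projecting untargeted peak features to reveal sample groupings before any compound identification — can be sketched in a few lines of NumPy. The peak-feature matrix below is entirely synthetic (19 samples × 50 hypothetical features, with one group of samples shifted on a handful of "marker" volatiles); it merely stands in for a real GC×GC-MS peak table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic peak-feature table: 19 samples x 50 untargeted features.
# The first 10 samples are shifted on 5 hypothetical "marker" features.
X = rng.normal(size=(19, 50))
X[:10, :5] += 4.0

# Mean-center each feature, then compute PCA via the SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                        # sample coordinates on the PCs
explained = S**2 / np.sum(S**2)       # fraction of variance per component
```

A scatter of `scores[:, 0]` against `scores[:, 1]` would then show the two sample groups separating along PC1, which is the kind of plot used for the first, untargeted classification.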
Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.
2015-01-01
Purpose To describe a methodology, based on cluster analysis, for partitioning multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes, and to evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods The cluster-analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy was assessed relative to the number of clusters. Results The clustering algorithm was able to separate clusters that significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined by cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion The proposed methodology identified tumour sub-regions with distinct functional properties, independently separating clusters that were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
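The core of such a workflow — partition voxel-wise feature vectors into k clusters and use a validation criterion to choose k — can be illustrated with a small NumPy sketch. The two-dimensional "voxel features" below are synthetic stand-ins for DCE/DWI parameter pairs, and the within-cluster sum of squares (elbow criterion) is a deliberately simple stand-in for the formal cluster-validation indices used in the paper.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns labels and centroids."""
    r = np.random.default_rng(seed)
    c = X[r.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - c[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                c[j] = X[labels == j].mean(axis=0)
    return labels, c

# Synthetic voxel feature vectors drawn around 4 well-separated centres.
rng = np.random.default_rng(1)
centers = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
X = np.vstack([m + rng.normal(scale=0.4, size=(100, 2)) for m in centers])

# Within-cluster sum of squares for k = 2..6, best of a few restarts.
wcss = {}
for k in range(2, 7):
    best = np.inf
    for seed in range(5):
        labels, c = kmeans(X, k, seed=seed)
        w = sum(((X[labels == j] - c[j]) ** 2).sum() for j in range(k))
        best = min(best, w)
    wcss[k] = best
```

The sharp drop in `wcss` from k = 3 to k = 4, followed by a flat tail, is the signature that the validation step looks for when it selects k = 4.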
Peng, Wenjing; Zhang, Yu; Zhu, Rui; Mechref, Yehia
2017-09-01
Breast cancer is the leading type of cancer in women, and breast cancer brain metastasis is currently considered an issue of concern among breast cancer patients. Membrane proteins play important roles in breast cancer brain metastasis, including cell adhesion and penetration of the blood-brain barrier. To understand the mechanism of breast cancer brain metastasis, liquid chromatography-tandem mass spectrometry (LC-MS/MS) was employed in conjunction with membrane-protein enrichment to analyze the proteomes of five different breast cancer cell lines and a brain cancer cell line. Quantitative proteomic data of all cell lines were compared with MDA-MB-231BR, a brain-seeking breast cancer cell line that represents brain metastasis characteristics. Label-free proteomics of the six cell lines facilitated the identification of 1238 proteins and the quantification of 899 proteins, of which more than 70% were membrane proteins. Unsupervised principal component analysis (PCA) of the label-free proteomics data resulted in a distinct clustering of cell lines, suggesting quantitative differences in the expression of several proteins among the different cell lines. Twenty-eight proteins showed expression unique to 231BR. The up-regulation of STAU1, AT1B3, NPM1, hnRNP Q, and hnRNP K and the down-regulation of TUBB4B and TUBB5 were noted in 231BR relative to 231 (the precursor cell line from which 231BR is derived). These proteins might contribute to breast cancer brain metastasis. Ingenuity pathway analysis (IPA) supported the great brain-metastatic propensity of 231BR and suggested the importance of the up-regulation of integrin proteins and the down-regulation of EPHA2 in brain metastasis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rai, Amit; Kamochi, Hidetaka; Suzuki, Hideyuki; Nakamura, Michimi; Takahashi, Hiroki; Hatada, Tomoki; Saito, Kazuki; Yamazaki, Mami
2017-01-01
Lonicera japonica is one of the most important medicinal plants, with applications in traditional Chinese and Japanese medicine for thousands of years. Extensive studies on the constituents of L. japonica extracts have revealed an accumulation of pharmaceutically active metabolite classes, such as chlorogenic acid, luteolin and other flavonoids, and secoiridoids, which impart characteristic medicinal properties. Despite being a rich source of pharmaceutically active metabolites, little is known about the biosynthetic enzymes involved and their expression profiles across different tissues of L. japonica. In this study, we performed de novo transcriptome assembly for L. japonica, representing transcripts from nine different tissues. A total of 22 Gbp of clean RNA-seq reads from nine tissues of L. japonica was used, resulting in 243,185 unigenes, with 99,938 unigenes annotated based on a homology search using blastx against the NCBI-nr protein database. Unsupervised principal component analysis and correlation studies using transcript expression data from all nine tissues of L. japonica showed relationships between tissues, explaining their association at different developmental stages. Homologs of all genes associated with the chlorogenic acid, luteolin, and secoiridoid biosynthesis pathways were identified in the L. japonica transcriptome assembly. Expression of unigenes associated with chlorogenic acid was enriched in stems and leaf-2, unigenes from luteolin were enriched in stems and flowers, while unigenes from secoiridoid metabolic pathways were enriched in leaf-1 and the shoot apex. Our results showed that different tissues of L. japonica are enriched with sets of unigenes associated with specific pharmaceutically important metabolic pathways and, therefore, possess unique medicinal properties. The present study will serve as a resource for future efforts at functional characterization of enzyme-coding genes within key metabolic processes.
Farag, Mohamed A; Rasheed, Dalia M; Kropf, Matthias; Heiss, Andreas G
2016-11-01
Trigonella foenum-graecum is a plant of considerable value for its nutritive composition as well as its medicinal effects. This study aims to examine Trigonella seeds using metabolome-based ultra-performance liquid chromatography-mass spectrometry (UPLC-MS) in parallel with gas chromatography-mass spectrometry (GC-MS), coupled with multivariate data analyses. The metabolomic differences of seeds derived from three Trigonella species, i.e., T. caerulea, T. corniculata, and T. foenum-graecum, were assessed. Under the specified conditions, we were able to identify 93 metabolites, including 5 peptides, 2 phenolic acids, 22 C/O-flavonoid conjugates, 26 saponins, and 9 fatty acids, using UPLC-MS. Several novel dipeptides, saponins, and flavonoids were found in Trigonella herein for the first time. Samples were classified via unsupervised principal component analysis (PCA) followed by supervised orthogonal projection to latent structures-discriminant analysis (OPLS-DA). A distinct separation among the investigated Trigonella species was revealed, with T. foenum-graecum samples found most enriched in apigenin C-glycosides, viz. vicenins 1/3 and 2, compared with the other two species. In contrast to UPLC-MS, GC-MS was less effective at classifying specimens, with differences among specimens mostly attributed to fatty acyl esters. GC-MS analysis of Trigonella seed extracts led to the identification of 91 metabolites, belonging mostly to fatty acyl esters and free fatty acids, followed by organic acids, sugars, and amino acids. This study presents the first report on primary and secondary metabolite compositional differences among Trigonella seeds via a metabolomics approach and reveals that, among the species examined, the official T. foenum-graecum presents a better source of Trigonella secondary bioactive metabolites.
Leonardi, Nora; Shirer, William R; Greicius, Michael D; Van De Ville, Dimitri
2014-12-01
Resting-state functional connectivity (FC) is highly variable across the duration of a scan. Groups of coevolving connections, or reproducible patterns of dynamic FC (dFC), have been revealed in fluctuating FC by applying unsupervised learning techniques. Based on results from k-means clustering and sliding-window correlations, it has recently been hypothesized that dFC may cycle through several discrete FC states. Alternatively, it has been proposed to represent dFC as a linear combination of multiple FC patterns using principal component analysis. As it is unclear whether sparse or nonsparse combinations of FC patterns are most appropriate, and as this affects their interpretation and use as markers of cognitive processing, the goal of our study was to evaluate the impact of sparsity by performing an empirical evaluation of simulated, task-based, and resting-state dFC. To this aim, we applied matrix factorizations subject to variable constraints in the temporal domain and studied both the reproducibility of ensuing representations of dFC and the expression of FC patterns over time. During subject-driven tasks, dFC was well described by alternating FC states in accordance with the nature of the data. The estimated FC patterns showed a rich structure with combinations of known functional networks enabling accurate identification of three different tasks. During rest, dFC was better described by multiple FC patterns that overlap. The executive control networks, which are critical for working memory, appeared grouped alternately with externally or internally oriented networks. These results suggest that combinations of FC patterns can provide a meaningful way to disentangle resting-state dFC. © 2014 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.
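The sliding-window construction that underlies both the k-means and PCA views of dynamic FC can be sketched quickly. The four-region time series below are synthetic, with a coupling structure that switches halfway through the "scan"; each window yields one vectorized FC pattern, and it is these patterns that clustering or matrix-factorization methods then operate on.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic BOLD-like signals: 4 regions, 300 time points. Regions 0-1
# share a signal in the first half; regions 2-3 share one in the second.
T, n = 300, 4
ts = rng.normal(size=(T, n))
shared1 = rng.normal(size=T // 2)
ts[:T // 2, 0] += shared1
ts[:T // 2, 1] += shared1
shared2 = rng.normal(size=T - T // 2)
ts[T // 2:, 2] += shared2
ts[T // 2:, 3] += shared2

# Sliding-window correlations: one vectorized FC pattern per window.
win, step = 60, 10
iu = np.triu_indices(n, k=1)          # pairs (0,1),(0,2),...,(2,3)
dfc = np.array([np.corrcoef(ts[s:s + win].T)[iu]
                for s in range(0, T - win + 1, step)])

early, late = dfc[0], dfc[-1]         # first and last FC patterns
```

Feeding `dfc` to k-means recovers discrete FC "states", while an SVD of the same matrix yields the overlapping FC patterns discussed above; the sparsity of the temporal weights is what distinguishes the two representations.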
The stable traits of melanoma genetics: an alternate approach to target discovery
2012-01-01
Background The role that gene copy number plays in transcription remains controversial; although in specific cases gene expression correlates with copy number, the relationship cannot be inferred at the global level. We hypothesized that genes steadily expressed by 15 melanoma cell lines (CMs) and their parental tissues (TMs) should be critical for oncogenesis, and that their expression should be most frequently influenced by their respective copy number. Results Functional interpretation of 3,030 transcripts concordantly expressed (Pearson's correlation coefficient p-value < 0.05) by CMs and TMs confirmed an enrichment of functions crucial to oncogenesis. Among them, 968 were expressed according to the transcriptional efficiency predicted by copy number analysis (Pearson's correlation coefficient p-value < 0.05). We named these genes "genomic delegates," as they represent at the transcriptional level the genetic footprint of individual cancers. We then tested whether these genes could categorize 112 melanoma metastases. Two divergent phenotypes were observed: one with prevalent expression of cancer testis antigens, enhanced cyclin activity, WNT signaling, and a Th17 immune phenotype (Class A). This phenotype therefore expressed transcripts previously associated with more aggressive cancer. The second class (B) prevalently expressed genes associated with melanoma signaling, including MITF and melanoma differentiation antigens, and displayed a Th1 immune phenotype associated with better prognosis and a greater likelihood of responding to immunotherapy. An intermediate third class (C) was further identified. The three phenotypes were confirmed by unsupervised principal component analysis. Conclusions This study suggests that clinically relevant phenotypes of melanoma can be traced to stable oncogenic properties of cancer cells linked to their genetic backbone, and it offers a roadmap for uncovering novel targets for tailored anti-cancer therapy. PMID:22537248
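The concordance filter behind the "genomic delegates" — a per-gene Pearson correlation across paired samples, thresholded at significance — can be sketched as follows. The expression matrices are synthetic, and the critical value r ≈ 0.514 (the approximate two-tailed 0.05 threshold for n = 15 pairs) replaces an exact p-value computation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic expression: 15 paired cell-line (CM) / tissue (TM) profiles,
# 1000 genes, of which the first 300 are concordant by construction.
n_pairs, n_genes = 15, 1000
cm = rng.normal(size=(n_pairs, n_genes))
tm = 0.2 * rng.normal(size=(n_pairs, n_genes))
tm[:, :300] += 0.9 * cm[:, :300]                       # concordant genes
tm[:, 300:] += rng.normal(size=(n_pairs, n_genes - 300))

# Per-gene Pearson r between paired CM and TM profiles.
cmz = (cm - cm.mean(0)) / cm.std(0)
tmz = (tm - tm.mean(0)) / tm.std(0)
r = (cmz * tmz).mean(0)

# Approximate two-tailed critical r for n = 15 at alpha = 0.05.
concordant = r > 0.514
```

In the study, the transcripts passing such a filter were then interpreted functionally and used to classify the metastases.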
Polychromatic plots: graphical display of multidimensional data.
Roederer, Mario; Moody, M Anthony
2008-09-01
Limitations of graphical displays as well as human perception make the presentation and analysis of multidimensional data challenging. Graphical display of information on paper or by current projectors is perforce limited to two dimensions; the encoding of information from other dimensions must be overloaded into the two physical dimensions. A number of alternative means of encoding this information have been implemented, such as offsetting data points at an angle (e.g., three-dimensional projections onto a two-dimensional surface) or generating derived parameters that are combinations of other variables (e.g., principal components). Here, we explore the use of color to encode additional dimensions of data. PolyChromatic Plots are standard dot plots, where the color of each event is defined by the values of one, two, or three of the measurements for that event. The measurements for these parameters are mapped onto an intensity value for each primary color (red, green, or blue) based on different functions. In addition, differential weighting of the priority with which overlapping events are displayed can be defined by these same measurements. PolyChromatic Plots can encode up to five independent dimensions of data in a single display. By altering the color mapping function and the priority function, very different displays that highlight or de-emphasize populations of events can be generated. As for standard black-and-white dot plots, frequency information can be significantly biased by this display; care must be taken to ensure appropriate interpretation of the displays. PolyChromatic Plots are a powerful display type that enables rapid data exploration. By virtue of encoding as many as five dimensions of data independently, an enormous amount of information can be gleaned from the displays. In many ways, the display performs somewhat like an unsupervised cluster algorithm, by highlighting events of similar distributions in multivariate space.
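The color-mapping idea can be illustrated with a short sketch: three of five per-event measurements are linearly mapped onto red, green, and blue intensities, while the remaining two supply the x/y position. The event data and the linear mapping function are hypothetical; the paper also explores nonlinear mapping functions and display priority, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical events with five measurements each (e.g. cytometry channels).
events = rng.gamma(shape=2.0, scale=1.0, size=(500, 5))

def to_rgb(vals, low=None, high=None):
    """Linearly map three measurement columns onto R/G/B in [0, 1]."""
    low = vals.min(axis=0) if low is None else low
    high = vals.max(axis=0) if high is None else high
    return np.clip((vals - low) / (high - low), 0.0, 1.0)

# Dimensions 1-2 become x/y; dimensions 3-5 are overloaded onto color.
xy = events[:, :2]
rgb = to_rgb(events[:, 2:5])
# e.g. plt.scatter(xy[:, 0], xy[:, 1], c=rgb) renders all five dimensions
```

Swapping `to_rgb` for a logarithmic or thresholded mapping corresponds to the alternative color mapping functions the paper uses to highlight or de-emphasize populations.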
Efficient hyperspectral image segmentation using geometric active contour formulation
NASA Astrophysics Data System (ADS)
Albalooshi, Fatema A.; Sidike, Paheding; Asari, Vijayan K.
2014-10-01
In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM) unsupervised neural network to train our model. The segmentation process is achieved by the construction of a level-set cost functional in which the dynamic variable is the best matching unit (BMU) coming from the SOM map. In addition, we use Gaussian filtering to discipline the deviation of the level-set functional from a signed distance function, which eliminates the computationally expensive re-initialization step. By using the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level-set function. The proposed algorithm starts with feature extraction from raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which helps in reducing dimensionality and selecting the best sets of significant spectral bands. Then the modified geometric level-set-functional-based ACM is applied to the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method forces the level-set functional to remain close to a signed distance function and therefore considerably reduces the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm in varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.
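The first stage of the pipeline — PCA on the spectral dimension to obtain a small number of informative bands — can be sketched as below. The hyperspectral cube is a synthetic toy (two materials with distinct spectra plus noise); a real pipeline would feed the reduced bands into the level-set segmentation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy hyperspectral cube: 32 x 32 pixels, 50 bands, two materials with
# different (randomly generated) spectra occupying the two image halves.
h, w, bands = 32, 32, 50
spectrum_a = rng.normal(size=bands)
spectrum_b = spectrum_a + 2.0 * rng.normal(size=bands)
cube = rng.normal(scale=0.3, size=(h, w, bands))
cube[:, :16] += spectrum_a            # left half: material A
cube[:, 16:] += spectrum_b            # right half: material B

# Flatten pixels, mean-center, and project onto the leading PCs.
X = cube.reshape(-1, bands)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                 # reduced number of components
reduced = (Xc @ Vt[:k].T).reshape(h, w, k)

# PC1 separates the two materials spatially.
left = reduced[:, :16, 0].mean()
right = reduced[:, 16:, 0].mean()
```

The choice of k (here fixed at 3) corresponds to the "optimal number of spectral bands" the paper determines from the PCA variance structure.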
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit
2015-01-01
Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin—3,3’-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
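A minimal sketch of the color-deconvolution step, assuming fixed stain vectors: each pixel's optical-density triplet is unmixed by least squares into hematoxylin and DAB concentrations. The stain vectors below are illustrative, not calibrated values from the paper, and the "image" is synthesized from known concentrations so the unmixing can be checked exactly.

```python
import numpy as np

# Illustrative (uncalibrated) unit optical-density vectors for
# hematoxylin and DAB; real pipelines estimate these per slide.
hema = np.array([0.65, 0.70, 0.29])
dab = np.array([0.27, 0.57, 0.78])
stains = np.stack([hema, dab], axis=1)          # 3 x 2 mixing matrix

rng = np.random.default_rng(6)

# Synthetic optical-density image: 64 x 64 pixels mixing the two stains.
true_h = rng.uniform(0, 1, size=(64, 64))
true_d = rng.uniform(0, 1, size=(64, 64))
od = np.einsum('ck,ijk->ijc', stains, np.stack([true_h, true_d], axis=-1))

# Color deconvolution: least-squares unmixing of each pixel's OD triplet.
pixels = od.reshape(-1, 3).T                    # 3 x N
conc, *_ = np.linalg.lstsq(stains, pixels, rcond=None)
h_map, d_map = conc.reshape(2, 64, 64)
```

Once the stain concentration maps are separated like this, they can be re-rendered with any bivariate color map, which is the "digital re-staining" step the paper optimizes.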
A Series of MATLAB Learning Modules to Enhance Numerical Competency in Applied Marine Sciences
NASA Astrophysics Data System (ADS)
Fischer, A. M.; Lucieer, V.; Burke, C.
2016-12-01
Enhanced numerical competency in navigating massive data landscapes is a critical skill students need in order to effectively explore, analyse and visualize complex patterns in high-dimensional data and to address the complexity of many of the world's problems. This is especially the case for interdisciplinary, undergraduate applied marine science programs, where students are required to demonstrate competency in methods and ideas across multiple disciplines. In response to this challenge, we have developed a series of repository-based data exploration, analysis and visualization modules in MATLAB for integration across various attending and online classes within the University of Tasmania. The primary focus of these modules is to teach students to collect, aggregate and interpret data from large on-line marine scientific data repositories in order to 1) gain technical skills in discovering, accessing, managing and visualising large, numerous data sources, 2) interpret, analyse and design approaches to visualise these data, and 3) address, through numerical approaches, complex, real-world problems that traditional scientific methods cannot address. All modules, implemented through a MATLAB live script, include a short recorded lecture to introduce the topic, a handout that gives an overview of the activities, an instructor's manual with a detailed methodology and discussion points, a student assessment (quiz and level-specific challenge task), and a survey. The marine science themes addressed through these modules include biodiversity, habitat mapping, algal blooms and sea surface temperature change, and they utilize a series of marine science and oceanographic data portals.
Through these modules, students with minimal experience in MATLAB or numerical methods are introduced to array indexing, concatenation, sorting and reshaping; principal component analysis; spectral analysis; and unsupervised classification, all within the context of oceanographic processes, marine geology and marine community ecology.
Izquierdo-Garcia, Jose L; Nin, Nicolas; Jimenez-Clemente, Jorge; Horcajada, Juan P; Arenas-Miras, Maria Del Mar; Gea, Joaquim; Esteban, Andres; Ruiz-Cabello, Jesus; Lorente, Jose A
2017-12-29
The integrated analysis of changes in the metabolic profile could be critical for the discovery of biomarkers of lung injury, and also for generating new pathophysiological hypotheses and designing novel therapeutic targets for the acute respiratory distress syndrome (ARDS). This study aimed at developing a Nuclear Magnetic Resonance (NMR)-based approach for the identification of the metabolomic profile of ARDS in patients with H1N1 influenza virus pneumonia. Serum samples from 30 patients (derivation set) diagnosed with H1N1 influenza virus pneumonia were analysed by NMR spectroscopy, and unsupervised principal component analysis (PCA) was used to identify metabolic differences between patients with and without ARDS. A predictive partial least squares discriminant analysis (PLS-DA) model was developed for the identification of ARDS. The PLS-DA model was trained on the derivation set and tested on another set of samples from 26 patients also diagnosed with H1N1 influenza virus pneumonia (validation set). Decreased serum glucose, alanine, glutamine, methylhistidine and fatty acid concentrations, and elevated serum phenylalanine and methylguanidine concentrations, discriminated patients with ARDS from patients without ARDS. The PLS-DA model successfully identified the presence of ARDS in the validation set with a success rate of 92% (sensitivity 100% and specificity 91%). The classification functions showed a good correlation with the Sequential Organ Failure Assessment (SOFA) score (R = 0.74, p < 0.0001) and the PaO2/FiO2 ratio (R = 0.41, p = 0.03). The serum metabolomic profile is sensitive and specific for identifying ARDS in patients with H1N1 influenza A pneumonia. Future studies are needed to determine the role of NMR spectroscopy as a biomarker of ARDS.
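For a single component and a single binary response, a PLS-DA classifier reduces to a few lines of NumPy: the weight vector is the normalized covariance direction between the centered data and the class labels, and samples are classified by thresholding their projection. Everything below — patient counts, feature shifts, the ±1 class coding — is a synthetic illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "metabolite" matrix: 30 patients x 20 NMR features; the ARDS
# class (y = +1) is shifted on a few hypothetical features.
n, p = 30, 20
y = np.array([1] * 15 + [-1] * 15, float)
X = rng.normal(size=(n, p))
X[:15, :4] += 1.5

# Center, then extract one PLS component (NIPALS step for a single y).
Xc = X - X.mean(0)
w = Xc.T @ y
w /= np.linalg.norm(w)
t = Xc @ w                       # latent scores of the first component

# Classify by projecting onto w and thresholding at 0 (the midpoint of
# the centered class scores when the classes are balanced).
def predict(Xnew):
    return np.sign((Xnew - X.mean(0)) @ w)

acc = (predict(X) == y).mean()
```

A real application would fit several components, tune them by cross-validation on the derivation set, and report sensitivity and specificity on a held-out validation set, as the study does.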
Correlation of gene expression with bladder capacity in interstitial cystitis/bladder pain syndrome.
Colaco, Marc; Koslov, David S; Keys, Tristan; Evans, Robert J; Badlani, Gopal H; Andersson, Karl-Erik; Walker, Stephen J
2014-10-01
Interstitial cystitis and bladder pain syndrome are terms used to describe a heterogeneous chronic pelvic and bladder pain disorder. Despite its significant prevalence, our understanding of disease etiology is poor. We molecularly characterized interstitial cystitis/bladder pain syndrome and determined whether there are clinical factors that correlate with gene expression. Bladder biopsies from female subjects with interstitial cystitis/bladder pain syndrome and female controls without signs of the disease were collected and divided into groups with normal and low anesthetized bladder capacity. Samples then underwent RNA extraction and microarray assay. Data generated by these assays were analyzed using Omics Explorer (Qlucore, Lund, Sweden), GeneSifter® Analysis Edition 4.0 and Ingenuity® Pathway Analysis to determine similarity among samples within and between groups, and to measure differentially expressed transcripts unique to each phenotype. A total of 16 subjects were included in the study. Principal component analysis and unsupervised hierarchical clustering showed clear separation between gene expression in tissues from subjects with low compared to normal bladder capacity. Gene expression in tissue from patients with interstitial cystitis/bladder pain syndrome who had normal bladder capacity did not significantly differ from that in controls without interstitial cystitis/bladder pain syndrome. Pairwise analysis revealed that pathways related to inflammatory and immune response were most involved. Microarray analysis provides insight into the potential pathological condition underlying interstitial cystitis/bladder pain syndrome. This pilot study shows that patients with this disorder who have low compared to normal bladder capacity have significantly different molecular characteristics, which may reflect a difference in disease pathophysiology. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc.
All rights reserved.
Dill, Allison L.; Ifa, Demian R.; Manicke, Nicholas E.; Costa, Anthony B.; Ramos-Vara, José A.; Knapp, Deborah W.; Cooks, R. Graham
2009-01-01
Desorption electrospray ionization (DESI) mass spectrometry (MS) was used in an imaging mode to interrogate the lipid profiles of thin tissue sections of canine spontaneous invasive transitional cell carcinoma (TCC) of the urinary bladder (a model of human invasive bladder cancer) as well as adjacent normal tissue from four different dogs. The glycerophospholipids and sphingolipids that appear as intense signals in both the negative and positive ion modes were identified by tandem mass spectrometry (MS/MS) product ion scans using collision-induced dissociation. Differences in the relative distributions of the lipid species were present between the tumor and adjacent normal tissue in both the negative and positive ion modes. DESI-MS images showing the spatial distributions of particular glycerophospholipids, sphingolipids and free fatty acids in both the negative and positive ion modes were compared with serial tissue sections stained with hematoxylin and eosin (H&E). Increased absolute and relative intensities for at least five different glycerophospholipids and three free fatty acids in the negative ion mode, and for at least four different lipid species in the positive ion mode, were seen in the tumor region of the samples in all four dogs. In addition, one sphingolipid species exhibited increased signal intensity in the positive ion mode in normal tissue relative to the diseased tissue. Principal component analysis (PCA) was also used to generate unsupervised statistical images from the negative ion mode data, and these images are in excellent agreement with the DESI images obtained from the selected ions and also with the H&E-stained tissue. PMID:19810710
Shang, Zhiwei; Li, Hongwen
2017-10-01
Vitiligo is an acquired skin disease with pigmentary disorder. Autoimmune destruction of melanocytes is thought to be a major factor in the etiology of vitiligo, and miRNA-based regulation of gene expression has been reported to play crucial roles in autoimmune disease. Therefore, we attempted to profile miRNA expression, predict potential targets, and assess the biological functions of the differentially expressed miRNA. Total RNA was extracted from peripheral blood of vitiligo (experimental group, n = 5) and non-vitiligo (control group, n = 5) age-matched patients. Samples were hybridized to a miRNA array. Box, scatter and principal component analysis plots were generated, followed by unsupervised hierarchical clustering analysis to classify the samples. Quantitative reverse transcription polymerase chain reaction (RT-PCR) was conducted for validation of the microarray data. Three different databases, TargetScan, PITA and microRNA.org, were used to predict the potential target genes. Gene ontology (GO) annotation and pathway analysis were performed to assess the potential functions of the predicted target genes of the identified miRNA. A total of 100 (29 upregulated and 71 downregulated) miRNA were filtered by volcano plot analysis. Four miRNA were validated by quantitative RT-PCR as significantly downregulated in the vitiligo group. The functions of predicted target genes associated with the differentially expressed miRNA were assessed by GO analysis, showing that the GO term with the most significantly enriched target genes was axon guidance, and that the axon guidance pathway was most significantly correlated with these miRNA. In conclusion, we identified four downregulated miRNA in vitiligo and assessed the potential functions of target genes related to these differentially expressed miRNA. © 2017 Japanese Dermatological Association.
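The volcano-plot filtering step — retaining probes that pass both a fold-change and a p-value cutoff — is straightforward to sketch. The per-probe statistics below are simulated (a block of "true" downregulated probes spiked into a null background); the cutoffs of |log2 fold change| ≥ 1 and p < 0.05 are common choices, not necessarily those used in the study.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical per-miRNA statistics: log2 fold change (case vs control)
# and a raw p-value for each of 800 probes; first 30 are true hits.
log2fc = rng.normal(scale=0.5, size=800)
pvals = rng.uniform(size=800)
log2fc[:30] -= 2.0
pvals[:30] = 10 ** rng.uniform(-6, -3, size=30)

# Volcano-plot style filter: at least 2-fold change and p < 0.05.
selected = (np.abs(log2fc) >= 1.0) & (pvals < 0.05)
down = selected & (log2fc < 0)
up = selected & (log2fc > 0)
```

Plotting `log2fc` against `-np.log10(pvals)` and highlighting `selected` reproduces the familiar volcano plot from which such miRNA lists are read off.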
Zhou, Guisheng; Wang, Mengyue; Li, Yang; Peng, Ying; Li, Xiaobo
2015-08-01
In the present study, a new strategy based on chemical analysis and chemometrics methods was proposed for the comprehensive analysis and profiling of underivatized free amino acids (FAAs) and small peptides among various Luo-Han-Guo (LHG) samples. First, the ultrasound-assisted extraction (UAE) parameters were optimized using Plackett-Burman (PB) screening and Box-Behnken designs (BBD), and the following optimal UAE conditions were obtained: ultrasound power of 280 W, extraction time of 43 min, and a solid-liquid ratio of 302 mL/g. Second, a rapid and sensitive analytical method was developed for the simultaneous quantification of 24 FAAs and 3 active small peptides in LHG at trace levels using hydrophilic interaction ultra-performance liquid chromatography coupled with triple-quadrupole linear ion-trap tandem mass spectrometry (HILIC-UHPLC-QTRAP(®)/MS(2)). The analytical method was validated for matrix effects, linearity, LODs, LOQs, precision, repeatability, stability, and recovery. Third, the proposed optimal UAE conditions and analytical methods were applied to the measurement of LHG samples, which showed that LHG is rich in essential amino acids, beneficial nutrients for human health. Finally, based on the contents of the 27 analytes, the chemometrics methods of unsupervised principal component analysis (PCA) and supervised counter-propagation artificial neural network (CP-ANN) were applied to differentiate and classify the 40 batches of LHG samples from different cultivated forms, regions, and varieties. As a result, these samples were mainly clustered into three clusters, illustrating the cultivation-related disparity among the samples. In summary, the presented strategy has potential for the investigation of edible plants and agricultural products containing FAAs and small peptides.
Wongchenko, Matthew J; McArthur, Grant A; Dréno, Brigitte; Larkin, James; Ascierto, Paolo A; Sosman, Jeffrey; Andries, Luc; Kockx, Mark; Hurst, Stephen D; Caro, Ivor; Rooney, Isabelle; Hegde, Priti S; Molinero, Luciana; Yue, Huibin; Chang, Ilsung; Amler, Lukas; Yan, Yibing; Ribas, Antoni
2017-09-01
Purpose: The association of tumor gene expression profiles with progression-free survival (PFS) outcomes in patients with BRAF V600-mutated melanoma treated with vemurafenib or cobimetinib combined with vemurafenib was evaluated. Experimental Design: Gene expression of archival tumor samples from patients in four trials (BRIM-2, BRIM-3, BRIM-7, and coBRIM) was evaluated. Genes significantly associated with PFS (P < 0.05) were identified by univariate Cox proportional hazards modeling, then subjected to unsupervised hierarchical clustering, principal component analysis, and recursive partitioning to develop optimized gene signatures. Results: Forty-six genes were identified as significantly associated with PFS in both BRIM-2 (n = 63) and the vemurafenib arm of BRIM-3 (n = 160). Two distinct signatures were identified: cell cycle and immune. Among vemurafenib-treated patients, the cell-cycle signature was associated with shortened PFS compared with the immune signature in the BRIM-2/BRIM-3 training set [hazard ratio (HR) 1.8; 95% confidence interval (CI), 1.3-2.6; P = 0.0001] and in the coBRIM validation set (n = 101; HR, 1.6; 95% CI, 1.0-2.5; P = 0.08). The adverse impact of the cell-cycle signature on PFS was not observed in patients treated with cobimetinib combined with vemurafenib (n = 99; HR, 1.1; 95% CI, 0.7-1.8; P = 0.66). Conclusions: In vemurafenib-treated patients, the cell-cycle gene signature was associated with shorter PFS. However, in patients treated with cobimetinib combined with vemurafenib, both cell-cycle and immune signature subgroups had comparable PFS. Cobimetinib combined with vemurafenib may abrogate the adverse impact of the cell-cycle signature. Clin Cancer Res; 23(17); 5238-45. ©2017 American Association for Cancer Research.
Face recognition using an enhanced independent component analysis approach.
Kwak, Keun-Chang; Pedrycz, Witold
2007-03-01
This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of the generic ICA by augmenting this method by the Fisher linear discriminant analysis (LDA); hence, its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in low-dimension subspace and is endowed with a great deal of insensitivity to large variation in illumination and facial expression. The comprehensive experiments are completed for the facial-recognition technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and the ICA itself.
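A rough sketch of the FICA idea, an unsupervised feature-extraction step followed by Fisher's linear discriminant: here PCA stands in for ICA as the unsupervised step, and the data are synthetic two-class vectors rather than face images, so this shows the architecture only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# two synthetic "face" classes in a 50-D feature space
A = rng.normal(0, 1, (40, 50)) + 2.0
B = rng.normal(0, 1, (40, 50)) - 2.0
X = np.vstack([A, B])
y = np.array([0] * 40 + [1] * 40)

# unsupervised step: project onto the top principal components
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                 # 10-D unsupervised feature space

# supervised step: Fisher's linear discriminant in that subspace
m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                 # discriminant direction
scores = Z @ w
threshold = (scores[y == 0].mean() + scores[y == 1].mean()) / 2
pred = (scores > threshold).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

The supervised LDA step is what the paper adds on top of the unsupervised representation; swapping the PCA step for FastICA (e.g., scikit-learn's implementation) would bring this closer to FICA proper.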
Linear mixing model applied to AVHRR LAC data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.93 micron channel was extracted and used with the two reflective channels, 0.58-0.68 microns and 0.725-1.1 microns, to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
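The constrained least squares unmixing step can be sketched as follows; the endmember spectra and pixel values are made-up numbers, and the sum-to-one constraint is imposed here by a heavily weighted augmentation row rather than the authors' exact solver.

```python
import numpy as np

# hypothetical endmember spectra (columns: vegetation, soil, shade)
# for three bands (rows); values are illustrative only
E = np.array([[0.05, 0.45, 0.30],
              [0.08, 0.30, 0.02],
              [0.40, 0.25, 0.01]])

true_f = np.array([0.6, 0.3, 0.1])     # true fractions, summing to 1
pixel = E @ true_f                      # observed mixed-pixel reflectance

# sum-to-one constrained least squares: append the constraint row
# (weighted heavily) and solve by ordinary least squares
w = 1e3
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(pixel, w * 1.0)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(f, 3))
```

Each image pixel would be unmixed this way, and the recovered fractions assembled into vegetation, soil, and shade fraction images.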
Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Bożek, Piotr
2018-03-01
The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a double differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb + Pb collisions at √(s_NN) = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v2^2 and v4, of v2v3 and v5, or of v2^3, v3^3, and v6. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.
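The core operation, a principal component decomposition of an event-by-event correlation (covariance) matrix to extract dominant modes, can be sketched as follows; the single-mode synthetic "flow" data are an illustrative assumption, not hydrodynamic output.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic event-by-event observables in 6 transverse-momentum bins,
# driven by one dominant fluctuating mode plus small uncorrelated noise
n_events, n_bins = 5000, 6
mode = np.linspace(1.0, 2.0, n_bins)
v = rng.normal(0, 1, (n_events, 1)) * mode \
    + 0.1 * rng.normal(0, 1, (n_events, n_bins))

# covariance matrix of the observables and its principal components
C = np.cov(v.T)
eigval, eigvec = np.linalg.eigh(C)          # ascending eigenvalues
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval[0] / eigval.sum()        # dominance of the leading mode
print(round(float(explained), 3))
```

The leading eigenvector recovers (up to sign and normalization) the injected mode shape; in the paper the same decomposition is applied to generalized flow-vector correlation matrices.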
Analysis and improvement measures of flight delay in China
NASA Astrophysics Data System (ADS)
Zang, Yuhang
2017-03-01
This paper first establishes a principal component regression model to analyze flight-delay data quantitatively: principal component analysis yields three principal component factors for flight delays. The least squares method is then used to estimate the factor coefficients, and the regression equation is obtained by substitution; the analysis shows that the main cause of flight delays is the airlines themselves, followed by weather and traffic. To address the controllable aspect, traffic flow control, an adaptive genetic queuing model is established for the runway terminal area. Using Beijing Capital International Airport as a case, an optimization method is established for landing fifteen planes on three runways; comparison of the results with the existing FCFS (first-come, first-served) algorithm demonstrates the superiority of the model.
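The principal component regression mechanics described above can be sketched as follows; the three "delay factor" predictors and the response are synthetic, and all components are retained in this tiny example, so it shows the pipeline only, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# three hypothetical delay factors (airline, weather, traffic) and a
# response built mostly from the first factor, as in the paper's finding
f = rng.normal(0, 1, (n, 3))
X = f + 0.05 * rng.normal(0, 1, (n, 3))
y = 3.0 * f[:, 0] + 1.0 * f[:, 1] + 0.5 * f[:, 2] + 0.1 * rng.normal(0, 1, n)

# principal component regression: PCA on standardized predictors,
# then least squares of the response on the component scores
Xs = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3                                   # components kept
Z = Xs @ Vt[:k].T
beta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
y_hat = Z @ beta + y.mean()
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(float(r2), 3))
```

With correlated predictors one would keep k smaller than the number of variables, which is where principal component regression differs from ordinary least squares.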
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments carried out on a palmprint database show that this method is more robust against position and illumination changes in palmprint images and achieves a higher palmprint recognition rate.
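The sparse-representation step relies on orthogonal matching pursuit; a minimal generic OMP (not the authors' subspace variant, and with a random dictionary standing in for palmprint feature matrices) might look like this.

```python
import numpy as np

def omp(D, y, tol=1e-10, max_atoms=10):
    """Greedy orthogonal matching pursuit: repeatedly pick the atom most
    correlated with the residual, then least-squares refit on the support."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    while np.linalg.norm(residual) > tol and len(support) < max_atoms:
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:          # numerical stall guard
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(9)
D = rng.normal(0, 1, (64, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
true = np.zeros(32)
true[[3, 17, 25]] = [1.0, -2.0, 0.5]
y = D @ true                               # exactly 3-sparse signal
x = omp(D, y)
print(np.linalg.norm(D @ x - y) < 1e-8)
```

In the paper, the residual between a test image and its reconstruction from each class's atoms decides the classification.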
NASA Astrophysics Data System (ADS)
Haneishi, Hideaki; Sakuda, Yasunori; Honda, Toshio
2002-06-01
Spectral reflectance of most reflective objects, such as natural objects and color hardcopy, is relatively smooth and can be approximated with high accuracy by a small number of principal components. Although the subspace spanned by those principal components represents a space in which reflective objects can exist, it does not provide the bounds within which the samples are distributed. In this paper we propose to represent the gamut of reflective objects in a more explicit form, i.e., as a polyhedron in the subspace spanned by several principal components. The concept of the polyhedral gamut representation and its application to the calculation of a metamer ensemble are described. The color-mismatch volume caused by a different illuminant and/or observer for a metamer ensemble is also calculated and compared with the theoretical one.
Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis
NASA Astrophysics Data System (ADS)
Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei
2018-01-01
In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, the analytic hierarchy process is utilized to construct an evaluation index model of the low-voltage distribution network. Based on principal component analysis and the roughly logarithmic distribution of the index data, a logarithmic centering method is adopted to improve the principal component analysis algorithm. The algorithm decorrelates and reduces the dimensions of the evaluation model, and the comprehensive score has a better degree of dispersion. Because the comprehensive scores of the districts are concentrated, a clustering method is adopted to analyze them, realizing a stratified evaluation of the districts. An example is given to verify the objectivity and scientific validity of the evaluation method.
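The logarithmic centering step can be sketched as follows, assuming roughly log-normally distributed index data (an assumption of this sketch, matching the distribution described above); the check at the end confirms that the resulting principal component scores are decorrelated.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical index data with a roughly log-normal spread
X = np.exp(rng.normal(0, 1, (100, 5)))

# logarithmic centering: log-transform, then center each column
L = np.log(X)
Lc = L - L.mean(0)

# PCA via SVD on the log-centered data
U, s, Vt = np.linalg.svd(Lc, full_matrices=False)
scores = Lc @ Vt.T                        # principal component scores

# the score columns should be mutually uncorrelated
C = np.cov(scores.T)
off_diag = C - np.diag(np.diag(C))
print(np.allclose(off_diag, 0.0, atol=1e-8))
```

A comprehensive score would then be formed as a variance-weighted combination of the leading components before clustering.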
Online signature recognition using principal component analysis and artificial neural network
NASA Astrophysics Data System (ADS)
Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan
2016-12-01
In this paper, we propose an algorithm for online signature recognition using the fingertip position in the air from the depth image acquired by Kinect. We extract 10 statistical features from each of the X, Y, and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, which preserve 99.02% of the total variance. We implement the proposed algorithm and test it on actual online signatures. In experiments, we verify that the proposed method successfully classifies 15 different online signatures, with a recognition rate of 98.47% when using only 10 feature vectors.
Kim, Eun-Young; Kim, Suhn-Yeop; Oh, Duck-Won
2012-02-01
To investigate the effect of supervised and unsupervised pelvic floor muscle exercises utilizing trunk stabilization for treating postpartum urinary incontinence and to compare the outcomes. Randomized, single-blind controlled study. Outpatient rehabilitation hospital. Eighteen subjects with postpartum urinary incontinence. Subjects were randomized to either a supervised training group with verbal instruction from a physiotherapist, or an unsupervised training group after undergoing a supervised demonstration session. Bristol Female Lower Urinary Tract Symptom questionnaire (urinary symptoms and quality of life) and vaginal function test (maximal vaginal squeeze pressure and holding time) using a perineometer. The change values for urinary symptoms (-27.22 ± 6.20 versus -18.22 ± 5.49), quality of life (-5.33 ± 2.96 versus -1.78 ± 3.93), total score (-32.56 ± 8.17 versus -20.00 ± 6.67), maximal vaginal squeeze pressure (18.96 ± 9.08 versus 2.67 ± 3.64 mmHg), and holding time (11.32 ± 3.17 versus 5.72 ± 2.29 seconds) were more improved in the supervised group than in the unsupervised group (P < 0.05). In the supervised group, significant differences were found for all variables between pre- and post-test values (P < 0.01), whereas the unsupervised group showed significant differences for urinary symptom score, total score and holding time between the pre- and post-test results (P < 0.05). These findings suggest that exercising the pelvic floor muscles by utilizing trunk stabilization under physiotherapist supervision may be beneficial for the management of postpartum urinary incontinence.
Jesse, Stephen; Kalinin, Sergei V
2009-02-25
An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
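The de-noising use of PCA described above can be sketched as follows, with synthetic two-component spectra standing in for real spectroscopic-imaging data.

```python
import numpy as np

rng = np.random.default_rng(8)
# synthetic data: 500 spectra that are noisy mixtures of two
# underlying response components (an illustrative assumption)
t = np.linspace(0, 1, 64)
comp = np.vstack([np.sin(2 * np.pi * t), np.exp(-5 * t)])
weights = rng.random((500, 2))
clean = weights @ comp
noisy = clean + 0.3 * rng.normal(0, 1, clean.shape)

# PCA de-noising: keep only the leading components and reconstruct
mean = noisy.mean(0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 2                                   # number of retained components
denoised = (U[:, :k] * s[:k]) @ Vt[:k] + mean

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)
```

Discarding the trailing components removes most of the uncorrelated noise while retaining the variance-ranked response, which is also what makes PCA effective for compressing such datasets.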
The Artistic Nature of the High School Principal.
ERIC Educational Resources Information Center
Ritschel, Robert E.
The role of high school principals can be compared to that of composers of music. First, composers put musical components together into a coherent whole; similarly, principals organize high schools by establishing class schedules, assigning roles to subordinates, and maintaining a safe and orderly learning environment. Second, composers…
ERIC Educational Resources Information Center
Odegard-Koester, Melissa A.; Watkins, Paul
2016-01-01
The working relationship between principals and school counselors has received some attention in the literature; however, little empirical research exists that examines specifically the components that facilitate a collaborative working relationship between the principal and school counselor. This qualitative case study examined the unique…
The Retention and Attrition of Catholic School Principals
ERIC Educational Resources Information Center
Durow, W. Patrick; Brock, Barbara L.
2004-01-01
This article reports the results of a study of the retention of principals in Catholic elementary and secondary schools in one Midwestern diocese. Findings revealed that personal needs, career advancement, support from employer, and clearly defined role expectations were key factors in principals' retention decisions. A profile of components of…
ERIC Educational Resources Information Center
Siennick, Sonja E.; Osgood, D. Wayne
2012-01-01
Companions are central to explanations of the risky nature of unstructured and unsupervised socializing, yet we know little about whom adolescents are with when hanging out. We examine predictors of how often friendship dyads hang out via multilevel analyses of longitudinal friendship-level data on over 5,000 middle schoolers. Adolescents hang out…
Teacher and learner: Supervised and unsupervised learning in communities.
Shafto, Michael G; Seifert, Colleen M
2015-01-01
How far can teaching methods go to enhance learning? Optimal methods of teaching have been considered in research on supervised and unsupervised learning. Locally optimal methods are usually hybrids of teaching and self-directed approaches. The costs and benefits of specific methods have been shown to depend on the structure of the learning task, the learners, the teachers, and the environment.
NASA Astrophysics Data System (ADS)
Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.
2015-10-01
In this paper a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground-truth data constitute serious handicaps, especially over the large areas that can be covered by airborne or satellite images. The developed classification approach allows i) successive partitioning of the data into several levels or partitions in which the main classes are identified first, ii) automatic estimation of the number of classes at each level without any end-user help, iii) a nonsystematic subdivision of all classes of a partition Pj to form a partition Pj+1, and iv) a stable partitioning result for the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised: it estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.
Shan, Ying; Sawhney, Harpreet S; Kumar, Rakesh
2008-04-01
This paper proposes a novel unsupervised algorithm for learning discriminative features in the context of matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem, which aims to compute the probability that vehicle images from two distinct cameras are from the same vehicle or different vehicles. We employ a novel measurement vector that consists of three independent edge-based measures and their associated robust measures computed from a pair of aligned vehicle edge maps. The weight of each measure is determined by an unsupervised learning algorithm that optimally separates the same-different classes in the combined measurement space. This is achieved with a weak classification algorithm that automatically collects representative samples from the same-different classes, followed by a more discriminative classifier based on Fisher's Linear Discriminant and Gibbs sampling. The robustness of the match measures and the use of unsupervised discriminant analysis in the classification ensure that the proposed method performs consistently in the presence of missing/false features, temporally and spatially changing illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments based on real data from over 200 vehicles at different times of day demonstrate promising results.
NASA Astrophysics Data System (ADS)
Salman, S. S.; Abbas, W. A.
2018-05-01
The goal of this study is to analyze resolution enhancement and its effect on classification methods that use the spectral information of the bands, with specific and quantitative approaches. We introduce a method to enhance the resolution of Landsat 8 imagery by combining the 30 m resolution spectral bands with the 15 m resolution panchromatic band 8, given the importance of multispectral imagery for extracting land cover. Classification methods are applied to classify several land covers recorded in OLI-8 imagery. Data mining methods can be classified as either supervised or unsupervised. In supervised methods there is a particular predefined target, meaning the algorithm learns which values of the target are associated with which values of the predictor sample; the k-nearest neighbors and maximum likelihood algorithms are examined in this work as supervised methods. In unsupervised methods, on the other hand, no sample is identified as a target; the data-extraction algorithm searches for structure and patterns among all the variables, represented here by the fuzzy C-means clustering method. The NDVI vegetation index is used to compare the results of the classification methods; the percentage of dense vegetation from the maximum likelihood method gives the best results.
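The NDVI used above for comparison is a simple band ratio; a minimal sketch with hypothetical red and near-infrared reflectances:

```python
import numpy as np

# hypothetical red and near-infrared reflectance values for a 2x2 scene
red = np.array([[0.10, 0.30],
                [0.05, 0.25]])
nir = np.array([[0.50, 0.35],
                [0.60, 0.27]])

# NDVI = (NIR - Red) / (NIR + Red); values lie in [-1, 1],
# with dense, healthy vegetation producing high positive values
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 3))
```

For Landsat 8 OLI, red corresponds to band 4 and near-infrared to band 5; thresholding the NDVI map then separates dense vegetation from other covers for comparison with the classifiers.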
Sadeghi, Zahra; Testolin, Alberto
2017-08-01
In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.
Goldstein, Markus; Uchida, Seiichi
2016-01-01
Anomaly detection is the process of identifying unexpected items or events in datasets that differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks.
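A representative unsupervised detector from this family, the global k-nearest-neighbor distance score, can be sketched as follows (synthetic 2-D data with one injected outlier; this is a generic illustration, not one of the paper's 19 evaluated implementations).

```python
import numpy as np

def knn_anomaly_scores(X, k=5):
    """Score each point by its mean distance to its k nearest neighbors;
    a common unsupervised (global) anomaly detector -- high score means
    the point sits far from the bulk of the data."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-distance
    nearest = np.sort(d, axis=1)[:, :k]
    return nearest.mean(axis=1)

rng = np.random.default_rng(5)
X = rng.normal(0, 1, (100, 2))            # normal data
X = np.vstack([X, [[8.0, 8.0]]])          # one injected anomaly
scores = knn_anomaly_scores(X)
print(int(np.argmax(scores)))             # index of the injected outlier
```

Local variants (such as LOF) normalize each score by the neighbors' own densities, which is one of the global/local distinctions the evaluation examines.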
Edwards, Darren J; Wood, Rodger
2016-01-01
This study explored over-selectivity (executive dysfunction) using a standard unsupervised categorization task. Over-selectivity has been demonstrated using supervised categorization procedures (where training is given); however, little has been done in the way of unsupervised categorization (without training). A standard unsupervised categorization task was used to assess levels of over-selectivity in a traumatic brain injury (TBI) population. Individuals with TBI were selected from the Tertiary Traumatic Brain Injury Clinic at Swansea University and were asked to categorize two-dimensional items (pictures on cards), into groups that they felt were most intuitive, and without any learning (feedback from experimenter). This was compared against categories made by a control group for the same task. The findings of this study demonstrate that individuals with TBI had deficits for both easy and difficult categorization sets, as indicated by a larger amount of one-dimensional sorting compared to control participants. Deficits were significantly greater for the easy condition. The implications of these findings are discussed in the context of over-selectivity, and the processes that underlie this deficit. Also, the implications for using this procedure as a screening measure for over-selectivity in TBI are discussed.
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, is more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
Segmentation of fluorescence microscopy cell images using unsupervised mining.
Du, Xian; Dua, Sumeet
2010-05-28
Accurate measurement of cell and nucleus contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised clustering approaches can be incorporated in the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly, and have more precise segmentation results than EM. We report that EM has higher recall values and lower precision, with under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
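Of the four methods compared, Otsu's threshold is the simplest to sketch; the synthetic "fluorescence" image below is an illustrative assumption, not the study's data.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance of the background/foreground split."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # background weight
    w1 = 1 - w0                                # foreground weight
    cum_mean = np.cumsum(p * centers)
    mu_total = (p * centers).sum()
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)   # background mean
    mu1 = (mu_total - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

rng = np.random.default_rng(6)
# synthetic image: dim background with one bright "cell" blob
img = np.clip(rng.normal(0.2, 0.05, (64, 64)), 0, 1)
img[20:40, 20:40] = np.clip(rng.normal(0.8, 0.05, (20, 20)), 0, 1)
t = otsu_threshold(img)
mask = img > t
print(bool(mask[30, 30]), bool(mask[0, 0]))
```

Because Otsu's method uses only the intensity histogram, it ignores spatial information, which is exactly the limitation the study notes for complex real cell images.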
Lacroix, André; Hortobágyi, Tibor; Beurskens, Rainer; Granacher, Urs
2017-11-01
Balance and resistance training can improve healthy older adults' balance and muscle strength. Delivering such exercise programs at home without supervision may facilitate participation for older adults because they do not have to leave their homes. To date, no systematic literature analysis has been conducted to determine if supervision affects the effectiveness of these programs to improve healthy older adults' balance and muscle strength/power. The objective of this systematic review and meta-analysis was to quantify the effectiveness of supervised vs. unsupervised balance and/or resistance training programs on measures of balance and muscle strength/power in healthy older adults. In addition, the impact of supervision on training-induced adaptive processes was evaluated in the form of dose-response relationships by analyzing randomized controlled trials that compared supervised with unsupervised trials. A computerized systematic literature search was performed in the electronic databases PubMed, Web of Science, and SportDiscus to detect articles examining the role of supervision in balance and/or resistance training in older adults. The initially identified 6041 articles were systematically screened. Studies were included if they examined balance and/or resistance training in adults aged ≥65 years with no relevant diseases and registered at least one behavioral balance (e.g., time during single leg stance) and/or muscle strength/power outcome (e.g., time for 5-Times-Chair-Rise-Test). Finally, 11 studies were eligible for inclusion in this meta-analysis. Weighted mean standardized mean differences between subjects (SMDbs) of supervised vs. unsupervised balance/resistance training studies were calculated. The included studies were coded for the following variables: number of participants, sex, age, number and type of interventions, type of balance/strength tests, and change (%) from pre- to post-intervention values. 
Additionally, we coded training according to the following modalities: period, frequency, volume, modalities of supervision (i.e., number of supervised/unsupervised sessions within the supervised or unsupervised training groups, respectively). Heterogeneity was computed using I² and χ² statistics. The methodological quality of the included studies was evaluated using the Physiotherapy Evidence Database scale. Our analyses revealed that in older adults, supervised balance/resistance training was superior compared with unsupervised balance/resistance training in improving measures of static steady-state balance (mean SMDbs = 0.28, p = 0.39), dynamic steady-state balance (mean SMDbs = 0.35, p = 0.02), proactive balance (mean SMDbs = 0.24, p = 0.05), balance test batteries (mean SMDbs = 0.53, p = 0.02), and measures of muscle strength/power (mean SMDbs = 0.51, p = 0.04). Regarding the examined dose-response relationships, our analyses showed that a number of 10-29 additional supervised sessions in the supervised training groups compared with the unsupervised training groups resulted in the largest effects for static steady-state balance (mean SMDbs = 0.35), dynamic steady-state balance (mean SMDbs = 0.37), and muscle strength/power (mean SMDbs = 1.12). Further, ≥30 additional supervised sessions in the supervised training groups were needed to produce the largest effects on proactive balance (mean SMDbs = 0.30) and balance test batteries (mean SMDbs = 0.77). Effects in favor of supervised programs were larger for studies that did not include any supervised sessions in their unsupervised programs (mean SMDbs: 0.28-1.24) compared with studies that implemented a few supervised sessions in their unsupervised programs (e.g., three supervised sessions throughout the entire intervention program; SMDbs: -0.06 to 0.41). 
The present findings have to be interpreted with caution because of the low number of eligible studies and the moderate methodological quality of the included studies, which is indicated by a median Physiotherapy Evidence Database scale score of 5. Furthermore, we indirectly compared dose-response relationships across studies and not from single controlled studies. Our analyses suggest that supervised balance and/or resistance training improved measures of balance and muscle strength/power to a greater extent than unsupervised programs in older adults. Owing to the small number of available studies, we were unable to establish a clear dose-response relationship with regard to the impact of supervision. However, the positive effects of supervised training are particularly prominent when compared with completely unsupervised training programs. It is therefore recommended to include supervised sessions (i.e., two out of three sessions/week) in balance/resistance training programs to effectively improve balance and muscle strength/power in older adults.
ERIC Educational Resources Information Center
Lawson, J. S.; Inglis, James
1984-01-01
A learning disability index (LDI) for the assessment of intellectual deficits on the Wechsler Intelligence Scale for Children-Revised (WISC-R) is described. The Factor II score coefficients derived from an unrotated principal components analysis of the WISC-R normative data, in combination with the individual's scaled scores, are used for this…
Greene, Kathryn; Banerjee, Smita C
2009-04-01
This study explored the association between unsupervised time with peers and adolescent smoking behavior both directly and indirectly through interaction with delinquent peers, social expectancies about cigarette smoking, and cigarette offers from peers. A cross-sectional survey was used for the study and included 248 male and female middle school students. Results of structural equation modeling revealed that unsupervised time with peers is associated indirectly with adolescent smoking behavior through the mediation of association with delinquent peers, social expectancies about cigarette smoking, and cigarette offers from peers. Interventions designed to motivate adolescents without adult supervision to associate more with friends who engage in prosocial activities may eventually reduce adolescent smoking. Further implications for structured supervised time for students outside of school time are discussed.
NASA Technical Reports Server (NTRS)
Hall, Lawrence O.; Bensaid, Amine M.; Clarke, Laurence P.; Velthuizen, Robert P.; Silbiger, Martin S.; Bezdek, James C.
1992-01-01
Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms, and a supervised computational neural network, a dynamic multilayered perceptron trained with the cascade correlation learning algorithm. Initial clinical results are presented on both normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. However, for a more complex segmentation problem involving tumor/edema or cerebrospinal fluid boundaries, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed.
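The fuzzy c-means algorithm mentioned above assigns each voxel a graded membership in every cluster rather than a hard label. A minimal NumPy sketch of the standard (literal) algorithm follows; it is not the approximate variant or the neural network from the paper, and the data are synthetic:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns (cluster centers, membership matrix U).
    m > 1 is the fuzzifier; U[i, k] is the membership of point i in cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m                                  # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared distances from every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # membership update: u_ik proportional to d_ik^(-1/(m-1))
        inv = d2 ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated synthetic "tissue" clusters:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (30, 2)), rng.normal(5.0, 0.1, (30, 2))])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)   # hardened labels for display
```

In a segmentation setting, the rows of X would be voxel feature vectors (e.g., multiple MR relaxation channels) rather than 2-D points.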
Perturbation analyses of intermolecular interactions
NASA Astrophysics Data System (ADS)
Koyama, Yohei M.; Kobayashi, Tetsuya J.; Ueda, Hiroki R.
2011-08-01
Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. For comparison of the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and PPII (polyproline II) + β states) for larger cutoff length, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, in the large cutoff value, DIPA eigenvalues converge faster than that for IPA and the top two DIPA principal components clearly identify the three states. By using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. 
Since the DIPA improves the state identification and the convergence rate with retaining distance information, we conclude that the DIPA is a more practical method compared with the IPA. To test the feasibility of the DIPA for larger molecules, we apply the DIPA to the ten-residue chignolin folding in explicit water. The top three principal components identify the four states (native state, two misfolded states, and unfolded state) and their corresponding eigenfunctions identify important chignolin-water interactions to each state. Thus, the DIPA provides the practical method to identify conformational states and their corresponding important intermolecular interactions with distance information.
Inayama, T; Kashiwazaki, H; Sakamoto, M
1998-12-01
We attempted a synthetic analysis of teachers' viewpoints on health education and the roles of school lunch in primary education. For this purpose, a survey using an open-ended questionnaire consisting of eight items relating to health education in the school curriculum was carried out among 100 teachers at ten public primary schools. Subjects were asked to describe their views on the following eight items: 1) health and physical guidance education, 2) school lunch guidance education, 3) pupils' attitudes toward their own health and nutrition, 4) health education, 5) the role of school lunch in education, 6) future subjects of health education, 7) classroom lessons related to school lunch, and 8) guidance for pupils with unbalanced diets and food avoidance. Subjects described their own opinions on an open-ended questionnaire response sheet. Keywords in individual descriptions were selected, rearranged, and classified into categories according to their meanings, and each selected keyword was used as a dummy variable. To assess individual opinions synthetically, a principal component analysis was then applied to the variables collected from the teachers' descriptions, and four factors were extracted. The results were as follows. 1) The four factors obtained from the repeated principal component analysis were summarized as: the roles of health education and the school lunch program (first principal component), cooperation with nurse-teachers and those in charge of lunch service (second principal component), time allocation for health education in home-room activity and lunch time (third principal component), and the contents of health education and school lunch guidance and their future plans (fourth principal component).
2) Teachers regarded the roles of school lunch in primary education as providing a daily supply of nutrients, teaching table manners and building friendships with classmates, health education and food and nutrition education, and developing food preferences through eating lunch together with classmates. 3) A significant positive correlation was observed between the teachers' opinion that the role of school lunch is to provide an opportunity to learn good food-preference behavior through eating lunch together with classmates and the first principal component, "roles of health education and the school lunch program" (r = 0.39, p < 0.01). The variable "the role of school lunch is health education and food and nutrition education" showed a positive correlation with the principal component "cooperation with nurse-teachers and those in charge of lunch service" (r = 0.27, p < 0.01). Interestingly, teachers with longer educational experience tended to place importance on health education and food and nutrition education as roles of school lunch, and male teachers attached more importance than female teachers to the roles of school lunch in future primary education.
Phenomenology of mixed states: a principal component analysis study.
Bertschy, G; Gervasoni, N; Favre, S; Liberek, C; Ragama-Pardos, E; Aubry, J-M; Gex-Fabry, M; Dayer, A
2007-12-01
To contribute to the definition of the external and internal limits of mixed states and to study the place of dysphoric symptoms in the psychopathology of mixed states. One hundred and sixty-five inpatients with major mood episodes were diagnosed as presenting with either pure depression, mixed depression (depression plus at least three manic symptoms), full mixed state (full depression and full mania), mixed mania (mania plus at least three depressive symptoms) or pure mania, using an adapted version of the Mini International Neuropsychiatric Interview (DSM-IV version). They were evaluated using a 33-item inventory of depressive, manic and mixed affective signs and symptoms. Principal component analysis without rotation yielded three components that together explained 43.6% of the variance. The first component (24.3% of the variance) contrasted typical depressive symptoms with typical euphoric, manic symptoms. The second component (13.8%), labeled 'dysphoria', had strong positive loadings for irritability, distressing sensitivity to light and noise, impulsivity and inner tension. The third component (5.5%) included symptoms of insomnia. Median scores for the first component significantly decreased from the pure depression group to the pure mania group. For the dysphoria component, scores were highest among patients with full mixed states and decreased towards both patients with pure depression and those with pure mania. Principal component analysis revealed that dysphoria represents an important dimension of mixed states.
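Unrotated PCA of an item-by-patient matrix, as used above, can be sketched with a singular value decomposition; the variable names below are illustrative, not taken from the study:

```python
import numpy as np

def pca(X, k):
    """Unrotated PCA via SVD: returns the first k component loadings,
    the fraction of total variance each component explains, and the scores."""
    Xc = X - X.mean(axis=0)                       # center each item/column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (X.shape[0] - 1)                 # variance along each component
    explained = var / var.sum()
    return Vt[:k], explained[:k], Xc @ Vt[:k].T   # loadings, ratios, scores

# Synthetic stand-in: two nearly collinear "symptom" columns, so the first
# component should carry almost all the variance.
rng = np.random.default_rng(0)
t = rng.normal(size=200)
X = np.column_stack([t, t + 0.01 * rng.normal(size=200)])
comps, ratios, scores = pca(X, 2)
```

The explained-variance ratios correspond to the percentages quoted in the abstract (24.3%, 13.8%, 5.5%).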
A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample
Chang, Yu-Yen; Chao, Rikon; Wang, Wei-Hao; ...
2012-01-01
Disney et al. (2008) found a striking correlation among global parameters of H i-selected galaxies and concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem by applying principal component analysis to a fivefold larger sample with additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, with a single physical parameter explaining 83% of the variation. When color (g - i) is included, the first component still dominates but a second principal component develops. In addition, the near-infrared color (i - J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates more theoretical work.
Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy
NASA Astrophysics Data System (ADS)
Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee
2016-04-01
Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.
Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation.
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-04-01
Tissue segmentation of brain magnetic resonance images (MRIs) is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation with two approaches, supervised and unsupervised. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to a low level of performance. However, semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less trouble. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers has a significant role in the performance of this framework. Hence, in this paper we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and increase segmentation accuracy. Afterward, we use these improved classifiers together with a graph-based semi-supervised classifier as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both supervised methods and the individual semi-supervised classifiers.
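The core semi-supervised idea above, training on a few labels plus many unlabeled points, can be sketched with scikit-learn's self-training wrapper. This is a generic stand-in, not the EFM or MCo_Training algorithms from the paper, and the data are synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic two-class problem standing in for pixel feature vectors
X, y = make_classification(n_samples=400, n_features=5, random_state=0)

# Hide 90% of the labels; scikit-learn marks unlabeled samples with -1
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) < 0.9] = -1

# Self-training: the base classifier iteratively pseudo-labels confident points
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_semi)
acc = model.score(X, y)   # accuracy against the true (held-back) labels
```

An ensemble in the spirit of the paper would fit several such semi-supervised models and combine their predictions, e.g., by majority vote.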
Kastberger, G; Kranner, G
2000-02-01
Viscovery SOMine is a software tool for advanced analysis and monitoring of numerical data sets. It was developed for professional use in business, industry, and science and to support dependency analysis, deviation detection, unsupervised clustering, nonlinear regression, data association, pattern recognition, and animated monitoring. Based on the concept of self-organizing maps (SOMs), it employs a robust variant of unsupervised neural networks--namely, Kohonen's Batch-SOM, which is further enhanced with a new scaling technique for speeding up the learning process. This tool provides a powerful means by which to analyze complex data sets without prior statistical knowledge. The data representation contained in the trained SOM is systematically converted to be used in a spectrum of visualization techniques, such as evaluating dependencies between components, investigating geometric properties of the data distribution, searching for clusters, or monitoring new data. We have used this software tool to analyze and visualize multiple influences of the ocellar system on free-flight behavior in giant honeybees. Occlusion of ocelli will affect orienting reactivities in relation to flight target, level of disturbance, and position of the bee in the flight chamber; it will induce phototaxis and make orienting imprecise and dependent on motivational settings. Ocelli permit the adjustment of orienting strategies to environmental demands by enforcing abilities such as centering or flight kinetics and by providing independent control of posture and flight course.
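A self-organizing map, the core of the tool described above, arranges codebook vectors on a grid so that nearby units respond to similar inputs. The following is a minimal online Kohonen SOM sketch; Viscovery SOMine uses an enhanced Batch-SOM with a scaling technique that is not reproduced here, and all data are synthetic:

```python
import numpy as np

def train_som(X, grid=(6, 6), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal online Kohonen SOM: returns codebook of shape (rows, cols, dim)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.random((rows, cols, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        d = ((W - x) ** 2).sum(axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
        frac = t / n_iter
        lr = lr0 * (1 - frac)                          # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5              # shrinking neighborhood
        h = np.exp(-((coords - np.array(bmu, float)) ** 2).sum(axis=2)
                   / (2 * sigma**2))
        W += lr * h[:, :, None] * (x - W)              # pull neighborhood toward x
    return W

def bmu_of(W, x):
    d = ((W - x) ** 2).sum(axis=2)
    return np.unravel_index(d.argmin(), d.shape)

# Two synthetic behavioral "states"; the map should place them on distinct units
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.05, (50, 2)), rng.normal(1.0, 0.05, (50, 2))])
W = train_som(X)
```

After training, component planes (slices of W along the feature axis) support the dependency visualizations the abstract describes.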
Intelligent mapping of alluvial aquifer characteristics in the Otago region, New Zealand
NASA Astrophysics Data System (ADS)
Friedel, Michael; Rawlinson, Zara; Westerhoff, Rogier
2015-04-01
We adopt a hybrid approach to map the 3D hydrostratigraphy of an alluvial aquifer using big data collected in the Ettrick basin, Otago, New Zealand. First, a subset (1%) of the 18 million regional helicopter frequency-domain electromagnetic (HEM) sounding measurements (300 Hz, horizontal co-planar; 3300 Hz, vertical co-planar; 8200 Hz, horizontal co-planar; 40 kHz, horizontal co-planar; 137 kHz, horizontal co-planar) and their numerically inverted 1D resistivity (50-100 Ω-m) profiles are randomly split: 50% of these data are used to train an unsupervised machine-learning (ML) network, and 50% are used to test its performance at independent locations. The remaining set of HEM measurements is then presented to the vetted ML network to estimate the regional resistivity structure, which is compared to the previously inverted resistivity. Second, about 50 borehole autocorrelation functions are computed based on cross-component correlations of quantized borehole locations sampled for lithology and HEM sounding data. Third, an unsupervised ML network is trained and performance tested using sparse borehole lithology (fractions of sand, silt, clay, mudstone, schist), hydraulic properties (storage, hydraulic conductivity), and those HEM sounding data occurring within a radius defined by the maximum borehole autocorrelation distances. Fourth, this ML network is then used together with independent HEM sounding measurements to map the spatial distribution of physical and hydraulic aquifer properties across the basin.
Geospatiotemporal Data Mining of Remotely Sensed Phenology for Unsupervised Forest Threat Detection
NASA Astrophysics Data System (ADS)
Mills, R. T.; Hoffman, F. M.; Kumar, J.; Vulli, S. S.; Hargrove, W. W.; Spruce, J.
2010-12-01
Hargrove and Hoffman have previously developed and applied a scalable geospatiotemporal data mining approach to define a set of categorical, multivariate classes or states for describing and tracking the behavior of ecosystem properties through time within a multi-dimensional phase or state space. The method employs a standard k-means cluster analysis with enhancements that reduce the number of required comparisons, dramatically accelerating iterative convergence. In support of efforts by the USDA Forest Service to develop a National Early Warning System for Forest Disturbances, we have applied this geospatiotemporal cluster analysis procedure to annual phenology patterns derived from Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) for unsupervised change detection. We will present initial results from the analysis of seven years of 250-m MODIS NDVI data for the conterminous United States. While determining what constitutes a "normal" phenological pattern for any given location is challenging due to interannual climate variability, a spatially varying climate change trend, and the relatively short record of MODIS NDVI observations, these results demonstrate the utility of the method for detecting significant mortality events, like the progressive damage from mountain pine beetle, and suggest that the technique may be successfully implemented as a key component in an early warning system for identifying forest threats from natural and anthropogenic disturbances at a continental scale.
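The k-means phenology clustering described above treats each pixel's annual NDVI curve as a point in a high-dimensional phase space. A toy version on synthetic per-pixel curves (the scalability enhancements that accelerate convergence in the actual system are not reproduced) might look like:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Synthetic stand-in for per-pixel annual NDVI curves (46 composites/year):
# an evergreen-like flat phenology vs a deciduous-like summer-peaking one.
rng = np.random.default_rng(0)
doy = np.linspace(0, 2 * np.pi, 46)
evergreen = 0.7 + 0.02 * rng.normal(size=(500, 46))
deciduous = 0.4 + 0.3 * np.sin(doy - np.pi / 2) + 0.02 * rng.normal(size=(500, 46))
X = np.vstack([evergreen, deciduous])           # rows = pixels, cols = time steps

# Each cluster defines a categorical phenology "state"; change detection then
# tracks pixels whose state assignment shifts between years.
km = MiniBatchKMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

In the operational system the cluster count is far larger and the input is seven years of 250-m continental-scale MODIS NDVI, which is what motivates the accelerated k-means variant.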
Zuendorf, Gerhard; Kerrouche, Nacer; Herholz, Karl; Baron, Jean-Claude
2003-01-01
Principal component analysis (PCA) is a well-known technique for reduction of dimensionality of functional imaging data. PCA can be looked at as the projection of the original images onto a new orthogonal coordinate system with lower dimensions. The new axes explain the variance in the images in decreasing order of importance, showing correlations between brain regions. We used an efficient, stable and analytical method to work out the PCA of Positron Emission Tomography (PET) images of 74 normal subjects using [(18)F]fluoro-2-deoxy-D-glucose (FDG) as a tracer. Principal components (PCs) and their relation to age effects were investigated. Correlations between the projections of the images on the new axes and the age of the subjects were carried out. The first two PCs could be identified as being the only PCs significantly correlated to age. The first principal component, which explained 10% of the data set variance, was reduced only in subjects of age 55 or older and was related to loss of signal in and adjacent to ventricles and basal cisterns, reflecting expected age-related brain atrophy with enlarging CSF spaces. The second principal component, which accounted for 8% of the total variance, had high loadings from prefrontal, posterior parietal and posterior cingulate cortices and showed the strongest correlation with age (r = -0.56), entirely consistent with previously documented age-related declines in brain glucose utilization. Thus, our method showed that the effect of aging on brain metabolism has at least two independent dimensions. This method should have widespread applications in multivariate analysis of brain functional images. Copyright 2002 Wiley-Liss, Inc.
HT-FRTC: a fast radiative transfer code using kernel regression
NASA Astrophysics Data System (ADS)
Thelen, Jean-Claude; Havemann, Stephan; Lewis, Warren
2016-09-01
The HT-FRTC is a principal component based fast radiative transfer code that can be used across the electromagnetic spectrum from the microwave through to the ultraviolet to calculate transmittance, radiance and flux spectra. The principal components cover the spectrum at a very high spectral resolution, which allows very fast line-by-line, hyperspectral and broadband simulations for satellite-based, airborne and ground-based sensors. The principal components are derived during a code training phase from line-by-line simulations for a diverse set of atmosphere and surface conditions. The derived principal components are sensor independent, i.e. no extra training is required to include additional sensors. During the training phase we also derive the predictors which are required by the fast radiative transfer code to determine the principal component scores from the monochromatic radiances (or fluxes, transmittances). These predictors are calculated for each training profile at a small number of frequencies, which are selected by a k-means cluster algorithm during the training phase. Until recently the predictors were calculated using a linear regression. However, during a recent rewrite of the code the linear regression was replaced by a Gaussian Process (GP) regression which resulted in a significant increase in accuracy when compared to the linear regression. The HT-FRTC has been trained with a large variety of gases, surface properties and scatterers. Rayleigh scattering as well as scattering by frozen/liquid clouds, hydrometeors and aerosols have all been included. The scattering phase function can be fully accounted for by an integrated line-by-line version of the Edwards-Slingo spherical harmonics radiation code or approximately by a modification to the extinction (Chou scaling).
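The abstract's switch from linear to Gaussian Process regression for predicting principal-component scores can be illustrated on a toy nonlinear predictor-to-score mapping. This uses scikit-learn as a stand-in for the HT-FRTC internals, with invented data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LinearRegression

# Toy nonlinear relation between a "predictor frequency" radiance and a PC score
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 80))[:, None]
y = np.sin(x).ravel() + 0.05 * rng.normal(size=80)

lin = LinearRegression().fit(x, y)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=0.05**2).fit(x, y)   # alpha = noise variance

x_test = np.linspace(-3, 3, 200)[:, None]
truth = np.sin(x_test.ravel())
lin_err = np.abs(lin.predict(x_test) - truth).mean()
gp_err = np.abs(gp.predict(x_test) - truth).mean()
```

On a nonlinear mapping like this, the GP's error is substantially below the linear fit's, mirroring the accuracy gain reported for the code rewrite.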
Spectral decomposition of asteroid Itokawa based on principal component analysis
NASA Astrophysics Data System (ADS)
Koga, Sumire C.; Sugita, Seiji; Kamata, Shunichi; Ishiguro, Masateru; Hiroi, Takahiro; Tatsumi, Eri; Sasaki, Sho
2018-01-01
The heliocentric stratification of asteroid spectral types may hold important information on the early evolution of the Solar System. Asteroid spectral taxonomy is based largely on principal component analysis. However, how the surface properties of asteroids, such as composition and age, are projected in the principal-component (PC) space is not understood well. We decompose multi-band disk-resolved visible spectra of the Itokawa surface with principal component analysis (PCA) in comparison with main-belt asteroids. The obtained distribution of Itokawa spectra projected in the PC space of main-belt asteroids follows a linear trend linking the Q-type and S-type regions and is consistent with the results of space-weathering experiments on ordinary chondrites and olivine, suggesting that this trend may be a space-weathering-induced spectral evolution track for S-type asteroids. Comparison with space-weathering experiments also yields a short average surface age (< a few million years) for Itokawa, consistent with the cosmic-ray-exposure time of returned samples from Itokawa. The Itokawa PC score distribution exhibits asymmetry along the evolution track, strongly suggesting that space weathering has become saturated on this young asteroid. The freshest spectrum found on Itokawa exhibits a clear sign of space weathering, indicating again that space weathering occurs very rapidly on this body. We also conducted PCA on Itokawa spectra alone and compared the results with space-weathering experiments. The obtained results indicate that the first principal component of Itokawa surface spectra is consistent with spectral change due to space weathering and that the spatial variation in the degree of space weathering is very large (a factor of three in surface age), which strongly suggests the presence of strong regional/local resurfacing process(es) on this small asteroid.
NASA Astrophysics Data System (ADS)
Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi
2012-07-01
The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing a rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, we observe that the proposed ANN model generates very good predictions for Mumbai and Kolkata. The results are supported by the linearly distributed coordinates in the scatterplots.
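The PCA-then-ANN pipeline described above, where PCA removes multicollinearity before a multilayer perceptron is trained, can be sketched with scikit-learn. The data below are synthetic collinear predictors, not the ozone series, and the rotation step is omitted:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Four nearly identical predictors (strong multicollinearity) plus a
# nonlinear target, standing in for correlated station measurements.
rng = np.random.default_rng(0)
base = rng.normal(size=(300, 1))
X = np.hstack([base + 0.01 * rng.normal(size=(300, 1)) for _ in range(4)])
y = np.sin(base.ravel()) + 0.05 * rng.normal(size=300)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),              # decorrelated inputs for the network
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0),
)
model.fit(X, y)
r2 = model.score(X, y)                # in-sample fit of the pipeline
```

Feeding PCA scores rather than raw correlated predictors into the perceptron is the multicollinearity-removal step the abstract refers to.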
NASA Astrophysics Data System (ADS)
Seo, Jihye; An, Yuri; Lee, Jungsul; Choi, Chulhee
2015-03-01
Indocyanine green (ICG), a near-infrared fluorophore, has been used in the visualization of vascular structure and the noninvasive diagnosis of vascular disease. Although many imaging techniques have been developed, there are still limitations in the diagnosis of vascular diseases. We have recently developed a minimally invasive diagnostic system based on ICG fluorescence imaging for sensitive detection of vascular insufficiency. In this study, we used principal component analysis (PCA) to examine the ICG spatiotemporal profile and to obtain pathophysiological information from ICG dynamics. Here we demonstrate that principal components of ICG dynamics in both feet showed significant differences between normal controls and diabetic patients with vascular complications. We extracted the PCA time courses of the first three components and found a distinct pattern in diabetic patients. We propose that PCA of ICG dynamics reveals better classification performance compared with fluorescence intensity analysis. We anticipate that specific features of spatiotemporal ICG dynamics can be useful in the diagnosis of various vascular diseases.
Leadership Coaching: A Multiple-Case Study of Urban Public Charter School Principals' Experiences
ERIC Educational Resources Information Center
Lackritz, Anne D.
2017-01-01
This multi-case study seeks to understand the experiences of New York City and Washington, DC public charter school principals who have experienced leadership coaching, a component of leadership development, beyond their novice years. The research questions framing this study address how experienced public charter school principals describe the…
The View from the Principal's Office: An Observation Protocol Boosts Literacy Leadership
ERIC Educational Resources Information Center
Novak, Sandi; Houck, Bonnie
2016-01-01
The Minnesota Elementary School Principals' Association offered Minnesota principals professional learning that placed a high priority on literacy instruction and developing a collegial culture. A key component is the literacy classroom visit, an observation protocol used to gather data to determine the status of literacy teaching and student…
ERIC Educational Resources Information Center
Agnew, David W.
2011-01-01
Public school principals must meet many challenges and make decisions concerning financial obligations while providing the best learning environment for students. A major challenge to principals is implementing technological components successfully while providing teachers the 21st century instructional skills needed to enhance students'…
Rough Set Based Splitting Criterion for Binary Decision Tree Classifiers
2006-09-26
Alata O., Fernandez-Maloigne C., and Ferrie J.C. (2001). Unsupervised Algorithm for the Segmentation of Three-Dimensional Magnetic Resonance Brain Images. … instinctual and learned responses in the brain, causing it to make decisions based on patterns in the stimuli. Using this deceptively simple process … 2001. [2] Bohn C. (1997). An Incremental Unsupervised Learning Scheme for Function Approximation. In: Proceedings of the 1997 IEEE International …
ERIC Educational Resources Information Center
Snyder, Robin M.
2015-01-01
The field of topic modeling has become increasingly important over the past few years. Topic modeling is an unsupervised machine learning way to organize text (or image or DNA, etc.) information such that related pieces of text can be identified. This paper/session will present/discuss the current state of topic modeling, why it is important, and…
ERIC Educational Resources Information Center
Ladyshewsky, Richard K.
2015-01-01
This research explores differences in multiple choice test (MCT) scores in a cohort of post-graduate students enrolled in a management and leadership course. A total of 250 students completed the MCT in either a supervised in-class paper and pencil test or an unsupervised online test. The only statistically significant difference between the nine…
Three-dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
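The Hilbert-transform-then-SVD portion of the pipeline can be sketched as below. This is a toy under stated assumptions: an even-length time axis, purely synthetic 1-D-in-space data rather than 3D polar ice flows, and a simple eigenvalue-decay cutoff standing in for the patent's 3-10 component selection:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (Hilbert transform); assumes an even-length last axis."""
    n = x.shape[-1]
    Xf = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0  # Nyquist bin for even n
    return np.fft.ifft(Xf * h, axis=-1)

rng = np.random.default_rng(1)
n_space, n_time = 64, 128
t = np.arange(n_time)
# Synthetic spatial field oscillating in time, plus noise.
field = np.outer(np.sin(np.linspace(0, np.pi, n_space)), np.cos(2 * np.pi * t / 32))
data = field + 0.05 * rng.normal(size=(n_space, n_time))

Z = analytic_signal(data)                           # complex representation
U, s, Vt = np.linalg.svd(Z - Z.mean(axis=1, keepdims=True), full_matrices=False)
decay = s / s[0]
k = int(np.sum(decay > 0.05))                       # keep components until eigenvalues decay
cpc_temporal = Vt[:k]                               # temporal parts of the complex PCs
```

A useful property of complex PCA is visible here: a sine/cosine pair that ordinary PCA would split into two real components collapses into a single complex component.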
Liang, Xuedong; Liu, Canmian; Li, Zhi
2017-01-01
In connection with the sustainable development of scenic spots, this paper establishes a comprehensive measurement model of the sustainable capacity of scenic spots that considers resource conditions, economic benefits, auxiliary industry scale, and the ecological environment. The index system is optimized by principal components analysis to extract principal components; weights are assigned to the principal components by the entropy method; the sustainable capacity of scenic spots in each province of China is analyzed comprehensively in combination with the TOPSIS method; and, finally, suggestions are put forward to aid decision-making. According to the study, this method provides an effective reference for research on the sustainable development of scenic spots and is valuable for establishing specific, scientific countermeasures for the sustainable development of scenic spots and auxiliary industries. PMID:29271947
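The entropy-weighting and TOPSIS steps of such a model can be sketched as below. The indicator matrix is invented for illustration (three alternatives, three benefit-type indicators), not the paper's actual provincial data:

```python
import numpy as np

# Hypothetical indicator matrix: rows = regions, columns = benefit-type indicators.
X = np.array([[0.8, 0.6, 0.9],
              [0.5, 0.9, 0.4],
              [0.7, 0.7, 0.6]], dtype=float)

# Entropy weighting: more-discriminating (lower-entropy) indicators get larger weights.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
w = (1 - E) / (1 - E).sum()

# TOPSIS: rank alternatives by closeness to the ideal vs. anti-ideal solution.
V = w * (X / np.linalg.norm(X, axis=0))   # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)  # in [0, 1]; higher is better
ranking = np.argsort(-closeness)          # best alternative first
```

Cost-type indicators, which the real model would also handle, would swap the roles of the column maximum and minimum when forming the ideal solutions.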
The variance needed to accurately describe jump height from vertical ground reaction force data.
Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran
2014-12-01
In functional principal component analysis (fPCA), a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies, and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe jump height accurately utilizing vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
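The threshold-based retention rule at issue here can be sketched as follows. The curves are synthetic stand-ins (random-walk shapes) rather than vGRF recordings, and the helper name is hypothetical:

```python
import numpy as np

def n_components_for(X, threshold=0.99):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches the given threshold."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(ratio, threshold) + 1)

rng = np.random.default_rng(2)
# 50 synthetic curves of 100 samples each (random walks, smooth-ish shapes).
curves = rng.normal(size=(50, 100)).cumsum(axis=1)
k99 = n_components_for(curves, 0.99)
k999 = n_components_for(curves, 0.999)
```

Raising the threshold can only keep the component count the same or increase it, which is why the study's 99%-99.9% band maps to a range (6-11) of retained components rather than a single number.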
Belgiu, Mariana; Dr Guţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. 
The first is that object-based image analysis can be automated without sacrificing classification accuracy. The second is that our unexpected results challenge the previously accepted idea that classification depends on segmentation, casting doubt on the value of pursuing 'optimal segmentation'. Rather, our results suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be tolerated and a high level of classification accuracy can still be achieved.
Brown, Justin C; Ko, Emily M; Schmitz, Kathryn H
2015-02-01
The health benefits of exercise increase in dose-response fashion among cancer survivors. However, it is unclear how to identify cancer survivors who may require a pre-exercise evaluation before they progress from the common recommendation of walking to unsupervised moderate- to vigorous-intensity exercise. To clarify how to identify cancer survivors who should undergo a pre-exercise evaluation before they progress from the common recommendation of walking to unsupervised moderate- to vigorous-intensity exercise. Electronic survey. Forty-seven (n = 47) experts in the field of exercise physiology, rehabilitation medicine, and cancer survivorship. Not applicable. We synthesized peer-reviewed guidelines for exercise and cancer survivorship and identified 82 health factors that may warrant a pre-exercise evaluation before a survivor engages in unsupervised moderate- to vigorous-intensity exercise. The 82 health factors were classified into 3 domains: (1) clinical health factors; (2) comorbidity and device health factors; and (3) medications. We surveyed a sample of experts asking them to identify which of the 82 health factors among cancer survivors would indicate the need for a pre-exercise evaluation before they engaged in moderate- to vigorous-intensity exercise. The response rate to our survey was 75% (n = 47). Across the 3 domains of health factors, acute symptoms, comorbidities, and medications related to cardiovascular disease were agreed on to indicate a pre-exercise evaluation for survivors before they engaged in unsupervised moderate- to vigorous-intensity exercise. Other health factors in the survey included hematologic, musculoskeletal, systemic, gastrointestinal, pulmonary, and neurological symptoms and comorbidities. Eighteen experts (38%) said it was difficult to provide absolute answers because no 2 patients are alike, and their decisions are made on a case-by-case basis. 
The results from this expert survey will help to identify which cancer survivors should undergo a pre-exercise evaluation before they engage in unsupervised moderate- to vigorous-intensity exercise. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Richard Tran Mills; Jitendra Kumar; Forrest M. Hoffman; William W. Hargrove; Joseph P. Spruce; Steven P. Norman
2013-01-01
We investigated the use of principal components analysis (PCA) to visualize dominant patterns and identify anomalies in a multi-year land surface phenology data set (231 m × 231 m normalized difference vegetation index (NDVI) values derived from the Moderate Resolution Imaging Spectroradiometer (MODIS)) used for detecting threats to forest health in the conterminous...
Multivariate analysis of light scattering spectra of liquid dairy products
NASA Astrophysics Data System (ADS)
Khodasevich, M. A.
2010-05-01
Visible light scattering spectra from the surface layer of samples of commercial liquid dairy products are recorded with a colorimeter. The principal component method is used to analyze these spectra. Vectors representing the samples of dairy products in a multidimensional space of spectral counts are projected onto a three-dimensional subspace of principal components. The magnitudes of these projections are found to depend on the type of dairy product.
James R. Wallis
1965-01-01
Written in Fortran IV and MAP, this computer program can handle up to 120 variables, and retain 40 principal components. It can perform simultaneous regression of up to 40 criterion variables upon the varimax rotated factor weight matrix. The columns and rows of all output matrices are labeled by six-character alphanumeric names. Data input can be from punch cards or...
Dihedral angle principal component analysis of molecular dynamics simulations.
Altis, Alexandros; Nguyen, Phuong H; Hegger, Rainer; Stock, Gerhard
2007-06-28
It has recently been suggested by Mu et al. [Proteins 58, 45 (2005)] to use backbone dihedral angles instead of Cartesian coordinates in a principal component analysis of molecular dynamics simulations. Dihedral angles may be advantageous because internal coordinates naturally provide a correct separation of internal and overall motion, which was found to be essential for the construction and interpretation of the free energy landscape of a biomolecule undergoing large structural rearrangements. To account for the circular statistics of angular variables, a transformation from the space of dihedral angles {phi(n)} to the metric coordinate space {x(n)=cos phi(n),y(n)=sin phi(n)} was employed. To study the validity and the applicability of the approach, in this work the theoretical foundations underlying the dihedral angle principal component analysis (dPCA) are discussed. It is shown that the dPCA amounts to a one-to-one representation of the original angle distribution and that its principal components can readily be characterized by the corresponding conformational changes of the peptide. Furthermore, a complex version of the dPCA is introduced, in which N angular variables naturally lead to N eigenvalues and eigenvectors. Applying the methodology to the construction of the free energy landscape of decaalanine from a 300 ns molecular dynamics simulation, a critical comparison of the various methods is given.
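The cos/sin transformation at the heart of dPCA can be sketched as below, using a made-up two-conformation toy trajectory rather than actual molecular dynamics data:

```python
import numpy as np

# Synthetic "trajectory": frames of N dihedral angles clustering
# around two hypothetical conformations (illustration only).
rng = np.random.default_rng(3)
n_frames, n_angles = 1000, 4
centers = rng.uniform(-np.pi, np.pi, size=(2, n_angles))
labels = rng.integers(0, 2, size=n_frames)
phi = centers[labels] + 0.2 * rng.normal(size=(n_frames, n_angles))

# Map each angle phi to (cos phi, sin phi): 2N metric coordinates that
# respect circular statistics, then run an ordinary PCA on that space.
X = np.hstack([np.cos(phi), np.sin(phi)])
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n_frames - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]           # eigh returns ascending order
pcs = Xc @ eigvecs[:, order[:2]]            # projection onto first two dPCA modes
```

A histogram of `pcs` would show the two conformational clusters separated along the leading components, which is the kind of free energy landscape construction the abstract describes; the complex dPCA variant discussed in the paper instead treats each angle as a single complex variable.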