Identifying apple surface defects using principal components analysis and artificial neural networks
USDA-ARS's Scientific Manuscript database
Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used to identify the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements, with the ultimate aim of assisting real-time plasma control. In the first method, instead of the conventional structure in which a single network with an optimum number of processing elements computes the outputs, a multinetwork system connected in parallel performs the calculation; this is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network: a principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed set, rather than the full set of measurements, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the parameters recovered by this latter type of modified network is a further improvement over that of the double neural network. This result differs from an earlier work, in which the double neural network showed better performance. The conventional network and function parametrization methods have also been used for comparison. The conventional network has been used to optimize the set of magnetic diagnostics; the effective set of sensors assessed by this network is compared with that of the principal component based network. Fault tolerance of the neural networks has been tested: the double neural network showed the greatest resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared.
The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba
2003-01-01
A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. The principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used for the selection of the best set of extracted principal components. A feed forward artificial neural network with a back-propagation of error algorithm was used to process the nonlinear relationship between the selected principal components and biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.
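The PC-GA-ANN pipeline above (compress descriptors into principal components, select the most informative components, feed them to a network) can be sketched in a few lines of numpy. A simple correlation ranking stands in for the genetic-algorithm search, and the toy data, sizes, and variable names are illustrative assumptions rather than the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the descriptor matrix: 40 "molecules" x 20 descriptors
# (the real study used 124 compounds and 837 descriptors).
X = rng.normal(size=(40, 20))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=40)   # toy "binding affinity"

# Step 1: compress the descriptors into principal components (PCA by SVD).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                                   # PC scores, one column per PC

# Step 2: select the most relevant PCs. A plain correlation ranking stands
# in for the genetic algorithm, which would search subsets of PCs instead.
corr = np.abs([np.corrcoef(scores[:, j], y)[0, 1] for j in range(scores.shape[1])])
selected = np.argsort(corr)[::-1][:5]                # keep the 5 best PCs
ann_inputs = scores[:, selected]                     # step 3 would train the ANN
```

Because PC scores are mutually uncorrelated by construction, the selection step can treat each component independently, which is what makes a GA (or any subset search) over components cheap.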
Psychometric Measurement Models and Artificial Neural Networks
ERIC Educational Resources Information Center
Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.
2004-01-01
The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…
Principal Component Analysis Based Measure of Structural Holes
NASA Astrophysics Data System (ADS)
Deng, Shiguo; Zhang, Wenqing; Yang, Huijie
2013-02-01
Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to the value for the corresponding shuffled networks, whereas the compressibility coefficient for extended Barabási-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that for the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.
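The compressibility idea can be illustrated as follows. The definition below (fraction of neighbourhood-vector variance captured by the top principal components, so that redundant "identical pattern" neighbourhoods compress well) is an assumption for illustration, not necessarily the paper's exact formula:

```python
import numpy as np

def compressibility(adj, k=2):
    """Fraction of the variance of the node-neighbourhood vectors captured
    by the top-k principal components (illustrative definition)."""
    A = np.asarray(adj, dtype=float)
    s = np.linalg.svd(A - A.mean(axis=0), compute_uv=False)
    var = s ** 2
    return var[:k].sum() / var.sum()

# A 6-node star (all leaf rows identical, hence redundant) vs. a 6-node ring.
star = np.zeros((6, 6))
star[0, 1:] = star[1:, 0] = 1
ring = np.zeros((6, 6))
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1
```

Under this definition the star compresses perfectly (its centred adjacency matrix has rank 1), while the ring retains variance in several components, so its score is lower.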
Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis
NASA Astrophysics Data System (ADS)
Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei
2018-01-01
In order to evaluate the development level of low-voltage distribution networks objectively and scientifically, a hierarchical analysis method is utilized to construct an evaluation index model for the low-voltage distribution network. Based on principal component analysis and the characteristic logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm can decorrelate and reduce the dimensions of the evaluation model, and the comprehensive score shows a better degree of dispersion. Because the comprehensive scores of the districts are concentrated, a clustering method is adopted to analyse them, realizing a stratified evaluation of the districts. An example is given to verify the objectivity and scientific soundness of the evaluation method.
Goekoop, Rutger; Goekoop, Jaap G
2014-01-01
The vast number of psychopathological syndromes that can be observed in clinical practice can be described in terms of a limited number of elementary syndromes that are differentially expressed. Previous attempts to identify elementary syndromes have shown limitations that have slowed progress in the taxonomy of psychiatric disorders. To examine the ability of network community detection (NCD) to identify elementary syndromes of psychopathology and move beyond the limitations of current classification methods in psychiatry. 192 patients with unselected mental disorders were tested on the Comprehensive Psychopathological Rating Scale (CPRS). Principal component analysis (PCA) was performed on the bootstrapped correlation matrix of symptom scores to extract the principal component structure (PCS). An undirected and weighted network graph was constructed from the same matrix. Network community structure (NCS) was optimized using a previously published technique. In the optimal network structure, network clusters showed an 89% match with principal components of psychopathology. Six network clusters were found, including "Depression", "Mania", "Anxiety", "Psychosis", "Retardation", and "Behavioral Disorganization". Network metrics were used to quantify the continuities between the elementary syndromes. We present the first comprehensive network graph of psychopathology that is free from the biases of previous classifications: a 'Psychopathology Web'. Clusters within this network represent elementary syndromes that are connected via a limited number of bridge symptoms. Many problems of previous classifications can be overcome by using a network approach to psychopathology.
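A minimal sketch of the two parallel analyses described above, PCA of a symptom correlation matrix and community detection on a network built from the same matrix, might look like this. The data are synthetic, and simple edge thresholding plus connected components stands in for the paper's community-detection optimisation:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

# Synthetic ratings: two latent syndromes, each driving 3 of 6 "symptoms".
f1, f2 = rng.normal(size=(2, 200))
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.normal(size=(200, 6))

C = np.corrcoef(X, rowvar=False)            # symptom correlation matrix

# PCA view: the top two eigenvalues should dominate (two syndromes).
evals = np.sort(np.linalg.eigvalsh(C))[::-1]

# Network view: keep strong edges, then take connected components as a
# crude stand-in for optimised community detection.
adj = (np.abs(C) > 0.5) & ~np.eye(6, dtype=bool)

def components(adj):
    seen, comps = set(), []
    for start in range(len(adj)):
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(adj[u]):
                if int(v) not in comp:
                    comp.add(int(v))
                    queue.append(int(v))
        seen |= comp
        comps.append(sorted(comp))
    return comps

clusters = components(adj)                  # communities should match the PCs
```

On this toy data the network clusters and the principal components recover the same two-syndrome structure, mirroring the high PC-cluster match reported above.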
Burst and Principal Components Analyses of MEA Data Separates Chemicals by Class
Microelectrode arrays (MEAs) detect drug and chemical induced changes in action potential "spikes" in neuronal networks and can be used to screen chemicals for neurotoxicity. Analytical "fingerprinting," using Principal Components Analysis (PCA) on spike trains recorded from prim...
Microelectrode arrays (MEAs) detect drug and chemical induced changes in neuronal network function and have been used for neurotoxicity screening. As a proof-of-concept, the current study assessed the utility of analytical "fingerprinting" using Principal Components Analysis (P...
Online signature recognition using principal component analysis and artificial neural network
NASA Astrophysics Data System (ADS)
Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan
2016-12-01
In this paper, we propose an algorithm for online signature recognition using the fingertip position in the air, taken from depth images acquired by a Kinect. We extract 10 statistical features from each of the X, Y, and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, retaining 99.02% of the total variance. We implement the proposed algorithm and test it on actual online signatures. In experiments, we verify that the proposed method successfully classifies 15 different online signatures, achieving a recognition rate of 98.47% when using only 10 feature vectors.
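Choosing the number of principal components by a cumulative-variance criterion, as in the 99% figure above, can be sketched as follows. The data are toy values, and the feature extraction and classifier stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 30-dimensional feature matrix for 60 signatures, built with low-rank
# structure so that a few components carry almost all the variance.
X = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 30))
X += 0.01 * rng.normal(size=X.shape)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)

# Smallest number of components whose cumulative explained variance
# reaches 99% of the total.
k = int(np.searchsorted(explained, 0.99) + 1)
Z = Xc @ Vt[:k].T            # reduced inputs for the neural network
```

The squared singular values are proportional to per-component variance, so the cumulative sum gives exactly the "percentage of total variance" quoted in such abstracts.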
The Influence Function of Principal Component Analysis by Self-Organizing Rule.
Higuchi; Eguchi
1998-07-28
This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule has been proposed and its robustness observed through the simulation study by Xu and Yuille (1995). In this article, the robustness of the algorithm against outliers is investigated by using the theory of influence function. The influence function of the principal component vector is given in an explicit form. Through this expression, the method is shown to be robust against any directions orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.
Study on pattern recognition of Raman spectrum based on fuzzy neural network
NASA Astrophysics Data System (ADS)
Zheng, Xiangxiang; Lv, Xiaoyi; Mo, Jiaqing
2017-10-01
Hydatid disease is a serious parasitic disease in many regions worldwide, especially in Xinjiang, China. Raman spectra of serum from patients with echinococcosis were selected as the research object in this paper. Raman spectra of blood samples from healthy people and from patients with echinococcosis were measured, and their spectral characteristics analyzed. A fuzzy neural network not only has the ability of fuzzy logic to deal with uncertain information, but also the knowledge-storage capacity of a neural network, so it is combined with Raman spectroscopy for the disease diagnosis problem. Firstly, principal component analysis (PCA) is used to extract the principal components of the Raman spectra, reducing the network input and accelerating prediction while retaining the information of the original data. Then the extracted principal components are used as the input of the neural network; the hidden layer of the network performs rule generation and inference, and the output layer produces the fuzzy classification output. Finally, a subset of samples is randomly selected to train the network, the trained network is used to predict the remaining samples, and the predictions are compared with a general BP neural network to illustrate the feasibility and advantages of the fuzzy neural network. Success in this endeavor would be helpful for research on the spectroscopic diagnosis of disease, and the approach can be applied in practice in many other spectral analysis fields.
Learning Principal Component Analysis by Using Data from Air Quality Networks
ERIC Educational Resources Information Center
Perez-Arribas, Luis Vicente; Leon-González, María Eugenia; Rosales-Conrado, Noelia
2017-01-01
With the final objective of using computational and chemometrics tools in the chemistry studies, this paper shows the methodology and interpretation of the Principal Component Analysis (PCA) using pollution data from different cities. This paper describes how students can obtain data on air quality and process such data for additional information…
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems, in the static as well as in the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial features, and it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet-transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the face images in both the spatial and frequency domains. The experimental results indicate that this face recognition method yields a significant improvement in recognition rate as well as better computational efficiency.
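The wavelet-then-PCA idea can be sketched with a hand-rolled one-level Haar approximation followed by eigenface-style PCA. The image sizes, the averaging variant of the Haar step, and the number of retained components are illustrative assumptions:

```python
import numpy as np

def haar2d_ll(img):
    """One-level 2D Haar approximation: average 2x2 blocks, returning a
    quarter-size image that keeps the coarse structure."""
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

rng = np.random.default_rng(3)
faces = rng.normal(size=(10, 16, 16))     # toy stand-ins for face images

# Shrink 16x16 -> 8x8 with the wavelet step, then run eigenface-style PCA.
coarse = np.stack([haar2d_ll(f) for f in faces]).reshape(10, 64)
Xc = coarse - coarse.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:5]                       # top 5 principal directions
weights = Xc @ eigenfaces.T               # 5-number code per face
```

The wavelet step cuts the pixel count by a factor of 4 before the eigenvector computation, which is exactly where the abstract locates the computational savings.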
NASA Astrophysics Data System (ADS)
Chattopadhyay, Surajit; Chattopadhyay, Goutami
2012-10-01
In the work discussed in this paper we considered total ozone time series over Kolkata (22°34'10.92″N, 88°22'10.92″E), an urban area in eastern India. Using cloud cover, average temperature, and rainfall as the predictors, we developed an artificial neural network, in the form of a multilayer perceptron with sigmoid non-linearity, for prediction of monthly total ozone concentrations from values of the predictors in previous months. We also estimated total ozone from values of the predictors in the same month. Before development of the neural network model we removed multicollinearity by means of principal component analysis. On the basis of the variables extracted by principal component analysis, we developed three artificial neural network models. By rigorous statistical assessment it was found that cloud cover and rainfall can act as good predictors for monthly total ozone when they are considered as the set of input variables for the neural network model constructed in the form of a multilayer perceptron. In general, the artificial neural network has good potential for predicting and estimating monthly total ozone on the basis of the meteorological predictors. It was further observed that during pre-monsoon and winter seasons, the proposed models perform better than during and after the monsoon.
Dynamic competitive probabilistic principal components analysis.
López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel
2009-04-01
We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.
Automatic Detection of Nausea Using Bio-Signals During Immerging in A Virtual Reality Environment
2001-10-25
reduce the redundancy in those parameters, and constructed an artificial neural network with those principal components. Using the network we constructed, we could partially detect nausea in real time.
Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J
2015-01-01
In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
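A compact sketch of (2D)2PCA on matrix-shaped inputs, assuming the common formulation with separate row- and column-direction covariance matrices; the sliding-window construction and the RBFNN stage are omitted, and all sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 12, 8))          # 50 input "windows" of size 12x8
D = A - A.mean(axis=0)

Gcol = np.einsum('nij,nik->jk', D, D) / len(A)   # 8x8 covariance (right side)
Grow = np.einsum('nij,nkj->ik', D, D) / len(A)   # 12x12 covariance (left side)

wc, Vc = np.linalg.eigh(Gcol)
wr, Vr = np.linalg.eigh(Grow)
Y = Vc[:, ::-1][:, :3]                    # top 3 right (column) directions
Xl = Vr[:, ::-1][:, :4]                   # top 4 left (row) directions

# Project each 12x8 window from both sides: Z_n = Xl^T D_n Y
Z = np.einsum('ik,nij,jl->nkl', Xl, D, Y)  # 50 x 4 x 3 reduced features
```

Projecting from both sides keeps the matrix structure of the window intact, whereas classical PCA would first flatten each 12x8 window into a 96-vector.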
Development of neural network techniques for finger-vein pattern classification
NASA Astrophysics Data System (ADS)
Wu, Jian-Da; Liu, Chiung-Tsiung; Tsai, Yi-Jang; Liu, Jun-Ching; Chang, Ya-Wen
2010-02-01
A personal identification system using finger-vein patterns and neural network techniques is proposed in the present study. In the proposed system, the finger-vein patterns are captured by a device that transmits near-infrared light through the finger and records the patterns for signal analysis and classification. The biometric verification system consists of feature extraction using principal component analysis and pattern classification using both a back-propagation network and an adaptive neuro-fuzzy inference system. Finger-vein features are first extracted by principal component analysis to reduce the computational burden and remove noise residing in the discarded dimensions. The features are then used in pattern classification and identification. To verify the effectiveness of the proposed adaptive neuro-fuzzy inference system for pattern classification, it is compared with the back-propagation network. The experimental results indicated that the proposed system using the adaptive neuro-fuzzy inference system demonstrated better performance than the back-propagation network for personal identification using finger-vein patterns.
Dordek, Yedidyah; Soudry, Daniel; Meir, Ron; Derdikman, Dori
2016-01-01
Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights are learned via a generalized Hebbian rule. The architecture of this network highly resembles neural networks used to perform Principal Component Analysis (PCA). Both numerical results and analytic considerations indicate that if the components of the feedforward neural network are non-negative, the output converges to a hexagonal lattice. Without the non-negativity constraint, the output converges to a square lattice. Consistent with experiments, the grid spacing ratio between the first two consecutive modules is ~1.4. Our results express a possible linkage between place cell to grid cell interactions and PCA. DOI: http://dx.doi.org/10.7554/eLife.10094.001 PMID:26952211
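The flavour of Hebbian PCA learning with a non-negativity constraint can be sketched with a single Oja-rule output neuron whose weights are clamped at zero. The covariance structure (chosen so the principal eigenvector is all-positive), learning rate, and clamp placement are illustrative assumptions, not the paper's full place-to-grid network:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed input statistics: the principal eigenvector of this covariance
# is the all-positive uniform direction, so the clamp does not fight it.
C = 0.2 * np.eye(10) + 0.08 * np.ones((10, 10))
X = rng.normal(size=(5000, 10)) @ np.linalg.cholesky(C).T

w = rng.uniform(0.01, 0.1, size=10)       # small positive initial weights
eta = 0.01
for x in X:
    y = w @ x                             # output of the single "grid" unit
    w += eta * y * (x - y * w)            # Oja's Hebbian rule
    w = np.maximum(w, 0.0)                # non-negativity constraint

top = np.linalg.eigh(C)[1][:, -1]         # true principal eigenvector
alignment = abs(w @ top) / np.linalg.norm(w)
```

Oja's rule drives the weight vector toward the unit-norm principal eigenvector of the input covariance; the paper's key observation is how adding the non-negativity constraint reshapes the learned solution.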
Kalegowda, Yogesh; Harmer, Sarah L
2013-01-08
Artificial neural network (ANN) and hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for the classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a guaranteed share of the variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
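PCA compression with an error-bound guarantee can be sketched by keeping the fewest components whose discarded variance stays within a target reconstruction error. The per-entry mean-squared-error form of the bound is an assumed formulation, not necessarily the paper's:

```python
import numpy as np

def pca_compress(X, mse_bound):
    """Keep the fewest principal components whose reconstruction error
    (mean squared, per matrix entry) stays within mse_bound."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    total = (s ** 2).sum()
    for k in range(1, len(s) + 1):
        # residual MSE after k components = discarded squared singular values
        resid = (total - (s[:k] ** 2).sum()) / X.size
        if resid <= mse_bound:
            break
    return mu, Vt[:k], Xc @ Vt[:k].T      # decoder needs mu and the basis

rng = np.random.default_rng(6)
readings = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 16))  # correlated sensors
mu, basis, codes = pca_compress(readings, mse_bound=1e-3)
recon = codes @ basis + mu
mse = np.mean((recon - readings) ** 2)
```

The cluster head would transmit only `codes` (plus the small basis), so strongly correlated sensor readings compress to a handful of numbers per sample while the error bound is honoured by construction.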
INTEGRATED ENVIRONMENTAL ASSESSMENT OF THE MID-ATLANTIC REGION WITH ANALYTICAL NETWORK PROCESS
A decision analysis method for integrating environmental indicators was developed. This was a combination of Principal Component Analysis (PCA) and the Analytic Network Process (ANP). Being able to take into account interdependency among variables, the method was capable of ran...
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes of correlations in the input data or updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to the neurons. The algorithm is obtained by modifying one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales: on the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons; on the slower scale, output neurons compete for the fulfillment of their "own interests", and the basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (and why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
NASA Astrophysics Data System (ADS)
Whaley, Gregory J.; Karnopp, Roger J.
2010-04-01
The goal of the Air Force Highly Integrated Photonics (HIP) program is to develop and demonstrate single photonic chip components which support a single mode fiber network architecture for use on mobile military platforms. We propose an optically transparent, broadcast and select fiber optic network as the next generation interconnect on avionics platforms. In support of this network, we have developed three principal, single-chip photonic components: a tunable laser transmitter, a 32x32 port star coupler, and a 32 port multi-channel receiver which are all compatible with demanding avionics environmental and size requirements. The performance of the developed components will be presented as well as the results of a demonstration system which integrates the components into a functional network representative of the form factor used in advanced avionics computing and signal processing applications.
NASA Astrophysics Data System (ADS)
Nasertdinova, A. D.; Bochkarev, V. V.
2017-11-01
Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method for solving the overfitting problem is proposed in this article. The method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and can be implemented using existing libraries for neural computing. The algorithm was tested on the problem of recognizing handwritten symbols from the MNIST database, as well as on time-series prediction tasks (series of the average monthly number of sunspots and series from the Lorenz system were used). It is shown that applying principal component analysis makes it possible to reduce the number of parameters of the neural network model while maintaining good results. The average error rate for the recognition of handwritten digits from the MNIST database was 1.12% (comparable to results obtained using deep learning methods), while the number of parameters of the neural network could be reduced by a factor of up to 130.
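One common way to realise this kind of parameter reduction is a low-rank (principal component) factorisation of a layer's weight matrix. The sketch below assumes an idealised low-rank weight matrix and is not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(8)

# A toy "trained" layer weight matrix, assumed to have low-rank structure.
W = rng.normal(size=(128, 8)) @ rng.normal(size=(8, 256))   # 32768 parameters

U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
A, B = U[:, :k] * s[:k], Vt[:k]           # y = A @ (B @ x) replaces y = W @ x

x = rng.normal(size=256)
rel_err = np.linalg.norm(A @ (B @ x) - W @ x) / np.linalg.norm(W @ x)
saving = W.size / (A.size + B.size)       # >10x fewer parameters here
```

Replacing one dense layer by two thin ones is expressible in any standard neural-computing library (two linear layers with no nonlinearity between them), which matches the article's claim that the method needs no special machinery.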
Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui
2010-01-01
The preprocessing method of multiplicative scatter correction (MSC) was used to effectively reject noise produced in the original spectra by environmental physical factors; the principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, with the number of principal components determined by cross validation. The calculated principal components were used as inputs to the artificial neural network model, which was used to relate the chlorophyll content of winter wheat to the reflectance spectrum and predict the chlorophyll content. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the standard deviation (SD) and relative standard deviation (RSD) were 0.145 and 4.21%, respectively. This means that the MSC-ANN algorithm can effectively reject noise produced in the original spectra by environmental physical factors and establish an accurate model for predicting the chlorophyll content of living leaves, to replace the classical method and meet the needs of fast analysis of agricultural products.
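The NIPALS iteration itself is straightforward to sketch: each component is found by alternating regressions, and the matrix is deflated before extracting the next one. The data are toy values and the convergence settings are assumptions:

```python
import numpy as np

def nipals(X, n_comp, tol=1e-10, max_iter=500):
    """Extract principal components one at a time by alternating
    regressions (NIPALS), deflating the data after each component."""
    X = X - X.mean(axis=0)                # work on a centred copy
    scores, loadings = [], []
    for _ in range(n_comp):
        t = X[:, 0].copy()                # initial score vector
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)         # regress columns on scores
            p /= np.linalg.norm(p)
            t_new = X @ p                 # regress rows on loadings
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        scores.append(t)
        loadings.append(p)
        X = X - np.outer(t, p)            # deflate before the next component
    return np.array(scores).T, np.array(loadings).T

rng = np.random.default_rng(9)
X = rng.normal(size=(30, 6))
T, P = nipals(X, 2)
```

Because components are extracted one at a time, NIPALS is convenient when only the first few components are needed (as in the cross-validated selection above) and the full eigendecomposition would be wasted effort.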
Ding, Haiquan; Lu, Qipeng; Gao, Hongzhi; Peng, Zhongqi
2014-01-01
To facilitate non-invasive diagnosis of anemia, dedicated equipment was developed and a non-invasive hemoglobin (HB) detection method based on a back-propagation artificial neural network (BP-ANN) was studied. In this paper, we combined a broadband light source composed of 9 LEDs with a grating spectrograph and a Si photodiode array to develop a high-performance spectrophotometric system. Using this equipment, fingertip spectra of 109 volunteers were measured. To reduce the interference of redundant data, principal component analysis (PCA) was applied to reduce the dimensionality of the collected spectra, and the principal components of the spectra were taken as the input of the BP-ANN model. On this basis we obtained the optimal network structure, in which the node numbers of the input, hidden, and output layers were 9, 11, and 1, respectively. Calibration and correction sample sets were used to analyze the accuracy of non-invasive hemoglobin measurement, and a prediction sample set was used to test the adaptability of the model. The correlation coefficient of the network model established by this method is 0.94, and the standard errors of calibration, correction, and prediction are 11.29 g/L, 11.47 g/L, and 11.01 g/L, respectively. The results prove that good correlations exist between the spectra of the three sample sets and the actual hemoglobin level, and that the model is robust. This indicates that the developed spectrophotometric system has potential for the non-invasive detection of HB levels using BP-ANN combined with PCA. PMID:24761296
Strale, Mathieu; Krysinska, Karolina; Overmeiren, Gaëtan Van; Andriessen, Karl
2017-06-01
This study investigated the geographic distribution of suicide and railway suicide in Belgium over 2008–2013 at the local (i.e., district or arrondissement) level. There were differences in the regional distribution of suicide and railway suicides in Belgium over the study period. Principal component analysis identified three groups of correlations among population variables and socio-economic indicators, such as population density, unemployment, and age group distribution, on two components that helped explain the variance of railway suicide at a local (arrondissement) level. This information is of particular importance for preventing suicides in high-risk areas on the Belgian railway network.
Carbonell, Felix; Bellec, Pierre; Shmuel, Amir
2011-01-01
The influence of the global average signal (GAS) on functional-magnetic resonance imaging (fMRI)-based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are super-positioned. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time-series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations, which is uncorrelated to the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. 
After regressing this PC-based estimator out of the original time-series, we observed robust anti-correlations between resting-state fluctuations in the default-mode and the task-positive networks. We conclude that resting-state global fluctuations and network-specific fluctuations are uncorrelated, supporting a Resting-State Linear-Additive Model. In addition, we conclude that the network-specific resting-state fluctuations of the default-mode and task-positive networks show artifact-free anti-correlations.
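The PC-based estimator can be sketched on synthetic data: take the PC time course that correlates best with the global average signal (GAS) and regress it out of every voxel. This illustrates the idea only; the dimensions, noise levels, and additive mixing model below are assumptions, not the authors' fMRI pipeline.

```python
# Sketch of the PC-based global-effect idea on synthetic time series:
# find the PC most correlated with the GAS and regress it out.
import numpy as np

rng = np.random.default_rng(1)
T, V = 200, 30
global_fluct = rng.normal(size=(T, 1))       # assumed global fluctuation
network = rng.normal(size=(T, V))            # network-specific fluctuations
ts = network + 2.0 * global_fluct            # additive model (assumption)
gas = ts.mean(axis=1)                        # global average signal

ts_c = ts - ts.mean(axis=0)
U, S, Vt = np.linalg.svd(ts_c, full_matrices=False)
pcs = U * S                                  # PC time courses
corrs = [abs(np.corrcoef(pcs[:, k], gas)[0, 1]) for k in range(pcs.shape[1])]
k_best = int(np.argmax(corrs))
estimator = pcs[:, k_best]                   # PC-based global effect estimator

# Regress the estimator out of each voxel time series.
beta = ts_c.T @ estimator / (estimator @ estimator)
cleaned = ts_c - np.outer(estimator, beta)
```

Because PCs are mutually orthogonal, the residual series are exactly uncorrelated with the estimator, mirroring the property the abstract emphasizes.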
The variance needed to accurately describe jump height from vertical ground reaction force data.
Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran
2014-12-01
In functional principal component analysis (fPCA), a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies, and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to accurately describe jump height from vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
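Choosing the number of retained components for a given variance threshold can be sketched as follows. This is a generic PCA version of the fPCA thresholding described above; the function name is illustrative.

```python
# Generic sketch: smallest number of principal components whose cumulative
# explained variance reaches a chosen threshold (e.g. 0.99 or 0.999).
import numpy as np

def n_components_for(X, threshold):
    """Smallest number of PCs with cumulative explained variance >= threshold."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2   # variances per PC
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratio, threshold) + 1)
```

Raising the threshold from 90% to 99% typically adds several components, which is exactly the trade-off between compactness and preserved information that the study evaluates.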
Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters.
Tao, Dapeng; Lin, Xu; Jin, Lianwen; Li, Xuelong
2016-03-01
Chinese character font recognition (CCFR) has received increasing attention as intelligent applications based on optical character recognition become popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing the basic strokes of Chinese characters in detail, we propose that font recognition on a single Chinese character is a sequence classification problem, which can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with the 2-D long short-term memory (2DLSTM) and develop the principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component convolution layer helps remove noise and obtain rational, complete font information; and 2) the 2DLSTM handles long-range contextual processing along the scan directions, which helps capture the contrast between the character trajectory and the background. Experiments using the frequently used CCFR dataset suggest the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks a few hundred kilometers in extent but breaks down as the spatial extent increases. A more rigorous approach should drop the assumption of a spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses, and therefore provide a mathematical framework for spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for the east, north, and vertical components, which implies a very long wavelength source for the common mode errors compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
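The filtering idea can be sketched on synthetic station series: the leading principal mode approximates the common mode error, and subtracting its spatiotemporal projection yields the filtered series. All sizes, responses, and noise levels below are assumptions, not SCIGN values.

```python
# Sketch of PCA-based common-mode filtering on synthetic daily coordinates:
# the leading mode captures the temporally correlated common-mode error.
import numpy as np

rng = np.random.default_rng(2)
n_days, n_sta = 365, 20
common = rng.normal(size=n_days).cumsum() * 0.1    # temporally correlated mode
response = rng.uniform(0.8, 1.2, size=n_sta)       # near-uniform spatial response
series = np.outer(common, response) + rng.normal(scale=0.5, size=(n_days, n_sta))

Xc = series - series.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
mode = np.outer(U[:, 0] * S[0], Vt[0])             # leading spatiotemporal mode
filtered = Xc - mode                               # common-mode-filtered series
```

Here the leading mode's spatial response (Vt[0]) comes from the data themselves rather than being assumed uniform, which is the advantage the abstract describes.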
Structural aspects of face recognition and the other-race effect.
O'Toole, A J; Deffenbacher, K A; Valentin, D; Abdi, H
1994-03-01
The other-race effect was examined in a series of experiments and simulations that looked at the relationships among observer ratings of typicality, familiarity, attractiveness, memorability, and the performance variables of d' and criterion. Experiment 1 replicated the other-race effect with our Caucasian and Japanese stimuli for both Caucasian and Asian observers. In Experiment 2, we collected ratings from Caucasian observers on the faces used in the recognition task. A Varimax-rotated principal components analysis on the rating and performance data for the Caucasian faces replicated Vokey and Read's (1992) finding that typicality is composed of two orthogonal components, dissociable via their independent relationships to: (1) attractiveness and familiarity ratings and (2) memorability ratings. For Japanese faces, however, we found that typicality was related only to memorability. Where performance measures were concerned, two additional principal components dominated by criterion and by d' emerged for Caucasian faces. For the Japanese faces, however, the performance measures of d' and criterion merged into a single component that represented a second component of typicality, one orthogonal to the memorability-dominated component. A measure of face representation quality extracted from an autoassociative neural network trained with a majority of Caucasian faces and a minority of Japanese faces was incorporated into the principal components analysis. For both Caucasian and Japanese faces, the neural network measure related both to memorability ratings and to human accuracy measures. Combined, the human data and simulation results indicate that the memorability component of typicality may be related to small, local, distinctive features, whereas the attractiveness/familiarity component may be more related to the global, shape-based properties of the face.
Ciucci, Sara; Ge, Yan; Durán, Claudio; Palladini, Alessandra; Jiménez-Jiménez, Víctor; Martínez-Sánchez, Luisa María; Wang, Yuting; Sales, Susanne; Shevchenko, Andrej; Poser, Steven W.; Herbig, Maik; Otto, Oliver; Androutsellis-Theotokis, Andreas; Guck, Jochen; Gerl, Mathias J.; Cannistraci, Carlo Vittorio
2017-01-01
Omic science is rapidly growing, and one of the most widely employed techniques for exploring differential patterns in omic datasets is principal component analysis (PCA). However, a method to highlight the network of omic features that contribute most to the sample separation obtained by PCA has been missing. An alternative is to build correlation networks between univariately selected significant omic features, but this neglects the multivariate unsupervised feature compression responsible for the PCA sample segregation. Biologists and medical researchers often prefer effective methods that offer an immediate interpretation over complicated algorithms that promise an improvement in principle but in practice are difficult to apply and interpret. Here we present PC-corr: a simple algorithm that associates to any PCA segregation a discriminative network of features. Such a network can be inspected in search of functional modules useful in the definition of combinatorial and multiscale biomarkers from multifaceted omic data in systems and precision biomedicine. We offer proofs of PC-corr efficacy on lipidomic, metagenomic, developmental genomic, population genetic, cancer promoteromic and cancer stem-cell mechanomic data. Finally, PC-corr is a general functional network inference approach that can be easily adopted for big data exploration in computer science and the analysis of complex systems in physics. PMID:28287094
NASA Astrophysics Data System (ADS)
Ying, Yibin; Liu, Yande; Fu, Xiaping; Lu, Huishan
2005-11-01
Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation and control. However, the majority of today's ANN applications use the back-propagation feed-forward ANN (BP-ANN). In this paper, BP-ANNs were applied to model the soluble solids content (SSC) of intact pears from their Fourier transform near infrared (FT-NIR) spectra. One hundred and sixty-four pear samples were used to build the calibration models and evaluate the models' predictive ability. The results were compared to classical calibration approaches, i.e. principal component regression (PCR), partial least squares (PLS) and non-linear PLS (NPLS). The effects of the optimal choice of training parameters on the prediction model were also investigated. BP-ANN combined with principal component regression (PCR) consistently outperformed the classical PCR, PLS and Weight-PLS methods in predictive ability. Based on these results, it can be concluded that FT-NIR spectroscopy and BP-ANN models can be properly employed for rapid and nondestructive determination of fruit internal quality.
Examination of a Social-Networking Site Activities Scale (SNSAS) Using Rasch Analysis
ERIC Educational Resources Information Center
Alhaythami, Hassan; Karpinski, Aryn; Kirschner, Paul; Bolden, Edward
2017-01-01
This study examined the psychometric properties of a social-networking site (SNS) activities scale (SNSAS) using Rasch Analysis. Items were also examined with Rasch Principal Components Analysis (PCA) and Differential Item Functioning (DIF) across groups of university students (i.e., males and females from the United States [US] and Europe; N =…
Kesharaju, Manasa; Nagarajah, Romesh
2015-09-01
The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. In many classification problems, the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented on the region of interest, and an artificial neural network classifier is employed to evaluate the performance of these features. Genetic algorithm based feature selection is then performed. Principal component analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that the features identified by principal component analysis lead to better performance, with a classification accuracy of 96% versus 94% for the genetic algorithm.
NASA Astrophysics Data System (ADS)
Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi
2012-07-01
The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate setting. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that the proposed ANN model generates very good predictions for Mumbai and Kolkata. The results are supported by the linearly distributed coordinates in the scatterplots.
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness.
Villas-Boas, Mariana D; Olivera, Francisco; de Azevedo, Jose Paulo S
2017-09-01
Water quality monitoring is a complex issue that requires support tools in order to provide information for water resource management. Budget constraints as well as an inadequate water quality network design call for the development of evaluation tools to provide efficient water quality monitoring. For this purpose, a nonlinear principal component analysis (NLPCA) based on an autoassociative neural network was performed to assess the redundancy of the parameters and monitoring locations of the water quality network in the Piabanha River watershed. Oftentimes, a small number of variables contain the most relevant information, while the others add little or no interpretation to the variability of water quality. Principal component analysis (PCA) is widely used for this purpose. However, conventional PCA is not able to capture the nonlinearities of water quality data, while neural networks can represent those nonlinear relationships. The results presented in this work demonstrate that NLPCA performs better than PCA in the reconstruction of the water quality data of Piabanha watershed, explaining most of data variance. From the results of NLPCA, the most relevant water quality parameter is fecal coliforms (FCs) and the least relevant is chemical oxygen demand (COD). Regarding the monitoring locations, the most relevant is Poço Tarzan (PT) and the least is Parque Petrópolis (PP).
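An autoassociative network of the kind used for NLPCA can be sketched with a bottleneck MLP trained to reproduce its own input; the middle-layer activations act as nonlinear principal components. This is a rough stand-in (sklearn's MLPRegressor, illustrative layer sizes, and synthetic data), not the study's implementation.

```python
# Sketch of an autoassociative (bottleneck) network in the spirit of NLPCA:
# an MLP trained to reproduce its input; the 2-unit middle layer plays the
# role of nonlinear principal components. Synthetic data, illustrative sizes.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = rng.uniform(-1, 1, size=(300, 1))
X = np.hstack([t, t ** 2]) + rng.normal(scale=0.02, size=(300, 2))  # curved manifold

net = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation='tanh',
                   max_iter=3000, random_state=0)
net.fit(X, X)                                   # autoassociative: target == input

def forward(net, X, upto):
    """Propagate X through the first `upto` hidden (tanh) layers."""
    a = X
    for W, b in zip(net.coefs_[:upto], net.intercepts_[:upto]):
        a = np.tanh(a @ W + b)
    return a

scores = forward(net, X, 2)                     # 2-unit bottleneck activations
```

The bottleneck scores are the nonlinear analogue of PC scores: unlike linear PCA, the mapping and demapping layers can follow curvature in the data, which is why NLPCA can reconstruct nonlinear water-quality relationships that PCA misses.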
Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.
2003-04-01
A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. NLPCA allows the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991); the method and the details of its implementation are described. NLPCA is first applied to a data set sampled from the Lorenz (1963) attractor, where the NLPCA approximations are found to be more representative of the data than the corresponding PCA approximations. The same methodology was applied to the lesser-known Lorenz (1984) attractor; however, the results were not as good as those attained with the famous 'butterfly' attractor, and further work with this model is under way to assess whether NLPCA techniques can represent its characteristics better than the corresponding PCA approximations. The application of NLPCA to relatively simple dynamical systems, such as those proposed by Lorenz, is well understood; its application to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the explained variance. Finally, directions for future work are presented.
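Sampling the Lorenz (1963) system, the first test data set mentioned above, can be sketched with a simple fixed-step RK4 integrator using the classic parameter values; the step size and initial condition below are illustrative choices.

```python
# Sketch: sampling the Lorenz (1963) 'butterfly' attractor with a basic
# fixed-step RK4 integrator (classic sigma, rho, beta parameters).
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(n_steps=5000, dt=0.01, x0=(1.0, 1.0, 1.0)):
    traj = np.empty((n_steps, 3))
    s = np.array(x0, dtype=float)
    for i in range(n_steps):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = s
    return traj
```

A trajectory sampled this way lies on a curved two-lobed manifold in three dimensions, which is why it is a natural benchmark for comparing NLPCA against linear PCA approximations.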
Detection of micro solder balls using active thermography and probabilistic neural network
NASA Astrophysics Data System (ADS)
He, Zhenzhi; Wei, Li; Shao, Minghui; Lu, Xingning
2017-03-01
Micro solder balls/bumps have been widely used in electronic packaging. Inspecting these structures is challenging because the solder balls/bumps are often embedded between the component and the substrate, especially in flip-chip packaging. In this paper, a detection method for micro solder balls/bumps based on active thermography and a probabilistic neural network is investigated. A VH680 infrared imager is used to capture thermal images of the test vehicle, SFA10 packages. The temperature curves are processed using a moving-average technique to remove peak noise, and principal component analysis (PCA) is adopted to reconstruct the thermal images. Missed solder balls can be recognized explicitly in the second principal component image. A probabilistic neural network (PNN) is then established to identify defective bumps intelligently. The hot spots corresponding to the solder balls are segmented from the PCA-reconstructed image, and statistical parameters are calculated. To characterize the thermal properties of solder bumps quantitatively, three representative features are selected and used as the input vector in PNN clustering. The results show that the actual outputs are consistent with the expected outputs in identifying the missed solder balls, and all bumps were recognized accurately, which demonstrates the viability of the PNN for effective defect inspection in high-density microelectronic packaging.
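A probabilistic neural network is essentially a Parzen-window classifier: each class score is a sum of Gaussian kernels centered on that class's training patterns. The sketch below is a generic PNN on synthetic stand-in bump features, not the paper's trained model; the smoothing width and feature values are assumptions.

```python
# Generic probabilistic neural network (Parzen-window classifier) sketch:
# class score = mean Gaussian kernel over that class's training patterns.
import numpy as np

class PNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma          # kernel smoothing width (assumption)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_, self.y_ = np.asarray(X, float), np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        scores = []
        for c in self.classes_:
            Xi = self.X_[self.y_ == c]
            d2 = ((X[:, None, :] - Xi[None, :, :]) ** 2).sum(-1)
            scores.append(np.exp(-d2 / (2 * self.sigma ** 2)).mean(axis=1))
        return self.classes_[np.argmax(scores, axis=0)]

rng = np.random.default_rng(6)
intact = rng.normal(0.0, 0.3, size=(20, 3))    # synthetic features of good bumps
missed = rng.normal(5.0, 0.3, size=(20, 3))    # synthetic features of missed bumps
X = np.vstack([intact, missed])
y = np.array([0] * 20 + [1] * 20)
pred = PNN(sigma=0.5).fit(X, y).predict(X)
```

Because the PNN stores the training patterns directly, it needs no iterative training, which makes it attractive for small, feature-based inspection problems like the one above.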
Machine learning of frustrated classical spin models. I. Principal component analysis
NASA Astrophysics Data System (ADS)
Wang, Ce; Zhai, Hui
2017-10-01
This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
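The principal-component diagnostic can be sketched on toy spin data: configurations from an ordered and a disordered regime are fed to PCA, and the leading-component score tracks the order parameter. The simple Ising-like configurations below are a stand-in for the paper's frustrated-XY Monte Carlo data, and the noise levels are arbitrary assumptions.

```python
# Sketch of PCA phase detection on toy Ising-like configurations: the
# leading PC score mirrors the magnetization order parameter.
import numpy as np

rng = np.random.default_rng(4)
n_spins, n_samples = 100, 200
# Ordered samples: mostly aligned with a random overall sign.
sign = rng.choice([-1, 1], size=(n_samples, 1))
ordered = sign * np.where(rng.random((n_samples, n_spins)) < 0.9, 1, -1)
# Disordered samples: independent random spins.
disordered = rng.choice([-1, 1], size=(n_samples, n_spins))

X = np.vstack([ordered, disordered]).astype(float)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]        # leading-PC score per configuration

# |pc1| is large for ordered configurations and small for disordered ones.
```

The first principal axis comes out nearly uniform across spins, so the score is essentially the total magnetization, which is how PCA "rediscovers" the order parameter without prior knowledge.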
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was used to model the structure-activity relationships, and afterwards a combination of PCA and an ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much as possible of the information contained in the original data set. The seven PCs most important to the studied activity were selected as the inputs of the ANN by an efficient variable selection method, the GA. The best computational neural network model was a fully connected, feed-forward model with a 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model is robust and satisfactory.
VOLTAGE-CONTROLLED TRANSISTOR OSCILLATOR
Scheele, P.F.
1958-09-16
This patent relates to transistor oscillators, and in particular to transistor oscillators whose frequencies vary according to controlling voltages. A principal feature of the disclosed transistor oscillator circuit resides in the temperature compensation of the frequency modulating stage by the use of a resistor-thermistor network. The resistor-thermistor network components are selected so that the network resistance, which is in series with the modulator transistor emitter circuit, varies with temperature to compensate for variation in the parameters of the transistor due to temperature change.
Short-term PV/T module temperature prediction based on PCA-RBF neural network
NASA Astrophysics Data System (ADS)
Li, Jiyong; Zhao, Zhendong; Li, Yisheng; Xiao, Jing; Tang, Yunfeng
2018-02-01
To address the non-linearity and large inertia of temperature control in PV/T systems, short-term temperature prediction of the PV/T module is proposed, allowing the PV/T system controller to act ahead of time according to the short-term forecast and thus optimize the control effect. Based on an analysis of the correlation between PV/T module temperature, meteorological factors, and the temperatures at adjacent points in the time series, the principal component analysis (PCA) method is used to pre-process the original input samples. Combined with RBF neural network theory, the simulation results show that the PCA method gives the network model higher prediction accuracy and stronger generalization performance than an RBF neural network without principal component extraction.
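A minimal RBF network of the kind referred to above can be sketched with Gaussian units centered on a subset of training points and least-squares output weights. The synthetic target function, center count, and kernel width below are assumptions, not the PV/T data or model.

```python
# Sketch of a minimal RBF network: Gaussian hidden units on a subset of
# training points, linear output layer fitted by least squares.
import numpy as np

def rbf_features(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(200, 1))                 # synthetic inputs
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=200)  # synthetic smooth target

centers = X[rng.choice(len(X), size=20, replace=False)]  # hidden-unit centers
Phi = rbf_features(X, centers, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # output-layer weights
y_hat = Phi @ w
```

In a PCA-RBF pipeline like the abstract's, the columns of X would be the retained principal components of the meteorological and temperature inputs rather than raw variables.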
Study on nondestructive discrimination of genuine and counterfeit wild ginsengs using NIRS
NASA Astrophysics Data System (ADS)
Lu, Q.; Fan, Y.; Peng, Z.; Ding, H.; Gao, H.
2012-07-01
A new approach for the nondestructive discrimination between genuine wild ginsengs and counterfeit ones by near infrared spectroscopy (NIRS) was developed. Both discriminant analysis and a back propagation artificial neural network (BP-ANN) were applied to build the discrimination models. Optimal modeling wavelengths were determined based on the anomalous spectral information of the counterfeit samples. Through principal component analysis (PCA) of the various wild ginseng samples, genuine and counterfeit, the cumulative percentages of variance of the principal components were obtained, serving as a reference for determining the number of principal component (PC) factors. Discriminant analysis achieved an identification ratio of 88.46%. With the samples' truth values as its outputs, a three-layer BP-ANN model was built, which yielded a higher discrimination accuracy of 100%. The overall results demonstrate that NIRS combined with a BP-ANN classification algorithm performs better on ginseng discrimination than discriminant analysis, and can be used as a rapid and nondestructive method for the detection of counterfeit wild ginsengs in the food and pharmaceutical industries.
Zhang, Xiaolei; Liu, Fei; He, Yong; Li, Xiaoli
2012-01-01
Hyperspectral imaging in the visible and near infrared (VIS-NIR) region was used to develop a novel method for discriminating different varieties of commodity maize seeds. Firstly, hyperspectral images of 330 samples of six varieties of maize seeds were acquired using a hyperspectral imaging system in the 380–1,030 nm wavelength range. Secondly, principal component analysis (PCA) and kernel principal component analysis (KPCA) were used to explore the internal structure of the spectral data. Thirdly, three optimal wavelengths (523, 579 and 863 nm) were selected by implementing PCA directly on each image. Then four textural variables including contrast, homogeneity, energy and correlation were extracted from gray level co-occurrence matrix (GLCM) of each monochromatic image based on the optimal wavelengths. Finally, several models for maize seeds identification were established by least squares-support vector machine (LS-SVM) and back propagation neural network (BPNN) using four different combinations of principal components (PCs), kernel principal components (KPCs) and textural features as input variables, respectively. The recognition accuracy achieved in the PCA-GLCM-LS-SVM model (98.89%) was the most satisfactory one. We conclude that hyperspectral imaging combined with texture analysis can be implemented for fast classification of different varieties of maize seeds. PMID:23235456
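The four GLCM texture variables named above (contrast, energy/angular second moment, homogeneity, correlation) come from a gray-level co-occurrence matrix. The toy example below computes three of them from first principles on a 4-level image with a single horizontal offset; the image and quantization level are illustrative choices, not the paper's data:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
i, j = np.indices(P.shape)
contrast = np.sum(P * (i - j) ** 2)
energy = np.sum(P**2)                      # angular second moment
homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
print(contrast, energy, homogeneity)
```

In the paper's pipeline, such statistics would be computed per monochromatic image at the selected wavelengths and fed to LS-SVM or BPNN classifiers.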
Exploratory Application of Neuropharmacometabolomics in Severe Childhood Traumatic Brain Injury.
Hagos, Fanuel T; Empey, Philip E; Wang, Pengcheng; Ma, Xiaochao; Poloyac, Samuel M; Bayır, Hülya; Kochanek, Patrick M; Bell, Michael J; Clark, Robert S B
2018-05-07
To employ metabolomics-based pathway and network analyses to evaluate the cerebrospinal fluid metabolome after severe traumatic brain injury in children and the capacity of combination therapy with probenecid and N-acetylcysteine to impact glutathione-related and other pathways and networks, relative to placebo treatment. Analysis of cerebrospinal fluid obtained from children enrolled in an Institutional Review Board-approved, randomized, placebo-controlled trial of a combination of probenecid and N-acetylcysteine after severe traumatic brain injury (Trial Registration NCT01322009). Thirty-six-bed PICU in a university-affiliated children's hospital. Twelve children 2-18 years old after severe traumatic brain injury and five age-matched control subjects. Probenecid (25 mg/kg) and N-acetylcysteine (140 mg/kg) or placebo administered via naso/orogastric tube. The cerebrospinal fluid metabolome was analyzed in samples from traumatic brain injury patients 24 hours after the first dose of drugs or placebo and control subjects. Feature detection, retention time, alignment, annotation, and principal component analysis and statistical analysis were conducted using XCMS-online. The software "mummichog" was used for pathway and network analyses. A two-component principal component analysis revealed clustering of each of the groups, with distinct metabolomics signatures. Several novel pathways with plausible mechanistic involvement in traumatic brain injury were identified. A combination of metabolomics and pathway/network analyses showed that seven glutathione-centered pathways and two networks were enriched in the cerebrospinal fluid of traumatic brain injury patients treated with probenecid and N-acetylcysteine versus placebo-treated patients. Several additional pathways/networks consisting of components that are known substrates of probenecid-inhibitable transporters were also identified, providing additional mechanistic validation. 
This proof-of-concept neuropharmacometabolomics assessment reveals alterations in known and previously unidentified metabolic pathways and supports therapeutic target engagement of the combination of probenecid and N-acetylcysteine treatment after severe traumatic brain injury in children.
Voukantsis, Dimitris; Karatzas, Kostas; Kukkonen, Jaakko; Räsänen, Teemu; Karppinen, Ari; Kolehmainen, Mikko
2011-03-01
In this paper we propose a methodology consisting of specific computational intelligence methods, i.e. principal component analysis and artificial neural networks, in order to inter-compare air quality and meteorological data and to forecast the concentration levels of air pollutants of interest. We apply these methods to data monitored in the urban areas of Thessaloniki and Helsinki, in Greece and Finland, respectively. For this purpose, we applied the principal component analysis method to inter-compare the patterns of air pollution in the two selected cities. We then proceeded with the development of air quality forecasting models for both study areas. On this basis, we formulated and employed a novel hybrid scheme for selecting the input variables of the forecasting models, involving a combination of linear regression and artificial neural network (multi-layer perceptron) models. The latter were used to forecast the daily mean concentrations of PM₁₀ and PM₂.₅ for the next day. Results demonstrated an index of agreement between measured and modelled daily averaged PM₁₀ concentrations between 0.80 and 0.85, while the kappa index for forecasting the daily averaged PM₁₀ concentrations reached 60% for both cities. Compared with previous corresponding studies, these statistical parameters indicate improved forecasting performance for air quality parameters. It was also found that the performance of the models for forecasting the daily mean concentrations of PM₁₀ did not differ substantially between the two cities, despite the major differences between the two urban environments under consideration. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Soares dos Santos, T.; Mendes, D.; Rodrigues Torres, R.
2016-01-01
Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANNs) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon; northeastern Brazil; and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used general circulation model (GCM) experiments for the 20th century (RCP historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANNs significantly outperform the MLR downscaling of monthly precipitation variability.
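The "MLR by principal components" baseline used above can be sketched as principal component regression: project the predictor fields onto their leading components, then regress the predictand on the scores. The numpy example below uses synthetic low-rank predictors, not CMIP5 output; all names and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in: predictor fields driven by a few large-scale modes.
n, p, k = 300, 20, 5
modes = rng.normal(size=(n, k))                      # hypothetical climate modes
X = modes @ rng.normal(size=(k, p)) + 0.05 * rng.normal(size=(n, p))
y = modes @ rng.normal(size=k) + 0.1 * rng.normal(size=n)  # precipitation proxy

# Regress the predictand on the leading k principal component scores of X.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = float(1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2))
print(round(r2, 3))
```

Because the synthetic predictors are genuinely low-rank, a handful of components captures nearly all of the predictable signal; an ANN would replace the linear regression step with a nonlinear mapping on the same PC scores.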
NASA Astrophysics Data System (ADS)
dos Santos, T. S.; Mendes, D.; Torres, R. R.
2015-08-01
Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANN) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon, northeastern Brazil and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used GCM experiments for the 20th century (RCP historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANN significantly outperforms the MLR downscaling of monthly precipitation variability.
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans, and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na
2014-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN input dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
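The KPCA dimensionality-reduction step can be sketched with scikit-learn. The synthetic 10-dimensional input (two concentric rings padded with noise, a nonlinear structure linear PCA cannot unfold) and the RBF-kernel settings are assumptions for illustration, not the paper's froth-image features:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
# Hypothetical stand-in for high-dimensional froth-image/process features.
t = rng.uniform(0, 2 * np.pi, 150)
radius = 1.0 + rng.integers(0, 2, 150)           # radius 1 or 2
ring = np.column_stack([np.cos(t), np.sin(t)]) * radius[:, None]
X = np.hstack([ring, 0.01 * rng.normal(size=(150, 8))])  # 10-D input

# RBF-kernel PCA extracts nonlinear principal components for the soft-sensor.
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=2.0)
Z = kpca.fit_transform(X)
print(Z.shape)
```

The three nonlinear components in `Z` would then form the reduced input of the ESN, keeping the reservoir small.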
NASA Astrophysics Data System (ADS)
Sun, Huimin; Meng, Yaoyong; Zhang, Pingli; Li, Yajing; Li, Nan; Li, Caiyun; Guo, Zhiyou
2017-09-01
The age determination of bloodstains is an important and immediate challenge for forensic science. No reliable methods are currently available for estimating the age of bloodstains. Here we report a method for determining the age of bloodstains at different storage temperatures. Bloodstains were stored at 37 °C, 25 °C, 4 °C, and -20 °C for 80 d and measured using Raman spectroscopy at various time points. A principal component and back-propagation artificial neural network model was then established for estimating the age of the bloodstains. The results were ideal; the square of the correlation coefficient was up to 0.99 (R² > 0.99) and the root mean square error of prediction reached as low as 55.9829 h. This method is real-time, non-invasive, non-destructive and highly efficient. It may well prove that Raman spectroscopy is a promising tool for estimating the age of bloodstains.
Liu, Ming; Zhao, Jing; Lu, XiaoZuo; Li, Gang; Wu, Taixia; Zhang, LiFu
2018-05-10
With spectral methods, noninvasive in-vivo determination of blood hyperviscosity holds great potential for clinical diagnosis. In this study, 67 male subjects (41 healthy and 26 with hyperviscosity according to blood sample analysis) participated. Reflectance spectra of the subjects' tongue tips were measured, and a classification method based on principal component analysis combined with an artificial neural network model was built to identify hyperviscosity. Hold-out and leave-one-out methods, widely accepted for model validation, were used to avoid significant bias and lessen the overfitting problem. To measure classification performance, sensitivity, specificity, accuracy and F-measure were calculated. The accuracies with 100 repetitions of the hold-out method and 67 rounds of the leave-one-out method were 88.05% and 97.01%, respectively. Experimental results indicate that the classification model has practical value and demonstrate the feasibility of using spectroscopy to identify hyperviscosity noninvasively.
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
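The averaged-postsynaptic-activity idea can be sketched as an Oja-style rule in which the usual instantaneous y² decay term is replaced by a running average of y². This is an interpretation of the abstract on synthetic data, not the paper's exact algorithm; the learning and averaging rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stationary zero-mean input sequence with a known covariance structure.
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])
L = np.linalg.cholesky(C)
xs = (L @ rng.normal(size=(2, 5000))).T

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta, tau, y2_avg = 0.01, 0.05, 1.0
for x in xs:
    y = w @ x
    y2_avg += tau * (y * y - y2_avg)   # running average of postsynaptic activity
    # Hebbian term plus a decay driven by the *averaged* activity:
    w += eta * (y * x - y2_avg * w)

evals, evecs = np.linalg.eigh(C)
v1 = evecs[:, -1]                      # true leading eigenvector of C
align = float(abs(w @ v1) / np.linalg.norm(w))
print(round(align, 3), round(float(np.linalg.norm(w)), 3))
```

As with Oja's rule, the weight vector settles near the unit-norm leading eigenvector of the input covariance, i.e. the neuron extracts the principal component without an explicit normalization step.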
Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold
NASA Astrophysics Data System (ADS)
Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong
2010-03-01
The functional networks, extracted from fMRI images using independent component analysis, have been demonstrated informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
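The principal-angle-based subspace distance underlying the Grassmann analysis above can be computed via QR and SVD. The synthetic 100-dimensional "components" below are illustrative stand-ins for subjects' functional networks, not fMRI data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Each subject's spatial components span a 4-D subspace of a 100-D space.
A = rng.normal(size=(100, 4))
M = np.linalg.qr(rng.normal(size=(4, 4)))[0]     # random change of basis
B = A @ M + 0.1 * rng.normal(size=(100, 4))      # nearly the same subspace
Cm = rng.normal(size=(100, 4))                   # unrelated subject

def principal_angles(X, Y):
    """Principal angles between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# A Grassmann subspace distance: the 2-norm of the principal-angle vector.
d_similar = float(np.linalg.norm(principal_angles(A, B)))
d_random = float(np.linalg.norm(principal_angles(A, Cm)))
print(round(d_similar, 3), round(d_random, 3))
```

Similar connectivity patterns yield a small distance while unrelated ones sit near the maximum, which is what makes the distance usable inside an SVM classifier.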
Sand/cement ratio evaluation on mortar using neural networks and ultrasonic transmission inspection.
Molero, M; Segura, I; Izquierdo, M A G; Fuente, J V; Anaya, J J
2009-02-01
The quality and degradation state of building materials can be determined by nondestructive testing (NDT). These materials are composed of a cementitious matrix and particles or fragments of aggregates. The sand/cement (s/c) ratio determines the final material quality; however, the sand content can mask the matrix properties in a nondestructive measurement. Therefore, s/c ratio estimation is needed in the nondestructive characterization of cementitious materials. In this study, a methodology to classify the sand content in mortar is presented. The methodology is based on ultrasonic transmission inspection, data reduction and feature extraction by principal components analysis (PCA), and neural network classification. This evaluation is carried out on several mortar samples made with different cement types and s/c ratios. The s/c ratio is estimated from ultrasonic spectral attenuation measured with three different broadband transducers (0.5, 1, and 2 MHz). Statistical PCA has been applied to reduce the dimension of the captured traces. Feed-forward neural networks (NNs) are trained using principal components (PCs), and their outputs are used to display the estimated s/c ratios in false-color images, showing the s/c ratio distribution of the mortar samples.
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed ∞OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. Structural similarity of the proposed network to part of the retinal circuit is presented as well.
Using Neural Networks for Sensor Validation
NASA Technical Reports Server (NTRS)
Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William
1998-01-01
This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
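The auto-associative network in the second approach can be sketched with scikit-learn's MLPRegressor trained to reproduce its own inputs through a bottleneck, i.e. a nonlinear principal component analysis. The five synthetic "sensors" and the layer sizes are assumptions, not the turbofan simulation from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
# Five redundant "sensors", all driven by one underlying engine variable t.
t = rng.uniform(-1, 1, size=(400, 1))
S = np.hstack([t, 2 * t, t**2, np.sin(t), -t]) + 0.01 * rng.normal(size=(400, 5))

# Auto-associative net: inputs reproduced at the outputs through a
# one-unit bottleneck (the nonlinear principal component).
ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(S, S)
rmse = float(np.sqrt(np.mean((ae.predict(S) - S) ** 2)))
print(round(rmse, 3))
```

Because the redundant channels are jointly reconstructed from the bottleneck, the network's output for a failed sensor's channel can serve as its estimate, which is the sensor-validation use described above.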
SELF-ORGANIZING MAPS FOR INTEGRATED ASSESSMENT OF THE MID-ATLANTIC REGION
A. new method was developed to perform an environmental assessment for the
Mid-Atlantic Region (MAR). This was a combination of the self-organizing map (SOM) neural network and principal component analysis (PCA). The method is capable of clustering ecosystems in terms of envi...
Nariai, N; Kim, S; Imoto, S; Miyano, S
2004-01-01
We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.
Wigman, J T W; van Os, J; Borsboom, D; Wardenaar, K J; Epskamp, S; Klippel, A; Viechtbauer, W; Myin-Germeys, I; Wichers, M
2015-08-01
It has been suggested that the structure of psychopathology is best described as a complex network of components that interact in dynamic ways. The goal of the present paper was to examine the concept of psychopathology from a network perspective, combining complementary top-down and bottom-up approaches using momentary assessment techniques. A pooled Experience Sampling Method (ESM) dataset of three groups (individuals with a diagnosis of depression, psychotic disorder or no diagnosis) was used (pooled N = 599). The top-down approach explored the network structure of mental states across different diagnostic categories. For this purpose, networks of five momentary mental states ('cheerful', 'content', 'down', 'insecure' and 'suspicious') were compared between the three groups. The complementary bottom-up approach used principal component analysis to explore whether empirically derived network structures yield meaningful higher order clusters. Individuals with a clinical diagnosis had more strongly connected moment-to-moment network structures, especially the depressed group. This group also showed more interconnections specifically between positive and negative mental states than the psychotic group. In the bottom-up approach, all possible connections between mental states were clustered into seven main components that together captured the main characteristics of the network dynamics. Our combination of (i) comparing network structure of mental states across three diagnostically different groups and (ii) searching for trans-diagnostic network components across all pooled individuals showed that these two approaches yield different, complementary perspectives in the field of psychopathology. The network paradigm therefore may be useful to map transdiagnostic processes.
E-nose based rapid prediction of early mouldy grain using probabilistic neural networks
Ying, Xiaoguo; Liu, Wei; Hui, Guohua; Fu, Jun
2015-01-01
In this paper, early mouldy grain rapid prediction method using probabilistic neural network (PNN) and electronic nose (e-nose) was studied. E-nose responses to rice, red bean, and oat samples with different qualities were measured and recorded. E-nose data was analyzed using principal component analysis (PCA), back propagation (BP) network, and PNN, respectively. Results indicated that PCA and BP network could not clearly discriminate grain samples with different mouldy status and showed poor predicting accuracy. PNN showed satisfying discriminating abilities to grain samples with an accuracy of 93.75%. E-nose combined with PNN is effective for early mouldy grain prediction. PMID:25714125
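The PNN classifier referenced above is essentially a Parzen-window density estimate per class. The sketch below uses synthetic two-class "e-nose" responses and an assumed smoothing parameter, not the paper's sensor data:

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic stand-in for 6-channel e-nose responses of two quality classes.
fresh = rng.normal(0.0, 0.5, size=(40, 6))
mouldy = rng.normal(1.2, 0.5, size=(40, 6))
train = np.vstack([fresh[:30], mouldy[:30]])
y_train = np.array([0] * 30 + [1] * 30)
test = np.vstack([fresh[30:], mouldy[30:]])
y_test = np.array([0] * 10 + [1] * 10)

def pnn_predict(x, X, y, sigma=0.5):
    """PNN decision: each class's score is the mean Gaussian kernel between
    x and that class's training patterns (a Parzen density estimate)."""
    k = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma**2))
    return int(k[y == 1].mean() > k[y == 0].mean())

pred = np.array([pnn_predict(x, train, y_train) for x in test])
acc = float((pred == y_test).mean())
print(acc)
```

Unlike a BP network, the PNN has no iterative training phase, which is part of why it suits rapid prediction tasks like the one described.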
Pistolis, John; Zimeras, Stelios; Chardalias, Kostas; Roupa, Zoe; Fildisis, George; Diomidous, Marianna
2016-06-01
Social networks (1) have been embedded in our daily life for a long time. They constitute a powerful tool used nowadays for both searching and exchanging information on different issues via Internet search engines (Google, Bing, etc.) and social networks (Facebook, Twitter, etc.). In this paper, we present the results of research on the frequency and type of usage of the Internet and social networks by the general public and health professionals. The objectives of the research focused on investigating how often individuals and health practitioners seek and meticulously search for health information in social media. Exchanging information is a procedure that raises issues of reliability and quality of information. In this research, advanced statistical techniques are used to investigate participants' profiles in using social networks to search for and exchange information on health issues. Based on the answers, 93% of the people use the Internet to find information on health subjects. In the principal component analysis, the most important health subjects were nutrition (0.719%), respiratory issues (0.79%), cardiological issues (0.777%), psychological issues (0.667%) and total (73.8%). The research results, based on different statistical techniques, revealed that 61.2% of the males and 56.4% of the females intended to use social networks for searching medical information. Based on the principal components analysis, the most important sources the participants mentioned were the Internet and social networks for exchanging information on health issues. These sources proved to be of paramount importance to the participants of the study. The same holds for nursing, medical and administrative staff in hospitals.
Comparing Networks from a Data Analysis Perspective
NASA Astrophysics Data System (ADS)
Li, Wei; Yang, Jing-Yu
To probe network characteristics, the two predominant approaches to network comparison are global property statistics and subgraph enumeration. However, they suffer from limited information and computationally exhaustive enumeration. Here, we present an approach to compare networks from the perspective of data analysis. First, the approach projects each node of the original network as a high-dimensional data point, so that the network is seen as a cloud of data points. Then the dispersion information of the principal component analysis (PCA) projection of the generated data cloud can be used to distinguish networks. We applied this node projection method to yeast protein-protein interaction networks and Internet Autonomous System networks, two types of networks with several similar higher-order properties. The method can efficiently distinguish one from the other. The identical result on different datasets from independent sources also indicates that the method is a robust and universal framework.
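The node-projection idea can be sketched by treating each node's adjacency row as a data point and comparing the PCA dispersion of the resulting clouds. The two toy graphs below (a dense random graph versus a star) are illustrative assumptions, not the protein-interaction or AS datasets:

```python
import numpy as np

rng = np.random.default_rng(7)

def pca_dispersion(adj, k=3):
    """Treat each node's adjacency row as a data point and return the share
    of variance captured by the leading k principal components."""
    X = adj - adj.mean(axis=0)
    var = np.linalg.svd(X, compute_uv=False) ** 2
    return float(var[:k].sum() / var.sum())

n = 60
# Erdos-Renyi-style random graph: variance spread over many components.
R = (rng.random((n, n)) < 0.3).astype(float)
er = np.triu(R, 1)
er = er + er.T
# Star (hub) graph: the node cloud is essentially low-dimensional.
hub = np.zeros((n, n))
hub[0, 1:] = 1.0
hub[1:, 0] = 1.0

print(round(pca_dispersion(er), 3), round(pca_dispersion(hub), 3))
```

The star's cloud is captured almost entirely by a few components while the random graph's variance is spread out, so the dispersion profile separates the two network types.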
NASA Astrophysics Data System (ADS)
Anwar, Muhammad Ayaz; Choi, Sangdun
2017-03-01
Toll-like receptor 4 (TLR4), a vital innate immune receptor present on cell surfaces, initiates a signaling cascade during danger and bacterial intrusion. TLR4 needs to form a stable hexamer complex, which is necessary to dimerize the cytoplasmic domain. However, D299G and T399I polymorphism may abrogate the stability of the complex, leading to compromised TLR4 signaling. Crystallography provides valuable insights into the structural aspects of the TLR4 ectodomain; however, the dynamic behavior of polymorphic TLR4 is still unclear. Here, we employed molecular dynamics simulations (MDS), as well as principal component and residue network analyses, to decipher the structural aspects and signaling propagation associated with mutations in TLR4. The mutated complexes were less cohesive, displayed local and global variation in the secondary structure, and anomalous decay in rotational correlation function. Principal component analysis indicated that the mutated complexes also exhibited distinct low-frequency motions, which may be correlated to the differential behaviors of these TLR4 variants. Moreover, residue interaction networks (RIN) revealed that the mutated TLR4/myeloid differentiation factor (MD) 2 complex may perpetuate abnormal signaling pathways. Cumulatively, the MDS and RIN analyses elucidated the mutant-specific conformational alterations, which may help in deciphering the mechanism of loss-of-function mutations.
The biometric-based module of smart grid system
NASA Astrophysics Data System (ADS)
Engel, E.; Kovalev, I. V.; Ermoshkina, A.
2015-10-01
Within the Smart Grid concept, a flexible biometric-based module based on principal component analysis (PCA) and a selective neural network is developed. To form the selective neural network, the biometric-based module uses a method that includes three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective neural network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.
ERIC Educational Resources Information Center
National Library of Canada, Ottawa (Ontario).
A pilot project was conducted from May 1980 to November 1983 to test the application of iNet--a decentralized, packet-switched telecommunications network--to bibliographic data interchange in Canada. The principal components of the project were participation of the Bibliographic Common Interest Group (BCIP), a group of libraries with stand-alone,…
Classifying U.S. Army Military Occupational Specialties Using the Occupational Information Network
Gadermann, Anne M.; Heeringa, Steven G.; Stein, Murray B.; Ursano, Robert J.; Colpe, Lisa J.; Fullerton, Carol S.; Gilman, Stephen E.; Gruber, Michael J.; Nock, Matthew K.; Rosellini, Anthony J.; Sampson, Nancy A.; Schoenbaum, Michael; Zaslavsky, Alan M.; Kessler, Ronald C.
2016-01-01
Objectives To derive job condition scales for future studies of the effects of job conditions on soldier health and job functioning across Army Military Occupation Specialties (MOSs) and Areas of Concentration (AOCs) using Department of Labor (DoL) Occupational Information Network (O*NET) ratings. Methods A consolidated administrative dataset was created for the “Army Study to Assess Risk and Resilience in Servicemembers” (Army STARRS) containing all soldiers on active duty between 2004 and 2009. A crosswalk between civilian occupations and MOS/AOCs (created by DoL and the Defense Manpower Data Center) was augmented to assign scores on all 246 O*NET dimensions to each soldier in the dataset. Principal components analysis was used to summarize these dimensions. Results Three correlated components explained the majority of O*NET dimension variance: “physical demands” (20.9% of variance), “interpersonal complexity” (17.5%), and “substantive complexity” (15.0%). Although broadly consistent with civilian studies, several discrepancies were found with civilian results reflecting potentially important differences in the structure of job conditions in the Army versus the civilian labor force. Conclusions Principal components scores for these scales provide a parsimonious characterization of key job conditions that can be used in future studies of the effects of MOS/AOC job conditions on diverse outcomes. PMID:25003860
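The "three correlated components explained the majority of variance" finding can be illustrated with PCA on block-structured ratings. The synthetic data below (30 dimensions clustering into three blocks) is a hypothetical stand-in for the 246 O*NET dimensions, not Army STARRS data:

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic stand-in: soldiers rated on 30 job-condition dimensions that
# cluster into three correlated blocks (e.g. physical demands,
# interpersonal complexity, substantive complexity).
n = 500
factors = rng.normal(size=(n, 3))
loadings = np.zeros((3, 30))
for b in range(3):
    loadings[b, b * 10:(b + 1) * 10] = 1.0
X = factors @ loadings + 0.3 * rng.normal(size=(n, 30))

# Percentage of variance explained by each principal component.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
pct = 100 * s**2 / np.sum(s**2)
print(np.round(pct[:3], 1), round(float(pct[:3].sum()), 1))
```

With three underlying blocks, the first three components each explain roughly a third of the variance, mirroring the parsimonious three-scale summary described in the abstract.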
Classifying U.S. Army Military Occupational Specialties using the Occupational Information Network.
Gadermann, Anne M; Heeringa, Steven G; Stein, Murray B; Ursano, Robert J; Colpe, Lisa J; Fullerton, Carol S; Gilman, Stephen E; Gruber, Michael J; Nock, Matthew K; Rosellini, Anthony J; Sampson, Nancy A; Schoenbaum, Michael; Zaslavsky, Alan M; Kessler, Ronald C
2014-07-01
To derive job condition scales for future studies of the effects of job conditions on soldier health and job functioning across Army Military Occupation Specialties (MOSs) and Areas of Concentration (AOCs) using Department of Labor (DoL) Occupational Information Network (O*NET) ratings. A consolidated administrative dataset was created for the "Army Study to Assess Risk and Resilience in Servicemembers" (Army STARRS) containing all soldiers on active duty between 2004 and 2009. A crosswalk between civilian occupations and MOS/AOCs (created by DoL and the Defense Manpower Data Center) was augmented to assign scores on all 246 O*NET dimensions to each soldier in the dataset. Principal components analysis was used to summarize these dimensions. Three correlated components explained the majority of O*NET dimension variance: "physical demands" (20.9% of variance), "interpersonal complexity" (17.5%), and "substantive complexity" (15.0%). Although broadly consistent with civilian studies, several discrepancies were found with civilian results reflecting potentially important differences in the structure of job conditions in the Army versus the civilian labor force. Principal components scores for these scales provide a parsimonious characterization of key job conditions that can be used in future studies of the effects of MOS/AOC job conditions on diverse outcomes. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
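The variance decomposition reported above (three components explaining 20.9%, 17.5%, and 15.0% of variance) is standard PCA output and can be sketched with plain numpy; the matrix below is a synthetic stand-in, not O*NET ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for an observations-by-dimensions rating matrix.
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))

# Center columns, then SVD: principal components are rows of Vt,
# and squared singular values give each component's share of variance.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

scores = Xc @ Vt.T          # component scores, one row per observation
print(np.round(explained[:3], 3))
```

The fractions in `explained` are what abstracts like the one above report as "% of variance explained" per component.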
Patterns of Twitter Behavior Among Networks of Cannabis Dispensaries in California
Chew, Robert F; Hsieh, Yuli P; Bieler, Gayle S; Bobashev, Georgiy V; Siege, Christopher; Zarkin, Gary A
2017-01-01
Background: Twitter represents a social media platform through which medical cannabis dispensaries can rapidly promote and advertise a multitude of retail products. Yet, to date, no studies have systematically evaluated Twitter behavior among dispensaries and how these behaviors influence the formation of social networks. Objectives: This study sought to characterize common cyberbehaviors and shared follower networks among dispensaries operating in two large cannabis markets in California. Methods: From a targeted sample of 119 dispensaries in the San Francisco Bay Area and Greater Los Angeles, we collected metadata from the dispensary accounts using the Twitter API. For each city, we characterized the network structure of dispensaries based upon shared followers, then empirically derived communities with the Louvain modularity algorithm. Principal components factor analysis was employed to reduce 12 Twitter measures into a more parsimonious set of cyberbehavioral dimensions. Finally, quadratic discriminant analysis was implemented to verify the ability of the extracted dimensions to classify dispensaries into their derived communities. Results: The modularity algorithm yielded three communities in each city with distinct network structures. The principal components factor analysis reduced the 12 cyberbehaviors into five dimensions that encompassed account age, posting frequency, referencing, hyperlinks, and user engagement among the dispensary accounts. In the quadratic discriminant analysis, the dimensions correctly classified 75% (46/61) of the communities in the San Francisco Bay Area and 71% (41/58) in Greater Los Angeles. Conclusions: The most centralized and strongly connected dispensaries in both cities had newer accounts, higher daily activity, more frequent user engagement, and increased usage of embedded media, keywords, and hyperlinks.
Measures derived from both network structure and cyberbehavioral dimensions can serve as key contextual indicators for the online surveillance of cannabis dispensaries and consumer markets over time. PMID:28676471
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larour, Jean; Aranchuk, Leonid E.; Danisman, Yusuf
2016-03-15
Principal component analysis is applied and compared with the line ratios of special Ne-like transitions for investigating electron beam effects on L-shell Cu synthetic spectra. The database for the principal component extraction is created over a non-Local Thermodynamic Equilibrium (non-LTE) collisional radiative L-shell Cu model. The extracted principal components are used as inputs to an artificial neural network in order to estimate the plasma electron temperature, density, and beam fractions from a representative time-integrated, spatially resolved L-shell Cu X-pinch plasma spectrum. The spectrum is produced by the explosion of 25-μm Cu wires on a compact LC generator (40 kV, 200 kA, and 200 ns). The modeled plasma parameters are about T_e ∼ 150 eV and N_e = 5 × 10^19 cm^-3 in the presence of a beam fraction of f ∼ 0.05 centered at an energy of ∼10 keV.
Li, Ziyi; Safo, Sandra E; Long, Qi
2017-07-11
Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related with glioblastoma. The proposed sparse PCA methods Fused and Grouped sparse PCA can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings and potentially providing insights on molecular underpinnings of complex diseases.
Antiqueira, Lucas; Janga, Sarath Chandra; Costa, Luciano da Fontoura
2012-11-01
To understand the regulatory dynamics of transcription factors (TFs) and their interplay with other cellular components we have integrated transcriptional, protein-protein and the allosteric or equivalent interactions which mediate the physiological activity of TFs in Escherichia coli. To study this integrated network we computed a set of network measurements followed by principal component analysis (PCA), investigated the correlations between network structure and dynamics, and carried out a procedure for motif detection. In particular, we show that outliers identified in the integrated network based on their network properties correspond to previously characterized global transcriptional regulators. Furthermore, outliers are highly and widely expressed across conditions, thus supporting their global nature in controlling many genes in the cell. Motifs revealed that TFs not only interact physically with each other but also obtain feedback from signals delivered by signaling proteins supporting the extensive cross-talk between different types of networks. Our analysis can lead to the development of a general framework for detecting and understanding global regulatory factors in regulatory networks and reinforces the importance of integrating multiple types of interactions in underpinning the interrelationships between them.
The spatial and temporal variability of ambient air concentrations of SO2, SO42-, NO3
Principal elementary mode analysis (PEMA).
Folch-Fortuny, Abel; Marques, Rodolfo; Isidro, Inês A; Oliveira, Rui; Ferrer, Alberto
2016-03-01
Principal component analysis (PCA) has been widely applied in fluxomics to compress data into a few latent structures in order to simplify the identification of metabolic patterns. These latent structures lack a direct biological interpretation due to the intrinsic constraints associated with a PCA model. Here we introduce a new method that significantly improves the interpretability of the principal components with a direct link to metabolic pathways. This method, called principal elementary mode analysis (PEMA), establishes a bridge between a PCA-like model, aimed at explaining the maximum variance in flux data, and the set of elementary modes (EMs) of a metabolic network. It provides an easy way to identify metabolic patterns in large fluxomics datasets in terms of the simplest pathways of the organism's metabolism. The results using a real metabolic model of Escherichia coli show the ability of PEMA to identify the EMs that generated the different simulated flux distributions. Actual flux data of E. coli and Pichia pastoris cultures confirm the results observed in the simulated study, providing a biologically meaningful model to explain flux data of both organisms in terms of the EM activation. The PEMA toolbox is freely available for non-commercial purposes on http://mseg.webs.upv.es.
Unsupervised learning in general connectionist systems.
Dente, J A; Mendes, R Vilela
1996-01-01
There is a common framework in which different connectionist systems may be treated in a unified way. The general system in which they may all be mapped is a network which, in addition to the connection strengths, has an adaptive node parameter controlling the output intensity. In this paper we generalize two neural network learning schemes to networks with node parameters. In generalized Hebbian learning we find improvements to the convergence rate for small eigenvalues in principal component analysis. For competitive learning the use of node parameters also seems useful in that, by emphasizing or de-emphasizing the dominance of winning neurons, either improved robustness or discrimination is obtained.
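The Hebbian principal-component extraction this abstract builds on can be illustrated with Oja's rule, a normalized Hebbian update that converges to the first principal component of the inputs. This is a minimal sketch of the classical scheme on synthetic 2-D data, not the node-parameter generalization the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D data whose top principal direction is (1, 1)/sqrt(2).
C = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal([0, 0], C, size=5000)

# Oja's rule: Hebbian update y*x with an implicit normalization term,
# driving w toward the unit-norm first principal component.
w = rng.normal(size=2)
lr = 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)

top = np.array([1.0, 1.0]) / np.sqrt(2)
print(abs(w @ top))   # close to 1 once w has aligned with the first PC
```

The paper's node parameters would add a per-unit output gain to this update; the convergence behavior for small eigenvalues is what they report improving.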
Construction and comparison of gene co-expression networks shows complex plant immune responses
López, Camilo; López-Kleine, Liliana
2014-01-01
Gene co-expression networks (GCNs) are graphic representations that depict the coordinated transcription of genes in response to certain stimuli. GCNs provide functional annotations of genes whose function is unknown and are further used in studies of translational functional genomics among species. In this work, a methodology for the reconstruction and comparison of GCNs is presented. This approach was applied using gene expression data that were obtained from immunity experiments in Arabidopsis thaliana, rice, soybean, tomato and cassava. After the evaluation of diverse similarity metrics for the GCN reconstruction, we recommended the mutual information coefficient measurement and a clustering coefficient-based method for similarity threshold selection. To compare GCNs, we proposed a multivariate approach based on the Principal Component Analysis (PCA). Branches of plant immunity that were exemplified by each experiment were analyzed in conjunction with the PCA results, suggesting both the robustness and the dynamic nature of the cellular responses. The dynamics of molecular plant responses produced networks with different characteristics that are differentiable using our methodology. The comparison of GCNs from plant pathosystems showed that in response to similar pathogens plants could activate conserved signaling pathways. The results confirmed that the closeness of GCNs projected on the principal component space is indicative of similarity among GCNs. This also can be used to understand global patterns of events triggered during plant immune responses. PMID:25320678
Li, Cheng-Wei; Wang, Wen-Hsin; Chen, Bor-Sen
2016-01-01
Aging is an inevitable part of life for humans, and slowing down the aging process has become a main focus of human endeavor. Here, we applied a systems biology approach to construct protein-protein interaction networks, gene regulatory networks, and epigenetic networks, i.e. genetic and epigenetic networks (GENs), of elderly individuals and young controls. We then compared these GENs to extract aging mechanisms using microarray data in peripheral blood mononuclear cells, microRNA (miRNA) data, and database mining. The core GENs of elderly individuals and young controls were obtained by applying principal network projection to GENs based on Principal Component Analysis. By comparing the core networks, we identified that to overcome the accumulated mutation of genes in the aging process, the transcription factor JUN can be activated by stress signals, including the MAPK signaling, T-cell receptor signaling, and neurotrophin signaling pathways through DNA methylation of BTG3, G0S2, and AP2B1 and the regulation of mir-223, let-7d, and mir-130a. We also address the aging mechanisms in old men and women. Furthermore, we proposed that drugs designed to target these DNA methylated genes or miRNAs may delay aging. A multiple drug combination comprising phenylalanine, cholesterol, and palbociclib was finally designed for delaying the aging process. PMID:26895224
The Global Oscillation Network Group site survey, 2: Results
NASA Technical Reports Server (NTRS)
Hill, Frank; Fischer, George; Forgach, Suzanne; Grier, Jennifer; Leibacher, John W.; Jones, Harrison P.; Jones, Patricia B.; Kupke, Renate; Stebbins, Robin T.; Clay, Donald W.
1994-01-01
The Global Oscillation Network Group (GONG) Project will place a network of instruments around the world to observe solar oscillations as continuously as possible for three years. The Project has now chosen the six network sites based on analysis of survey data from fifteen sites around the world. The chosen sites are: Big Bear Solar Observatory, California; Mauna Loa Solar Observatory, Hawaii; Learmonth Solar Observatory, Australia; Udaipur Solar Observatory, India; Observatorio del Teide, Tenerife; and Cerro Tololo Interamerican Observatory, Chile. Total solar intensity at each site yields information on local cloud cover, extinction coefficient, and transparency fluctuations. In addition, the performance of 192 reasonable networks assembled from the individual site records is compared using a statistical principal components analysis. An accompanying paper describes the analysis methods in detail; here we present the results of both the network and individual site analyses. The selected network has a duty cycle of 93.3%, in good agreement with numerical simulations. The power spectrum of the network observing window shows a first diurnal sidelobe height of 3 × 10^-4 with respect to the central component, an improvement of a factor of 1300 over a single site. The background level of the network spectrum is lower by a factor of 50 compared to a single-site spectrum.
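The duty-cycle and diurnal-sidelobe figures above come from window-function analysis, which a toy example makes concrete: a single site observing 12 h per day shows a strong peak at 1 cycle/day in its window spectrum, while an idealized gap-free network shows none. The windows below are illustrative, not GONG survey data.

```python
import numpy as np

mins_per_day, days = 1440, 30
t = np.arange(mins_per_day * days)

# Toy observing windows: 1 = observing, 0 = gap.
single = ((t % mins_per_day) < 720).astype(float)   # one site, 12 h per day
network = np.ones_like(single)                       # idealized gap-free network

def diurnal_sidelobe(window):
    """Ratio of the 1 cycle/day spectral peak to the zero-frequency peak."""
    spec = np.abs(np.fft.rfft(window))
    return spec[days] / spec[0]    # bin `days` = 1 cycle/day over `days` days

print(single.mean(), diurnal_sidelobe(single))     # 50% duty cycle, large sidelobe
print(network.mean(), diurnal_sidelobe(network))   # 100% duty cycle, no sidelobe
```

A real multi-site window falls between these extremes; the survey's reported 3 × 10^-4 sidelobe quantifies how close the six-site network gets to the gap-free ideal.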
Using Structural Equation Modeling To Fit Models Incorporating Principal Components.
ERIC Educational Resources Information Center
Dolan, Conor; Bechger, Timo; Molenaar, Peter
1999-01-01
Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…
Zachery A. Holden; Michael A. Crimmins; Samuel A. Cushman; Jeremy S. Littell
2010-01-01
Accurate, fine spatial resolution predictions of surface air temperatures are critical for understanding many hydrologic and ecological processes. This study examines the spatial and temporal variability in nocturnal air temperatures across a mountainous region of Northern Idaho. Principal components analysis (PCA) was applied to a network of 70 Hobo temperature...
Hirayama, Jun-ichiro; Hyvärinen, Aapo; Kiviniemi, Vesa; Kawanabe, Motoaki; Yamashita, Okito
2016-01-01
Characterizing the variability of resting-state functional brain connectivity across subjects and/or over time has recently attracted much attention. Principal component analysis (PCA) serves as a fundamental statistical technique for such analyses. However, performing PCA on high-dimensional connectivity matrices yields complicated “eigenconnectivity” patterns, for which systematic interpretation is a challenging issue. Here, we overcome this issue with a novel constrained PCA method for connectivity matrices by extending the idea of the previously proposed orthogonal connectivity factorization method. Our new method, modular connectivity factorization (MCF), explicitly introduces the modularity of brain networks as a parametric constraint on eigenconnectivity matrices. In particular, MCF analyzes the variability in both intra- and inter-module connectivities, simultaneously finding network modules in a principled, data-driven manner. The parametric constraint provides a compact module-based visualization scheme with which the result can be intuitively interpreted. We develop an optimization algorithm to solve the constrained PCA problem and validate our method in simulation studies and with a resting-state functional connectivity MRI dataset of 986 subjects. The results show that the proposed MCF method successfully reveals the underlying modular eigenconnectivity patterns in more general situations and is a promising alternative to existing methods. PMID:28002474
Differentiation of tea varieties using UV-Vis spectra and pattern recognition techniques
NASA Astrophysics Data System (ADS)
Palacios-Morillo, Ana; Alcázar, Ángela; de Pablos, Fernando; Jurado, José Marcos
2013-02-01
Tea, one of the most consumed beverages all over the world, is of great importance in the economies of a number of countries. Several methods have been developed to classify tea varieties or origins based on pattern recognition techniques applied to chemical data, such as metal profile, amino acids, catechins and volatile compounds. Some of these analytical methods are tedious and expensive to apply in routine work. The use of UV-Vis spectral data as discriminant variables, highly influenced by the chemical composition, can be an alternative to these methods. UV-Vis spectra of methanol-water extracts of tea have been obtained in the interval 250-800 nm. Absorbances have been used as input variables. Principal component analysis was used to reduce the number of variables and several pattern recognition methods, such as linear discriminant analysis, support vector machines and artificial neural networks, have been applied in order to differentiate the most common tea varieties. A successful classification model was built by combining principal component analysis and multilayer perceptron artificial neural networks, allowing the differentiation between tea varieties. This rapid and simple methodology can be applied to solve classification problems in the food industry, saving economic resources.
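The PCA-then-classify pipeline described above can be sketched in numpy on synthetic "spectra"; a nearest-centroid rule in component space stands in for the paper's multilayer perceptron, and all data below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "spectra": 3 varieties, 40 samples each, 100 wavelengths.
# Each variety is a distinct smooth template plus noise (not real UV-Vis data).
wl = np.linspace(0, 1, 100)
templates = [np.exp(-((wl - c) / 0.15) ** 2) for c in (0.3, 0.5, 0.7)]
X = np.vstack([tpl + 0.05 * rng.normal(size=(40, 100)) for tpl in templates])
y = np.repeat([0, 1, 2], 40)

# PCA: project the 100 absorbances onto the first 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Nearest-centroid classifier in PC space (stand-in for the paper's MLP).
centroids = np.array([Z[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print((pred == y).mean())
```

The dimensionality reduction step is the point: 100 correlated absorbances collapse to 3 scores, after which almost any classifier separates the varieties.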
Li, Hong Zhi; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min
2011-01-01
We propose a generalized regression neural network (GRNN) approach based on grey relational analysis (GRA) and principal component analysis (PCA) (GP-GRNN) to improve the accuracy of density functional theory (DFT) calculation for homolysis bond dissociation energies (BDE) of the Y-NO bond. As a demonstration, this combined quantum chemistry calculation with the GP-GRNN approach has been applied to evaluate the homolysis BDE of 92 Y-NO organic molecules. The results show that the full-descriptor GRNN without GRA and PCA (F-GRNN) and with GRA (G-GRNN) approaches reduce the root-mean-square (RMS) of the calculated homolysis BDE of 92 organic molecules from 5.31 to 0.49 and 0.39 kcal mol(-1) for the B3LYP/6-31G(d) calculation. Then the newly developed GP-GRNN approach further reduces the RMS to 0.31 kcal mol(-1). Thus, the GP-GRNN correction on top of B3LYP/6-31G(d) can improve the accuracy of calculating the homolysis BDE in quantum chemistry and can predict homolysis BDE which cannot be obtained experimentally.
Akama, Hiroyuki; Miyake, Maki; Jung, Jaeyoung; Murphy, Brian
2015-01-01
In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves the conventional Jaccard and/or Simpson indices, and reconciles both the geodesic information (random walk) and co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients through the application of linguistic graph information for neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
Feature extraction for ultrasonic sensor based defect detection in ceramic components
NASA Astrophysics Data System (ADS)
Kesharaju, Manasa; Nagarajah, Romesh
2014-02-01
High density silicon carbide materials are commonly used as the ceramic element of hard armour inserts used in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, from X-rays multiple defects are also misinterpreted as single defects. Therefore, to address these problems the ultrasonic non-destructive approach is being investigated. Ultrasound based inspection would be far more cost effective and reliable as the methodology is applicable for on-line quality control including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for purposes of signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (un-supervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance compared. This investigation establishes experimentally that Principal Component Analysis (PCA) can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection in comparison with the X-ray technique.
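The subband features described above can be illustrated with a single-level orthonormal Haar transform, the simplest wavelet; the study itself used a full wavelet decomposition across multiple frequency bands, and the "A-scan" below is synthetic.

```python
import numpy as np

def haar_subbands(x):
    """One-level orthonormal Haar transform: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency (approximation) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency (detail) band
    return a, d

rng = np.random.default_rng(3)
# Toy ultrasonic A-scan: an echo burst on top of noise.
t = np.arange(256)
signal = np.exp(-((t - 128) / 10.0) ** 2) * np.sin(0.8 * t) \
         + 0.05 * rng.normal(size=256)

a, d = haar_subbands(signal)
features = np.array([np.sum(a**2), np.sum(d**2)])   # subband energies
print(features)
```

Because the transform is orthonormal, the subband energies exactly partition the signal energy; in the paper's pipeline, such per-band features feed PCA and then the ANN classifier.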
Quantifying Individual Brain Connectivity with Functional Principal Component Analysis for Networks.
Petersen, Alexander; Zhao, Jianyang; Carmichael, Owen; Müller, Hans-Georg
2016-09-01
In typical functional connectivity studies, connections between voxels or regions in the brain are represented as edges in a network. Networks for different subjects are constructed at a given graph density and are summarized by some network measure such as path length. Examining these summary measures for many density values yields samples of connectivity curves, one for each individual. This has led to the adoption of basic tools of functional data analysis, most commonly to compare control and disease groups through the average curves in each group. Such group differences, however, neglect the variability in the sample of connectivity curves. In this article, the use of functional principal component analysis (FPCA) is demonstrated to enrich functional connectivity studies by providing increased power and flexibility for statistical inference. Specifically, individual connectivity curves are related to individual characteristics such as age and measures of cognitive function, thus providing a tool to relate brain connectivity with these variables at the individual level. This individual level analysis opens a new perspective that goes beyond previous group level comparisons. Using a large data set of resting-state functional magnetic resonance imaging scans, relationships between connectivity and two measures of cognitive function (episodic memory and executive function) were investigated. The group-based approach was implemented by dichotomizing the continuous cognitive variable and testing for group differences, resulting in no statistically significant findings. To demonstrate the new approach, FPCA was implemented, followed by linear regression models with cognitive scores as responses, identifying significant associations of connectivity in the right middle temporal region with both cognitive scores.
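On a regular grid of density values, the FPCA described above reduces to ordinary PCA of the sampled connectivity curves, with per-subject component scores that can then be related to individual characteristics. The sketch below uses synthetic curves with a built-in age effect, not fMRI data.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 100, 50                        # subjects, graph-density grid points
density = np.linspace(0.05, 0.5, m)
age = rng.uniform(20, 80, size=n)

# Synthetic connectivity curves: a shared shape plus an age-linked mode.
base = 1.0 / (1.0 + np.exp(-10 * (density - 0.2)))
mode = np.sin(np.pi * density / 0.5)
curves = base + 0.01 * (age[:, None] - 50) * mode \
         + 0.05 * rng.normal(size=(n, m))

# FPCA on a regular grid: PCA of the mean-centered sampled curves.
Cc = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(Cc, full_matrices=False)
scores = Cc @ Vt[0]                   # each subject's first-FPC score

# Relate individual scores to an individual characteristic (here, age).
r = np.corrcoef(scores, age)[0, 1]
print(round(abs(r), 2))
```

This is the individual-level step the abstract emphasizes: instead of comparing group-average curves, each subject gets a score that can enter a regression against cognitive or demographic variables.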
NASA Astrophysics Data System (ADS)
Lipovsky, B.; Funning, G. J.
2009-12-01
We compare several techniques for the analysis of geodetic time series with the ultimate aim of characterizing the physical processes which are represented therein. We compare three methods for the analysis of these data: Principal Component Analysis (PCA), Non-Linear PCA (NLPCA), and Rotated PCA (RPCA). We evaluate each method by its ability to isolate signals which may be any combination of low amplitude (near noise level), temporally transient, unaccompanied by seismic emissions, and small scale with respect to the spatial domain. PCA is a powerful tool for extracting structure from large datasets which is traditionally realized through either the solution of an eigenvalue problem or through iterative methods. PCA is a transformation of the coordinate system of our data such that the new "principal" data axes retain maximal variance and minimal reconstruction error (Pearson, 1901; Hotelling, 1933). RPCA is achieved by an orthogonal transformation of the principal axes determined in PCA. In the analysis of meteorological data sets, RPCA has been seen to overcome domain shape dependencies, correct for sampling errors, and to determine principal axes which more closely represent physical processes (e.g., Richman, 1986). NLPCA generalizes PCA such that principal axes are replaced by principal curves (e.g., Hsieh 2004). We achieve NLPCA through an auto-associative feed-forward neural network (Scholz, 2005). We show the geophysical relevance of these techniques by application of each to a synthetic data set. Results are compared by inverting principal axes to determine deformation source parameters. Temporal variability in source parameters, estimated by each method, are also compared.
Developing Principal Instructional Leadership through Collaborative Networking
ERIC Educational Resources Information Center
Cone, Mariah Bahar
2010-01-01
This study examines what occurs when principals of urban schools meet together to learn and improve their instructional leadership in collaborative principal networks designed to support, sustain, and provide ongoing principal capacity building. Principal leadership is considered second only to teaching in its ability to improve schools, yet few…
Distributions of experimental protein structures on coarse-grained free energy landscapes
Liu, Jie; Jernigan, Robert L.
2015-01-01
Predicting conformational changes of proteins is needed in order to fully comprehend functional mechanisms. With the large number of available structures in sets of related proteins, it is now possible to directly visualize the clusters of conformations and their conformational transitions through the use of principal component analysis. The most striking observation about the distributions of the structures along the principal components is their highly non-uniform distributions. In this work, we use principal component analysis of experimental structures of 50 diverse proteins to extract the most important directions of their motions, sample structures along these directions, and estimate their free energy landscapes by combining knowledge-based potentials and entropy computed from elastic network models. When these resulting motions are visualized upon their coarse-grained free energy landscapes, the basis for conformational pathways becomes readily apparent. Using three well-studied proteins, T4 lysozyme, serum albumin, and sarco-endoplasmic reticular Ca2+ adenosine triphosphatase (SERCA), as examples, we show that such free energy landscapes of conformational changes provide meaningful insights into the functional dynamics and suggest transition pathways between different conformational states. As a further example, we also show that Monte Carlo simulations on the coarse-grained landscape of HIV-1 protease can directly yield pathways for force-driven conformational changes. PMID:26723638
ERIC Educational Resources Information Center
Moolenaar, Nienke M.; Sleegers, Peter J. C.
2015-01-01
Purpose: While in everyday practice, school leaders are often involved in social relationships with a variety of stakeholders both within and outside their own schools, studies on school leaders' networks often focus either on networks within or outside schools. The purpose of this paper is to investigate the extent to which principals occupy…
Determination of butter adulteration with margarine using Raman spectroscopy.
Uysal, Reyhan Selin; Boyaci, Ismail Hakki; Genis, Hüseyin Efe; Tamer, Ugur
2013-12-15
In this study, adulteration of butter with margarine was analysed using Raman spectroscopy combined with chemometric methods (principal component analysis (PCA), principal component regression (PCR), partial least squares (PLS)) and artificial neural networks (ANNs). Different butter and margarine samples were mixed at various concentrations ranging from 0% to 100% w/w. PCA analysis was applied for the classification of butters, margarines and mixtures. PCR, PLS and ANN were used for the detection of adulteration ratios of butter. Models were created using a calibration data set and developed models were evaluated using a validation data set. The coefficient of determination (R²) values between actual and predicted values obtained for PCR, PLS and ANN for the validation data set were 0.968, 0.987 and 0.978, respectively. In conclusion, a combination of Raman spectroscopy with chemometrics and ANN methods can be applied for testing butter adulteration. Copyright © 2013 Elsevier Ltd. All rights reserved.
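Principal component regression as used above (regressing the adulteration ratio on a few PC scores of the spectra) can be sketched in numpy; the "spectra" below are synthetic two-band mixtures, not Raman data, and the R² computed here is on the calibration set rather than a separate validation set.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic mixture "spectra": response y is the adulteration fraction.
n, p = 60, 200
y = rng.uniform(0, 1, size=n)                     # 0%..100% as a fraction
pure_a = np.exp(-((np.arange(p) - 60) / 20.0) ** 2)
pure_b = np.exp(-((np.arange(p) - 140) / 20.0) ** 2)
X = np.outer(y, pure_a) + np.outer(1 - y, pure_b) \
    + 0.01 * rng.normal(size=(n, p))

# Principal component regression: regress y on the first k PC scores.
k = 2
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T
design = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
y_hat = design @ coef

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

PCR sidesteps the collinearity of hundreds of correlated wavenumbers by regressing on a handful of orthogonal scores, which is why it is a standard chemometric companion to PLS.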
Undergraduate nursing assistant employment in aged care has benefits for new graduates.
Algoso, Maricris; Ramjan, Lucie; East, Leah; Peters, Kath
2018-04-20
To determine how undergraduate assistant in nursing employment in aged care helps to prepare new graduates for clinical work as a registered nurse. The amount and quality of clinical experience afforded by university programs has been the subject of constant debate in the nursing profession. New graduate nurses are often deemed inadequately prepared for clinical practice and so many nursing students seek employment as assistants in nursing whilst studying to increase their clinical experience. This paper presents the first phase of a larger mixed-methods study to explore whether undergraduate assistant in nursing employment in aged care prepares new graduate nurses for the clinical work environment. The first phase involved the collection of quantitative data from a modified Preparation for Clinical Practice survey, which contained 50-scaled items relating to nursing practice. Ethics approval was obtained prior to commencing data collection. New graduate nurses who were previously employed as assistants in nursing in aged care and had at least 3 months' experience as a registered nurse, were invited to complete the survey. Social media and professional networks were used to distribute the survey between March 2015 and May 2016 and again in January 2017 - February 2017. Purposeful and snowballing sampling methods using social media and nursing networks were used to collect survey responses. Data were analysed using principal components analysis. 110 completed surveys were returned. Principal components analysis revealed four underlying constructs (components) of undergraduate assistant in nursing employment in aged care. These were emotional literacy (component 1), clinical skills (component 2), managing complex patient care (component 3) and health promotion (component 4). 
The 4 extracted components reflect the development of core nursing skills that transcend technical skills and include the ability to situate oneself as a nurse in the care of an individual and in a healthcare team. This article is protected by copyright. All rights reserved.
The Global Oscillation Network Group site survey. 1: Data collection and analysis methods
NASA Technical Reports Server (NTRS)
Hill, Frank; Fischer, George; Grier, Jennifer; Leibacher, John W.; Jones, Harrison B.; Jones, Patricia P.; Kupke, Renate; Stebbins, Robin T.
1994-01-01
The Global Oscillation Network Group (GONG) Project is planning to place a set of instruments around the world to observe solar oscillations as continuously as possible for at least three years. The Project has now chosen the sites that will comprise the network. This paper describes the methods of data collection and analysis that were used to make this decision. Solar irradiance data were collected with a one-minute cadence at fifteen sites around the world and analyzed to produce statistics of cloud cover, atmospheric extinction, and transparency power spectra at the individual sites. Nearly 200 reasonable six-site networks were assembled from the individual stations, and a set of statistical measures of the performance of the networks was analyzed using a principal component analysis. An accompanying paper presents the results of the survey.
Diagnostic analysis of liver B ultrasonic texture features based on LM neural network
NASA Astrophysics Data System (ADS)
Chi, Qingyun; Hua, Hu; Liu, Menglin; Jiang, Xiuying
2017-03-01
In this study, B ultrasound images of 124 patients with benign and malignant lesions were randomly selected as the study objects. The liver B ultrasound images were enhanced and de-noised. By constructing grey-level co-occurrence matrices that reflect the information at each angle, 22 texture features were extracted and reduced by principal component analysis, then combined with an LM neural network for diagnosis and classification. Experimental results show that this method is a rapid and effective diagnostic method for liver imaging, which provides a quantitative basis for the clinical diagnosis of liver diseases.
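The dimensionality-reduction step described above can be sketched in a few lines: a matrix of texture features is projected onto its leading principal components before classification. The feature matrix below is randomly generated stand-in data, not the study's ultrasound features:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project a (samples x features) matrix onto its leading principal components."""
    centered = features - features.mean(axis=0)
    # Eigendecomposition of the covariance matrix of the centered features
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # sort components by variance
    basis = eigvecs[:, order[:n_components]]
    return centered @ basis                    # principal component scores

# Hypothetical stand-in for 22 texture features from 124 images
rng = np.random.default_rng(0)
texture_features = rng.normal(size=(124, 22))
scores = pca_reduce(texture_features, n_components=5)
print(scores.shape)  # (124, 5)
```

The scores, rather than the raw features, would then feed the neural network.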
NASA Astrophysics Data System (ADS)
Li, Ning; Wang, Yan; Xu, Kexin
2006-08-01
Combined with Fourier transform infrared (FTIR) spectroscopy and three kinds of pattern recognition techniques, 53 traditional Chinese medicine danshen samples were rapidly discriminated according to geographical origin. The results showed that discrimination using FTIR spectroscopy, supported by principal component analysis (PCA), was feasible. An effective model was built by employing Soft Independent Modeling of Class Analogy (SIMCA) together with PCA, and 82% of the samples were discriminated correctly. Using a back-propagation (BP) artificial neural network (ANN), the origins of danshen were completely classified.
NASA Astrophysics Data System (ADS)
Yu, Yali; Wang, Mengxia; Lima, Dimas
2018-04-01
In order to develop a novel alcoholism detection method, we proposed a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform a Haar wavelet transform and principal component analysis. Finally, we use a back propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than the db4 and sym3 wavelets.
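The wavelet step above can be illustrated with a single level of the 2-D Haar transform; the 4x4 input is a toy array, not an MRI slice:

```python
import numpy as np

def haar2d(x):
    """One level of the orthonormal 2-D Haar wavelet transform."""
    # Transform rows: pairwise average (low-pass) and difference (high-pass)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    rows = np.hstack([lo, hi])
    # Transform columns the same way
    lo = (rows[0::2, :] + rows[1::2, :]) / np.sqrt(2)
    hi = (rows[0::2, :] - rows[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])

img = np.arange(16.0).reshape(4, 4)
coeffs = haar2d(img)
# The orthonormal transform preserves total energy (sum of squares)
print(np.allclose((img**2).sum(), (coeffs**2).sum()))  # True
```

In a pipeline like the one above, the wavelet coefficients would then be reduced by PCA before reaching the BPNN.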
A protein interaction network analysis for yeast integral membrane protein.
Shi, Ming-Guang; Huang, De-Shuang; Li, Xue-Ling
2008-01-01
Although the yeast Saccharomyces cerevisiae is the best-characterized single-celled eukaryote, the vast majority of protein-protein interactions of its integral membrane proteins have not been characterized experimentally. Here, based on the kernel method of Greedy Kernel Principal Component Analysis plus Linear Discriminant Analysis, we identify 300 protein-protein interactions involving 189 membrane proteins and obtain a highly connected protein-protein interaction network. Furthermore, we study the global topological features of the integral membrane protein network of Saccharomyces cerevisiae. These results give a comprehensive description of the protein-protein interactions of integral membrane proteins and reveal the global topology and robustness of the interactome network at a system level. This work represents an important step towards a comprehensive understanding of yeast protein interactions.
Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.
1981-01-01
A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of eastern Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors
Sousa, Clemente Neves; Figueiredo, Maria Henriqueta; Dias, Vanessa Filipa; Teles, Paulo; Apóstolo, João Luís
2015-12-01
We developed a scale to assess the self-care behaviours developed by patients with end-stage renal disease to preserve the vascular network prior to construction of an arteriovenous fistula. The possibility of creating an arteriovenous fistula depends on the existence of an arterial and venous network in good condition, namely the size and elasticity of the vessels. It is essential to teach the person to develop self-care behaviours for the preservation of the vascular network, regardless of the modality of dialysis selected. Methodological study. The scale was developed based on clinical experience and research conducted by the researcher in the area of vascular access for haemodialysis. The content of the scale was judged by two panels of experts for content validity. The revised version of the scale was administered to a convenience sample of 90 patients with end-stage renal disease. In the statistical analysis, we used Cronbach's alpha, the Kaiser-Meyer-Olkin measure, the scree plot and principal component analysis with varimax rotation. A principal component analysis confirmed the univariate structure of the scale (KMO = 0·759, Bartlett's sphericity test: approximate χ(2) = 142·201, p < 0·000). Cronbach's α is 0·831, ranging between 0·711 and 0·879. This scale revealed properties that allow its use to assess patients' self-care behaviours regarding the preservation of the vascular network. This scale can be used to evaluate educational programmes for the development of self-care behaviours in the preservation of the vascular network. This scale can identify not only the patients who are able to take care of their vascular network but also the proportion of patients who are not able to do so and need to be educated. © 2015 John Wiley & Sons Ltd.
Patterns of Twitter Behavior Among Networks of Cannabis Dispensaries in California.
Peiper, Nicholas C; Baumgartner, Peter M; Chew, Robert F; Hsieh, Yuli P; Bieler, Gayle S; Bobashev, Georgiy V; Siege, Christopher; Zarkin, Gary A
2017-07-04
Twitter represents a social media platform through which medical cannabis dispensaries can rapidly promote and advertise a multitude of retail products. Yet, to date, no studies have systematically evaluated Twitter behavior among dispensaries and how these behaviors influence the formation of social networks. This study sought to characterize common cyberbehaviors and shared follower networks among dispensaries operating in two large cannabis markets in California. From a targeted sample of 119 dispensaries in the San Francisco Bay Area and Greater Los Angeles, we collected metadata from the dispensary accounts using the Twitter API. For each city, we characterized the network structure of dispensaries based upon shared followers, then empirically derived communities with the Louvain modularity algorithm. Principal components factor analysis was employed to reduce 12 Twitter measures into a more parsimonious set of cyberbehavioral dimensions. Finally, quadratic discriminant analysis was implemented to verify the ability of the extracted dimensions to classify dispensaries into their derived communities. The modularity algorithm yielded three communities in each city with distinct network structures. The principal components factor analysis reduced the 12 cyberbehaviors into five dimensions that encompassed account age, posting frequency, referencing, hyperlinks, and user engagement among the dispensary accounts. In the quadratic discriminant analysis, the dimensions correctly classified 75% (46/61) of the communities in the San Francisco Bay Area and 71% (41/58) in Greater Los Angeles. The most centralized and strongly connected dispensaries in both cities had newer accounts, higher daily activity, more frequent user engagement, and increased usage of embedded media, keywords, and hyperlinks. 
Measures derived from both network structure and cyberbehavioral dimensions can serve as key contextual indicators for the online surveillance of cannabis dispensaries and consumer markets over time. ©Nicholas C Peiper, Peter M Baumgartner, Robert F Chew, Yuli P Hsieh, Gayle S Bieler, Georgiy V Bobashev, Christopher Siege, Gary A Zarkin. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.07.2017.
Analysis of molecular pathways in pancreatic ductal adenocarcinomas with a bioinformatics approach.
Wang, Yan; Li, Yan
2015-01-01
Pancreatic ductal adenocarcinoma (PDAC) is a leading cause of cancer death worldwide. Our study aimed to reveal the molecular mechanisms underlying PDAC. Microarray data of GSE15471 (including 39 matching pairs of pancreatic tumor tissues and patient-matched normal tissues) was downloaded from the Gene Expression Omnibus (GEO) database. We identified differentially expressed genes (DEGs) in PDAC tissues compared with normal tissues using the limma package in R. Then GO and KEGG pathway enrichment analyses were conducted with the online tool DAVID. In addition, principal component analysis was performed and a protein-protein interaction (PPI) network was constructed to study relationships between the DEGs through the STRING database. A total of 532 DEGs were identified in the 38 PDAC tissues compared with 33 normal tissues. The results of principal component analysis of the top 20 DEGs could differentiate the PDAC tissues from normal tissues directly. In the PPI network, 8 of the 20 DEGs were key genes of the collagen family. Additionally, FN1 (fibronectin 1) was also a hub node in the network. The genes of the collagen family as well as FN1 were significantly enriched in the complement and coagulation cascades, ECM-receptor interaction and focal adhesion pathways. Our results suggest that genes of the collagen family and FN1 may play an important role in PDAC progression. Meanwhile, these DEGs and enriched pathways, such as complement and coagulation cascades, ECM-receptor interaction and focal adhesion, may be important molecular mechanisms involved in the development and progression of PDAC.
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. Principal component regression analysis with SPSS yields a simplified, faster, and accurate statistical analysis.
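The same principal component regression workflow can be sketched outside SPSS: standardize the predictors, project them onto the leading principal components, then run ordinary least squares on the component scores. The nearly collinear design matrix below is simulated purely to illustrate the setting where PCR helps:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Two nearly collinear predictors plus an independent one -- the setting
# in which ordinary least squares becomes unstable and PCR is useful
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # almost a copy of x1
X = np.column_stack([x1, x2, rng.normal(size=n)])
y = 2.0 * x1 + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

# 1. Standardize, 2. project onto leading principal components,
# 3. ordinary least squares on the component scores
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
scores = Z @ eigvecs[:, order[:2]]            # keep 2 of 3 components
design = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 3))
```

Discarding the smallest component removes the direction along which the collinear pair barely varies, which is what stabilizes the regression.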
What’s Wrong with the Murals at the Mogao Grottoes: A Near-Infrared Hyperspectral Imaging Method
Sun, Meijun; Zhang, Dong; Wang, Zheng; Ren, Jinchang; Chai, Bolong; Sun, Jizhou
2015-01-01
Although a significant amount of work has been performed to preserve the ancient murals in the Mogao Grottoes by Dunhuang Cultural Research, non-contact methods need to be developed to effectively evaluate the degree of flaking of the murals. In this study, we propose to evaluate the flaking by automatically analyzing hyperspectral images that were scanned at the site. Murals with various degrees of flaking were scanned in the 126th cave using a near-infrared (NIR) hyperspectral camera with a spectral range of approximately 900 to 1700 nm. The regions of interest (ROIs) of the murals were manually labeled and grouped into four levels: normal, slight, moderate, and severe. The average spectral data from each ROI and its group label were used to train our classification model. To predict the degree of flaking, we adopted four algorithms: deep belief networks (DBNs), partial least squares regression (PLSR), principal component analysis with a support vector machine (PCA + SVM) and principal component analysis with an artificial neural network (PCA + ANN). The experimental results show the effectiveness of our method. In particular, better results are obtained using DBNs when the training data contain a significant amount of striping noise. PMID:26394926
Wu, Ying; Xue, Yunzhen; Xue, Zhanling
2017-01-01
The medical university students in China, whose school work is relatively heavy and whose educational system is long, are a special professional group. Many students have some degree of psychological problems. Therefore, understanding their personality characteristics will provide a scientific basis for psychological health interventions. We selected the top 30 personality trait words according to the order of frequency. Additionally, methods such as social network analysis (SNA) and visualization technology of mapping knowledge domains were used in this study. Among these core personality trait words, 'Family conscious' had the 3 highest centralities and possessed the largest core status and influence. The analysis of core-peripheral structure shows that a polarized core-peripheral structure was quite obvious. The K-plex analysis found 588 'K-2' K-plexes in total. From the principal components analysis, we selected 11 principal components. This study of personality can not only help prevent disease, but also provide a scientific basis for students' psychological health education. In addition, we adopted SNA to pay more attention to the relationships between personality trait words and the connections among personality dimensions. This study may provide new ideas and methods for research on personality structure. PMID:28906409
Overlap and distinction between measures of insight and self-stigma.
Hasson-Ohayon, Ilanit
2018-05-24
Multiple studies on insight into one's illness and self-stigma among patients with serious mental illness and their relatives have shown that these constructs are related to one another and that they affect outcome. However, a critical exploration of the items used to assess both constructs raises questions with regard to the possible overlap and centrality of items. The current study used five different samples to explore the possible overlap and distinction between insight and self-stigma, and to identify central items, via network analyses and principal component factor analysis. Findings from the network analyses showed that overlap between insight and self-stigma exists, with a relatively clearer observational distinction between the constructs in the two parent samples than in the patient samples. Principal component factor analysis constrained to two factors showed that a relatively high percentage of items did not load on either factor, and in a few datasets, several insight items loaded on the self-stigma scale and vice versa. The author discusses implications for research and calls for rethinking the way insight is assessed. Clinical implications are also discussed in reference to the central items of social isolation, future worries and stereotype endorsement among the different study groups. Copyright © 2018 Elsevier B.V. All rights reserved.
Plazas-Nossa, Leonardo; Hofer, Thomas; Gruber, Günter; Torres, Andres
2017-02-01
This work proposes a methodology for the forecasting of online water quality data provided by UV-Vis spectrometry. Therefore, a combination of principal component analysis (PCA) to reduce the dimensionality of a data set and artificial neural networks (ANNs) for forecasting purposes was used. The results obtained were compared with those obtained by using discrete Fourier transform (DFT). The proposed methodology was applied to four absorbance time series data sets composed by a total number of 5705 UV-Vis spectra. Absolute percentage errors obtained by applying the proposed PCA/ANN methodology vary between 10% and 13% for all four study sites. In general terms, the results obtained were hardly generalizable, as they appeared to be highly dependent on specific dynamics of the water system; however, some trends can be outlined. PCA/ANN methodology gives better results than PCA/DFT forecasting procedure by using a specific spectra range for the following conditions: (i) for Salitre wastewater treatment plant (WWTP) (first hour) and Graz West R05 (first 18 min), from the last part of UV range to all visible range; (ii) for Gibraltar pumping station (first 6 min) for all UV-Vis absorbance spectra; and (iii) for San Fernando WWTP (first 24 min) for all of UV range to middle part of visible range.
Wu, Ying; Xue, Yunzhen; Xue, Zhanling
2017-09-01
The medical university students in China, whose school work is relatively heavy and whose educational system is long, are a special professional group. Many students have some degree of psychological problems. Therefore, understanding their personality characteristics will provide a scientific basis for psychological health interventions. We selected the top 30 personality trait words according to the order of frequency. Additionally, methods such as social network analysis (SNA) and visualization technology of mapping knowledge domains were used in this study. Among these core personality trait words, 'Family conscious' had the 3 highest centralities and possessed the largest core status and influence. The analysis of core-peripheral structure shows that a polarized core-peripheral structure was quite obvious. The K-plex analysis found 588 "K-2" K-plexes in total. From the principal components analysis, we selected 11 principal components. This study of personality can not only help prevent disease, but also provide a scientific basis for students' psychological health education. In addition, we adopted SNA to pay more attention to the relationships between personality trait words and the connections among personality dimensions. This study may provide new ideas and methods for research on personality structure.
Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G
2016-04-01
This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed continuous-spectra learning matrix. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this method is one of the most effective for baseline removal. Both methods were evaluated using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. To demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was also used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the ANN-based one both in terms of performance and simplicity. © The Author(s) 2016.
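The non-parametric idea amounts to projecting a measured spectrum onto a basis learned from baseline examples and subtracting that projection. A minimal sketch with synthetic spectra follows; all signal shapes (the polynomial baseline family and the Gaussian peak) are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.linspace(0, 1, 200)

# Learning matrix of smooth synthetic baselines (varying slope and curvature)
baselines = np.array([a * wavelengths + b * wavelengths**2
                      for a, b in rng.uniform(0.5, 2.0, size=(50, 2))])

# PCA basis of the baseline learning matrix via SVD of the centered data
mean = baselines.mean(axis=0)
_, _, vt = np.linalg.svd(baselines - mean, full_matrices=False)
basis = vt[:2]   # the 2-parameter baseline family needs only 2 basis vectors

# A "measured" spectrum: a Gaussian peak sitting on an unseen baseline
peak = np.exp(-((wavelengths - 0.5) ** 2) / 0.002)
spectrum = peak + 1.2 * wavelengths + 0.8 * wavelengths**2

# Estimate the baseline as the projection onto the PCA basis, then subtract
centered = spectrum - mean
baseline_est = mean + basis.T @ (basis @ centered)
corrected = spectrum - baseline_est
```

Because the narrow peak has little overlap with the smooth basis vectors, the subtraction removes the baseline while largely preserving the peak.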
Caggiano, Alessandra
2018-03-09
Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim to monitor the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2 features), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.
2018-01-01
Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim to monitor the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2 features), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values. PMID:29522443
Etzion, Y; Linker, R; Cogan, U; Shmulevich, I
2004-09-01
This study investigates the potential use of attenuated total reflectance spectroscopy in the mid-infrared range for determining protein concentration in raw cow milk. The determination of protein concentration is based on the characteristic absorbance of milk proteins, which includes 2 absorbance bands in the 1500 to 1700 cm(-1) range, known as the amide I and amide II bands, and absorbance in the 1060 to 1100 cm(-1) range, which is associated with phosphate groups covalently bound to casein proteins. To minimize the influence of the strong water band (centered around 1640 cm(-1)) that overlaps with the amide I and amide II bands, an optimized automatic procedure for accurate water subtraction was applied. Following water subtraction, the spectra were analyzed by 3 methods, namely simple band integration, partial least squares (PLS) and neural networks. For the neural network models, the spectra were first decomposed by principal component analysis (PCA), and the inputs to the neural network were the principal component scores of the spectra. In addition, the concentrations of 2 constituents expected to interact with the protein (i.e., fat and lactose) were also used as inputs. These approaches were tested with 235 spectra of standardized raw milk samples, corresponding to 26 protein concentrations in the 2.47 to 3.90% (weight per volume) range. The simple integration method led to very poor results, whereas PLS resulted in prediction errors of about 0.22% protein. The neural network approach led to prediction errors of 0.20% protein when based on PCA scores only, and 0.08% protein when lactose and fat concentrations were also included in the model. These results indicate the potential usefulness of Fourier transform infrared/attenuated total reflectance spectroscopy for rapid, possibly online, determination of protein concentration in raw milk.
2012-01-01
Background Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets. Results The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets. Conclusions The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods. PMID:22876834
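The set-generation idea above rests on the eigenstructure of the metabolic network's Laplacian matrix. A toy sketch on a 5-node graph (an illustrative adjacency matrix, not a real metabolic network) shows the construction:

```python
import numpy as np

# Toy undirected network: adjacency matrix of 5 nodes (two loosely linked clusters)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Graph Laplacian L = D - A, with D the diagonal degree matrix
L = np.diag(A.sum(axis=1)) - A

# Eigendecomposition of the symmetric Laplacian; the eigenvectors play the
# role of principal components of the network structure, and groups of nodes
# with large weights in the same eigenvector can seed candidate gene sets
eigvals, eigvecs = np.linalg.eigh(L)
print(np.round(eigvals, 3))
```

For a connected graph the smallest eigenvalue is zero, and the low-order eigenvectors separate the loosely connected regions of the network.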
Optical system for tablet variety discrimination using visible/near-infrared spectroscopy
NASA Astrophysics Data System (ADS)
Shao, Yongni; He, Yong; Hu, Xingyue
2007-12-01
An optical system based on visible/near-infrared spectroscopy (Vis/NIRS) for variety discrimination of ginkgo (Ginkgo biloba L.) tablets was developed. This system consisted of a light source, beam splitter system, sample chamber, optical detector (diffuse reflection detector), and data collection. The tablet varieties used in the research include Da na kang, Xin bang, Tian bao ning, Yi kang, Hua na xing, Dou le, Lv yuan, Hai wang, and Ji yao. All samples (n=270) were scanned in the Vis/NIR region between 325 and 1075 nm using a spectrograph. The chemometrics method of principal component artificial neural network (PC-ANN) was used to establish discrimination models of them. In the PC-ANN models, the scores of the principal components were chosen as the input nodes for the input layer of the ANN, and the best discrimination rate of 91.1% was reached. Principal component analysis was also executed to select several optimal wavelengths based on loading values. Wavelengths at 481, 458, 466, 570, 1000, 662, and 400 nm were then used as the input data of stepwise multiple linear regression, the regression equation of ginkgo tablets was obtained, and the discrimination rate reached 84.4%. The results indicated that this optical system could be applied to discriminating ginkgo (Ginkgo biloba L.) tablets, and it supplied a new method for fast ginkgo tablet variety discrimination.
Liu, Lizhen; Sun, Xiaowu; Song, Wei; Du, Chao
2018-06-01
Predicting protein complexes from protein-protein interaction (PPI) networks is of great significance for recognizing the structure and function of cells. A protein may interact with different proteins under different times or conditions. Existing approaches only utilize static PPI network data, which may lose much temporal biological information. First, this article proposed a novel method that combines gene expression data at different time points with a traditional static PPI network to construct different dynamic subnetworks. Second, to further filter out data noise, the semantic similarity based on gene ontology is regarded as the network weight together with principal component analysis, which is introduced to handle the weight computation from three traditional methods. Third, after building a dynamic PPI network, a protein complex prediction algorithm based on the "core-attachment" structural feature is applied to detect complexes from each dynamic subnetwork. Finally, the experimental results reveal that the method proposed in this article performs well on detecting protein complexes from dynamic weighted PPI networks.
Particle identification with neural networks using a rotational invariant moment representation
NASA Astrophysics Data System (ADS)
Sinkus, Ralph; Voss, Thomas
1997-02-01
A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The distributions of moments exhibit very different scales, thus the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
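The preprocessing described above, a principal component transformation followed by rescaling each component by its variance so that the network's inputs are of order one, is a PCA whitening step. A minimal sketch with simulated features on very different scales (stand-ins for the Zernike moments, not the experiment's data):

```python
import numpy as np

def pca_whiten(X, eps=1e-12):
    """Decorrelate features and rescale each component to unit variance."""
    centered = X - X.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Project onto the eigenbasis, then divide each component by its std dev
    return (centered @ eigvecs) / np.sqrt(eigvals + eps)

# Simulated moment features with wildly different scales
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4)) * np.array([1e-3, 1.0, 50.0, 2000.0])
W = pca_whiten(X)
print(np.round(np.cov(W, rowvar=False), 2))
```

After whitening, the covariance of the transformed inputs is approximately the identity, so no single moment dominates the network's input range.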
Network visualization of conformational sampling during molecular dynamics simulation.
Ahlstrom, Logan S; Baker, Joseph Lee; Ehrlich, Kent; Campbell, Zachary T; Patel, Sunita; Vorontsov, Ivan I; Tama, Florence; Miyashita, Osamu
2013-11-01
Effective data reduction methods are necessary for uncovering the inherent conformational relationships present in large molecular dynamics (MD) trajectories. Clustering algorithms provide a means to interpret the conformational sampling of molecules during simulation by grouping trajectory snapshots into a few subgroups, or clusters, but the relationships between the individual clusters may not be readily understood. Here we show that network analysis can be used to visualize the dominant conformational states explored during simulation as well as the connectivity between them, providing a more coherent description of conformational space than traditional clustering techniques alone. We compare the results of network visualization against 11 clustering algorithms and principal component conformer plots. Several MD simulations of proteins undergoing different conformational changes demonstrate the effectiveness of networks in reaching functional conclusions. Copyright © 2013 Elsevier Inc. All rights reserved.
Mohamadi Monavar, H; Afseth, N K; Lozano, J; Alimardani, R; Omid, M; Wold, J P
2013-07-15
The purpose of this study was to evaluate the feasibility of Raman spectroscopy for predicting the purity of caviars. The 93 wild caviar samples of three types, namely Beluga, Asetra and Sevruga, were analysed by Raman spectroscopy in the range 1995 cm(-1) to 545 cm(-1). In addition, 60 samples made from combinations of every two types were examined. The chemical origin of the samples was identified by reference measurements on pure samples. Linear chemometric methods such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were used for data visualisation and classification, and permitted clear distinction between different caviars. Non-linear methods, namely Artificial Neural Networks (ANN), were used to classify caviar samples. Two different networks were tested in the classification: a Probabilistic Neural Network with Radial-Basis Function (PNN) and a Multilayer Feed-Forward Network with Back Propagation (BP-NN). In both cases, scores of principal components (PCs) were chosen as input nodes for the input layer of the PC-ANN models in order to reduce data redundancy and training time. Leave-One-Out (LOO) cross validation was applied to check the performance of the networks. The PCA results indicated that features such as type and purity can be used to discriminate different caviar samples. These findings were also supported by LDA, with efficiencies between 83.77% and 100%, and confirmed by the developed PC-ANN models, which classified pure caviar samples with 93.55% and 71.00% accuracy for the BP network and PNN, respectively. In comparison, the LDA, PNN and BP-NN models for predicting caviar types achieved 90.3%, 73.1% and 91.4% accuracy, respectively.
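The PC-ANN idea of feeding principal component scores, rather than raw spectra, into a classifier can be sketched as follows. The "spectra" are synthetic two-class data and a nearest-centroid rule stands in for the trained PNN/BP-NN (both simplifications are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for Raman spectra: two classes, 40 samples each,
# 200 "wavenumber" channels; the first 20 channels differ between classes.
n, p = 40, 200
mean_a = np.zeros(p)
mean_b = np.zeros(p)
mean_b[:20] = 2.0
X = np.vstack([rng.normal(mean_a, 1.0, (n, p)),
               rng.normal(mean_b, 1.0, (n, p))])
y = np.array([0] * n + [1] * n)

# PCA: keep the first few score columns as low-dimensional inputs,
# reducing redundancy and (for a real ANN) training time.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T                # 200 channels -> 5 PC scores

# Minimal classifier on the scores (nearest class centroid).
centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

The same PC scores could be passed to any classifier; the point is only that the input layer sees a handful of decorrelated scores instead of hundreds of correlated channels.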
Partial least squares regression (PLSR) models were built under cross validation and tested with different independent data sets, yielding determination coefficients (R(2)) of 0.86, 0.83, 0.92 and 0.91 with root mean square error (RMSE) of validation of 0.32, 0.11, 0.03 and 0.09 for fatty acids of 16.0, 20.5, 22.6 and fat, respectively. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.
On the Fallibility of Principal Components in Research
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong
2017-01-01
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Correlation between centrality metrics and their application to the opinion model
NASA Astrophysics Data System (ADS)
Li, Cong; Li, Qian; Van Mieghem, Piet; Stanley, H. Eugene; Wang, Huijuan
2015-03-01
In recent decades, a number of centrality metrics describing network properties of nodes have been proposed to rank the importance of nodes. In order to understand the correlations between centrality metrics and to approximate a high-complexity centrality metric by a strongly correlated low-complexity metric, we first study the correlation between centrality metrics in terms of their Pearson correlation coefficient and their similarity in ranking of nodes. In addition to considering the widely used centrality metrics, we introduce a new centrality measure, the degree mass. The mth-order degree mass of a node is the sum of the weighted degree of the node and its neighbors no further than m hops away. We find that the betweenness, the closeness, and the components of the principal eigenvector of the adjacency matrix are strongly correlated with the degree, the 1st-order degree mass and the 2nd-order degree mass, respectively, in both network models and real-world networks. We then theoretically prove that the Pearson correlation coefficient between the principal eigenvector and the 2nd-order degree mass is larger than that between the principal eigenvector and a lower order degree mass. Finally, we investigate the effect of the inflexible contrarians selected based on different centrality metrics in helping one opinion to compete with another in the inflexible contrarian opinion (ICO) model. Interestingly, we find that selecting the inflexible contrarians based on the leverage, the betweenness, or the degree is more effective in opinion-competition than using other centrality metrics in all types of networks. This observation is supported by our previous observations, i.e., that there is a strong linear correlation between the degree and the betweenness, as well as a high centrality similarity between the leverage and the degree.
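The mth-order degree mass defined above has a direct implementation. The sketch below uses an unweighted random graph (the graph model and sizes are illustrative assumptions), checks that the 0th-order degree mass reduces to the degree itself, and correlates the 1st-order degree mass with the principal eigenvector of the adjacency matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
# Random undirected graph (Erdos-Renyi style; chosen only for illustration).
n = 200
A = np.triu((rng.random((n, n)) < 0.05).astype(float), 1)
A = A + A.T
deg = A.sum(axis=1)

def degree_mass(A, m):
    """m-th order degree mass: sum of the degrees of a node and of all
    nodes no further than m hops away (unweighted version)."""
    n = A.shape[0]
    reach = np.eye(n)      # entry (i, j) > 0 iff j is within m hops of i
    power = np.eye(n)
    for _ in range(m):
        power = power @ A
        reach = reach + power
    return (reach > 0).astype(float) @ A.sum(axis=1)

dm0 = degree_mass(A, 0)    # reduces to the degree itself
dm1 = degree_mass(A, 1)    # degree plus neighbours' degrees

# Principal eigenvector of the adjacency matrix (sign fixed via abs).
x1 = np.abs(np.linalg.eigh(A)[1][:, -1])
r = np.corrcoef(dm1, x1)[0, 1]
print(r)                   # strong positive correlation
```

On richer graph families the abstract's finding is that the 2nd-order degree mass tracks the principal eigenvector even more closely; `degree_mass(A, 2)` gives that quantity here.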
A Principal Component Analysis of 39 Scientific Impact Measures
Bollen, Johan; Van de Sompel, Herbert
2009-01-01
Background The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures have been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. Methodology We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Conclusions Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution. PMID:19562078
Total Electron Content forecast model over Australia
NASA Astrophysics Data System (ADS)
Bouya, Zahra; Terkildsen, Michael; Francis, Matthew
Ionospheric perturbations can cause serious propagation errors in modern radio systems such as Global Navigation Satellite Systems (GNSS). Forecasting ionospheric parameters is helpful for estimating the potential degradation of the performance of these systems. Our purpose is to establish an Australian Regional Total Electron Content (TEC) forecast model at IPS. In this work we present an approach based on the combined use of Principal Component Analysis (PCA) and an Artificial Neural Network (ANN) to predict future TEC values. PCA is used to reduce the dimensionality of the original TEC data by mapping it into its eigen-space. In this process the top-5 eigenvectors are chosen to capture the directions of maximum variability. An ANN approach was then used for the multicomponent prediction. We outline the design of the ANN model with its parameters. A number of activation functions, along with different spectral ranges and different numbers of Principal Components (PCs), were tested to find the PCA-ANN models that achieved the best results. Keywords: GNSS, Space Weather, Regional, Forecast, PCA, ANN.
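A minimal sketch of this PCA-then-forecast pipeline, on synthetic "TEC maps" and with a least-squares autoregressive predictor standing in for the ANN (the grid size, mode structure and AR order are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic "TEC maps": 80 grid points driven by two periodic temporal
# modes plus noise (sizes, periods and amplitudes are assumptions).
T, G = 600, 80
t = np.arange(T)
modes = np.stack([3.0 * np.sin(2 * np.pi * t / 24.0),
                  1.0 * np.sin(2 * np.pi * t / 60.0)], axis=1)
patterns = rng.normal(size=(2, G))
X = modes @ patterns + 0.1 * rng.normal(size=(T, G))

# PCA: map the data into its eigen-space and keep the top components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
pcs = Xc @ Vt[:k].T                         # leading PC time series

def ar_forecast(x, p=4):
    """One-step-ahead linear autoregressive predictor (a simple stand-in
    for the ANN that forecasts each principal component)."""
    n = len(x)
    P = np.column_stack([x[p - j:n - j] for j in range(1, p + 1)])
    w, *_ = np.linalg.lstsq(P, x[p:], rcond=None)
    return P @ w, x[p:]

corrs = []
for j in range(k):
    pred, truth = ar_forecast(pcs[:, j])
    corrs.append(np.corrcoef(pred, truth)[0, 1])
print(corrs)   # both close to 1: the PC dynamics are easy to forecast
```

A forecast map is then rebuilt by multiplying the predicted PC values back through `Vt[:k]` and adding the mean; the dimension reduction is what makes the predictor tractable.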
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve reconstruction precision and to better reproduce the surface color of spectral images. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. The color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is therefore less than that obtained with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.
Direct process estimation from tomographic data using artificial neural systems
NASA Astrophysics Data System (ADS)
Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.
2001-07-01
The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, owing to its low cost, non-intrusiveness, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used, with their corresponding component fractions, to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned simulated ECT and plant measurements. Results included in the paper have a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well-known direct ECT method.
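The measurement pipeline, PCA on raw sensor readings followed by a trained estimator, can be sketched on a toy forward model. Here the 66 readings (the number of independent electrode pairs of a 12-electrode sensor) are a noisy nonlinear function of the true fraction, and a polynomial least-squares read-out stands in for the MLFFNN; real ECT measurements come from a finite-element model, not this toy:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy forward model: 66 "capacitance" readings as a noisy nonlinear
# function of the true component fraction (illustrative assumption).
n_pairs = 12 * 11 // 2
n = 400
frac = rng.random(n)                        # true component fractions
B = rng.normal(size=n_pairs)
C = rng.normal(size=n_pairs)
X = (frac[:, None] * B + (frac ** 2)[:, None] * C
     + 0.01 * rng.normal(size=(n, n_pairs)))

# PCA reduces the raw measurements to a small mutually independent set.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Least-squares polynomial read-out standing in for the trained MLFFNN.
Phi = np.column_stack([np.ones(n), Z, Z ** 2])
w, *_ = np.linalg.lstsq(Phi, frac, rcond=None)
mae = np.abs(Phi @ w - frac).mean()
print(mae)   # mean absolute error well below 1% on this toy model
```

The point of the PCA stage is visible in the shapes: the estimator is fit on 3 decorrelated inputs instead of 66 correlated ones.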
Classification and pose estimation of objects using nonlinear features
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.
[Research on hyperspectral remote sensing in monitoring snow contamination concentration].
Tang, Xu-guang; Liu, Dian-wei; Zhang, Bai; Du, Jia; Lei, Xiao-chun; Zeng, Li-hong; Wang, Yuan-dong; Song, Kai-shan
2011-05-01
Contaminants in snow can be used to reflect regional and global environmental pollution caused by human activities. So far, however, research on the space-time monitoring of snow contamination concentration over wide ranges, or in areas difficult for humans to reach, has been very scarce. In the present paper, based on simulated atmospheric deposition experiments, spectroscopic techniques were applied to analyze the effect of different contamination concentrations on the snow reflectance spectra. An evaluation of snow contamination concentration (SCC) retrieval methods was then conducted using a characteristic index method (SDI), principal component analysis (PCA), a BP neural network and an RBF neural network, and the estimation performance of the four methods was compared. The results showed that the neural network models combined with hyperspectral remote sensing data could estimate the SCC well.
Analysis of intracerebral EEG recordings of epileptic spikes: insights from a neural network model
Demont-Guignard, Sophie; Benquet, Pascal; Gerber, Urs; Wendling, Fabrice
2009-01-01
The pathophysiological interpretation of EEG signals recorded with depth electrodes (i.e. local field potentials, LFPs) during interictal (between seizures) or ictal (during seizures) periods is fundamental in the pre-surgical evaluation of patients with drug-resistant epilepsy. Our objective was to explain specific shape features of interictal spikes in the hippocampus (observed in LFPs) in terms of cell and network-related parameters of neuronal circuits that generate these events. We developed a neural network model based on “minimal” but biologically-relevant neuron models interconnected through GABAergic and glutamatergic synapses that reproduces the main physiological features of the CA1 subfield. Simulated LFPs were obtained by solving the forward problem (dipole theory) from networks including a large number (~3000) of cells. Insertion of appropriate parameters allowed the model to simulate events that closely resemble actual epileptic spikes. Moreover, the shape of the early fast component (‘spike’) and the late slow component (‘negative wave’) was linked to the relative contribution of glutamatergic and GABAergic synaptic currents in pyramidal cells. In addition, the model provides insights about the sensitivity of electrode localization with respect to recorded tissue volume and about the relationship between the LFP and the intracellular activity of principal cells and interneurons represented in the network. PMID:19651549
Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin
NASA Astrophysics Data System (ADS)
Downey, Austin; Laflamme, Simon; Ubertini, Filippo
2016-12-01
The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, in structural health monitoring for instance, such signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm’s accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) to improve on the boundary condition assumptions. The system’s boundary conditions are enforced using unidirectional RSGs and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to sensors’ layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces.
Principal Component and Linkage Analysis of Cardiovascular Risk Traits in the Norfolk Isolate
Cox, Hannah C.; Bellis, Claire; Lea, Rod A.; Quinlan, Sharon; Hughes, Roger; Dyer, Thomas; Charlesworth, Jac; Blangero, John; Griffiths, Lyn R.
2009-01-01
Objective(s) An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. Methods This investigation focussed on 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits. Multipoint variance component methods implemented in SOLAR were used to assess genome-wide linkage to the derived factors. A total of 285 of the 377 related individuals were informative for linkage analysis. Results A total of 4 principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h2 = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosomes 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) were observed to segregate with principal components 1 (h2 = 0.33) and 4 (h2 = 0.42), respectively. Conclusion(s) This study investigated a number of CVD risk traits in a unique isolated population. Findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c and total triglyceride levels. PMID:19339786
Harnessing the Power of Teacher Networks
ERIC Educational Resources Information Center
Farley-Ripple, Elizabeth N.; Buttram, Joan L.
2013-01-01
Teacher networks are an important lever for helping schools make change. In order to take advantage of teacher networks, principals must map the existing networks in their schools, identifying teachers and others who serve as experts or advice givers, brokers, and advice seekers. Once these are known, principals can decide on a strategy for…
Neuro-classification of multi-type Landsat Thematic Mapper data
NASA Technical Reports Server (NTRS)
Zhuang, Xin; Engel, Bernard A.; Fernandez, R. N.; Johannsen, Chris J.
1991-01-01
Neural networks have been successful in image classification and have shown potential for classifying remotely sensed data. This paper presents classifications of multitype Landsat Thematic Mapper (TM) data using neural networks. The Landsat TM image of March 23, 1987, with accompanying ground observation data for a study area in Miami County, Indiana, U.S.A., was used to assess recognition of crop residues. Principal components and spectral ratio transformations were performed on the TM data. In addition, a layer of the geographic information system (GIS) for the study site was incorporated to generate GIS-enhanced TM data. This paper discusses (1) the performance of neuro-classification on each type of data, (2) how neural networks recognized each type of data as a new image, and (3) comparisons of the results for each type of data obtained using neural networks, maximum likelihood, and minimum distance classifiers.
Maurer, Christian; Federolf, Peter; von Tscharner, Vinzenz; Stirling, Lisa; Nigg, Benno M
2012-05-01
Changes in gait kinematics have often been analyzed using pattern recognition methods such as principal component analysis (PCA). Usually only the first few principal components are analyzed, because they describe the main variability within a dataset and thus represent the main movement patterns. However, while subtle changes in gait pattern (for instance, due to different footwear) may not change the main movement patterns, they may affect movements represented by higher principal components. This study was designed to test two hypotheses: (1) speed and gender differences can be observed in the first principal components, and (2) small interventions such as changing footwear change the gait characteristics of higher principal components. Kinematic changes due to different running conditions (speed: 3.1 m/s and 4.9 m/s; gender; footwear: control shoe and adidas MicroBounce shoe) were investigated by applying PCA and a support vector machine (SVM) to a full-body reflective marker setup. Differences in speed changed the basic movement pattern, as was reflected by a change in the time-dependent coefficient derived from the first principal component. Gender was differentiated by using the time-dependent coefficients derived from intermediate principal components, which are characterized by limb rotations of the thigh and shank. Different shoe conditions were identified in higher principal components. This study showed that different interventions can be analyzed using a full-body kinematic approach. Within the well-defined vector space spanned by the data of all subjects, higher principal components should also be considered, because these components show the differences that result from small interventions such as footwear changes. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
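The central point, that a dominant factor loads on the first principal component while a subtle intervention surfaces only in a higher component, can be reproduced on toy "marker" data (directions, effect sizes and noise level are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy marker data: a large "speed" effect along one direction and a much
# smaller "footwear" effect along an orthogonal direction (assumed setup).
n, p = 300, 30
speed = rng.choice([-1.0, 1.0], n)
shoe = rng.choice([-1.0, 1.0], n)
d1 = np.zeros(p); d1[0] = 1.0
d2 = np.zeros(p); d2[1] = 1.0
X = (5.0 * speed[:, None] * d1 + 0.5 * shoe[:, None] * d2
     + 0.1 * rng.normal(size=(n, p)))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T

# The dominant factor shows up in PC1; the subtle intervention in PC2.
r_speed_pc1 = abs(np.corrcoef(scores[:, 0], speed)[0, 1])
r_shoe_pc2 = abs(np.corrcoef(scores[:, 1], shoe)[0, 1])
print(r_speed_pc1, r_shoe_pc2)   # both near 1
```

Truncating the PCA at the first component would discard the footwear effect entirely, which is exactly why the study argues for inspecting the higher components.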
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference (noise) signal and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
Perfume Fragrance Discrimination Using Resistance And Capacitance Responses Of Polymer Sensors
NASA Astrophysics Data System (ADS)
Lima, John Paul Hempel; Vandendriessche, Thomas; Fonseca, Fernando J.; Lammertyn, Jeroen; Nicolai, Bart M.; de Andrade, Adnei Melges
2009-05-01
This work presents a comparison between the electrical resistance and capacitance responses of ethanol and five different fragrances using an electronic nose based on conducting polymers. Gas chromatography-mass spectrometry (GC-MS) measurements were performed to evaluate the main differences between the analytes. It is shown that although the fragrances are quite similar in composition, the sensors are able to discriminate them through PCA (Principal Component Analysis) and ANN (Artificial Neural Network) analysis.
NASA Astrophysics Data System (ADS)
Lu, Mingyu; Qu, Yongwei; Lu, Ye; Ye, Lin; Zhou, Limin; Su, Zhongqing
2012-04-01
An experimental study is reported in this paper demonstrating the monitoring of surface fatigue crack propagation in a welded steel angle structure using Lamb waves generated by an active piezoceramic transducer (PZT) network, freely surface-mounted so that each PZT transducer can serve as either actuator or sensor. The fatigue crack was initiated and propagated in the welding zone of a steel angle structure by three-point bending fatigue tests. Instead of directly comparing changes between specific signal segments, such as the S0 and A0 wave modes scattered from fatigue crack tips, a set of statistical signal parameters representing five different structural states was extracted from the marginal spectrum of the Hilbert-Huang transform (HHT), which describes the progressive distribution of energy over time in the frequency domain across all wave modes of a signal. These parameters, combined with principal component analysis (PCA), were used to classify and distinguish structural conditions arising from fatigue crack initiation and propagation. Results show that PCA based on the marginal spectrum is effective and sensitive for monitoring fatigue crack growth, even though the received signals are extremely complicated due to waves scattered from the weld, multiple boundaries, the notch, and the fatigue crack. More importantly, the method shows good potential for identifying the integrity status of complicated structures that produce uncertain wave patterns and ambiguous sensor network arrangements.
NASA Astrophysics Data System (ADS)
Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi
2013-02-01
A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.
Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo
2017-01-01
This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
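For densely observed curves on a common grid, functional PCA reduces to ordinary PCA on the curve matrix. The sketch below builds toy GFR-like trajectories (shapes, scales and units are assumptions), recovers the dominant mode of variation and its per-patient scores, and uses the fitted eigenfunction to estimate a hidden value:

```python
import numpy as np

rng = np.random.default_rng(8)
# Toy GFR trajectories on a common grid: a mean decline plus one
# dominant mode of variation. All shapes and scales are assumptions.
T = np.linspace(0, 5, 50)                  # years post-transplant
mean_curve = 60.0 - 2.0 * T
phi = np.sin(np.pi * T / 5.0)              # true mode of variation
scores_true = rng.normal(0.0, 8.0, size=120)
Y = mean_curve + scores_true[:, None] * phi + rng.normal(size=(120, 50))

# FPCA on densely observed curves = PCA on the curve matrix:
# the eigenfunctions are the right singular vectors.
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
phi_hat = Vt[0]                            # first estimated eigenfunction
scores_hat = Yc @ phi_hat                  # FPC scores, one per patient

# The scores recover the true source of variation (up to sign).
r = abs(np.corrcoef(scores_hat, scores_true)[0, 1])
print(r)                                   # close to 1

# Estimating a missing value: project the observed part of one curve
# onto the eigenfunction, then read the fit at the hidden point.
i, miss = 0, 30
obs = np.ones(50, dtype=bool); obs[miss] = False
yc = Y[i] - Y.mean(axis=0)
a = (yc[obs] @ phi_hat[obs]) / (phi_hat[obs] @ phi_hat[obs])
y_imputed = Y.mean(axis=0)[miss] + a * phi_hat[miss]
```

Sorting or clustering patients by `scores_hat` is the mechanism the abstract describes for grouping curves and flagging abnormal trajectories.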
Cocco, S; Monasson, R; Sessak, V
2011-05-01
We consider the problem of inferring the interactions between a set of N binary variables from the knowledge of their frequencies and pairwise correlations. The inference framework is based on the Hopfield model, a special case of the Ising model where the interaction matrix is defined through a set of patterns in the variable space, and is of rank much smaller than N. We show that maximum likelihood inference is deeply related to principal component analysis when the amplitude of the pattern components ξ is negligible compared to √N. Using techniques from statistical mechanics, we calculate the corrections to the patterns to the first order in ξ/√N. We stress the need to generalize the Hopfield model and include both attractive and repulsive patterns in order to correctly infer networks with sparse and strong interactions. We present a simple geometrical criterion to decide how many attractive and repulsive patterns should be considered as a function of the sampling noise. We moreover discuss how many sampled configurations are required for a good inference, as a function of the system size N and of the amplitude ξ. The inference approach is illustrated on synthetic and biological data.
Information Flow Between Resting-State Networks.
Diez, Ibai; Erramuzpe, Asier; Escudero, Iñaki; Mateos, Beatriz; Cabrera, Alberto; Marinazzo, Daniele; Sanz-Arigita, Ernesto J; Stramaglia, Sebastiano; Cortes Diaz, Jesus M
2015-11-01
The resting brain dynamics self-organize into a finite number of correlated patterns known as resting-state networks (RSNs). It is well known that techniques such as independent component analysis can separate the brain activity at rest to provide such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting-state magnetic resonance imaging. After blind deconvolution of the hemodynamic response function from all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce dimensionality in each RSN and then computes the IF (estimated here in terms of transfer entropy) between the different RSNs by systematically increasing k (the number of principal components used in the calculation). When k=1, this method is equivalent to computing IF using the average of all voxel activities in each RSN. For k≥1, our method calculates the k multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension dependent, increasing from k=1 (i.e., the average voxel activity) up to a maximum at k=5 and finally decaying to zero for k≥10. This suggests that a small number of components (close to five) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for any generic data, can be used for group comparison in health or disease. To illustrate this, we have calculated the inter-RSN IF in a data set of Alzheimer's disease (AD) and find that the most significant differences between AD and controls occurred for k=2, in addition to AD showing increased IF with respect to controls. The spatial localization of the k=2 component within RSNs allows the characterization of IF differences between AD and controls.
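The k=1 case of the method, reducing each RSN to its leading principal component and computing a directed flow measure, can be sketched with a linear-Gaussian transfer entropy (a simplification of the paper's estimator; the coupling, sizes and noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
# Two toy "RSNs" of 20 voxels each; RSN A drives RSN B with a one-step
# lag (the coupling, sizes and noise levels are assumptions).
T = 2000
a = rng.normal(size=T)
b = np.zeros(T)
for t in range(1, T):
    b[t] = 0.8 * a[t - 1] + 0.3 * rng.normal()
A = a[:, None] + 0.2 * rng.normal(size=(T, 20))
B = b[:, None] + 0.2 * rng.normal(size=(T, 20))

def top_pc(X):
    """Leading principal component time series of one RSN (k=1 case)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

def gaussian_te(x, y):
    """Linear-Gaussian transfer entropy x -> y: half the log ratio of
    residual variances of y's future with and without x's past."""
    yf, yp, xp = y[1:], y[:-1], x[:-1]
    def resid_var(preds):
        P = np.column_stack([np.ones(len(yf))] + preds)
        w, *_ = np.linalg.lstsq(P, yf, rcond=None)
        return (yf - P @ w).var()
    return 0.5 * np.log(resid_var([yp]) / resid_var([yp, xp]))

za, zb = top_pc(A), top_pc(B)
te_ab, te_ba = gaussian_te(za, zb), gaussian_te(zb, za)
print(te_ab > te_ba)   # flow is detected in the driving direction
```

The multivariate k>1 case conditions on several PC time series per RSN instead of one; the asymmetry of the estimate is what makes the resulting inter-RSN network directed.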
Using Complex Networks to Characterize International Business Cycles
Caraiani, Petre
2013-01-01
Background There is a rapidly expanding literature on the application of complex networks in economics that has focused mostly on stock markets. In this paper, we discuss an application of complex networks to the study of international business cycles. Methodology/Principal Findings We construct complex networks based on GDP data from two data sets on G7 and OECD economies. Besides the well-known correlation-based networks, we also use a specific tool for representing causality in economics, Granger causality. We consider different filtering methods to derive the stationary component of the GDP series for each of the countries in the samples. The networks were found to be sensitive to the detrending method. While the correlation networks provide information on comovement between the national economies, the Granger causality networks can better predict fluctuations in countries' GDP. By using them, we can obtain directed networks that allow us to determine the relative influence of different countries on the global economy network. The US appears as the key player for both the G7 and OECD samples. Conclusion The use of complex networks is valuable for understanding business cycle comovements at an international level. PMID:23483979
Multilayer neural networks for reduced-rank approximation.
Diamantaras, K I; Kung, S Y
1994-01-01
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently.
Finally, the authors show the application of their results to the solution of the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation and therefore cannot be applied to this case.
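The reduced-rank least-squares solution discussed above, with the invertibility assumption relaxed via the Moore-Penrose pseudo-inverse, can be sketched directly; this is the closed-form solution the trained linear networks converge to, not the paper's training procedure.

```python
import numpy as np

def reduced_rank_fit(X, Y, k):
    """Rank-k minimizer of ||Y - X W||_F; pinv handles a singular input
    autocorrelation matrix, matching the relaxed setting above."""
    W_ols = np.linalg.pinv(X) @ Y            # full (pseudo-inverse) solution
    _, _, Vt = np.linalg.svd(X @ W_ols, full_matrices=False)
    Vk = Vt[:k].T                            # top-k output-side singular vectors
    return W_ols @ Vk @ Vk.T                 # project onto a rank-k map

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))
X[:, -1] = X[:, 0]                           # duplicated column: X'X is singular
Y = X @ rng.normal(size=(8, 5)) + 0.1 * rng.normal(size=(300, 5))
errs = [np.linalg.norm(Y - X @ reduced_rank_fit(X, Y, k)) for k in (1, 3, 5)]
```

Because the rank-k optima are nested, the residual error can only shrink as k grows, which the toy run illustrates.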
Grid Transmission Expansion Planning Model Based on Grid Vulnerability
NASA Astrophysics Data System (ADS)
Tang, Quan; Wang, Xi; Li, Ting; Zhang, Quanming; Zhang, Hongli; Li, Huaqiang
2018-03-01
Based on grid vulnerability and uniformity theory, we propose a global network structure and state vulnerability factor model to measure different grid models, and establish a multi-objective power grid planning model that considers global power network vulnerability, economy, and grid security constraints. An improved chaos crossover and mutation genetic algorithm is used to find the optimal plan. In multi-objective optimization, the objective dimensions are not uniform and the weights are not easily assigned; a principal component analysis (PCA) method is therefore used to comprehensively assess the population in every generation, making the assessment results more objective and credible. The feasibility and effectiveness of the proposed model are validated by simulation results for the Garver-6 and Garver-18 bus systems.
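The PCA-based comprehensive assessment step can be sketched as follows. Weighting component scores by explained variance is a common convention assumed here, and the objective columns (cost, vulnerability index, security margin) are hypothetical stand-ins for the paper's objectives.

```python
import numpy as np

def pca_composite_scores(F):
    """Composite score per candidate plan from an objective matrix F
    (n_plans x n_objectives) whose columns have different units/scales."""
    Z = (F - F.mean(axis=0)) / F.std(axis=0)     # remove unit/dimension effects
    eigval, V = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigval)[::-1]
    eigval, V = eigval[order], V[:, order]
    w = eigval / eigval.sum()                    # data-driven component weights
    return (Z @ V) @ w                           # weighted component scores

rng = np.random.default_rng(3)
F = np.column_stack([rng.uniform(1e6, 9e6, 40),   # cost in $ (hypothetical)
                     rng.uniform(0, 1, 40),       # vulnerability index
                     rng.uniform(90, 100, 40)])   # security margin in %
scores = pca_composite_scores(F)
```

Standardizing first sidesteps the non-uniform-dimension problem, and the explained-variance weights replace hand-chosen ones, which is the point made in the abstract.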
Spatial correlation of auroral zone geomagnetic variations
NASA Astrophysics Data System (ADS)
Jackel, B. J.; Davalos, A.
2016-12-01
Magnetic field perturbations in the auroral zone are produced by a combination of distant ionospheric and local ground-induced currents. The spatial and temporal structure of these currents is scientifically interesting and can also have a significant influence on critical infrastructure. Ground-based magnetometer networks are an essential tool for studying these phenomena, with the existing complement of instruments in Canada providing extended local time coverage. In this study we examine the spatial correlation between magnetic field observations over a range of scale lengths. Principal component and canonical correlation analyses are used to quantify relationships between multiple sites. Results could be used to optimize network configurations, validate computational models, and improve methods for empirical interpolation.
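Canonical correlation between two sites' multichannel records can be sketched via the SVD of the whitened cross-covariance; the regularization constant and the synthetic "shared driver" data below are assumptions for illustration, not the study's measurements.

```python
import numpy as np

def _inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def first_canonical_corr(X, Y, reg=1e-8):
    """Leading canonical correlation between records X (n x p) and Y (n x q)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    K = _inv_sqrt(Cxx) @ Cxy @ _inv_sqrt(Cyy)    # whitened cross-covariance
    return np.linalg.svd(K, compute_uv=False)[0]

rng = np.random.default_rng(4)
z = rng.normal(size=2000)                        # shared 'ionospheric' driver
X = np.column_stack([z + 0.3 * rng.normal(size=2000), rng.normal(size=2000)])
Y = np.column_stack([z + 0.3 * rng.normal(size=2000), rng.normal(size=2000)])
r_near = first_canonical_corr(X, Y)              # sites sharing the driver
r_far = first_canonical_corr(rng.normal(size=(2000, 2)),
                             rng.normal(size=(2000, 2)))
```

Repeating this for site pairs at increasing separation would trace out the correlation-versus-scale-length relationship described in the abstract.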
The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores
ERIC Educational Resources Information Center
Velicer, Wayne F.
1976-01-01
Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)
The Butterflies of Principal Components: A Case of Ultrafine-Grained Polyphase Units
NASA Astrophysics Data System (ADS)
Rietmeijer, F. J. M.
1996-03-01
Dusts in the accretion regions of chondritic interplanetary dust particles [IDPs] consisted of three principal components: carbonaceous units [CUs], carbon-bearing chondritic units [GUs] and carbon-free silicate units [PUs]. Among others, differences among chondritic IDP morphologies and variable bulk C/Si ratios reflect variable mixtures of principal components. The spherical shapes of the initially amorphous principal components remain visible in many chondritic porous IDPs but fusion was documented for CUs, GUs and PUs. The PUs occur as coarse- and ultrafine-grained units that include so-called GEMS. Spherical principal components preserved in an IDP as recognisable textural units have unique properties with important implications for their petrological evolution from pre-accretion processing to protoplanet alteration and dynamic pyrometamorphism. Throughout their lifetime the units behaved as closed-systems without chemical exchange with other units. This behaviour is reflected in their mineralogies while the bulk compositions of principal components define the environments wherein they were formed.
Graph Frequency Analysis of Brain Signals
Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro
2016-01-01
This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains, and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
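The notion of graph frequency can be made concrete with a small sketch: eigenvectors of the combinatorial graph Laplacian play the role of Fourier modes, and their eigenvalues order them from smooth to rapidly varying. The toy graph and function names below are illustrative assumptions.

```python
import numpy as np

def graph_spectrum(A):
    """Eigendecomposition of the combinatorial Laplacian of adjacency A.
    Small eigenvalues correspond to spatially smooth modes."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigh(L)                     # (frequencies, modes)

def graph_fourier(A, signal):
    """Expand a graph signal in Laplacian eigenmodes (graph frequencies)."""
    _, V = graph_spectrum(A)
    return V.T @ signal                          # spectral coefficients

# 4-node path graph; a constant signal is maximally smooth, so all of its
# energy should land on the zero-frequency (constant) mode.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
coeffs = graph_fourier(A, np.ones(4))
```

In the brain setting, `A` would be a functional-connectivity matrix and `signal` a vector of regional activity; filtering amounts to keeping only a band of these coefficients.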
Inferring the interplay between network structure and market effects in Bitcoin
NASA Astrophysics Data System (ADS)
Kondor, Dániel; Csabai, István; Szüle, János; Pósfai, Márton; Vattay, Gábor
2014-12-01
A main focus in economics research is understanding the time series of prices of goods and assets. While statistical models using only the properties of the time series itself have been successful in many aspects, we expect to gain a better understanding of the phenomena involved if we can model the underlying system of interacting agents. In this article, we consider the history of Bitcoin, a novel digital currency system, for which the complete list of transactions is available for analysis. Using this dataset, we reconstruct the transaction network between users and analyze changes in the structure of the subgraph induced by the most active users. Our approach is based on the unsupervised identification of important features of the time variation of the network. Applying the widely used method of Principal Component Analysis to the matrix constructed from snapshots of the network at different times, we are able to show how structural changes in the network accompany significant changes in the exchange price of bitcoins.
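The step of applying PCA to network snapshots can be sketched as follows, under the assumption (ours, for illustration) that each snapshot is summarized by its flattened upper-triangular edge weights; the toy history of steadily growing activity stands in for the Bitcoin transaction data.

```python
import numpy as np

def snapshot_trajectory(snapshots, k=2):
    """Embed network snapshots via PCA on their flattened upper triangles."""
    M = np.array([A[np.triu_indices_from(A, k=1)] for A in snapshots])
    Mc = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(Mc, full_matrices=False)
    return Mc @ Vt[:k].T                         # (n_snapshots x k) coordinates

# Toy history: edge weights grow steadily, mimicking rising activity.
base = np.array([[0, 1, 1],
                 [1, 0, 0],
                 [1, 0, 0]], dtype=float)
snaps = [t * 0.1 * base for t in range(6)]
coords = snapshot_trajectory(snaps, k=1)[:, 0]
coords *= np.sign(coords[-1] - coords[0])        # fix PCA's sign ambiguity
```

Structural change over time then appears as movement along the leading components, which is what the study relates to changes in the exchange price.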
The Accounting Network: How Financial Institutions React to Systemic Crisis
Puliga, Michelangelo; Flori, Andrea; Pappalardo, Giuseppe; Chessa, Alessandro; Pammolli, Fabio
2016-01-01
The role of Network Theory in the study of the financial crisis has been widely spotted in the latest years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crisis. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarities techniques from companies’ financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001–2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably enough sensible regional aggregations show up with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks into similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities’ heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how the Accounting Networks can be improved to reflect the best practices in the financial statement analysis. PMID:27736865
Foch, Eric; Milner, Clare E
2014-01-03
Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
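A minimal sketch of the waveform-PCA workflow, with a synthetic two-group data set and a pooled-variance t statistic standing in for the paper's independent t-tests; the effect size, noise level, and group sizes are invented for illustration.

```python
import numpy as np

def waveform_pc_scores(W, n_keep=3):
    """PC scores and explained variance for waveforms W (n_subjects x n_frames)."""
    Wc = W - W.mean(axis=0)
    _, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Wc @ Vt[:n_keep].T, explained[:n_keep]

def t_stat(a, b):
    """Two-sample pooled-variance t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Toy stance-phase angle waveforms: one group has a smaller overall magnitude,
# analogous to the reduced hip adduction reported above.
rng = np.random.default_rng(5)
phase = np.linspace(0, np.pi, 101)
shape = np.sin(phase)
ctrl = 10.0 * shape + rng.normal(0, 0.2, size=(20, 101))
itbs = 8.0 * shape + rng.normal(0, 0.2, size=(20, 101))
scores, explained = waveform_pc_scores(np.vstack([ctrl, itbs]), n_keep=3)
t1 = t_stat(scores[:20, 0], scores[20:, 0])
```

As in the study, a handful of components captures almost all waveform variance, and group differences are then tested on the retained component scores rather than on single discrete data points.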
Peleato, Nicolas M; Legge, Raymond L; Andrews, Robert C
2018-06-01
The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high-dimensional fluorescence data was related to common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, distinct results due to the applied dimensionality reduction approaches were observed, dictating a need for considering the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracies, with improvements to DBP prediction best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the potential application of neural networks to best utilize fluorescence EEM data for prediction of organic matter reactivity. Copyright © 2018 Elsevier Ltd. All rights reserved.
Metzak, Paul D.; Riley, Jennifer D.; Wang, Liang; Whitman, Jennifer C.; Ngan, Elton T. C.; Woodward, Todd S.
2012-01-01
Working memory (WM) is one of the most impaired cognitive processes in schizophrenia. Functional magnetic resonance imaging (fMRI) studies in this area have typically found a reduction in information processing efficiency but have focused on the dorsolateral prefrontal cortex. In the current study using the Sternberg Item Recognition Test, we consider networks of regions supporting WM and measure the activation of functionally connected neural networks over different WM load conditions. We used constrained principal component analysis with a finite impulse response basis set to compare the estimated hemodynamic response associated with different WM load condition for 15 healthy control subjects and 15 schizophrenia patients. Three components emerged, reflecting activated (task-positive) and deactivated (task-negative or default-mode) neural networks. Two of the components (with both task-positive and task-negative aspects) were load dependent, were involved in encoding and delay phases (one exclusively encoding and the other both encoding and delay), and both showed evidence for decreased efficiency in patients. The results suggest that WM capacity is reached sooner for schizophrenia patients as the overt levels of WM load increase, to the point that further increases in overt memory load do not increase fMRI activation, and lead to performance impairments. These results are consistent with an account holding that patients show reduced efficiency in task-positive and task-negative networks during WM and also partially support the shifted inverted-U-shaped curve theory of the relationship between WM load and fMRI activation in schizophrenia. PMID:21224491
Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks
NASA Astrophysics Data System (ADS)
Shi, Ying; Jian, Shaoyong
2018-03-01
An intelligent method based on fuzzy neural networks with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, dimensionality reduction is applied to the input parameters by the principal component analysis method. The mapping relationship between rock slice characteristic parameters and permeability is then found through fuzzy neural networks. The validity and reliability of the estimation method were tested with practical data from the Yan’an region in the Ordos Basin. The results showed that the average relative error of permeability estimation with this method is 6.25%, and that the method has better convergence speed and higher accuracy than others. Therefore, using cheap rock-slice information, the permeability of rock reservoirs can be estimated efficiently and accurately; the method offers high reliability, practicability and application prospects.
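The PCA-then-network pipeline can be sketched as below, with two loud assumptions: a plain tanh regressor trained by gradient descent stands in for the authors' fuzzy neural network, and synthetic data replace the rock-slice measurements.

```python
import numpy as np

rng = np.random.default_rng(6)

def pca_fit(X, k):
    """Centering vector and top-k PCA loadings of X (samples x features)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def train_mlp(X, y, hidden=8, lr=0.05, epochs=800):
    """One-hidden-layer tanh regressor trained by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        err = H @ W2 + b2 - y
        losses.append(np.mean(err ** 2))
        dH = np.outer(err, W2) * (1 - H ** 2)    # backprop through tanh
        W2 -= lr * H.T @ err / n; b2 -= lr * err.mean()
        W1 -= lr * X.T @ dH / n;  b1 -= lr * dH.mean(axis=0)
    return losses

# Toy rock-slice data: 12 correlated descriptors driven by 3 latent factors;
# 'permeability' depends on the first factor only.
Z = rng.normal(size=(200, 3))
X = Z @ rng.normal(size=(3, 12)) + 0.1 * rng.normal(size=(200, 12))
y = Z[:, 0]
mu, P = pca_fit(X, k=3)
F = (X - mu) @ P.T
F /= F.std(axis=0)                               # standardize component scores
losses = train_mlp(F, y)
```

Reducing 12 correlated descriptors to 3 components before training keeps the network small and speeds convergence, which mirrors the motivation given in the abstract.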
Symbolic dynamic filtering and language measure for behavior identification of mobile robots.
Mallapragada, Goutham; Ray, Asok; Jin, Xin
2012-06-01
This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.
Stratmann, Philipp; Lakatos, Dominic; Albu-Schäffer, Alin
2016-01-01
There are multiple indications that the nervous system of animals tunes muscle output to exploit natural dynamics of the elastic locomotor system and the environment. This is an advantageous strategy especially in fast periodic movements, since the elastic elements store energy and increase energy efficiency and movement speed. Experimental evidence suggests that coordination among joints involves proprioceptive input and neuromodulatory influence originating in the brain stem. However, the neural strategies underlying the coordination of fast periodic movements remain poorly understood. Based on robotics control theory, we suggest that the nervous system implements a mechanism to accomplish coordination between joints by a linear coordinate transformation from the multi-dimensional space representing proprioceptive input at the joint level into a one-dimensional controller space. In this one-dimensional subspace, the movements of a whole limb can be driven by a single oscillating unit as simple as a reflex interneuron. The output of the oscillating unit is transformed back to joint space via the same transformation. The transformation weights correspond to the dominant principal component of the movement. In this study, we propose a biologically plausible neural network to exemplify that the central nervous system (CNS) may encode our controller design. Using theoretical considerations and computer simulations, we demonstrate that spike-timing-dependent plasticity (STDP) for the input mapping and serotonergic neuromodulation for the output mapping can extract the dominant principal component of sensory signals. Our simulations show that our network can reliably control mechanical systems of different complexity and increase the energy efficiency of ongoing cyclic movements. The proposed network is simple and consistent with previous biologic experiments. 
Thus, our controller could serve as a candidate to describe the neural control of fast, energy-efficient, periodic movements involving multiple coupled joints.
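The claim that a simple local learning rule can extract the dominant principal component is classically illustrated by Oja's rule; the sketch below is that textbook rule applied to synthetic proprioceptive-like input, offered as an illustration of the principle rather than the paper's STDP/serotonergic model.

```python
import numpy as np

def oja_pc1(X, lr=0.005, epochs=20, seed=0):
    """Single linear neuron under Oja's rule; its weight vector converges to
    the dominant principal component of the (centered) input stream."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)    # Hebbian growth with built-in decay
    return w / np.linalg.norm(w)

# Proprioceptive-like input: 5 channels dominated by one shared direction.
rng = np.random.default_rng(7)
v = np.array([0.6, 0.5, 0.4, 0.3, 0.2])
v /= np.linalg.norm(v)
X = np.outer(2.0 * rng.normal(size=500), v) + 0.3 * rng.normal(size=(500, 5))
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)  # Vt[0] is the true PC1
w = oja_pc1(X)
```

The learned weight vector aligns (up to sign) with the dominant principal component, i.e., the coordinate transformation from joint space into the one-dimensional controller space described above.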
ERIC Educational Resources Information Center
Rigby, Jessica G.
2016-01-01
First-year principals encounter multiple messages about what it means to be instructional leaders; this may matter for how they enact instructional leadership. This cross-case qualitative study uses a qualitative approach of social network analysis to uncover the mechanisms through which first-year principals encountered particular beliefs about…
Kuo, Ching-Chang; Ha, Thao; Ebbert, Ashley M.; Tucker, Don M.; Dishion, Thomas J.
2017-01-01
Adolescence is a sensitive period for the development of romantic relationships. During this period the maturation of frontolimbic networks is particularly important for the capacity to regulate emotional experiences. In previous research, both functional magnetic resonance imaging (fMRI) and dense array electroencephalography (dEEG) measures have suggested that responses in limbic regions are enhanced in adolescents experiencing social rejection. In the present research, we examined social acceptance and rejection from romantic partners as they engaged in a Chatroom Interact Task. Dual 128-channel dEEG systems were used to record neural responses to acceptance and rejection from both adolescent romantic partners and unfamiliar peers (N = 75). We employed a two-step temporal principal component analysis (PCA) and spatial independent component analysis (ICA) approach to statistically identify the neural components related to social feedback. Results revealed that the early (288 ms) discrimination between acceptance and rejection reflected by the P3a component was significant for the romantic partner but not the unfamiliar peer. In contrast, the later (364 ms) P3b component discriminated between acceptance and rejection for both partners and peers. The two-step approach (PCA then ICA) was better able than either PCA or ICA alone in separating these components of the brain's electrical activity that reflected both temporal and spatial phases of the brain's processing of social feedback. PMID:28620292
Robustness surfaces of complex networks
NASA Astrophysics Data System (ADS)
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-01
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution to the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
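The R*-value construction can be sketched as follows, assuming (for illustration) that the PCA weighting takes the absolute loadings of the first component at each failure level; the toy metric values are invented, not drawn from the paper.

```python
import numpy as np

def r_star_surface(metric_runs):
    """Sample the robustness surface: at each failure level, weight the
    normalized metrics by the loadings of their first principal component.

    metric_runs: (n_levels, n_repeats, n_metrics), every metric pre-scaled so
    the intact network scores 1 (the 'adjust to 1' step above)."""
    n_levels, n_rep, _ = metric_runs.shape
    surface = np.empty((n_levels, n_rep))
    for i in range(n_levels):
        M = metric_runs[i]
        _, _, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
        w = np.abs(Vt[0])
        w /= w.sum()                  # most informative metric gets most weight
        surface[i] = M @ w            # R*-values for this failure level
    return surface

# Toy experiment: 3 metrics, 5 failure levels, 10 random failure realizations.
rng = np.random.default_rng(8)
levels = np.linspace(0, 0.8, 5)
runs = np.clip(1 - levels[:, None, None]
               * rng.uniform(0.5, 1.5, size=(5, 10, 3)), 0, 1)
surface = r_star_surface(runs)
```

Plotting `surface` against failure percentage and realization index gives the Ω surface whose shape is compared across networks and failure scenarios.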
Feng, Lei; Zhu, Susu; Lin, Fucheng; Su, Zhenzhu; Yuan, Kangpei; Zhao, Yiying; He, Yong; Zhang, Chu
2018-06-15
Mildew damage is a major reason for poor chestnut quality and yield loss. In this study, a near-infrared hyperspectral imaging system covering the 874–1734 nm spectral range was applied to detect mildew damage to chestnuts caused by blue mold. Principal component analysis (PCA) score images were first employed to qualitatively and intuitively distinguish moldy chestnuts from healthy chestnuts. Spectral data were extracted from the hyperspectral images. A successive projections algorithm (SPA) was used to select 12 optimal wavelengths. Artificial neural networks, including the back propagation neural network (BPNN), evolutionary neural network (ENN), extreme learning machine (ELM), general regression neural network (GRNN) and radial basis neural network (RBNN), were used to build models on the full spectra and the optimal wavelengths to distinguish moldy chestnuts. The BPNN and ENN models using full spectra and optimal wavelengths obtained satisfactory performances, with classification accuracies all surpassing 99%. The results indicate the potential for rapid and non-destructive detection of moldy chestnuts by hyperspectral imaging, which would help to develop an online detection system for healthy and blue-mold-infected chestnuts.
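SPA itself admits a compact sketch: starting from one wavelength, it repeatedly projects the remaining spectral columns orthogonally to the last selected one and picks the column with the largest residual norm, so the chosen wavelengths are minimally collinear. The toy matrix below is an assumption for demonstration, not the chestnut spectra.

```python
import numpy as np

def spa(X, n_select, start=0):
    """Successive projections algorithm: greedily select columns (wavelengths)
    of X with minimal collinearity by repeated orthogonal projection."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(n_select - 1):
        v = Xp[:, selected[-1]]
        Xp -= np.outer(v, v @ Xp) / (v @ v)   # project out the last pick
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0                # never re-select a wavelength
        selected.append(int(np.argmax(norms)))
    return selected

# Toy spectra: wavelengths 0 and 1 are nearly collinear, wavelength 2 is not,
# so SPA starting from 0 should skip 1 and jump to 2.
X = np.array([[1.0, 0.99, 0.0],
              [0.0, 0.01, 1.0],
              [0.0, 0.00, 0.0]])
picked = spa(X, 2, start=0)
```

In practice the start wavelength and the number of selections are tuned by validation error, and the selected columns feed the downstream classifiers.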
Optimum Design of Aerospace Structural Components Using Neural Networks
NASA Technical Reports Server (NTRS)
Berke, L.; Patnaik, S. N.; Murthy, P. L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J
1997-01-01
Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
Robustness surfaces of complex networks.
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-02
Although the robustness of complex networks has been extensively studied over the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. Two open issues in the literature relate to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution to both problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
COMPADRE: an R and web resource for pathway activity analysis by component decompositions.
Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor
2012-10-15
The analysis of biological networks has become essential to the study of functional genomic data. Compadre is a tool to estimate pathway/gene-set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes one of the direct uses of activity indexes to detect altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and F-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), several gene sets (KEGG, BioCarta, Reactome, GO and MsigDB) and can be easily expanded. Our simulation results, shown in the Supplementary Information, suggest that Compadre detects more pathways than over-representation tools like David, Babelomics and Webgestalt, and fewer false positives than PLAGE. The output is composed of results from the decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in the Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves network generalization ability, but also accelerates the convergence speed of the network, avoids trapping in local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
ERIC Educational Resources Information Center
Townsel, Andrae
2015-01-01
The purpose of this study was to examine the principals' perceptions of social networking access and its relationship to cyberbullying, the importance of student achievement, and the school environment across the United States. This research provides some evidence on how principals perceive and understand the threat of cyberbullying and its…
Burnt area mapping from ERS-SAR time series using the principal components transformation
NASA Astrophysics Data System (ADS)
Gimeno, Meritxell; San-Miguel Ayanz, Jesus; Barbosa, Paulo M.; Schmuck, Guido
2003-03-01
Each year thousands of hectares of forest burn across Southern Europe. To date, remote sensing assessments of this phenomenon have focused on the use of optical satellite imagery. However, the presence of clouds and smoke prevents the acquisition of this type of data in some areas. It is possible to overcome this problem by using synthetic aperture radar (SAR) data. Principal component analysis (PCA) was performed to quantify differences between pre- and post-fire images and to investigate their separability over a European Remote Sensing (ERS) SAR time series. Moreover, the transformation was carried out to determine the best conditions for acquiring optimal SAR imagery according to meteorological parameters, and the procedures to enhance burnt area discrimination for fire damage assessment. A comparative neural network classification was performed in order to map and assess the burnt areas using either a complete ERS time series or just one image before and one image after the fire, according to the PCA. The results suggest that ERS is suitable for highlighting areas of localized change associated with forest fire damage in Mediterranean land cover.
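The pre/post-fire PCA idea can be sketched on synthetic data: stacking the two acquisitions as a two-band pixel matrix, the major principal component captures the common backscatter while the minor component isolates the change (the burnt patch). The image size, noise levels, and patch geometry below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(5)
h, w = 64, 64
pre = rng.normal(0.0, 1.0, (h, w))           # synthetic pre-fire backscatter
post = pre + rng.normal(0.0, 0.2, (h, w))    # mostly unchanged speckle
post[20:40, 20:40] -= 3.0                    # backscatter drop over the burnt area

# PCA on the two-band stack: each pixel is a 2-vector (pre, post)
X = np.stack([pre.ravel(), post.ravel()], axis=1)
X = X - X.mean(axis=0)
cov = X.T @ X / X.shape[0]
evals, V = np.linalg.eigh(cov)               # ascending: column 0 = minor component
pc2 = (X @ V[:, 0]).reshape(h, w)            # change component

# The burnt patch stands out in the minor component
inside = np.abs(pc2[20:40, 20:40]).mean()
outside = np.abs(pc2[:20, :]).mean()
print(inside, outside)
```

The minor eigenvector is close to the (pre − post)/√2 difference direction, which is why localized change concentrates there.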
Principal Components of Thermography analyses of the Silk Tomb, Petra (Jordan)
NASA Astrophysics Data System (ADS)
Gomez-Heras, Miguel; Alvarez de Buergo, Monica; Fort, Rafael
2015-04-01
This communication presents the results of an active thermography survey of the Silk Tomb, which belongs to the Royal Tombs compound in the archaeological city of Petra, Jordan. The Silk Tomb is carved in the variegated Palaeozoic Umm Ishrin sandstone and is heavily backweathered due to surface runoff from the top of the cliff in which it is carved. Moreover, the name "Silk Tomb" was given because of the colourful display of the variegated sandstone exposed by backweathering. A series of infrared images was taken as the façade was heated by sunlight to perform a Principal Components of Thermography analysis with the IR view 1.7.5 software. This was related to indirect moisture measurements (percentage of Wood Moisture Equivalent) taken across the façade by means of a Protimeter portable moisture meter. Results show how moisture retention is strongly controlled by lithological differences across the façade. Research funded by Geomateriales 2 S2013/MIT-2914 and CEI Moncloa (UPM, UCM, CSIC) through a PICATA contract and the equipment from the RedLAbPAt Network.
Nonlinear Principal Components Analysis: Introduction and Application
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.
2007-01-01
The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…
USDA-ARS?s Scientific Manuscript database
Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
Similarities between principal components of protein dynamics and random diffusion
NASA Astrophysics Data System (ADS)
Hess, Berk
2000-12-01
Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
Machine learning action parameters in lattice quantum chromodynamics
NASA Astrophysics Data System (ADS)
Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William
2018-05-01
Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
Predicting the Fine Particle Fraction of Dry Powder Inhalers Using Artificial Neural Networks.
Muddle, Joanna; Kirton, Stewart B; Parisini, Irene; Muddle, Andrew; Murnane, Darragh; Ali, Jogoth; Brown, Marc; Page, Clive; Forbes, Ben
2017-01-01
Dry powder inhalers are increasingly popular for delivering drugs to the lungs for the treatment of respiratory diseases, but are complex products with multivariate performance determinants. Heuristic product development guided by in vitro aerosol performance testing is a costly and time-consuming process. This study investigated the feasibility of using artificial neural networks (ANNs) to predict fine particle fraction (FPF) based on formulation and device variables. Thirty-one ANN architectures were evaluated for their ability to predict experimentally determined FPF for a self-consistent dataset containing salmeterol xinafoate and salbutamol sulfate dry powder inhalers (237 experimental observations). Principal component analysis was used to identify inputs that significantly affected FPF. Orthogonal arrays (OAs) were used to design the ANN architectures, which were optimized using the Taguchi method. The primary OA ANN r² values ranged between 0.46 and 0.90, and the secondary OA increased the r² values (0.53-0.93). The optimum ANN (9-4-1 architecture, average r² 0.92 ± 0.02) included active pharmaceutical ingredient, formulation, and device inputs identified by principal component analysis, which reflected the recognized importance and interdependency of these factors for orally inhaled product performance. The Taguchi method was effective at identifying a successful architecture with the potential for development as a useful generic inhaler ANN model, although this would require much larger datasets and more variable inputs. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Mapping Common Aphasia Assessments to Underlying Cognitive Processes and Their Neural Substrates.
Lacey, Elizabeth H; Skipper-Kallal, Laura M; Xing, Shihui; Fama, Mackenzie E; Turkeltaub, Peter E
2017-05-01
Understanding the relationships between clinical tests, the processes they measure, and the brain networks underlying them, is critical in order for clinicians to move beyond aphasia syndrome classification toward specification of individual language process impairments. To understand the cognitive, language, and neuroanatomical factors underlying scores of commonly used aphasia tests. Twenty-five behavioral tests were administered to a group of 38 chronic left hemisphere stroke survivors and a high-resolution magnetic resonance image was obtained. Test scores were entered into a principal components analysis to extract the latent variables (factors) measured by the tests. Multivariate lesion-symptom mapping was used to localize lesions associated with the factor scores. The principal components analysis yielded 4 dissociable factors, which we labeled Word Finding/Fluency, Comprehension, Phonology/Working Memory Capacity, and Executive Function. While many tests loaded onto the factors in predictable ways, some relied heavily on factors not commonly associated with the tests. Lesion symptom mapping demonstrated discrete brain structures associated with each factor, including frontal, temporal, and parietal areas extending beyond the classical language network. Specific functions mapped onto brain anatomy largely in correspondence with modern neural models of language processing. An extensive clinical aphasia assessment identifies 4 independent language functions, relying on discrete parts of the left middle cerebral artery territory. A better understanding of the processes underlying cognitive tests and the link between lesion and behavior may lead to improved aphasia diagnosis, and may yield treatments better targeted to an individual's specific pattern of deficits and preserved abilities.
Genetic Classification of Populations Using Supervised Learning
Bridges, Michael; Heron, Elizabeth A.; O'Dushlaine, Colm; Segurado, Ricardo; Morris, Derek; Corvin, Aiden; Gill, Michael; Pinto, Carlos
2011-01-01
There are many instances in genetics in which we wish to determine whether two candidate populations are distinguishable on the basis of their genetic structure. Examples include populations which are geographically separated, case-control studies and quality control (when participants in a study have been genotyped at different laboratories). This latter application is of particular importance in the era of large scale genome wide association studies, when collections of individuals genotyped at different locations are being merged to provide increased power. The traditional method for detecting structure within a population is some form of exploratory technique such as principal components analysis. Such methods, which do not utilise our prior knowledge of the membership of the candidate populations, are termed unsupervised. Supervised methods, on the other hand, are able to utilise this prior knowledge when it is available. In this paper we demonstrate that in such cases modern supervised approaches are a more appropriate tool for detecting genetic differences between populations. We apply two such methods (neural networks and support vector machines) to the classification of three populations (two from Scotland and one from Bulgaria). The sensitivity exhibited by both these methods is considerably higher than that attained by principal components analysis and in fact comfortably exceeds a recently conjectured theoretical limit on the sensitivity of unsupervised methods. In particular, our methods can distinguish between the two Scottish populations, where principal components analysis cannot. We suggest, on the basis of our results, that a supervised learning approach should be the method of choice when classifying individuals into pre-defined populations, particularly in quality control for large scale genome wide association studies. PMID:21589856
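The supervised idea can be sketched on synthetic genotype-like data. The toy below uses a nearest-centroid classifier as a simple stand-in for the paper's neural networks and support vector machines; the sample sizes, marker count, and mean shift are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
# Two synthetic "populations" with a small mean shift on every marker
shift = 0.3
pop_a = rng.normal(0.0, 1.0, (n, p))
pop_b = rng.normal(shift, 1.0, (n, p))
X = np.vstack([pop_a, pop_b])
y = np.array([0] * n + [1] * n)

# Train/test split using the known (prior) population labels
idx = rng.permutation(2 * n)
train, test = idx[:300], idx[300:]

# Supervised nearest-centroid classifier: exploit the labels directly
c0 = X[train][y[train] == 0].mean(axis=0)
c1 = X[train][y[train] == 1].mean(axis=0)
d0 = np.linalg.norm(X[test] - c0, axis=1)
d1 = np.linalg.norm(X[test] - c1, axis=1)
pred = (d1 < d0).astype(int)
acc = (pred == y[test]).mean()
print(f"supervised accuracy: {acc:.2f}")
```

Even this crude supervised rule pools the weak per-marker shift across all markers, which is the mechanism by which supervised methods can outperform unsupervised exploration when labels are available.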
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
Effect of the interconnected network structure on the epidemic threshold.
Wang, Huijuan; Li, Qian; D'Agostino, Gregorio; Havlin, Shlomo; Stanley, H Eugene; Van Mieghem, Piet
2013-08-01
Most real-world networks are not isolated. In order to function fully, they are interconnected with other networks, and this interconnection influences their dynamic processes. For example, when the spread of a disease involves two species, the dynamics of the spread within each species (the contact network) differs from that of the spread between the two species (the interconnected network). We model two generic interconnected networks using two adjacency matrices, A and B, in which A is a 2N×2N matrix that depicts the connectivity within each of two networks of size N, and B a 2N×2N matrix that depicts the interconnections between the two. Using an N-intertwined mean-field approximation, we determine that a critical susceptible-infected-susceptible (SIS) epidemic threshold in two interconnected networks is 1/λ1(A+αB), where the infection rate is β within each of the two individual networks and αβ in the interconnected links between the two networks and λ1(A+αB) is the largest eigenvalue of the matrix A+αB. In order to determine how the epidemic threshold is dependent upon the structure of interconnected networks, we analytically derive λ1(A+αB) using a perturbation approximation for small and large α, the lower and upper bound for any α as a function of the adjacency matrix of the two individual networks, and the interconnections between the two and their largest eigenvalues and eigenvectors. We verify these approximation and boundary values for λ1(A+αB) using numerical simulations, and determine how component network features affect λ1(A+αB). We note that, given two isolated networks G1 and G2 with principal eigenvectors x and y, respectively, λ1(A+αB) tends to be higher when nodes i and j with a higher eigenvector component product x_i y_j are interconnected. This finding suggests essential insights into ways of designing interconnected networks to be robust against epidemics.
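The threshold formula 1/λ1(A+αB) can be checked numerically on a small example: two N-node rings interconnected one-to-one (so A+αB with α=1 is the adjacency matrix of a prism graph). The network choice is an illustrative assumption.

```python
import numpy as np

N = 4
# A: block-diagonal 2N x 2N adjacency of the two individual networks (two rings)
ring = np.zeros((N, N))
for i in range(N):
    ring[i, (i + 1) % N] = ring[(i + 1) % N, i] = 1
A = np.block([[ring, np.zeros((N, N))], [np.zeros((N, N)), ring]])
# B: one-to-one interconnection between corresponding nodes of the two rings
B = np.block([[np.zeros((N, N)), np.eye(N)], [np.eye(N), np.zeros((N, N))]])

def sis_threshold(alpha):
    """Critical effective infection rate tau_c = 1 / lambda_1(A + alpha*B)."""
    lam1 = np.linalg.eigvalsh(A + alpha * B).max()
    return 1.0 / lam1

# No coupling: lambda_1 of a ring is 2, so the threshold is 1/2.
# Full coupling (alpha = 1): the prism graph is 3-regular, so the threshold is 1/3.
print(sis_threshold(0.0), sis_threshold(1.0))
```

As the abstract's result suggests, strengthening the interconnections (larger α) raises λ1 and therefore lowers the epidemic threshold.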
Thiele, Ines; Fleming, Ronan M.T.; Bordbar, Aarash; Schellenberger, Jan; Palsson, Bernhard Ø.
2010-01-01
The constraint-based reconstruction and analysis approach has recently been extended to describe Escherichia coli's transcriptional and translational machinery. Here, we introduce the concept of reaction coupling to represent the dependency between protein synthesis and utilization. These coupling constraints lead to a significant contraction of the feasible set of steady-state fluxes. The subset of alternate optimal solutions (AOS) consistent with maximal ribosome production was calculated. The majority of transcriptional and translational reactions were active for all of these AOS, showing that the network has a low degree of redundancy. Furthermore, all calculated AOS contained the qualitative expression of at least 92% of the known essential genes. Principal component analysis of AOS demonstrated that energy currencies (ATP, GTP, and phosphate) dominate the network's capability to produce ribosomes. Additionally, we identified regulatory control points of the network, which include the transcription reactions of σ70 (RpoD) as well as that of a degradosome component (Rne) and of tRNA charging (ValS). These reactions contribute significant variance among AOS. These results show that constraint-based modeling can be applied to gain insight into the systemic properties of E. coli's transcriptional and translational machinery. PMID:20483314
Valdés, Julio J; Barton, Alan J
2007-05-01
A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
Optimization of a Multi-Stage ATR System for Small Target Identification
NASA Technical Reports Server (NTRS)
Lin, Tsung-Han; Lu, Thomas; Braun, Henry; Edens, Western; Zhang, Yuhan; Chao, Tien- Hsin; Assad, Christopher; Huntsberger, Terrance
2010-01-01
An Automated Target Recognition (ATR) system was developed to locate and identify small objects in images and videos. The data are preprocessed and sent to a grayscale optical correlator (GOC) filter to identify possible regions-of-interest (ROIs). Next, features are extracted from the ROIs based on Principal Component Analysis (PCA) and sent to a neural network (NN) to be classified. The features are analyzed by the NN classifier, which indicates whether each ROI contains the desired target or not. The ATR system was found useful for identifying small boats in the open sea. However, due to a "noisy background," such as weather conditions, background buildings, or water wakes, some false targets are misclassified. Feedforward backpropagation and radial basis neural networks are optimized for generalization of representative features to reduce the false-alarm rate. The neural networks are compared for their performance in classification accuracy, classification time, and training time.
Baseline estimation in flame's spectra by using neural networks and robust statistics
NASA Astrophysics Data System (ADS)
Garces, Hugo; Arias, Luis; Rojas, Alejandro
2014-09-01
This work presents a baseline estimation method for flame spectra based on an artificial intelligence structure, a neural network, combining robust statistics with multivariate analysis to automatically discriminate the measured wavelengths belonging to the continuous feature for model adaptation, thereby removing the restriction of measuring the target baseline for training. The main contributions of this paper are: to analyze a flame spectra database by computing Jolliffe statistics from Principal Components Analysis, detecting wavelengths not correlated with most of the measured data and therefore corresponding to the baseline; to systematically determine the optimal number of neurons in hidden layers based on Akaike's Final Prediction Error; to estimate the baseline over the full wavelength range of the sampled spectra; and to train a neural network that generalizes the relation between measured and baseline spectra. The main application of our research is to compute total radiation with baseline information, allowing the combustion process state to be diagnosed for optimization in early stages.
Spectral analysis of stellar light curves by means of neural networks
NASA Astrophysics Data System (ADS)
Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.
1999-06-01
Periodicity analysis of unevenly collected data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves, which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well in unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components which, in turn, are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant to noise and also works well with few points in the sequence. We benchmark the system on synthetic and real signals against the Periodogram and the Cramer-Rao lower bound. This work was partially supported by IIASS, by MURST 40% and by the Italian Space Agency.
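The pipeline described (interpolate the uneven signal, extract principal components, feed the signal subspace to MUSIC) can be sketched on a synthetic light curve. The sketch below replaces the unsupervised Hebbian network with a direct eigendecomposition of the lag-correlation matrix for brevity; the frequency, noise level, window length, and search grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
f_true = 0.1  # assumed frequency, in cycles per unit time
t_uneven = np.sort(rng.uniform(0, 200, 400))
x_uneven = np.sin(2 * np.pi * f_true * t_uneven) + 0.1 * rng.normal(size=t_uneven.size)

# Interpolate the unevenly sampled signal onto a uniform grid
dt = 0.5
t = np.arange(0, 200, dt)
x = np.interp(t, t_uneven, x_uneven)

# Lag-correlation matrix from sliding windows of the interpolated signal
m = 20
windows = np.lib.stride_tricks.sliding_window_view(x, m)
R = windows.T @ windows / windows.shape[0]

# Principal (signal) subspace = top eigenvectors; noise subspace = the rest.
# A single real sinusoid spans two complex exponentials, hence rank 2.
evals, V = np.linalg.eigh(R)   # ascending order
noise = V[:, :-2]

def music_pseudospectrum(f):
    a = np.exp(2j * np.pi * f * dt * np.arange(m))   # steering vector
    return 1.0 / np.linalg.norm(noise.conj().T @ a) ** 2

freqs = np.linspace(0.01, 0.5, 1000)
est = freqs[np.argmax([music_pseudospectrum(f) for f in freqs])]
print(f"estimated frequency: {est:.3f}")
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, i.e., at the underlying oscillation frequency.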
NASA Astrophysics Data System (ADS)
Tian, Yunfeng; Shen, Zheng-Kang
2016-02-01
We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., the common-mode component (CMC)) that deviate from zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique utilizes a weighting scheme that incorporates two factors: distances between neighboring sites and the correlations of their long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimizes the root-mean-square (RMS) of the filtered residual position time series. Compared to the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatter for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: it requires less intervention and is applicable to a dense network of any spatial extent. Our method can also be used to detect the CMC irrespective of its origin (i.e., tectonic or nontectonic), if such signals are of particular interest for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering the CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify clusters of stations that share common regional transient characteristics.
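The core stacking idea can be sketched on synthetic residual time series: estimate each station's CMC from the other stations and subtract it. For simplicity the sketch uses uniform weights rather than the distance- and correlation-based weights of the paper, and the network size, transient, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sta, n_epochs = 10, 500
# Detrended residuals: a shared regional transient (CMC) plus white station noise (m)
cmc_true = 0.005 * np.sin(2 * np.pi * np.arange(n_epochs) / 365.0)
resid = cmc_true + 0.002 * rng.normal(size=(n_sta, n_epochs))

# Toy spatial filter: for each station, stack the *other* stations to estimate
# the CMC, then subtract it from that station's residual series
filtered = np.empty_like(resid)
for i in range(n_sta):
    others = np.delete(resid, i, axis=0)
    cmc_est = others.mean(axis=0)
    filtered[i] = resid[i] - cmc_est

rms_before = np.sqrt((resid ** 2).mean())
rms_after = np.sqrt((filtered ** 2).mean())
print(rms_before, rms_after)
```

Excluding the target station from its own stack avoids subtracting the station's private noise along with the common mode.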
An Introductory Application of Principal Components to Cricket Data
ERIC Educational Resources Information Center
Manage, Ananda B. W.; Scariano, Stephen M.
2013-01-01
Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…
Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.
ERIC Educational Resources Information Center
Olson, Jeffery E.
Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
Finding Planets in K2: A New Method of Cleaning the Data
NASA Astrophysics Data System (ADS)
Currie, Miles; Mullally, Fergal; Thompson, Susan E.
2017-01-01
We present a new method of removing systematic flux variations from K2 light curves by employing pixel-level principal component analysis (PCA). This method decomposes each light curve into its principal components (eigenvectors), each with an associated eigenvalue whose magnitude reflects how much influence the basis vector has on the shape of the light curve. The method assumes that the most influential basis vectors correspond to the unwanted systematic variations in the light curve produced by K2’s constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components, which generally correspond to the flux variations that result from the motion of the star in the field of view. To decide how many principal components to remove, we estimate the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which yields the combined differential photometric precision value (SG-CDPP) used in classic Kepler. We calculate this value after correcting the raw light curve with each element in a list of cumulative sums of principal components, so that we have as many noise estimates as there are principal components. We then take the derivative of the list of SG-CDPP values and select the number of principal components at which the derivative effectively goes to zero; this is the optimal number of principal components to exclude from the refitting of the light curve. We find that pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2’s light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
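The fit-and-remove step and the Savitzky-Golay scatter estimate can be sketched roughly as below. This is not the authors' pipeline: `pca_detrend` and `noise_estimate` are hypothetical helpers, the synthetic "pixels" are illustrative, and the automatic component-count selection via the SG-CDPP derivative is omitted.

```python
import numpy as np
from scipy.signal import savgol_filter

def pca_detrend(pixel_flux, n_components):
    """Fit and remove the strongest temporal principal components of the
    per-pixel time series from the summed (raw) light curve.

    pixel_flux: (n_cadences, n_pixels). A sketch of the pixel-level PCA
    idea only.
    """
    raw = pixel_flux.sum(axis=1)
    centered = pixel_flux - pixel_flux.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    basis = u[:, :n_components]                 # strongest temporal eigenvectors
    coeffs, *_ = np.linalg.lstsq(basis, raw - raw.mean(), rcond=None)
    return raw - basis @ coeffs

def noise_estimate(flux, window=25, poly=2):
    """Scatter left after Savitzky-Golay detrending (a stand-in for SG-CDPP)."""
    return np.std(flux - savgol_filter(flux, window, poly))

# Synthetic cutout: 16 pixels sharing a motion-induced systematic that the
# Savitzky-Golay filter is too smooth to follow.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400, endpoint=False)
motion = np.sin(2 * np.pi * 30 * t)
pix = 1.0 + 0.5 * motion[:, None] * rng.uniform(0.5, 1.5, 16)
pix = pix + 0.01 * rng.normal(size=pix.shape)
corrected = pca_detrend(pix, n_components=1)
```

Removing the first principal component should leave the corrected curve with a much smaller post-detrending scatter than the raw pixel sum.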
Directly reconstructing principal components of heterogeneous particles from cryo-EM images.
Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali
2015-08-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. Copyright © 2015 Elsevier Inc. All rights reserved.
Interdependencies and Causalities in Coupled Financial Networks.
Vodenska, Irena; Aoyama, Hideaki; Fujiwara, Yoshi; Iyetomi, Hiroshi; Arai, Yuta
2016-01-01
We explore the foreign exchange and stock market networks for 48 countries from 1999 to 2012 and propose a model, based on complex Hilbert principal component analysis, for extracting significant lead-lag relationships between these markets. The global set of countries, including large and small countries in Europe, the Americas, Asia, and the Middle East, is contrasted with the limited scopes of targets, e.g., G5, G7 or the emerging Asian countries, adopted by previous works. We construct a coupled synchronization network, perform community analysis, and identify the formation of four distinct network communities that are relatively stable over time. In addition to investigating the entire period, we divide the time period into "mild crisis" (1999-2002), "calm" (2003-2006) and "severe crisis" (2007-2012) sub-periods and find that the severe crisis period behavior dominates the dynamics in the foreign exchange-equity synchronization network. We observe that in general the foreign exchange market has predictive power for global stock market performances. In addition, the United States, German and Mexican markets have forecasting power for the performances of other global equity markets.
A Self-Organizing Incremental Neural Network based on local distribution learning.
Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi
2016-12-01
In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, and the number of nodes will not grow without limit. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule... management plan. (c) Operator training and qualification. (d) Emission limitations and operating limits. (e...
40 CFR 60.2570 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... Construction On or Before November 30, 1999 Use of Model Rule § 60.2570 What are the principal components of... (k) of this section. (a) Increments of progress toward compliance. (b) Waste management plan. (c...
2014-01-01
Background The chemical composition of aerosols and particle size distributions are the most significant factors affecting air quality. In particular, exposure to finer particles can cause short- and long-term effects on human health. In the present paper, trends of PM10 (particulate matter with aerodynamic diameter lower than 10 μm), CO, NOx (NO and NO2), Benzene and Toluene monitored in six monitoring stations of Bari province are shown. The data set used was composed of bi-hourly means for all parameters (12 bi-hourly means per day for each parameter) and refers to the period from January 2005 to May 2007. The main aim of the paper is to provide a clear illustration of how large data sets from monitoring stations can give information about the number and nature of the pollutant sources, and mainly to assess the contribution of the traffic source to the PM10 concentration level by using multivariate statistical techniques such as Principal Component Analysis (PCA) and Absolute Principal Component Scores (APCS). Results Comparing the night and day mean concentrations (per day) for each parameter, it has been pointed out that some parameters, such as CO, Benzene and Toluene, show a different night and day behavior than PM10. This suggests that CO, Benzene and Toluene concentrations are mainly connected with transport systems, whereas PM10 is mostly influenced by different factors. The statistical techniques identified three recurrent sources, associated with vehicular traffic and particulate transport, covering over 90% of variance. The contemporaneous analysis of gases and PM10 has made it possible to underline the differences between the sources of these pollutants. Conclusions The analysis of pollutant trends from large data sets and the application of multivariate statistical techniques such as PCA and APCS can give useful information about air quality and pollutant sources.
This knowledge can provide useful advice for environmental policies in order to reach the WHO recommended levels. PMID:24555534
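A minimal sketch of the APCS step described above: principal component scores are rescaled so that zero corresponds to a true zero-concentration sample, and a pollutant is then regressed on the absolute scores to apportion source contributions. The synthetic data and helper names are illustrative, not the study's.

```python
import numpy as np

def apcs(X):
    """Absolute Principal Component Scores for receptor modeling.

    X: (n_samples, n_species) concentrations. Scores are shifted by the
    score of an artificial zero-concentration sample so that regression
    coefficients can be read as source contributions.
    """
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sigma                        # standardized data
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    z0 = -mu / sigma                            # artificial zero sample, standardized
    return Z @ vt.T - z0 @ vt.T                 # absolute scores

# Two hypothetical sources (traffic, dust) driving three species.
rng = np.random.default_rng(2)
traffic, dust = rng.gamma(2.0, 1.0, 300), rng.gamma(2.0, 1.5, 300)
X = np.column_stack([3.0 * traffic + 0.2 * dust,    # CO-like species
                     2.5 * traffic,                 # benzene-like species
                     0.5 * traffic + 4.0 * dust])   # PM10-like species
A = apcs(X)[:, :2]                                  # keep two components
design = np.column_stack([np.ones(len(A)), A])
coef, *_ = np.linalg.lstsq(design, X[:, 2], rcond=None)  # apportion "PM10"
```

The fitted coefficients give the contribution of each retained component (source) to the regressed pollutant.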
Differentiation of red wines using an electronic nose based on surface acoustic wave devices.
García, M; Fernández, M J; Fontecha, J L; Lozano, J; Santos, J P; Aleixandre, M; Sayago, I; Gutiérrez, J; Horrillo, M C
2006-02-15
An electronic nose, based on the principle of surface acoustic waves (SAW), was used to differentiate among wines made from the same grape variety and coming from the same cellar. The electronic nose comprises eight surface acoustic wave sensors: one is a reference sensor and the others are coated with different polymers using a spray coating technique. Data analysis was performed with two pattern recognition methods: principal component analysis (PCA) and a probabilistic neural network (PNN). The results showed that the electronic nose was able to identify the tested wines.
[Research Progress of Multi-Model Medical Image Fusion at Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion realizes the integration of the advantages of functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. Then we analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Lastly, we indicate present problems and future research directions for multi-model medical image fusion.
Feature extraction in MFL signals of machined defects in steel tubes
NASA Astrophysics Data System (ADS)
Perazzo, R.; Pignotti, A.; Reich, S.; Stickar, P.
2001-04-01
Thirty defects of various shapes were machined on the external and internal wall surfaces of a 177 mm diameter ferromagnetic steel pipe. MFL signals were digitized and recorded at a frequency of 4 kHz. Various magnetizing currents and relative tube-probe velocities of the order of 2 m/s were used. The identification of the location of the defect by a principal component/neural network analysis of the signal is shown to be more effective than the standard procedure of classification based on the average signal frequency.
Maisuradze, Gia G; Leitner, David M
2007-05-15
Dihedral principal component analysis (dPCA) has recently been developed and shown to display complex features of the free energy landscape of a biomolecule that may be absent in the free energy landscape plotted in principal component space due to mixing of internal and overall rotational motion that can occur in principal component analysis (PCA) [Mu et al., Proteins: Struct Funct Bioinfo 2005;58:45-52]. Another difficulty in the implementation of PCA is sampling convergence, which we address here for both dPCA and PCA using a tetrapeptide as an example. We find that for both methods the sampling convergence can be reached over a similar time. Minima in the free energy landscape in the space of the two largest dihedral principal components often correspond to unique structures, though we also find some distinct minima to correspond to the same structure. 2007 Wiley-Liss, Inc.
Philip, Jacques; Ford, Tara; Henry, David; Rasmus, Stacy; Allen, James
2015-01-01
Suicide and alcohol use disorders are significant Alaska Native health disparities, yet there is limited understanding of protection and no studies of social network factors in protection in this or other populations. The Qungasvik intervention enhances protective factors from suicide and alcohol use disorders through activities grounded in Yup’ik cultural practices and values. Identification of social network factors associated with protection within the cultural context of these tight, close-knit, and high-density rural Yup’ik Alaska Native communities in southwest Alaska can help identify effective prevention strategies for suicide and alcohol use disorder risk. Using data from ego-centered social network surveys and surveys of protective factors from suicide and alcohol use disorders with 50 Yup’ik adolescents, we provide descriptive data on structural and network composition variables, identify key network variables that explain major proportions of the variance in a four-principal-component structure of these network variables, and demonstrate the utility of these key network variables as predictors of family and community protective factors from suicide and alcohol use disorder risk. Connections to adults and connections to elders, but not peer connections, emerged as predictors of family and community level protection, suggesting these network factors as important targets for intervention. PMID:27110094
Facilitative Components of Collaborative Learning: A Review of Nine Health Research Networks
Rittner, Jessica Levin; Johnson, Karin E.; Gerteis, Jessie; Miller, Therese
2017-01-01
Objective: Collaborative research networks are increasingly used as an effective mechanism for accelerating knowledge transfer into policy and practice. This paper explored the characteristics and collaborative learning approaches of nine health research networks. Data sources/study setting: Semi-structured interviews with representatives from eight diverse US health services research networks conducted between November 2012 and January 2013 and program evaluation data from a ninth. Study design: The qualitative analysis assessed each network's purpose, duration, funding sources, governance structure, methods used to foster collaboration, and barriers and facilitators to collaborative learning. Data collection: The authors reviewed detailed notes from the interviews to distill salient themes. Principal findings: Face-to-face meetings, intentional facilitation and communication, shared vision, trust among members and willingness to work together were key facilitators of collaborative learning. Competing priorities for members, limited funding and lack of long-term support and geographic dispersion were the main barriers to coordination and collaboration across research network members. Conclusion: The findings illustrate the importance of collaborative learning in research networks and the challenges to evaluating the success of research network functionality. Conducting readiness assessments and developing process and outcome evaluation metrics will advance the design and show the impact of collaborative research networks. PMID:28277202
Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.
2013-01-01
Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease, is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients) by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398
ERIC Educational Resources Information Center
Duffrin, Elizabeth
2001-01-01
Describes the Leadership Academy and Urban Network for Chicago (LAUNCH), a joint venture between the Chicago Public Schools, the local principal's association, and Northwestern University which pairs aspiring principals with practicing principals, offering them a chance to experience principal responsibilities. LAUNCH graduates who became…
Kalgin, Igor V; Caflisch, Amedeo; Chekmarev, Sergei F; Karplus, Martin
2013-05-23
A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted principal component analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed, and it is complemented by a hydrodynamic analysis. The latter, making use of "passive tracers" in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. 
Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this "hydrogen atom" of protein folding.
Research in Network Management Techniques for Tactical Data Communications Networks.
1982-09-01
COMPUTER COMMUNICATIONS, US ARMY (CECOM), September 1980 to August 1982. Principal Investigators: Robert Boorstyn, Aaron Kershenbaum, Basil Maglaris, Philip Sarachik. TABLE OF CONTENTS: Summary of Report; Personnel Activities; Research Reports. A. Packet Radio Networks; A.1 Throughput Analysis of Multihop Packet
Visible Leading: Principal Academy Connects and Empowers Principals
ERIC Educational Resources Information Center
Hindman, Jennifer; Rozzelle, Jan; Ball, Rachel; Fahey, John
2015-01-01
The School-University Research Network (SURN) Principal Academy at the College of William & Mary in Williamsburg, Virginia, has a mission to build a leadership development program that increases principals' instructional knowledge and develops mentor principals to sustain the program. The academy is designed to connect and empower principals…
Fast, Exact Bootstrap Principal Component Analysis for p > 1 million
Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim
2015-01-01
Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
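The subspace trick described above can be sketched as follows: every bootstrap resample of subjects lies in the column space of the original data, so after one (p x n) SVD each resample requires only an (n x n) SVD. This is an illustrative reading of the method; sign alignment of components across resamples, which would be needed before summarizing standard errors, is omitted.

```python
import numpy as np

def bootstrap_pcs(X, n_boot=200, k=3, seed=0):
    """Exact bootstrap PCA via the low-dimensional subspace trick.

    X: (p, n) data matrix with p >> n (measurements x subjects).
    Returns the fixed basis U and, for each resample, the (n x k)
    low-dimensional coordinates of its leading principal components.
    """
    p, n = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    U, d, Vt = np.linalg.svd(Xc, full_matrices=False)   # one-time (p x n) cost
    rng = np.random.default_rng(seed)
    coords = np.empty((n_boot, n, k))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample subjects with replacement
        Y = d[:, None] * Vt[:, idx]                 # (n x n) scores of the resample
        Y = Y - Y.mean(axis=1, keepdims=True)       # recenter the resample
        A, _, _ = np.linalg.svd(Y, full_matrices=False)
        coords[b] = A[:, :k]    # bootstrap PCs in the original space are U @ A[:, :k]
    return U, coords

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 40))                     # p >> n
U, coords = bootstrap_pcs(X, n_boot=50, k=2)
```

Uncertainty metrics can be computed from `coords` alone; a full (p x k) bootstrap component matrix is materialized only if needed, as `U @ coords[b]`.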
ERIC Educational Resources Information Center
Oplatka, Izhar
2017-01-01
Purpose: In order to fill the gap in theoretical and empirical knowledge about the characteristics of principal workload, the purpose of this paper is to explore the components of principal workload as well as its determinants and the coping strategies commonly used by principals to face this personal state. Design/methodology/approach:…
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
Machine learning action parameters in lattice quantum chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shanahan, Phiala; Trewartha, Daniel; Detmold, William
Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
Quantitation of twelve metals in tequila and mezcal spirits as authenticity parameters.
Ceballos-Magaña, Silvia Guillermina; Jurado, José Marcos; Martín, María Jesús; Pablos, Fernando
2009-02-25
In this paper the differentiation of silver, gold, aged and extra-aged tequila and mezcal has been carried out according to their metal content. Aluminum, barium, calcium, copper, iron, magnesium, manganese, potassium, sodium, strontium, zinc, and sulfur were determined by inductively coupled plasma optical emission spectrometry. The concentrations found for each element in the samples were used as chemical descriptors for characterization purposes. Principal component analysis, linear discriminant analysis and artificial neural networks were applied to differentiate the types of tequila and mezcal. Using probabilistic neural networks, 100% classification success was obtained for silver, gold and extra-aged tequila and for mezcal. In the case of aged tequila, 90% of samples were successfully classified. Sodium, potassium, calcium, sulfur, magnesium, iron, strontium, copper and zinc were the most discriminant elements.
Machine learning action parameters in lattice quantum chromodynamics
Shanahan, Phiala; Trewartha, Daniel; Detmold, William
2018-05-16
Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
NASA Astrophysics Data System (ADS)
Yan, Ying; Zhang, Shen; Tang, Jinjun; Wang, Xiaofei
2017-07-01
Discovering dynamic characteristics of traffic flow is a significant step in designing effective traffic management and control strategies for relieving congestion in urban cities. A new method based on complex network theory is proposed to study multivariate traffic flow time series. The data were collected from loop detectors on a freeway during one year. In order to construct a complex network from the original traffic flow, a weighted Frobenius norm is adopted to estimate similarity between multivariate time series, and Principal Component Analysis is implemented to determine the weights. We discuss how to select the optimal critical threshold for networks at different hours in terms of the cumulative probability distribution of degree. Furthermore, two statistical properties of the networks, normalized network structure entropy and cumulative probability of degree, are utilized to explore hourly variation in traffic flow. The results demonstrate that these two statistical quantities express patterns similar to traffic flow parameters during morning and evening peak hours. Accordingly, we detect three traffic states, trough, peak and transitional hours, according to the correlation between the two aforementioned properties. The classification of states can represent hourly fluctuation in traffic flow, as shown by analyzing annual average hourly values of traffic volume, occupancy and speed in the corresponding hours.
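The network-construction step can be sketched as follows. This is an illustrative reading: variables are weighted by PCA explained-variance ratios inside a Frobenius-style distance, which is then thresholded into an adjacency matrix; the paper's exact weighting and threshold-selection procedure may differ, and the three "stations" are synthetic.

```python
import numpy as np

def similarity_network(series, threshold):
    """Threshold a PCA-weighted, Frobenius-style distance between
    multivariate time series into an undirected adjacency matrix.

    series: list of (T, k) arrays, one per detector/station.
    """
    stacked = np.vstack(series)
    stacked = stacked - stacked.mean(axis=0)
    _, s, _ = np.linalg.svd(stacked, full_matrices=False)
    w = s**2 / np.sum(s**2)                     # explained-variance weights
    n = len(series)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sqrt(np.sum(w * (series[i] - series[j]) ** 2))
            adj[i, j] = adj[j, i] = int(d < threshold)
    return adj

# Three stations: the first two share the same underlying flow pattern.
rng = np.random.default_rng(4)
base = rng.normal(size=(100, 3))                # e.g. volume, occupancy, speed
series = [base + 0.05 * rng.normal(size=(100, 3)),
          base + 0.05 * rng.normal(size=(100, 3)),
          3.0 * rng.normal(size=(100, 3))]
adj = similarity_network(series, threshold=2.0)
```

The normalized network structure entropy mentioned in the abstract would then be computed from the degree distribution of `adj`.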
Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold
2014-12-01
In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of a fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering, used to form information granulation, is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, a genetic algorithm (GA) is exploited to optimize the essential design parameters of the network, including the fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a collection of the specific subset of input PFNs. To reduce the dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments using several modeling benchmarks of different levels of complexity (different numbers of input variables and amounts of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy than some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
Robustness surfaces of complex networks
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-01-01
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared. PMID:25178402
Mapping common aphasia assessments to underlying cognitive processes and their neural substrates
Lacey, Elizabeth H.; Skipper-Kallal, LM; Xing, S; Fama, ME; Turkeltaub, PE
2017-01-01
Background: Understanding the relationships between clinical tests, the processes they measure, and the brain networks underlying them is critical if clinicians are to move beyond aphasia syndrome classification toward specification of individual language process impairments. Objective: To understand the cognitive, language, and neuroanatomical factors underlying scores on commonly used aphasia tests. Methods: Twenty-five behavioral tests were administered to a group of 38 chronic left-hemisphere stroke survivors, and a high-resolution MRI was obtained. Test scores were entered into a principal components analysis to extract the latent variables (factors) measured by the tests. Multivariate lesion-symptom mapping was used to localize lesions associated with the factor scores. Results: The principal components analysis yielded four dissociable factors, which we labeled Word Finding/Fluency, Comprehension, Phonology/Working Memory Capacity, and Executive Function. While many tests loaded onto the factors in predictable ways, some relied heavily on factors not commonly associated with them. Lesion-symptom mapping demonstrated discrete brain structures associated with each factor, including frontal, temporal, and parietal areas extending beyond the classical language network. Specific functions mapped onto brain anatomy largely in correspondence with modern neural models of language processing. Conclusions: An extensive clinical aphasia assessment identifies four independent language functions, relying on discrete parts of the left middle cerebral artery territory. A better understanding of the processes underlying cognitive tests and of the link between lesion and behavior may lead to improved aphasia diagnosis and may yield treatments better targeted to an individual's specific pattern of deficits and preserved abilities. PMID:28135902
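The factor-extraction step described above (PCA on a patients-by-tests score matrix, yielding loadings and per-patient factor scores) can be sketched as follows. The data here are synthetic, and the rotation the study may have applied is omitted; only the shapes (38 patients, 25 tests, 4 factors) follow the abstract.

```python
import numpy as np

# Synthetic stand-in for the (patients x tests) score matrix.
rng = np.random.default_rng(1)
scores = rng.normal(size=(38, 25))   # 38 patients, 25 behavioral tests

# Standardize tests, then eigendecompose the correlation matrix.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort by descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = 4                              # as in the study's four factors
# Loadings: how strongly each test reflects each factor.
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
# Factor scores: per-patient values on each factor (inputs to lesion mapping).
factor_scores = z @ eigvecs[:, :n_factors]
```

The `factor_scores` matrix is what a lesion-symptom mapping step would then regress against voxelwise lesion data.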
NASA Astrophysics Data System (ADS)
Ni, Yongnian; Wang, Yong; Kokot, Serge
2008-10-01
A spectrophotometric method for the simultaneous determination of the important pharmaceuticals pefloxacin and its structurally similar metabolite, norfloxacin, is described for the first time. The analysis is based on monitoring a kinetic spectrophotometric reaction of the two analytes with potassium permanganate as the oxidant. The reaction process was followed through the absorbance decrease of potassium permanganate at 526 nm and the accompanying increase of the product, potassium manganate, at 608 nm. It was essential to use multivariate calibration to overcome severe spectral overlaps and similarities in reaction kinetics. Calibration curves for the individual analytes showed linear relationships over the concentration ranges of 1.0-11.5 mg L⁻¹ (at 526 and 608 nm) for pefloxacin and 0.15-1.8 mg L⁻¹ (at 526 and 608 nm) for norfloxacin. Various multivariate calibration models were applied at the two analytical wavelengths for the simultaneous prediction of the two analytes, including classical least squares (CLS), principal component regression (PCR), partial least squares (PLS), radial basis function-artificial neural network (RBF-ANN) and principal component-radial basis function-artificial neural network (PC-RBF-ANN). PLS and PC-RBF-ANN calibrations with the data collected at 526 nm were the preferred methods (%RPE_T ≈ 5), with LODs for pefloxacin and norfloxacin of 0.36 and 0.06 mg L⁻¹, respectively. The proposed method was then applied successfully to the simultaneous determination of pefloxacin and norfloxacin in pharmaceutical and human plasma samples. The results compared well with those from an alternative HPLC analysis.
An immune-related lncRNA signature for patients with anaplastic gliomas.
Wang, Wen; Zhao, Zheng; Yang, Fan; Wang, Haoyuan; Wu, Fan; Liang, Tingyu; Yan, Xiaoyan; Li, Jiye; Lan, Qing; Wang, Jiangfei; Zhao, Jizong
2018-01-01
We investigated immune-related long non-coding RNAs (lncRNAs) that may be exploited as potential therapeutic targets in anaplastic gliomas. We obtained 572 lncRNAs and 317 immune genes from the Chinese Glioma Genome Atlas microarray and constructed immune-related lncRNA co-expression networks to identify immune-related lncRNAs. Two additional datasets (GSE16011, REMBRANDT) were used for validation. Gene set enrichment analysis and principal component analysis were used for functional annotation. A signature of nine immune-related lncRNAs (SNHG8, PGM5-AS1, ST20-AS1, LINC00937, AGAP2-AS1, MIR155HG, TUG1, MAPKAPK5-AS1, and HCG18) was identified in patients with anaplastic gliomas. Patients in the low-risk group showed longer overall survival (OS) and progression-free survival than those in the high-risk group (P < 0.0001; P < 0.0001). Additionally, patients in the high-risk group displayed no deletion of chromosomal arms 1p and/or 19q, isocitrate dehydrogenase wild-type status, classical and mesenchymal TCGA subtypes, the G3 CGGA subtype, and lower Karnofsky performance scores (KPS). Moreover, the signature was an independent prognostic factor significantly associated with OS (P = 0.000, hazard ratio (HR) = 1.434). These findings were further validated in the two additional datasets (GSE16011, REMBRANDT). The low-risk and high-risk groups displayed different immune statuses based on principal components analysis. Our results show that the nine immune-related lncRNA signature has prognostic value for anaplastic gliomas.
A Model Comparison for Characterizing Protein Motions from Structure
NASA Astrophysics Data System (ADS)
David, Charles; Jacobs, Donald
2011-10-01
A comparative study is made using three computational models that characterize native state dynamics starting from known protein structures taken from four distinct SCOP classifications. A geometrical simulation is performed, and the results are compared to the elastic network model and molecular dynamics. The essential dynamics is quantified by a direct analysis of a mode subspace constructed from ANM and a principal component analysis on both the FRODA and MD trajectories, using the root mean square inner product and principal angles. Relative subspace sizes and overlaps are visualized using the projection of displacement vectors on the model modes. Additionally, a mode subspace is constructed from PCA on an exemplar set of X-ray crystal structures in order to determine similarity with respect to the generated ensembles. Quantitative analysis reveals significant overlap across the three model subspaces and the model-independent subspace. These results indicate that structure is the key determinant of native state dynamics.
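The root mean square inner product (RMSIP) used above to compare mode subspaces has a compact standard definition; a minimal sketch on hypothetical orthonormal modes (the protein data themselves are not reproduced here):

```python
import numpy as np

def rmsip(A, B):
    """Root mean square inner product between two sets of normalized modes.

    A, B: (n_coords, k) matrices whose columns are unit-norm modes.
    Returns a value in [0, 1]; 1 means the subspaces coincide.
    """
    overlap = A.T @ B                       # k x k matrix of inner products
    k = A.shape[1]
    return np.sqrt(np.sum(overlap**2) / k)

# Sanity check with made-up modes: a subspace compared with itself gives 1,
# and compared with an orthogonal complement gives 0.
q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(30, 10)))
same = rmsip(q[:, :5], q[:, :5])
disjoint = rmsip(q[:, :5], q[:, 5:])
```

In the study's setting, `A` would hold the leading ANM modes and `B` the leading PCA modes of a FRODA or MD trajectory, both expressed in the same Cartesian coordinates.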
A single determinant dominates the rate of yeast protein evolution.
Drummond, D Allan; Raval, Alpan; Wilke, Claus O
2006-02-01
A gene's rate of sequence evolution is among the most fundamental evolutionary quantities in common use, but what determines evolutionary rates has remained unclear. Here, we carry out the first combined analysis of seven predictors (gene expression level, dispensability, protein abundance, codon adaptation index, gene length, number of protein-protein interactions, and the gene's centrality in the interaction network) previously reported to have independent influences on protein evolutionary rates. Strikingly, our analysis reveals a single dominant variable linked to the number of translation events which explains 40-fold more variation in evolutionary rate than any other, suggesting that protein evolutionary rate has a single major determinant among the seven predictors. The dominant variable explains nearly half the variation in the rate of synonymous and protein evolution. We show that the two most commonly used methods to disentangle the determinants of evolutionary rate, partial correlation analysis and ordinary multivariate regression, produce misleading or spurious results when applied to noisy biological data. We overcome these difficulties by employing principal component regression, a multivariate regression of evolutionary rate against the principal components of the predictor variables. Our results support the hypothesis that translational selection governs the rate of synonymous and protein sequence evolution in yeast.
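The principal component regression the authors employ, regressing the response on the leading PCs of correlated predictors rather than on the raw collinear variables, can be sketched on synthetic data. Everything here (seven noisy copies of one latent driver, the choice of one retained component) is an illustrative assumption, not the yeast analysis itself:

```python
import numpy as np

# Seven correlated predictors driven by one dominant latent variable,
# mimicking the collinearity problem the abstract describes.
rng = np.random.default_rng(3)
n = 200
latent = rng.normal(size=n)
X = np.column_stack([latent + 0.1 * rng.normal(size=n) for _ in range(7)])
y = 2.0 * latent + 0.1 * rng.normal(size=n)

# Standardize predictors and compute PC scores via SVD.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
u, s, vt = np.linalg.svd(Z, full_matrices=False)
pcs = u * s                                   # principal component scores

# Regress the response on the dominant component only.
k = 1
coef, *_ = np.linalg.lstsq(pcs[:, :k], y - y.mean(), rcond=None)
y_hat = pcs[:, :k] @ coef + y.mean()
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
```

Because the PCs are mutually orthogonal, the regression coefficients are stable under the noise that makes ordinary multivariate regression misleading, which is the point the abstract makes.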
Brown, C. Erwin
1993-01-01
Correlation analysis, in conjunction with principal-component and multiple-regression analyses, was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine dimensions of property variation of samples, and to filter out variables containing similar information. Principal-component and correlation analyses showed that porosity is related to the other measured variables and that permeability is most related to porosity and grain size. Four principal components were found to be significant in explaining the variance of the data. Stepwise multiple-regression analysis was used to see how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not totally predicted by the other variables, but the regression is significant at the 5% level.
de la Iglesia-Vaya, Maria; Escartí, Maria José; Molina-Mateo, Jose; Martí-Bonmatí, Luis; Gadea, Marien; Castellanos, Francisco Xavier; Aguilar García-Iturrospe, Eduardo J.; Robles, Montserrat; Biswal, Bharat B.; Sanjuan, Julio
2014-01-01
Auditory hallucinations (AH) are the most frequent positive symptoms in patients with schizophrenia. Hallucinations have been related to emotional processing disturbances, altered functional connectivity and effective connectivity deficits. Previously, we observed that, compared to healthy controls, the limbic network responses of patients with auditory hallucinations differed when the subjects were listening to emotionally charged words. We aimed to compare the synchrony patterns and effective connectivity of task-related networks between schizophrenia patients with and without AH and healthy controls. Schizophrenia patients with AH (n = 27) and without AH (n = 14) were compared with healthy participants (n = 31). We examined functional connectivity by analyzing correlations and cross-correlations among previously detected independent component analysis time courses. Granger causality was used to infer the information flow direction in the brain regions. The results demonstrate that the patterns of cortico-cortical functional synchrony differentiated the patients with AH from the patients without AH and from the healthy participants. Additionally, Granger-causal relationships between the networks clearly differentiated the groups. In the patients with AH, the principal causal source was an occipital–cerebellar component, versus a temporal component in the patients without AH and the healthy controls. These data indicate that an anomalous process of neural connectivity exists when patients with AH process emotional auditory stimuli. Additionally, a central role is suggested for the cerebellum in processing emotional stimuli in patients with persistent AH. PMID:25379429
Zhang, Xingyu; Kim, Joyce; Patzer, Rachel E; Pitts, Stephen R; Patzer, Aaron; Schrager, Justin D
2017-10-26
To describe and compare logistic regression and neural network modeling strategies for predicting hospital admission or transfer following initial presentation to Emergency Department (ED) triage, with and without the addition of natural language processing elements. Using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS), a cross-sectional probability sample of United States EDs from the 2012 and 2013 survey years, we developed several predictive models with the outcome being admission to the hospital or transfer vs. discharge home. We included patient characteristics immediately available after the patient has presented to the ED and undergone triage. We used this information to construct logistic regression (LR) and multilayer neural network (MLNN) models, which included natural language processing (NLP) and principal component analysis applied to the patient's reason for visit. Ten-fold cross-validation was used to test the predictive capacity of each model, and areas under the receiver operating characteristic curve (AUC) were calculated for each model. Of the 47,200 ED visits from 642 hospitals, 6,335 (13.42%) resulted in hospital admission (or transfer). A total of 48 principal components were extracted by NLP from the reason-for-visit fields, which explained 75% of the overall variance for hospitalization. In the model including only structured variables, the AUC was 0.824 (95% CI 0.818-0.830) for logistic regression and 0.823 (95% CI 0.817-0.829) for MLNN. Models including only free-text information generated AUCs of 0.742 (95% CI 0.731-0.753) for logistic regression and 0.753 (95% CI 0.742-0.764) for MLNN. When both structured and free-text variables were included, the AUC reached 0.846 (95% CI 0.839-0.853) for logistic regression and 0.844 (95% CI 0.836-0.852) for MLNN.
The predictive accuracy of hospital admission or transfer for patients who presented to ED triage overall was good, and was improved with the inclusion of free text data from a patient's reason for visit regardless of modeling approach. Natural language processing and neural networks that incorporate patient-reported outcome free text may increase predictive accuracy for hospital admission.
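The free-text pathway described above (vectorize the reason-for-visit text, reduce it to latent components, feed the components to a classifier) can be sketched with scikit-learn. The texts, labels, and component count below are invented for illustration; the study's actual NLP pipeline and 48-component choice are not reproduced:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical reason-for-visit strings and admit/discharge labels.
texts = ["chest pain shortness of breath", "ankle sprain after fall",
         "severe chest pain radiating arm", "sore throat and cough",
         "crushing chest pressure sweating", "minor wrist pain"]
admitted = [1, 0, 1, 0, 1, 0]

# TF-IDF -> truncated SVD (PCA-like components on sparse text) -> LR.
model = make_pipeline(TfidfVectorizer(),
                      TruncatedSVD(n_components=3, random_state=0),
                      LogisticRegression())
model.fit(texts, admitted)
probs = model.predict_proba(texts)[:, 1]   # admission probabilities
```

In the combined model, the SVD components would simply be concatenated with the structured triage variables before fitting.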
Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo
2016-07-12
Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and the least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall, the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach for analysing WBE data.
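One common way to implement Fourier-basis FPCA, projecting each city's weekly series onto a small Fourier basis and then running ordinary PCA on the coefficients, can be sketched as follows. The data are synthetic and the smoothing-parameter selection the study performs is omitted:

```python
import numpy as np

# Synthetic stand-in for 42 cities x 7 daily measurements.
rng = np.random.default_rng(4)
n_cities, n_days = 42, 7
t = np.arange(n_days) / n_days

# Fourier basis: constant term plus two sine/cosine harmonics (5 functions).
basis = np.column_stack([np.ones(n_days),
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])

series = rng.normal(size=(n_cities, n_days))
# Least-squares projection of every city's curve onto the basis.
coefs, *_ = np.linalg.lstsq(basis, series.T, rcond=None)
coefs = coefs.T                                 # (city x basis-coefficient)

# Ordinary PCA on the smoothed (coefficient) representation.
c = coefs - coefs.mean(axis=0)
u, s, vt = np.linalg.svd(c, full_matrices=False)
explained = s**2 / np.sum(s**2)
fpc_scores = u[:, :3] * s[:3]                   # first three functional PCs
```

Truncating the basis to a few harmonics is what makes the representation smooth; the B-spline variant differs only in the `basis` matrix.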
Bhakat, Soumendranath; Martin, Alberto J M; Soliman, Mahmoud E S
2014-08-01
The emergence of drug-resistant strains of HIV-1 reverse transcriptase (HIV RT) remains of prime interest in relation to viral pathogenesis as well as drug development. Among those mutations, M184V was found to cause a complete loss of ligand fitness. In this study, we report the first account of the molecular impact of the M184V mutation on HIV RT resistance to 3TC (lamivudine) using an integrated computational approach involving molecular dynamics simulation, binding free energy analysis, principal component analysis (PCA) and residue interaction networks (RINs). Results clearly confirmed that the M184V mutation leads to a steric conflict between 3TC and the beta-branched side chain of valine, decreases the ligand (3TC) binding affinity by ∼7 kcal mol⁻¹ compared to the wild type, changes the overall conformational landscape of the protein and distorts the native enzyme residue-residue interaction network. The comprehensive molecular insight gained from this study should be of great importance in understanding drug resistance against HIV RT as well as in assisting the design of novel reverse transcriptase inhibitors with high ligand efficacy against resistant strains.
Han, Sheng-Nan
2014-07-01
Chemometrics is a branch of chemistry that is widely applied across analytical chemistry. Using theories and methods from mathematics, statistics, computer science and related disciplines, chemometrics optimizes the chemical measurement process and maximizes the chemical and other information extracted from measurement data on material systems. In recent years, traditional Chinese medicine has attracted widespread attention. A key problem in its study has been how to interpret the relationship between the various chemical components and their efficacy, a problem that seriously restricts the modernization of Chinese medicine. By bringing multivariate analysis methods into chemical research, chemometrics has become an effective tool in composition-activity relationship research on Chinese medicine. This article reviews recent applications of chemometric methods in composition-activity relationship research. The applications of multivariate statistical analysis methods (such as regression analysis, correlation analysis and principal component analysis) and artificial neural networks (such as back-propagation neural networks, radial basis function neural networks and support vector machines) are summarized, including their fundamental principles, research content, and advantages and disadvantages. Finally, the main outstanding problems and prospects for future research are discussed.
PCA leverage: outlier detection for high-dimensional functional magnetic resonance imaging data.
Mejia, Amanda F; Nebel, Mary Beth; Eloyan, Ani; Caffo, Brian; Lindquist, Martin A
2017-07-01
Outlier detection for high-dimensional (HD) data is a popular topic in modern statistical research. However, one source of HD data that has received relatively little attention is functional magnetic resonance images (fMRI), which consist of hundreds of thousands of measurements sampled at hundreds of time points. At a time when the availability of fMRI data is rapidly growing, primarily through large, publicly available grassroots datasets, automated quality control and outlier detection methods are greatly needed. We propose principal components analysis (PCA) leverage and demonstrate how it can be used to identify outlying time points in an fMRI run. Furthermore, PCA leverage is a measure of the influence of each observation on the estimation of principal components, which are often of interest in fMRI data. We also propose an alternative measure, PCA robust distance, which is less sensitive to outliers and has controllable statistical properties. The proposed methods are validated through simulation studies and are shown to be highly accurate. We also conduct a reliability study using resting-state fMRI data from the Autism Brain Imaging Data Exchange and find that removal of outliers using the proposed methods results in more reliable estimation of subject-level resting-state networks using independent components analysis. © The Author 2017. Published by Oxford University Press. All rights reserved.
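PCA leverage has a simple form: it is the diagonal of the hat matrix built from the retained left singular vectors of the centered time-by-voxel matrix. A minimal sketch on a synthetic run with one injected outlying time point (the matrix sizes, component count, and 3x-mean flagging threshold are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

# Synthetic (time x voxel) data with an artificial spike at time point 42.
rng = np.random.default_rng(5)
T, V = 100, 500
Y = rng.normal(size=(T, V))
Y[42] += 5.0

# Center over time, take the SVD, and keep k components.
Yc = Y - Y.mean(axis=0)
u, s, vt = np.linalg.svd(Yc, full_matrices=False)
k = 5
# Leverage of time point t = t-th diagonal entry of U_k U_k^T.
leverage = np.sum(u[:, :k]**2, axis=1)

# Leverages average k / T; flag time points far above that mean.
flags = np.where(leverage > 3 * k / T)[0]
```

Because the leverages sum to exactly `k`, the mean `k / T` gives a natural reference level for flagging, which is the intuition behind the method.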
40 CFR 62.14505 - What are the principal components of this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 8 2010-07-01 2010-07-01 false What are the principal components of this subpart? 62.14505 Section 62.14505 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... components of this subpart? This subpart contains the eleven major components listed in paragraphs (a...
Spatial and temporal analysis of the total electron content over China during 2011-2014
NASA Astrophysics Data System (ADS)
Zheng, Jianchang; Zhao, Biqiang; Xiong, Bo; Wan, Weixing
2016-06-01
In the present work we investigate variations of ionospheric total electron content (TEC) with empirical orthogonal function (EOF) analysis. The four-year TEC data set is derived from ∼250 GPS stations of the Crustal Movement Observation Network of China (CMONOC) over the East Asian area (30-55°N, 70-140°E) from January 2011 to December 2014. The first two EOF components together account for ∼93.78% of the total variance of the original TEC data set; the first EOF component represents a spatial variability of the semi-annual variation, while the second exhibits a pronounced east-west longitudinal difference with respect to the zero-valued geomagnetic declination line. In addition, the climatology of the vertical plasma drift velocity vdz induced by the HWM zonal wind field (∼300 km) is studied. Results show that vdz displays a significant east-west longitudinal difference at 10:00 LT and 20:00 LT, and that its daytime temporal variation is consistent with the second EOF principal component, which suggests that the east-west longitudinal variability is partly caused by the thermospheric zonal wind and geomagnetic declination. It is expected that with this dense GPS network, local ionospheric variability can be described more accurately and a more realistic ionospheric model can be constructed for satellite navigation and radio propagation.
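EOF analysis of a space-time field reduces to an SVD of the anomaly matrix: the right singular vectors are the spatial EOF patterns and the scaled left singular vectors are their time series (the principal components). A sketch on a synthetic TEC-like field with a built-in semi-annual oscillation; the grid, time span, and signal are invented for illustration:

```python
import numpy as np

# Synthetic (time x grid) field dominated by a semi-annual oscillation.
rng = np.random.default_rng(6)
n_times, n_grid = 365, 120
t = np.arange(n_times)
pattern = rng.normal(size=n_grid)
field = (np.outer(np.sin(4 * np.pi * t / 365), pattern)
         + 0.1 * rng.normal(size=(n_times, n_grid)))

# Remove the time mean, then decompose.
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                           # rows: spatial EOF patterns
pcs = u * s                         # columns: EOF time series (PCs)
var_frac = s**2 / np.sum(s**2)      # variance explained per EOF
```

Here the first EOF recovers the planted oscillation and explains almost all the variance, mirroring how the study's first two EOFs capture ∼93.78% of the TEC variability.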
Hoshino, Osamu
2006-12-01
Although the anatomy and physiology of cortical interneurons are well understood, little is known about how they contribute to the ongoing spontaneous neuronal activity that can greatly influence subsequent neuronal information processing. By simulating a cortical neural network model of an early sensory area, we investigated whether and how two distinct types of inhibitory interneurons, fast-spiking interneurons with narrow axonal arbors and slow-spiking interneurons with wide axonal arbors, have a spatiotemporal influence on the ongoing activity of principal cells and on subsequent cognitive information processing. In the model, dynamic cell assemblies, or population activation of principal cells, expressed information about specific sensory features. Within cell assemblies, fast-spiking interneurons exert a feedback inhibitory effect on principal cells. Between cell assemblies, slow-spiking interneurons exert a lateral inhibitory effect on principal cells. Here, we show that these interneurons keep the network at a subthreshold level for action potential generation in the ongoing state, which can accelerate the reaction time of principal cells to sensory stimulation. We suggest that appropriately timed inhibition mediated by fast-spiking and slow-spiking interneurons allows the network to remain near threshold for rapid responses to input.
Transforming Equity-Oriented Leaders: Principal Residency Network Program Evaluation
ERIC Educational Resources Information Center
Braun, Donna; Billups, Felice D.; Gable, Robert K.
2013-01-01
After 12 years focused on developing school leaders who act as change agents for educational equity, the Principal Residency Network (PRN) partnered with Johnson and Wales University's Center for Research and Evaluation to conduct a utilization-focused (Patton, 2002) program evaluation funded by a grant from the Rhode Island Foundation. The PRN…
System Biology Approach: Gene Network Analysis for Muscular Dystrophy.
Censi, Federica; Calcagnini, Giovanni; Mattei, Eugenio; Giuliani, Alessandro
2018-01-01
Phenotypic changes at different organization levels, from the cell to the entire organism, are associated with changes in the pattern of gene expression. These changes involve the entire genome expression pattern and rely heavily upon correlation patterns among genes. The classical approach used to analyze gene expression data builds on applying supervised statistical techniques to detect genes differentially expressed between two or more phenotypes (e.g., normal vs. disease). An a posteriori, unsupervised approach based on principal component analysis (PCA) and the subsequent construction of gene correlation networks can shed light on unexpected behaviour of the gene regulation system while maintaining a more naturalistic view of the studied system. In this chapter we applied an unsupervised method to discriminate DMD patients from controls. The genes with the highest absolute scores in discriminating the groups were then analyzed in terms of gene expression networks, on the basis of their mutual correlations in the two groups. The correlation network structures suggest two different modes of gene regulation in the two groups, reminiscent of important aspects of DMD pathogenesis.
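The two-step idea above, score genes by their weight on the discriminating principal component, then build a thresholded correlation network per group, can be sketched on synthetic data. The sample sizes, gene counts, and 0.7 correlation threshold are illustrative assumptions:

```python
import numpy as np

# Synthetic expression matrices (samples x genes) for two groups, with the
# first five genes artificially shifted in the patient group.
rng = np.random.default_rng(7)
n_genes = 30
expr_ctrl = rng.normal(size=(10, n_genes))
expr_dmd = rng.normal(size=(12, n_genes))
expr_dmd[:, :5] += 2.0

# PCA on the pooled, standardized data; rank genes by |PC1 weight|.
X = np.vstack([expr_ctrl, expr_dmd])
Z = (X - X.mean(axis=0)) / X.std(axis=0)
u, s, vt = np.linalg.svd(Z, full_matrices=False)
top_genes = np.argsort(np.abs(vt[0]))[::-1][:5]

def corr_network(expr, genes, threshold=0.7):
    """Adjacency matrix: edge where |gene-gene correlation| > threshold."""
    r = np.corrcoef(expr[:, genes], rowvar=False)
    adj = (np.abs(r) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

# Build one correlation network per group over the top-scoring genes.
adj_ctrl = corr_network(expr_ctrl, top_genes)
adj_dmd = corr_network(expr_dmd, top_genes)
```

Comparing `adj_ctrl` with `adj_dmd` (edge counts, connected components) is what would reveal group-specific regulation modes in the chapter's approach.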
Rautureau, S; Dufour, B; Durand, B
2012-07-01
The networks generated by live animal movements are the principal vector for the propagation of infectious agents between farms, and their topology strongly affects how fast a disease may spread. The structural characteristics of networks may thus provide indicators of network vulnerability to the spread of infectious disease. This study applied social network analysis methods to describe the French swine trade network. Initial analysis involved calculating several parameters to characterize networks and then identifying high-risk subgroups of holdings at different time scales. Holding-specific centrality measurements ('degree', 'betweenness' and 'ingoing infection chain'), which summarize the place and role of holdings in the network, were compared according to production type. In addition, network components and communities, areas where connectedness is particularly high and could influence the speed and extent of a disease, were identified and analysed. Dealer holdings stood out because of their high centrality values, suggesting that these holdings may control the flow of animals in part of the network. Herds with growing units had higher degree and betweenness centrality values, representing central positions for both spreading and receiving disease, whereas herds with finishing units had higher in-degree and ingoing infection chain centrality values and appeared more vulnerable, with many contacts through live animal movements and thus a potentially higher risk of introduction of contagious diseases. This reflects the dynamics of the swine trade, with downward movements along the production chain. However, the significant heterogeneity of farms with several production units did not reveal any particular type of production for targeting disease surveillance or control. Moreover, no giant strongly connected component was observed; the network was instead organized into communities of small or medium size (<20% of network size).
Because of this fragmentation, the swine trade network appeared less structurally vulnerable than ruminant trade networks. This fragmentation is explained by the hierarchical structure, which thus limits the structural vulnerability of the global trade network. However, inside communities, the hierarchical structure of the swine production system would favour the spread of an infectious agent (especially if introduced in breeding herds).
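The centrality measures the study compares can be sketched with networkx on a toy directed trade network. The holdings and movements below are invented, and 'ingoing infection chain' is approximated here by the set of holdings from which a given holding is reachable:

```python
import networkx as nx

# Hypothetical movements: breeder -> growers -> dealer -> finishers.
movements = [("breeder", "grower1"), ("breeder", "grower2"),
             ("grower1", "dealer"), ("grower2", "dealer"),
             ("dealer", "finisher1"), ("dealer", "finisher2")]
G = nx.DiGraph(movements)

in_deg = dict(G.in_degree())                 # contacts received
betweenness = nx.betweenness_centrality(G)   # bridging role in trade flows

def ingoing_infection_chain(G, node):
    """Holdings from which `node` can be reached via animal movements."""
    return nx.ancestors(G, node)

chain_finisher = ingoing_infection_chain(G, "finisher1")
most_central = max(betweenness, key=betweenness.get)
```

In this toy chain the dealer sits on every upstream-to-downstream path, so it gets the highest betweenness, matching the study's finding that dealer holdings control part of the animal flow.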
SpectralNET – an application for spectral graph analysis and visualization
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-01-01
Background: Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results: Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion: SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request. PMID:16236170
SpectralNET – an application for spectral graph analysis and visualization.
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-10-19
Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.
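The Laplacian-eigenvector visualization SpectralNET offers rests on a standard construction: embed each node using the low-order eigenvectors of the normalized Laplacian. A minimal sketch on a small invented adjacency matrix (SpectralNET itself is a .NET application; this is only the underlying math in Python):

```python
import numpy as np

# Toy undirected graph of two loosely connected triangles.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_norm = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt

eigvals, eigvecs = np.linalg.eigh(L_norm)
# Skip the trivial first eigenvector; use the next two as 2-D coordinates.
coords = eigvecs[:, 1:3]
```

The second eigenvector (the Fiedler-like direction) separates the two triangles, which is why this embedding gives the "elegant view of global graph structure" the abstract mentions.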
Evaluation of Deep Learning Representations of Spatial Storm Data
NASA Astrophysics Data System (ADS)
Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.
2017-12-01
The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. 
Further evaluations are being performed to determine how the choice of input variables affects the results.
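As an illustration of the PCA encoding step described above, the sketch below projects synthetic stand-in "storm patches" (the NCAR ensemble data are not reproduced here) onto their leading principal components, producing the kind of low-dimensional input a logistic regression or random forest could consume:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for storm patches: 200 flattened 8x8 "images"
# with one injected dominant mode of variance.
patches = rng.normal(size=(200, 64))
patches[:, 0] += 5 * rng.normal(size=200)

# PCA via SVD of the centered data matrix.
mean = patches.mean(axis=0)
centered = patches - mean
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 10                          # keep the 10 leading components
encoding = centered @ Vt[:k].T  # low-dimensional input for a downstream model

# Explained variance fractions, ordered from largest to smallest.
explained = s**2 / np.sum(s**2)
```

The 10-dimensional `encoding` would then replace the raw 64-pixel patch as the feature vector for the statistical models.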
Distributed framework for dynamic telescope and instrument control
NASA Astrophysics Data System (ADS)
Ames, Troy J.; Case, Lynne
2003-02-01
Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are too inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC), that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate with components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and to discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instrument teams are working with the IRC development team to develop custom IRC components to control their instruments. These instruments include: the High resolution Airborne Wideband Camera (HAWC), a first-light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); the Submillimeter And Far Infrared Experiment (SAFIRE), a Principal Investigator instrument for SOFIA; and the Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). 
Most recently, we have been working with the Submillimetre High
Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki
2004-04-01
We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.
Principals' Leadership Network. Focusing on the Image of the Principal
ERIC Educational Resources Information Center
Newby, Cheryl Riggins; Hayden, Hal
2004-01-01
A recent study by The National Association of Elementary School Principals (NAESP), Principals in the Public: Engaging Community Support (2000) found that communication, marketing, public affairs and public relations and engagement activities are now given more time and importance than ever before. According to the study, public support builds…
Principals' Perceptions Regarding Their Supervision and Evaluation
ERIC Educational Resources Information Center
Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann
2015-01-01
This study examined the perceptions of principals concerning principal evaluation and supervisory feedback. Principals were asked two open-ended questions. Respondents included 82 principals in the Rocky Mountain region. The emerging themes were "Superintendent Performance," "Principal Evaluation Components," "Specific…
Cheng, Lin; Zhu, Yang; Sun, Junfeng; Deng, Lifu; He, Naying; Yang, Yang; Ling, Huawei; Ayaz, Hasan; Fu, Yi; Tong, Shanbao
2018-01-25
Task-related reorganization of functional connectivity (FC) has been widely investigated. Under classic static FC analysis, brain networks under task and rest have been shown to share a general spatial similarity. However, brain activity and cognitive processes are believed to be dynamic and adaptive. Since static FC inherently ignores the distinct temporal patterns of rest and task, dynamic FC may be a more suitable technique for characterizing the brain's dynamic and adaptive activities. In this study, we adopted k-means clustering to investigate task-related spatiotemporal reorganization of dynamic brain networks, and hypothesized that dynamic FC would be able to reveal the link between resting-state and task-state brain organization, including broadly similar spatial patterns but distinct temporal patterns. To test this hypothesis, this study examined the dynamic FC in the default-mode network (DMN) and the motor-related network (MN) using Blood-Oxygenation-Level-Dependent (BOLD) fMRI data from 26 healthy subjects during rest (REST) and a hand closing-and-opening (HCO) task. Two principal FC states in REST and one principal FC state in HCO were identified. The first principal FC state in REST was found to be similar to that in HCO, which appeared to represent intrinsic network architecture and validated the broadly similar spatial patterns between REST and HCO. However, the second principal FC state in REST, with its much shorter "dwell time," implied a transient functional relationship between the DMN and MN during REST. In addition, more frequent shifting between the two principal FC states indicated that the brain network dynamically maintained a "default mode" in the motor system during REST, whereas the presence of a single principal FC state and reduced FC variability implied more temporally stable connectivity during HCO, validating the distinct temporal patterns between REST and HCO. 
Our results further demonstrated that dynamic FC analysis could offer unique insights in understanding how the brain reorganizes itself during rest and task states, and the ways in which the brain adaptively responds to the cognitive requirements of tasks.
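The k-means identification of principal FC states can be sketched as follows. The windowed connectivity values are synthetic stand-ins, and this plain Lloyd's-algorithm implementation is illustrative, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic windowed FC: 60 sliding windows, each a flattened set of 15
# connectivity values, alternating between two underlying "states".
state_a, state_b = np.full(15, 0.8), np.full(15, 0.1)
windows = np.array([state_a if i % 2 == 0 else state_b for i in range(60)])
windows = windows + 0.05 * rng.normal(size=windows.shape)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each window to the nearest centroid,
    then recompute centroids, for a fixed number of iterations."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(windows, k=2)
dwell = np.bincount(labels, minlength=2) / len(labels)  # time spent per FC state
```

The "dwell time" of each principal FC state is then simply the fraction of windows assigned to it.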
Classification of white wine aromas with an electronic nose.
Lozano, J; Santos, J P; Horrillo, M C
2005-09-15
This paper reports the use of a tin dioxide multisensor array based electronic nose for the recognition of 29 typical aromas in white wine. A headspace technique was used to extract the aroma of the wine. Multivariate analysis, including principal component analysis (PCA) as well as probabilistic neural networks (PNNs), was used to identify the main aroma added to the wine. The results showed that, in spite of the strong influence of ethanol and other majority compounds of the wine, the system could correctly discriminate the aromatic compounds added to the wine with a minimum accuracy of 97.2%.
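A probabilistic neural network is in essence a Parzen-window classifier: one Gaussian kernel per training pattern, summed per class, with the largest class sum winning. A minimal sketch on synthetic sensor-array readings (the class layout, sensor count, and kernel width here are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sensor-array readings: 2 aroma classes, 6 sensors each.
train_x = np.vstack([rng.normal(0.0, 0.3, size=(20, 6)),
                     rng.normal(2.0, 0.3, size=(20, 6))])
train_y = np.array([0] * 20 + [1] * 20)

def pnn_predict(x, train_x, train_y, sigma=0.5):
    """Probabilistic neural network: one Gaussian kernel per training
    pattern, summed per class; predict the class with the larger sum."""
    d2 = ((train_x - x) ** 2).sum(axis=1)
    kernel = np.exp(-d2 / (2 * sigma ** 2))
    scores = [kernel[train_y == c].sum() for c in np.unique(train_y)]
    return int(np.argmax(scores))

pred0 = pnn_predict(np.full(6, 0.1), train_x, train_y)  # query near class 0
pred1 = pnn_predict(np.full(6, 1.9), train_x, train_y)  # query near class 1
```

The single smoothing parameter `sigma` is the only quantity to tune, which is one reason PNNs are popular for small e-nose datasets.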
Comparison of three chemometrics methods for near-infrared spectra of glucose in the whole blood
NASA Astrophysics Data System (ADS)
Zhang, Hongyan; Ding, Dong; Li, Xin; Chen, Yu; Tang, Yuguo
2005-01-01
Principal Component Regression (PCR), Partial Least Squares (PLS), and Artificial Neural Network (ANN) methods are used to analyze the near-infrared (NIR) spectra of glucose in whole blood. With each method, the calibration model is built in the spectral band where glucose absorbs much more strongly than water, fat, and protein, and the correlation coefficients of the models are reported. By comparing these results, a suitable method for analyzing the NIR spectrum of glucose in whole blood is identified.
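Principal component regression can be sketched in a few lines: run PCA, then ordinary least squares on the leading scores. The synthetic "spectra" below are stand-ins for the NIR measurements, and the latent-variable construction is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic NIR-like data: 50 spectra x 100 wavelengths, with the
# "glucose" signal living in a few latent directions.
n, p, k = 50, 100, 3
latent = rng.normal(size=(n, k))
loadings = rng.normal(size=(k, p))
X = latent @ loadings + 0.01 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -2.0, 0.5])  # target depends only on the latents

# Principal component regression: PCA, then least squares on the scores.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
y_hat = scores @ coef + y.mean()

corr = np.corrcoef(y, y_hat)[0, 1]  # calibration correlation coefficient
```

PLS differs only in choosing components that maximize covariance with `y` rather than variance of `X` alone.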
Spectral discrimination of serum from liver cancer and liver cirrhosis using Raman spectroscopy
NASA Astrophysics Data System (ADS)
Yang, Tianyue; Li, Xiaozhou; Yu, Ting; Sun, Ruomin; Li, Siqi
2011-07-01
In this paper, Raman spectra of human serum were measured using Raman spectroscopy, and the spectra were then analyzed by the multivariate statistical method of principal component analysis (PCA). Linear discriminant analysis (LDA) was then applied to the PCA loading scores to differentiate the diseases, serving as the diagnostic algorithm. An artificial neural network (ANN) was used for cross-validation. The diagnostic sensitivity and specificity of PCA-LDA are 88% and 79%, while those of PCA-ANN are 89% and 95%. These results show that modern analysis methods are useful tools for analyzing serum spectra to diagnose disease.
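The PCA-LDA pipeline can be sketched as follows, including the sensitivity/specificity computation. The two-class "serum spectra" are synthetic stand-ins, and the component count and threshold rule are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "serum spectra": 40 control and 40 disease samples.
X = np.vstack([rng.normal(0.0, 1.0, size=(40, 30)),
               rng.normal(1.0, 1.0, size=(40, 30))])
y = np.array([0] * 40 + [1] * 40)  # 0 = control, 1 = disease

# Step 1: PCA scores (top 5 components).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Step 2: Fisher LDA direction w = Sw^{-1} (mu1 - mu0) on the scores.
mu0, mu1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
proj = Z @ w
threshold = (proj[y == 0].mean() + proj[y == 1].mean()) / 2
pred = (proj > threshold).astype(int)

sensitivity = (pred[y == 1] == 1).mean()  # true positive rate
specificity = (pred[y == 0] == 0).mean()  # true negative rate
```

Here the evaluation is in-sample for brevity; the paper's figures come from cross-validation.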
Localisation of an Unknown Number of Land Mines Using a Network of Vapour Detectors
Chhadé, Hiba Haj; Abdallah, Fahed; Mougharbel, Imad; Gning, Amadou; Julier, Simon; Mihaylova, Lyudmila
2014-01-01
We consider the problem of localising an unknown number of land mines using concentration information provided by a wireless sensor network. A number of vapour sensors/detectors, deployed in the region of interest, are able to detect the concentration of the explosive vapours, emanating from buried land mines. The collected data is communicated to a fusion centre. Using a model for the transport of the explosive chemicals in the air, we determine the unknown number of sources using a Principal Component Analysis (PCA)-based technique. We also formulate the inverse problem of determining the positions and emission rates of the land mines using concentration measurements provided by the wireless sensor network. We present a solution for this problem based on a probabilistic Bayesian technique using a Markov chain Monte Carlo sampling scheme, and we compare it to the least squares optimisation approach. Experiments conducted on simulated data show the effectiveness of the proposed approach. PMID:25384008
NASA Astrophysics Data System (ADS)
Cheng, Jin-ying; Xu, Liang; Lü, Guo-dong; Tang, Jun; Mo, Jia-qing; Lü, Xiao-yi; Gao, Zhi-xian
2017-01-01
A Raman spectroscopy method combined with a neural network is used for the non-invasive and rapid detection of echinococcosis. The Raman spectroscopy measurements are performed on two groups of blood serum samples, from 28 echinococcosis patients and 38 healthy persons, respectively. The normalized Raman reflection spectra show that the reflectivity of the echinococcosis blood serum is higher than that of normal human blood serum in the wavelength ranges of 101-175 nm and 1801-2701 nm. A principal component analysis (PCA) and back-propagation neural network (BPNN) model is then used to obtain the diagnosis results. The diagnosis rates for healthy persons and echinococcosis patients are 93.3333% and 90.9091%, respectively, giving an average final diagnosis rate of 92.1212%. The results demonstrate that Raman spectroscopy analysis of blood serum combined with PCA-BPNN has considerable potential for the non-invasive and rapid detection of echinococcosis.
NASA Astrophysics Data System (ADS)
Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong
2016-05-01
The Raman spectra of tissue from 20 brain tumor patients were recorded in vitro using a confocal microlaser Raman spectroscope with 785 nm excitation. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. In this study, however, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal-tissue diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectral information. Moreover, it is fast and convenient, does not require spectral peak assignment, and achieves a relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
NASA Astrophysics Data System (ADS)
Singal, J.; Shmakova, M.; Gerke, B.; Griffith, R. L.; Lotz, J.
2011-05-01
We present a determination of the effects of including galaxy morphological parameters in photometric redshift estimation with an artificial neural network method. Neural networks, which recognize patterns in the information content of data in an unbiased way, can be a useful estimator of the additional information contained in extra parameters, such as those describing morphology, if the input data are treated on an equal footing. We use imaging and five band photometric magnitudes from the All-wavelength Extended Groth Strip International Survey (AEGIS). It is shown that certain principal components of the morphology information are correlated with galaxy type. However, we find that for the data used the inclusion of morphological information does not have a statistically significant benefit for photometric redshift estimation with the techniques employed here. The inclusion of these parameters may result in a tradeoff between extra information and additional noise, with the additional noise becoming more dominant as more parameters are added.
Asynchronous Gossip for Averaging and Spectral Ranking
NASA Astrophysics Data System (ADS)
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
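Gossip distribution aside, the underlying Perron-Frobenius computation is a multiply-and-normalize iteration; the gossip variant spreads the same updates across network nodes. A centralized power-iteration sketch on a small, made-up nonnegative matrix:

```python
import numpy as np

# A small nonnegative matrix (e.g. a link-weight or reputation matrix).
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

def perron_vector(A, iters=200):
    """Power iteration: repeatedly multiply and renormalize. For a
    primitive nonnegative matrix this converges to the (positive)
    Perron-Frobenius eigenvector."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

v = perron_vector(A)
eigenvalue = v @ A @ v  # Rayleigh quotient approximates the P-F eigenvalue
```

The entries of `v` are exactly the kind of spectral ranking scores the second gossip variant computes in a distributed fashion.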
Line width measurement below 60 nm using an optical interferometer and artificial neural network
NASA Astrophysics Data System (ADS)
See, Chung W.; Smith, Richard J.; Somekh, Michael G.; Yacoot, Andrew
2007-03-01
We have recently described a technique for optical linewidth measurement. The system is currently capable of measuring linewidths down to 60 nm with a precision of 2 nm, and should potentially be able to measure down to 10 nm. The system consists of an ultra-stable interferometer and artificial neural networks (ANNs). The former is used to generate optical profiles, which are input to the ANNs; the outputs of the ANNs are the desired sample parameters. Different types of samples have been tested with equally impressive results. In this paper we discuss the factors that are essential to extending the application of the technique. Two of these factors are signal conditioning and sample classification. Methods capable of performing these tasks, including principal component analysis, will be considered.
Temporal stability in human interaction networks
NASA Astrophysics Data System (ADS)
Fabbri, Renato; Fabbri, Ricardo; Antunes, Deborah Christina; Pisani, Marilia Mello; de Oliveira, Osvaldo Novais
2017-11-01
This paper reports on stable (or invariant) properties of human interaction networks, with benchmarks derived from public email lists. Activity, measured through messages sent, and topology were observed in timeline snapshots at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months. The principal components of the participants in the topological-metrics space remain practically unchanged as different sets of messages are considered. The activity of participants follows the expected scale-free trace, thus yielding the hub, intermediary, and peripheral classes of vertices by comparison against the Erdős-Rényi model. The relative sizes of these three sectors are essentially the same for all email lists and stable over time. Typically, < 15% of the vertices are hubs, 15%-45% are intermediary, and > 45% are peripheral vertices. Similar results for the distribution of participants in the three sectors and for the relative importance of the topological metrics were obtained for 12 additional networks from Facebook, Twitter, and ParticipaBR. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria.
Interdependencies and Causalities in Coupled Financial Networks
Vodenska, Irena; Aoyama, Hideaki; Fujiwara, Yoshi; Iyetomi, Hiroshi; Arai, Yuta
2016-01-01
We explore the foreign exchange and stock market networks for 48 countries from 1999 to 2012 and propose a model, based on complex Hilbert principal component analysis, for extracting significant lead-lag relationships between these markets. The global set of countries, including large and small countries in Europe, the Americas, Asia, and the Middle East, is contrasted with the limited scopes of targets, e.g., G5, G7 or the emerging Asian countries, adopted by previous works. We construct a coupled synchronization network, perform community analysis, and identify the formation of four distinct network communities that are relatively stable over time. In addition to investigating the entire period, we divide the time period into “mild crisis” (1999–2002), “calm” (2003–2006), and “severe crisis” (2007–2012) sub-periods and find that the severe-crisis-period behavior dominates the dynamics in the foreign exchange-equity synchronization network. We observe that in general the foreign exchange market has predictive power for global stock market performance. In addition, the United States, German, and Mexican markets have forecasting power for the performance of other global equity markets. PMID:26977806
Nguyen, Phuong H
2007-05-15
Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower-dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from, e.g., molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly because the principal components are not independent of one another, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.
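The PCA-then-ICA idea can be sketched with a hand-rolled symmetric FastICA. The two synthetic sources below stand in for conformational coordinates, and the details (tanh contrast, iteration count, mixing matrix) are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two independent non-Gaussian sources, mixed linearly -- mimicking
# principal components that are uncorrelated but not independent.
n = 2000
S = np.column_stack([rng.uniform(-1, 1, n), np.sign(rng.normal(size=n))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

# Whitening (the PCA step: decorrelate and rescale to unit variance).
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(Xc.T))
Xw = Xc @ E @ np.diag(d ** -0.5) @ E.T

# Symmetric FastICA with a tanh nonlinearity.
W = rng.normal(size=(2, 2))
for _ in range(200):
    WX = Xw @ W.T
    g, g_prime = np.tanh(WX), 1.0 - np.tanh(WX) ** 2
    W = (g.T @ Xw) / n - np.diag(g_prime.mean(axis=0)) @ W
    U, _, Vt = np.linalg.svd(W)          # symmetric decorrelation:
    W = U @ Vt                           # W <- (W W^T)^{-1/2} W

S_hat = Xw @ W.T  # estimated independent components
```

After unmixing, each recovered column should track exactly one of the original sources (up to sign and permutation), which is what makes per-component state enumeration possible.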
Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.
Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D
2018-05-10
The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 ( IDH1 ) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase ( MGMT ) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.
Liu, Hui-lin; Wan, Xia; Yang, Gong-huan
2013-02-01
To explore the relationship between the strength of tobacco control and the effectiveness of creating smoke-free hospitals, and to summarize the main factors that affect the creation of smoke-free hospitals. A total of 210 hospitals from 7 provinces/municipalities directly under the central government were enrolled in this study using a stratified random sampling method. Principal component analysis and regression analysis were conducted to analyze the strength of tobacco control and the effectiveness of creating smoke-free hospitals. Two principal components were extracted from the strength-of-tobacco-control index, which respectively reflected the tobacco control policies and efforts, and the willingness and leadership of hospital managers regarding tobacco control. The regression analysis indicated that only the first principal component was significantly correlated with progress in creating smoke-free hospitals (P<0.001), i.e. hospitals with higher scores on the first principal component had better achievements in smoke-free environment creation. Tobacco control policies and efforts are critical in creating smoke-free hospitals. Principal component analysis provides a comprehensive and objective tool for evaluating the creation of smoke-free hospitals.
Critical Factors Explaining the Leadership Performance of High-Performing Principals
ERIC Educational Resources Information Center
Hutton, Disraeli M.
2018-01-01
The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…
Molecular dynamics in principal component space.
Michielssens, Servaas; van Erp, Titus S; Kutzner, Carsten; Ceulemans, Arnout; de Groot, Bert L
2012-07-26
A molecular dynamics algorithm in principal component space is presented. It is demonstrated that sampling can be improved without changing the ensemble by assigning masses to the principal components proportional to the inverse square root of the eigenvalues. The setup of the simulation requires no prior knowledge of the system; a short initial MD simulation to extract the eigenvectors and eigenvalues suffices. Independent measures indicated a 6-7 times faster sampling compared to a regular molecular dynamics simulation.
Optimized principal component analysis on coronagraphic images of the fomalhaut system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.
We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and the locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
[A study of Boletus bicolor from different areas using Fourier transform infrared spectrometry].
Zhou, Zai-Jin; Liu, Gang; Ren, Xian-Pei
2010-04-01
It is hard to differentiate the same species of wild-growing mushrooms from different areas by macromorphological features. In this paper, Fourier transform infrared (FTIR) spectroscopy combined with principal component analysis was used to identify 58 samples of Boletus bicolor from five different areas. Based on the fingerprint infrared spectra of the Boletus bicolor samples, principal component analysis was conducted on the 58 spectra in the range of 1350-750 cm(-1) using the statistical software SPSS 13.0. The accumulated contribution ratio of the first three principal components accounts for 88.87%, so they include almost all the information in the samples. The two-dimensional projection plot of the first and second principal components shows a satisfactory clustering effect for the classification and discrimination of Boletus bicolor. All Boletus bicolor samples were divided into five groups with a classification accuracy of 98.3%. The study demonstrated that wild-growing Boletus bicolor from different areas can be identified at the species level by FTIR spectra combined with principal component analysis.
(Re)Thinking Teacher Networking in the Russian Federation
ERIC Educational Resources Information Center
Lapham, Kate; Lindemann-Komarova, Sarah
2013-01-01
This article presents the findings of research in Russia on the degree to which teachers and school principals are active and how they currently network with their colleagues. It builds on the work of David Frost and John Bangs (2012) on teacher self-efficacy using a survey and semi-structured interviews with teachers and principals to collect…
Strategies for reducing large fMRI data sets for independent component analysis.
Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R
2006-06-01
In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
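The CRLS network itself is not reproduced here, but the core idea, extracting a principal component directly from streaming data without ever forming the covariance matrix, can be illustrated with the simpler Oja's rule (an assumption-level stand-in, not the paper's CRLS algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)

# Streaming data with one dominant direction; the covariance matrix is
# never formed, mimicking memory-constrained PC extraction.
direction = np.array([0.8, 0.6])                     # unit-norm leading PC
X = rng.normal(size=(5000, 1)) * direction + 0.1 * rng.normal(size=(5000, 2))

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:                        # single pass over the data stream
    y = w @ x
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term + decay
w /= np.linalg.norm(w)

alignment = abs(w @ direction)     # ~1 when w matches the leading PC
```

Like CRLS-PCA, this update touches each sample once and can be stopped after any number of components, which is exactly the property that makes it attractive for fMRI data too large to fit in memory.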
CHAI, Lian En; LAW, Chow Kuan; MOHAMAD, Mohd Saberi; CHONG, Chuii Khim; CHOON, Yee Wen; DERIS, Safaai; ILLIAS, Rosli Md
2014-01-01
Background: Gene expression data often contain missing expression values. Therefore, several imputation methods have been applied to solve the missing values, which include k-nearest neighbour (kNN), local least squares (LLS), and Bayesian principal component analysis (BPCA). However, the effects of these imputation methods on the modelling of gene regulatory networks from gene expression data have rarely been investigated and analysed using a dynamic Bayesian network (DBN). Methods: In the present study, we separately imputed datasets of the Escherichia coli S.O.S. DNA repair pathway and the Saccharomyces cerevisiae cell cycle pathway with kNN, LLS, and BPCA, and subsequently used these to generate gene regulatory networks (GRNs) using a discrete DBN. We made comparisons on the basis of previous studies in order to select the gene network with the least error. Results: We found that BPCA and LLS performed better on larger networks (based on the S. cerevisiae dataset), whereas kNN performed better on smaller networks (based on the E. coli dataset). Conclusion: The results suggest that the performance of each imputation method is dependent on the size of the dataset, and this subsequently affects the modelling of the resultant GRNs using a DBN. In addition, on the basis of these results, a DBN has the capacity to discover potential edges, as well as display interactions, between genes. PMID:24876803
How multi-segmental patterns deviate in spastic diplegia from typical development.
Zago, Matteo; Sforza, Chiarella; Bona, Alessia; Cimolin, Veronica; Costici, Pier Francesco; Condoluci, Claudia; Galli, Manuela
2017-10-01
The relationship between gait features and coordination in children with Cerebral Palsy is not sufficiently analyzed yet. Principal Component Analysis can help in understanding motion patterns decomposing movement into its fundamental components (Principal Movements). This study aims at quantitatively characterizing the functional connections between multi-joint gait patterns in Cerebral Palsy. 65 children with spastic diplegia aged 10.6 (SD 3.7) years participated in standardized gait analysis trials; 31 typically developing adolescents aged 13.6 (4.4) years were also tested. To determine if posture affects gait patterns, patients were split into Crouch and knee Hyperextension group according to knee flexion angle at standing. 3D coordinates of hips, knees, ankles, metatarsal joints, pelvis and shoulders were submitted to Principal Component Analysis. Four Principal Movements accounted for 99% of global variance; components 1-3 explained major sagittal patterns, components 4-5 referred to movements on frontal plane and component 6 to additional movement refinements. Dimensionality was higher in patients than in controls (p<0.01), and the Crouch group significantly differed from controls in the application of components 1 and 4-6 (p<0.05), while the knee Hyperextension group in components 1-2 and 5 (p<0.05). Compensatory strategies of children with Cerebral Palsy (interactions between main and secondary movement patterns), were objectively determined. Principal Movements can reduce the effort in interpreting gait reports, providing an immediate and quantitative picture of the connections between movement components. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Williams, D. L.; Borden, F. Y.
1977-01-01
Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.
Constrained Principal Component Analysis: Various Applications.
ERIC Educational Resources Information Center
Hunter, Michael; Takane, Yoshio
2002-01-01
Provides example applications of constrained principal component analysis (CPCA) that illustrate the method on a variety of contexts common to psychological research. Two new analyses, decompositions into finer components and fitting higher order structures, are presented, followed by an illustration of CPCA on contingency tables and the CPCA of…
Using principal component analysis for selecting network behavioral anomaly metrics
NASA Astrophysics Data System (ADS)
Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex
2010-04-01
This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics, and those metrics can exhibit unacceptable false alarm rates. For instance, if the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination), this definition may result in an unrealistically high alerting threshold to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds, on a host-by-host or protocol-by-protocol basis, can be done by established multivariate analyses such as PCA. We show how to determine, for each network host, one or more metrics that record the highest available amount of information regarding the baseline behavior and show relevant deviations reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
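A minimal sketch of the metric-selection idea, assuming the simplest possible rule: standardize the candidate metrics, run PCA, and rank metrics by their absolute loading on the first component. The metric names and data are illustrative, not from the paper, and the actual selection methodology is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy host-behavior matrix: rows = observation windows, columns = candidate
# metrics. Names are hypothetical examples of per-host network metrics.
metrics = ["bytes_per_s", "pkts_per_s", "distinct_dsts", "mean_pkt_size"]
X = rng.normal(size=(500, 4))
X[:, 0] = 3.0 * X[:, 1] + rng.normal(scale=0.1, size=500)  # correlated pair

Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize before PCA
_, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)             # variance fraction per component

# Rank metrics by absolute loading on the first component: the top metric
# carries the most baseline-behavior information for this host.
loadings = np.abs(Vt[0])
best = metrics[int(np.argmax(loadings))]
print(best, round(float(explained[0]), 2))
```

In practice such a ranking would be recomputed per host or per protocol, so each anomaly detector watches only the metrics that matter for that entity.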
Zhan, Liang; Zhou, Jiayu; Wang, Yalin; Jin, Yan; Jahanshad, Neda; Prasad, Gautam; Nir, Talia M.; Leonardo, Cassandra D.; Ye, Jieping; Thompson, Paul M.; for the Alzheimer’s Disease Neuroimaging Initiative
2015-01-01
Alzheimer’s disease (AD) involves a gradual breakdown of brain connectivity, and network analyses offer a promising new approach to track and understand disease progression. Even so, our ability to detect degenerative changes in brain networks depends on the methods used. Here we compared several tractography and feature extraction methods to see which ones gave best diagnostic classification for 202 people with AD, mild cognitive impairment or normal cognition, scanned with 41-gradient diffusion-weighted magnetic resonance imaging as part of the Alzheimer’s Disease Neuroimaging Initiative (ADNI) project. We computed brain networks based on whole brain tractography with nine different methods – four of them tensor-based deterministic (FACT, RK2, SL, and TL), two orientation distribution function (ODF)-based deterministic (FACT, RK2), two ODF-based probabilistic approaches (Hough and PICo), and one “ball-and-stick” approach (Probtrackx). Brain networks derived from different tractography algorithms did not differ in terms of classification performance on ADNI, but performing principal components analysis on networks helped classification in some cases. Small differences may still be detectable in a truly vast cohort, but these experiments help assess the relative advantages of different tractography algorithms, and different post-processing choices, when used for classification. PMID:25926791
Particulate matter in the rural settlement during winter time
NASA Astrophysics Data System (ADS)
Olszowski, Tomasz
2017-10-01
The objective of this study was to analyze the variability of ambient particulate mass concentrations in an area of rural development. The analysis used daily and hourly PM2.5 and PM10 levels. Data were derived from measurements with stationary gravimetric samplers and an optical dust meter, and were compared with results from the urban air quality monitoring network in Opole. Principal Component Analysis was used for data analysis, and research hypotheses were checked using the Mann-Whitney U test. It was found that during smog episodes the ratio of the inhalable dust fraction in the rural aerosol is greater than in the urban aerosol. The principal meteorological factors affecting local air quality were identified: air temperature, atmospheric pressure, movement of air masses, and occurrence of precipitation are the most important. It was demonstrated that during temperature inversions the hourly and daily mass concentrations of PM2.5 and PM10 are strongly elevated. The decrease of PM concentrations to a safe level depends principally on the occurrence of wind and precipitation.
Schultz, K K; Bennett, T B; Nordlund, K V; Döpfer, D; Cook, N B
2016-09-01
Transition cow management has been tracked via the Transition Cow Index (TCI; AgSource Cooperative Services, Verona, WI) since 2006. Transition Cow Index was developed to measure the difference between actual and predicted milk yield at first test day to evaluate the relative success of the transition period program. This project aimed to assess TCI in relation to all commonly used Dairy Herd Improvement (DHI) metrics available through AgSource Cooperative Services. Regression analysis was used to isolate variables that were relevant to TCI, and then principal components analysis and network analysis were used to determine the relative strength and relatedness among variables. Finally, cluster analysis was used to segregate herds based on similarity of relevant variables. The DHI data were obtained from 2,131 Wisconsin dairy herds with test-day mean ≥30 cows, which were tested ≥10 times throughout the 2014 calendar year. The original list of 940 DHI variables was reduced through expert-driven selection and regression analysis to 23 variables. The K-means cluster analysis produced 5 distinct clusters. Descriptive statistics were calculated for the 23 variables per cluster grouping. Using principal components analysis, cluster analysis, and network analysis, 4 parameters were isolated as most relevant to TCI; these were energy-corrected milk, 3 measures of intramammary infection (dry cow cure rate, linear somatic cell count score in primiparous cows, and new infection rate), peak ratio, and days in milk at peak milk production. These variables together with cow and newborn calf survival measures form a group of metrics that can be used to assist in the evaluation of overall transition period performance. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ginanjar, Irlandia; Pasaribu, Udjianna S.; Indratno, Sapto W.
2017-03-01
This article presents the application of the principal component analysis (PCA) biplot for the needs of data mining. It aims to simplify and objectify methods for clustering objects in a PCA biplot; the novelty of the paper is a measure that can be used to objectify that clustering. The orthonormal eigenvectors, which are the coefficients of the principal component model, represent an association between the principal components and the initial variables. This association is a valid basis for clustering objects by their principal-axis values: if m principal axes are used in the PCA, the objects can be classified into 2^m clusters. The inter-city buses are clustered based on maintenance-cost data using a two-axis PCA biplot, yielding four groups. The first group is buses with high maintenance costs, especially for lube and brake canvass. The second group is buses with high maintenance costs, especially for tire and filter. The third group is buses with low maintenance costs, especially for lube and brake canvass. The fourth group is buses with low maintenance costs, especially for tire and filter.
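The sign-based clustering rule described above is simple enough to sketch directly: with m principal axes, the pattern of signs of an object's scores assigns it to one of 2^m clusters. The cost data below are randomly generated stand-ins, not the study's actual bus records.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy maintenance-cost table: rows = buses, columns = cost items
# (e.g. lube, brake canvass, tire, filter) -- illustrative only.
X = rng.gamma(shape=2.0, scale=50.0, size=(30, 4))
Xc = X - X.mean(axis=0)

_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
m = 2                                   # principal axes shown in the biplot
scores = Xc @ Vt[:m].T                  # each bus's coordinates on the m axes

# Objective rule: the sign pattern of the scores on the m axes assigns each
# bus to one of 2**m clusters (here: four quadrants of the biplot).
signs = (scores >= 0).astype(int)       # 0/1 per axis
cluster = signs @ (2 ** np.arange(m))   # binary code -> cluster id in 0..3
print(np.bincount(cluster, minlength=2**m))
```

Because the rule depends only on score signs, it needs no tuning parameters, which is what makes the clustering objective rather than visual.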
Kakio, Tomoko; Nagase, Hitomi; Takaoka, Takashi; Yoshida, Naoko; Hirakawa, Junichi; Macha, Susan; Hiroshima, Takashi; Ikeda, Yukihiro; Tsuboi, Hirohito; Kimura, Kazuko
2018-06-01
The World Health Organization has warned that substandard and falsified medical products (SFs) can harm patients and fail to treat the diseases for which they were intended; they affect every region of the world, leading to loss of confidence in medicines, health-care providers, and health systems. Therefore, the development of analytical procedures to detect SFs is extremely important. In this study, we investigated the quality of pharmaceutical tablets containing the antihypertensive candesartan cilexetil, collected in China, Indonesia, Japan, and Myanmar, using the Japanese pharmacopeial analytical procedures for quality control together with principal component analysis (PCA) of Raman spectra obtained with a handheld Raman spectrometer. Some samples showed delayed dissolution and failed to meet the pharmacopeial specification, whereas others failed the assay test; these products appeared to be substandard. PCA showed that all Raman spectra could be explained in terms of two components: the amount of the active pharmaceutical ingredient and the kinds of excipients. The PCA score plot indicated that one substandard product and the falsified tablets have similar principal components in their Raman spectra, in contrast to authentic products. The locations of samples within the PCA score plot varied according to the source country, suggesting that manufacturers in different countries use different excipients. Our results indicate that the handheld Raman device will be useful for detection of SFs in the field, and PCA of the Raman data clarifies the differences in chemical properties between good-quality products and the SFs that circulate in the Asian market.
Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.
Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko
2017-12-01
Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
Managing Stress in the Principalship.
ERIC Educational Resources Information Center
Lyons, James E.
1990-01-01
The principal's office is frequently a collection point for problems and demands. Secondary school principals often average 1,000 interactions daily. Principals can manage stress by declining to solve every problem, delegating responsibility, reexamining their supervisory role, developing networks of trusted friends, and engaging in…
Kalgin, Igor V.; Caflisch, Amedeo; Chekmarev, Sergei F.; Karplus, Martin
2013-01-01
A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted Principal Component Analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed and it is complemented by a hydrodynamic analysis. The latter, making use of “passive tracers” in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface, but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. 
Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this “hydrogen atom” of protein folding. PMID:23621790
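The adapted PCA described above, in which only formed hydrogen bonds contribute to the collective variables, can be sketched on synthetic data. The distance cutoff and trajectory below are illustrative assumptions; the paper's actual criterion for a "formed" bond and its bond set are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy trajectory of hydrogen-bond distances: rows = frames, columns = bonds.
D = rng.uniform(1.5, 8.0, size=(1000, 12))

# Adapted PCA: emphasize structured conformations by keeping only formed
# bonds. "Formed" is modeled here with a hypothetical distance cutoff.
cutoff = 2.5
F = np.where(D < cutoff, D, 0.0)     # zero out distances of unformed bonds

Fc = F - F.mean(axis=0)
_, s, Vt = np.linalg.svd(Fc, full_matrices=False)
coords3d = Fc @ Vt[:3].T             # 3D collective-variable representation
print(coords3d.shape)
```

Zeroing the unformed bonds means unstructured frames collapse toward the origin, so the leading components are dominated by the patterns of native contacts, which is the effect the adaptation is after.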
An evaluation of independent component analyses with an application to resting-state fMRI
Matteson, David S.; Ruppert, David; Eloyan, Ani; Caffo, Brian S.
2013-01-01
We examine differences between independent component analyses (ICAs) arising from different assumptions, measures of dependence, and starting points of the algorithms. ICA is a popular method with diverse applications including artifact removal in electrophysiology data, feature extraction in microarray data, and identifying brain networks in functional magnetic resonance imaging (fMRI). ICA can be viewed as a generalization of principal component analysis (PCA) that takes into account higher-order cross-correlations. Whereas the PCA solution is unique, there are many ICA methods, whose solutions may differ. Infomax, FastICA, and JADE are commonly applied to fMRI studies, with FastICA being arguably the most popular. Hastie and Tibshirani (2003) demonstrated that ProDenICA outperformed FastICA in simulations with two components. We introduce the application of ProDenICA to simulations with more components and to fMRI data. ProDenICA was more accurate in simulations, and we identified differences between biologically meaningful ICs from ProDenICA versus other methods in the fMRI analysis. ICA methods require nonconvex optimization, yet current practices do not recognize the importance of, nor adequately address sensitivity to, initial values. We found that local optima led to dramatically different estimates in both simulations and group ICA of fMRI, and we provide evidence that the global optimum from ProDenICA is the best estimate. We applied a modification of the Hungarian (Kuhn-Munkres) algorithm to match ICs from multiple estimates, thereby gaining novel insights into how brain networks vary in their sensitivity to initial values and ICA method. PMID:24350655
Rinaldi, Maurizio; Gindro, Roberto; Barbeni, Massimo; Allegrone, Gianna
2009-01-01
Orange (Citrus sinensis L.) juice comprises a complex mixture of volatile components that are difficult to identify and quantify. Classification and discrimination of the varieties on the basis of the volatile composition could help to guarantee the quality of a juice and to detect possible adulteration of the product. To provide information on the amounts of volatile constituents in fresh-squeezed juices from four orange cultivars and to establish suitable discrimination rules to differentiate orange juices using new chemometric approaches. Fresh juices of four orange cultivars were analysed by headspace solid-phase microextraction (HS-SPME) coupled with GC-MS. Principal component analysis, linear discriminant analysis and heuristic methods, such as neural networks, allowed clustering of the data from HS-SPME analysis while genetic algorithms addressed the problem of data reduction. To check the quality of the results the chemometric techniques were also evaluated on a sample. Thirty volatile compounds were identified by HS-SPME and GC-MS analyses and their relative amounts calculated. Differences in composition of orange juice volatile components were observed. The chosen orange cultivars could be discriminated using neural networks, genetic relocation algorithms and linear discriminant analysis. Genetic algorithms applied to the data were also able to detect the most significant compounds. SPME is a useful technique to investigate orange juice volatile composition and a flexible chemometric approach is able to correctly separate the juices.
Meyer, Karin; Kirkpatrick, Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
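The parameter-count reduction quoted above, from k(k + 1)/2 to m(2k - m + 1)/2, is easy to verify numerically. The sketch below checks the counts for the abstract's eight-trait application (k = 8); the interpretation in the comments is a plain reading of the formula, not the paper's derivation.

```python
# Parameter counts from the abstract: a full k x k genetic covariance
# matrix has k(k+1)/2 free parameters; a rank-m fit needs m(2k - m + 1)/2.
def full_params(k: int) -> int:
    return k * (k + 1) // 2

def reduced_params(k: int, m: int) -> int:
    # m(2k - m + 1) is always even, so integer division is exact.
    return m * (2 * k - m + 1) // 2

k = 8                                  # eight traits, as in the application
print(full_params(k))                  # full-rank count
for m in (2, 4, 8):
    print(m, reduced_params(k, m))     # savings shrink as m approaches k
```

At m = k the two counts coincide, confirming that the reduced-rank parameterisation recovers the full model as a special case.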
Morin, R.H.
1997-01-01
Returns from drilling in unconsolidated cobble and sand aquifers commonly do not identify lithologic changes that may be meaningful for hydrogeologic investigations. Vertical resolution of saturated, Quaternary, coarse braided-stream deposits is significantly improved by interpreting natural gamma (G), epithermal neutron (N), and electromagnetically induced resistivity (IR) logs obtained from wells at the Capital Station site in Boise, Idaho. Interpretation of these geophysical logs is simplified because these sediments are derived largely from high-gamma-producing source rocks (granitics of the Boise River drainage), contain few clays, and have undergone little diagenesis. Analysis of G, N, and IR data from these deposits with principal components analysis provides an objective means to determine if units can be recognized within the braided-stream deposits. In particular, performing principal components analysis on G, N, and IR data from eight wells at Capital Station (1) allows the variable system dimensionality to be reduced from three to two by selecting the two eigenvectors with the greatest variance as axes for principal component scatterplots, (2) generates principal components with interpretable physical meanings, (3) distinguishes sand from cobble-dominated units, and (4) provides a means to distinguish between cobble-dominated units.
Incorporating principal component analysis into air quality ...
The efficacy of standard air quality model evaluation techniques is becoming compromised as simulation periods continue to lengthen in response to ever-increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis (PCA) with the intent of motivating its use by the evaluation community. One of the main objectives of PCA is to identify, through data reduction, the recurring and independent modes of variation (or signals) within a very large dataset, thereby summarizing the essential information of that dataset so that meaningful and descriptive conclusions can be made. In this demonstration, PCA is applied to a simple evaluation metric: the model bias associated with EPA's Community Multi-scale Air Quality (CMAQ) model when compared to weekly observations of sulfate (SO42−) and ammonium (NH4+) ambient air concentrations measured by the Clean Air Status and Trends Network (CASTNet). The advantages of using this technique are demonstrated as it identifies strong and systematic patterns of CMAQ model bias across a myriad of spatial and temporal scales that are neither constrained to geopolitical boundaries nor monthly/seasonal time periods (a limitation of many current studies). The technique also identifies locations (station-grid cell pairs) that can be used as indicators for a more thorough diagnostic evaluation, thereby hastening and facilitating understanding of the problem.
NASA Astrophysics Data System (ADS)
Tibaduiza, D.-A.; Torres-Arredondo, M.-A.; Mujica, L. E.; Rodellar, J.; Fritzen, C.-P.
2013-12-01
This article is concerned with the practical use of Multiway Principal Component Analysis (MPCA), Discrete Wavelet Transform (DWT), Squared Prediction Error (SPE) measures and Self-Organizing Maps (SOM) to detect and classify damage in mechanical structures. The formalism is based on a distributed piezoelectric active sensor network for the excitation and detection of structural dynamic responses. Statistical models are built using PCA when the structure is known to be healthy, either directly from the dynamic responses or from wavelet coefficients at different scales representing time-frequency information. Different damage cases on the tested structures are simulated by adding masses at different positions. The data from the structure in different states (damaged or not) are then projected into the different principal component models by each actuator in order to obtain the input feature vectors for a SOM from the scores and the SPE measures. An aircraft fuselage from an Airbus A320 and a multi-layered carbon fiber reinforced plastic (CFRP) plate are used as examples to test the approaches. Results are presented, compared and discussed in order to determine their potential in structural health monitoring. These results showed that all the simulated damage cases were detectable and that the selected features proved capable of separating all damage conditions from the undamaged state for both approaches.
Empirical Orthogonal Function (EOF) Analysis of Storm-Time GPS Total Electron Content Variations
NASA Astrophysics Data System (ADS)
Thomas, E. G.; Coster, A. J.; Zhang, S.; McGranaghan, R. M.; Shepherd, S. G.; Baker, J. B.; Ruohoniemi, J. M.
2016-12-01
Large perturbations in ionospheric density are known to occur during geomagnetic storms triggered by dynamic structures in the solar wind. These ionospheric storm effects have long attracted interest due to their impact on the propagation characteristics of radio wave communications. Over the last two decades, maps of vertically-integrated total electron content (TEC) based on data collected by worldwide networks of Global Positioning System (GPS) receivers have dramatically improved our ability to monitor the spatiotemporal dynamics of prominent storm-time features such as polar cap patches and storm enhanced density (SED) plumes. In this study, we use an empirical orthogonal function (EOF) decomposition technique to identify the primary modes of spatial and temporal variability in the storm-time GPS TEC response at midlatitudes over North America during more than 100 moderate geomagnetic storms from 2001-2013. We next examine the resulting time-varying principal components and their correlation with various geophysical indices and parameters in order to derive an analytical representation. Finally, we use a truncated reconstruction of the EOF basis functions and parameterization of the principal components to produce an empirical representation of the geomagnetic storm-time response of GPS TEC for all magnetic local times and seasons at midlatitudes in the North American sector.
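The EOF decomposition and truncated reconstruction described above can be sketched with an SVD on a synthetic space-time field. The field below is a toy stand-in for the TEC maps, built from two known modes plus noise; it is not ADNI or GPS data, and the mode structure is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy TEC-like field: rows = time samples, columns = spatial grid points.
n_t, n_s = 120, 80
t = np.linspace(0, 4 * np.pi, n_t)
field = (np.outer(np.sin(t), rng.normal(size=n_s))          # mode 1
         + 0.3 * np.outer(np.cos(2 * t), rng.normal(size=n_s))  # mode 2
         + 0.05 * rng.normal(size=(n_t, n_s)))              # noise

anom = field - field.mean(axis=0)      # anomalies about the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U * s                            # time-varying principal components
eofs = Vt                              # spatial EOF patterns

# Truncated reconstruction with the leading r modes, as used to build an
# empirical storm-time representation from the dominant variability.
r = 2
recon = pcs[:, :r] @ eofs[:r]
var_captured = np.sum(s[:r] ** 2) / np.sum(s ** 2)
print(round(float(var_captured), 3))
```

Because the two planted modes dominate the noise, the rank-2 reconstruction captures nearly all of the variance, which is the property that makes a low-order EOF basis a compact empirical model.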
A data fusion-based drought index
NASA Astrophysics Data System (ADS)
Azmi, Mohammad; Rüdiger, Christoph; Walker, Jeffrey P.
2016-03-01
Drought and water stress monitoring plays an important role in the management of water resources, especially during periods of extreme climate conditions. Here, a data fusion-based drought index (DFDI) has been developed and analyzed for three locations of varying land use and climate regimes in Australia. The proposed index comprehensively considers all types of drought through a selection of indices and proxies associated with each drought type. In deriving the proposed index, weekly data from three different data sources (OzFlux Network, Asia-Pacific Water Monitor, and MODIS-Terra satellite) were employed to first derive commonly used individual standardized drought indices (SDIs), which were then grouped using an advanced clustering method. Next, three different multivariate methods (principal component analysis, factor analysis, and independent component analysis) were utilized to aggregate the SDIs within each group. For the two clusters in which the grouped SDIs best reflected water availability and vegetation conditions, the variables were aggregated by averaging the standardized first principal components of the different multivariate methods. Then, considering those two aggregated indices as well as the classifications of months (dry/wet months and active/non-active months), the proposed DFDI was developed. Finally, the symbolic regression method was used to derive mathematical equations for the proposed DFDI. The results show that, by simultaneously considering both hydrometeorological and ecological concepts to define the real water stress of the study areas, the proposed index reveals aspects of water stress monitoring that previous indices could not.
Analysis and Evaluation of the Characteristic Taste Components in Portobello Mushroom.
Wang, Jinbin; Li, Wen; Li, Zhengpeng; Wu, Wenhui; Tang, Xueming
2018-05-10
To identify the characteristic taste components of the common cultivated brown mushroom (Portobello), Agaricus bisporus, taste components in the stipe and pileus of Portobello mushrooms harvested at different growth stages were extracted and identified, and principal component analysis (PCA) and the taste active value (TAV) were used to reveal the characteristic taste components at each growth stage. In the stipe and pileus, 20 and 14 principal taste components were identified, respectively; these were considered the principal taste components of Portobello fruit bodies and included most amino acids and 5'-nucleotides. Some taste components found at high levels, such as lactic acid and citric acid, were not detected as principal taste components through PCA; however, due to their high content, Portobello mushroom could be used as a source of organic acids. The PCA and TAV results revealed that 5'-GMP, glutamic acid, malic acid, alanine, proline, leucine, and aspartic acid were the characteristic taste components of Portobello fruit bodies. Portobello mushroom was also found to be rich in protein and amino acids, so it might also be useful in the formulation of nutraceuticals and functional foods. These results provide a theoretical basis for understanding and regulating the synthesis of the characteristic flavor components of Portobello mushroom. © 2018 Institute of Food Technologists®.
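The taste active value used above is conventionally defined as the ratio of a component's concentration to its taste threshold, with TAV > 1 taken to mean the component contributes to taste. A minimal sketch of that calculation follows; the concentrations and thresholds are illustrative numbers, not measurements from the study.

```python
# TAV = concentration / taste threshold; TAV > 1 marks a taste-active
# component. All numbers below are hypothetical, for illustration only.
thresholds_mg_per_g = {"glutamic acid": 0.3, "alanine": 0.6, "5'-GMP": 0.125}
concentrations_mg_per_g = {"glutamic acid": 2.1, "alanine": 1.5, "5'-GMP": 0.4}

tav = {name: concentrations_mg_per_g[name] / thresholds_mg_per_g[name]
       for name in thresholds_mg_per_g}
active = sorted(name for name, v in tav.items() if v > 1.0)
print(active)
```

Note how a component can be abundant yet taste-inactive if its threshold is high, which is why organic acids with high content can still fail the TAV screen.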
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Y.
2015-12-01
The results of numerical simulations applying principal component analysis to the absorption spectra of breath air of patients with pulmonary diseases are presented. Various methods of preprocessing the experimental data are analyzed.
Imaging of polysaccharides in the tomato cell wall with Raman microspectroscopy
2014-01-01
Background The primary cell wall of fruits and vegetables is a structure composed mainly of polysaccharides (pectins, hemicelluloses, cellulose) assembled into a linked network. The relative proportions of these components in the plant cell wall are thought to have an important influence on the mechanical properties of fruits and vegetables. Results In this study, Raman microspectroscopy was applied to visualize the distribution of polysaccharides in the fruit cell wall. The methodology of sample preparation, measurement with the Raman microscope, and multivariate image analysis are discussed. Single-band imaging (for preliminary analysis) and multivariate image analysis methods (principal component analysis and multivariate curve resolution) were used to identify and localize the components of the primary cell wall. Conclusions Raman microspectroscopy supported by multivariate image analysis is useful for distinguishing cellulose and pectins in the tomato cell wall, and localization of the biopolymers was possible with minimally prepared samples. PMID:24917885
Mat-Desa, Wan N S; Ismail, Dzulkiflee; NicDaeid, Niamh
2011-10-15
Three different medium petroleum distillate (MPD) products (white spirit, paint brush cleaner, and lamp oil) were purchased from commercial stores in Glasgow, Scotland. Samples of 10, 25, 50, 75, 90, and 95% evaporated product were prepared, resulting in 56 samples in total, which were analyzed using gas chromatography-mass spectrometry. Data sets from the chromatographic patterns were examined and preprocessed for unsupervised multivariate analyses using principal component analysis (PCA), hierarchical cluster analysis (HCA), and a self-organizing feature map (SOFM) artificial neural network. Data sets comprising the higher-boiling-point hydrocarbon compounds provided a good means of classifying the samples and successfully linked highly weathered samples back to their unevaporated counterparts in every case. The classification abilities of the SOFM were further tested and validated for predictive ability: one set of weathered-sample data in each case was withdrawn from the sample set and used as a test set for the retrained network. This revealed the SOFM to be an outstanding mechanism for sample discrimination and linkage compared with the more conventional PCA and HCA methods often suggested for such data analysis. The SOFM also has the advantage of providing additional information through the evaluation of component planes, facilitating investigation of the underlying variables that account for the classification. © 2011 American Chemical Society
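As a rough illustration of the self-organizing feature map used above for sample discrimination, here is a minimal numpy sketch (not the authors' implementation; the grid size, learning rate, and neighborhood schedule are arbitrary assumptions):

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Tiny self-organizing feature map (SOFM): each grid node holds a
    weight vector pulled toward the samples; neighbors of the winning
    node move too, so similar samples land on nearby nodes."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h, w, data.shape[1]))
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.5     # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(W - x, axis=2)       # distance to every node
            bi, bj = np.unravel_index(np.argmin(d), d.shape)  # winner
            g = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
            W += lr * g[..., None] * (x - W)        # pull winner + neighbors
    return W

def winner(W, x):
    """Grid coordinates of the node closest to sample x."""
    d = np.linalg.norm(W - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, samples from distinct chemical classes should map to different grid regions, which is the basis of the discrimination and linkage described above.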
Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks
Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.
2014-01-01
Kernel spectral clustering (KSC) corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation makes it possible to build a model on a representative subgraph of the large-scale network in the training phase, with the model parameters estimated in the validation stage. The KSC model has a powerful out-of-sample extension property, which allows cluster affiliation for unseen nodes of the big data network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large-scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically show that real-world networks have a multilevel hierarchical organization that cannot be detected efficiently by several state-of-the-art large-scale hierarchical community detection techniques such as the Louvain, OSLOM, and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good-quality clusters at both the finer and coarser levels of hierarchy, using internal cluster quality metrics, on 7 real-life networks. PMID:24949877
[Laser induced fluorescence spectrum characteristics of common edible oil and fried cooking oil].
Mu, Tao-tao; Chen, Si-ying; Zhang, Yin-chao; Chen, He; Guo, Pan; Ge, Xian-ying; Gao, Li-lei
2013-09-01
In order to detect trench oil, the authors built a rapid trench oil detection system based on laser-induced fluorescence detection technology, using a 355 nm laser as the excitation light source. The fluorescence spectra of a variety of edible oils and of fried cooking oil (a kind of trench oil) were collected, and a fluorescence spectrum database was built using the detection system. The fluorescence characteristics of fried cooking oil and common edible oil were found to be obviously different. Oil recognition and rapid trench oil detection could then be easily realized using principal component analysis and a BP neural network, with an overall recognition rate as high as 97.5%. The experiments showed that laser-induced fluorescence spectroscopy is fast, non-contact, and highly sensitive; combined with a BP neural network, it could become a new technique for detecting trench oil.
Liu, Qian-qian; Wang, Chun-yan; Shi, Xiao-feng; Li, Wen-dong; Luan, Xiao-ning; Hou, Shi-lin; Zhang, Jin-liang; Zheng, Rong-er
2012-04-01
In this paper, a new method was developed to differentiate spill oil samples. Synchronous fluorescence spectra in the lower, nonlinear concentration range of 10⁻²–10⁻¹ g·L⁻¹ were collected to build the training database. A radial basis function artificial neural network (RBF-ANN) was used to identify the sample sets, with principal component analysis (PCA) as the feature extraction method. The recognition rate for closely related oil source samples is 92%. The results demonstrate that the proposed method can identify crude oil samples effectively from just one synchronous spectrum of the spill oil sample. The method is expected to be well suited to real-time spill oil identification, and can also be easily applied to oil logging and the analysis of other multi-PAH or multi-fluorescent mixtures.
Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System
NASA Technical Reports Server (NTRS)
Lin, Tsung Han (Hank)
2011-01-01
JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing
2018-02-01
Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy consisting of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first decomposed into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, whose hidden layer structure and parameters are optimized by a genetic algorithm. Finally, the nonlinear spatial-temporal dynamic system is obtained after time/space reconstruction. Simulation results for a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
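The PCA time/space separation described above can be sketched as follows; the synthetic two-mode field, grid sizes, and mode count are illustrative assumptions, not the paper's catalytic rod model:

```python
import numpy as np

# Assumed setup: snapshots of a 1-D distributed parameter system.
# Y[t, x] holds the field value at time step t and spatial point x.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
t = np.arange(200)
Y = (np.outer(np.sin(0.05 * t), np.sin(np.pi * x))          # mode 1
     + 0.3 * np.outer(np.cos(0.02 * t), np.sin(2 * np.pi * x))  # mode 2
     + 0.01 * rng.normal(size=(200, 50)))                   # measurement noise

# PCA via SVD splits the field into spatial basis functions
# (right singular vectors) and finite-dimensional temporal series.
Ym = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Ym, full_matrices=False)
n = 2                              # keep the dominant modes
phi = Vt[:n]                       # dominant spatial basis functions
a = (Y - Ym) @ phi.T               # temporal coefficient series, shape (200, 2)

# Time/space reconstruction from the low-order representation.
Y_hat = a @ phi + Ym
err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
```

In the paper's scheme, the temporal series `a` would then be modeled by the decoupled ARX model, with the residual handled by the RBF network, before the reconstruction step shown at the end.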
Functional-anatomic correlates of individual differences in memory.
Kirchhoff, Brenda A; Buckner, Randy L
2006-07-20
Memory abilities differ greatly across individuals. To explore a source of these differences, we characterized the varied strategies people adopt during unconstrained encoding. Participants intentionally encoded object pairs during functional MRI. Principal components analysis applied to a strategy questionnaire revealed that participants variably used four main strategies to aid learning. Individuals' use of verbal elaboration and visual inspection strategies independently correlated with their memory performance. Verbal elaboration correlated with activity in a network of regions that included prefrontal regions associated with controlled verbal processing, while visual inspection correlated with activity in a network of regions that included an extrastriate region associated with object processing. Activity in regions associated with use of these strategies was also correlated with memory performance. This study reveals functional-anatomic correlates of verbal and perceptual strategies that are variably used by individuals during encoding. These strategies engage distinct brain regions and may separately influence memory performance.
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., the principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from a filter scarcity problem: not all of the available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results show that the 2-FFC operation resolves the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors in terms of rank-1 identification rate (%).
Hooghe, Marc
2011-06-01
In order to assess the determinants of homophobia among Belgian adolescents, a shortened version of the Homophobia scale (Wright et al., 1999) was included in a representative survey among Belgian adolescents (n = 4,870). Principal component analysis demonstrated that the scale was one-dimensional and internally coherent. The results showed that homophobia is still widespread among Belgian adolescents, despite various legal reforms in the country aiming to combat discrimination of gay women and men. A multivariate regression analysis demonstrated that boys, ethnic minorities, individuals with high levels of ethnocentrism and an instrumental worldview, Muslim minorities, and those with low levels of associational involvement scored significantly higher on the scale. While among boys an extensive friendship network was associated with higher levels of homophobia, the opposite phenomenon was found among girls. We discuss the possible relation between notions of masculinity within predominantly male adolescent friendship networks and social support for homophobia.
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on eigenvalue and eigenvector analysis, principal component analysis aims to identify the subspace of principal components from a set of parameters that is sufficient to characterize the whole set. Interpreting the data as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps select from a set of items only the most relevant ones, which cover the variation of the whole data set. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, identified 7 principal components, or main items, which significantly simplifies further statistical analysis.
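The geometric procedure described here (center the cloud, find the directions of maximal dispersion) is, in matrix terms, an eigen-decomposition of the covariance matrix. A minimal numpy sketch, with a hypothetical random 28-item data set standing in for the questionnaire:

```python
import numpy as np

def pca(X, n_components):
    """Basic PCA via eigen-decomposition of the covariance matrix.

    Rows of X are observations (e.g. respondents), columns are
    variables (e.g. the 28 questionnaire items)."""
    # Center the cloud of points on its center of weight (the mean).
    Xc = X - X.mean(axis=0)
    # Covariance matrix of the variables.
    cov = np.cov(Xc, rowvar=False)
    # Eigenvectors give the directions of maximal dispersion;
    # eigenvalues give the variance captured along each direction.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # sort descending
    components = eigvecs[:, order[:n_components]]
    scores = Xc @ components                       # project the cloud
    explained = eigvals[order][:n_components] / eigvals.sum()
    return scores, components, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 28))     # hypothetical 28-item survey, 100 respondents
scores, comps, explained = pca(X, 7)
print(scores.shape)                # (100, 7)
```

Items that load heavily on the retained components (large entries in `comps`) are the "most relevant" ones in the sense used above.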
ERIC Educational Resources Information Center
Mugrage, Beverly; And Others
Three ridge regression solutions are compared with ordinary least squares regression and with principal components regression using all components. Ridge regression, particularly the Lawless-Wang solution, outperformed ordinary least squares regression and the principal components solution on the criteria of coefficient stability and closeness…
A Note on McDonald's Generalization of Principal Components Analysis
ERIC Educational Resources Information Center
Shine, Lester C., II
1972-01-01
It is shown that McDonald's generalization of classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables…
Peterson, Leif E
2002-01-01
CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816
Door detection in images based on learning by components
NASA Astrophysics Data System (ADS)
Cicirelli, Grazia; D'Orazio, Tiziana; Ancona, Nicola
2001-10-01
In this paper we present a vision-based technique for detecting targets in the environment that an autonomous mobile robot has to reach during its navigational task. The targets the robot has to reach are the doors of our office building. Color and shape information are used as identifying features for detecting the principal components of a door. Because a door can appear at different sizes in images depending on the robot's attitude with respect to it, detection is performed by detecting the door's most significant components in the image. Positive and negative examples, in the form of image patterns, are manually selected from real images for training two neural classifiers to recognize the individual components. Each classifier is realized by a feed-forward neural network with one hidden layer and sigmoid activation functions. Moreover, for selecting negative examples relevant to the problem at hand, a bootstrap technique is used during the training process. Finally, the detection system is applied to several real test images to evaluate its performance.
The Complexity of Human Walking: A Knee Osteoarthritis Study
Kotti, Margarita; Duffell, Lynsey D.; Faisal, Aldo A.; McGregor, Alison H.
2014-01-01
This study proposes a framework for deconstructing complex walking patterns into a simple principal component space and then checking whether the projection to this space is suitable for identifying deviations from normality. We focus on knee osteoarthritis, the most common knee joint disease and the second leading cause of disability, affecting over 250 million people worldwide. The motivation for projecting highly dimensional movements into a lower-dimensional, simpler space is our belief that motor behaviour can be understood by identifying simplicity via projection to a low-dimensional principal component space, which may reflect the underlying mechanism. To study this, we recruited 180 subjects, 47 of whom reported that they had knee osteoarthritis. They were asked to walk several times along a walkway equipped with two force plates that capture their ground reaction forces along 3 axes, namely vertical, anterior-posterior, and medio-lateral, at 1000 Hz. Trials in which the subject did not clearly strike the force plate were excluded, leaving 1–3 gait cycles per subject. To examine the complexity of human walking, we applied dimensionality reduction via Probabilistic Principal Component Analysis. The first principal component explains 34% of the variance in the data, whereas explaining over 80% of the variance requires 8 or more principal components, demonstrating the complexity of the underlying structure of the ground reaction forces. To examine whether our musculoskeletal system generates movements that are distinguishable between normal and pathological subjects in a low-dimensional principal component space, we applied a Bayes classifier. For the cross-validated, subject-independent experimental protocol, the classification accuracy was 82.62%. A novel complexity measure is also proposed, which can be used as an objective index to facilitate clinical decision making; it shows that knee osteoarthritis subjects exhibit more variability in the two-dimensional principal component space. PMID:25232949
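The "over 80% of the variance is explained by 8 principal components" statement rests on the cumulative explained-variance ratio. A minimal sketch of that computation via singular values, on hypothetical data rather than the study's force-plate recordings:

```python
import numpy as np

def n_components_for(X, target=0.80):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `target` (rows = observations)."""
    Xc = X - X.mean(axis=0)
    # Singular values squared are proportional to per-component variance.
    s = np.linalg.svd(Xc, compute_uv=False)
    ratio = s**2 / np.sum(s**2)
    # First index where the cumulative ratio reaches the target.
    return int(np.searchsorted(np.cumsum(ratio), target) + 1)
```

Data concentrated along one direction needs a single component, while isotropic data needs nearly as many components as dimensions; the study's value of 8 sits between these extremes.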
An ECG signals compression method and its validation using NNs.
Fira, Catalina Monica; Goras, Liviu
2008-04-01
This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extrema extraction, adaptive hysteretic filtering, and Lempel-Ziv-Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multi-layer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. The possibility of using principal component analysis (PCA) for cardiac pattern classification has been investigated as well. A new compression measure called the "quality score," which takes into account both the reconstruction error and the compression ratio, is proposed.
Principal Components Analysis of a JWST NIRSpec Detector Subsystem
NASA Technical Reports Server (NTRS)
Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting;
2013-01-01
We present a principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ≈ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and the temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that would enable the planned instrument to meet its performance requirements.
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of this work was to use quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability, and acidity, of fermented food products such as cow milk curd, soymilk curd, idli, sauerkraut, and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
Chang, Dong W; Hayashi, Shinichi; Gharib, Sina A; Vaisar, Tomas; King, S Trevor; Tsuchiya, Mitsuhiro; Ruzinski, John T; Park, David R; Matute-Bello, Gustavo; Wurfel, Mark M; Bumgarner, Roger; Heinecke, Jay W; Martin, Thomas R
2008-10-01
Acute lung injury causes complex changes in protein expression in the lungs. Whereas most prior studies focused on single proteins, newer methods allowing the simultaneous study of many proteins could lead to a better understanding of pathogenesis and new targets for treatment. The purpose of this study was to examine the changes in protein expression in the bronchoalveolar lavage fluid (BALF) of patients during the course of the acute respiratory distress syndrome (ARDS). Using two-dimensional difference gel electrophoresis (DIGE), the expression of proteins in the BALF from patients on Days 1 (n = 7), 3 (n = 8), and 7 (n = 5) of ARDS were compared with findings in normal volunteers (n = 9). The patterns of protein expression were analyzed using principal component analysis (PCA). Biological processes that were enriched in the BALF proteins of patients with ARDS were identified using Gene Ontology (GO) analysis. Protein networks that model the protein interactions in the BALF were generated using Ingenuity Pathway Analysis. An average of 991 protein spots were detected using DIGE. Of these, 80 protein spots, representing 37 unique proteins in all of the fluids, were identified using mass spectrometry. PCA confirmed important differences between the proteins in the ARDS and normal samples. GO analysis showed that these differences are due to the enrichment of proteins involved in inflammation, infection, and injury. The protein network analysis showed that the protein interactions in ARDS are complex and redundant, and revealed unexpected central components in the protein networks. Proteomics and protein network analysis reveals the complex nature of lung protein interactions in ARDS. The results provide new insights about protein networks in injured lungs, and identify novel mediators that are likely to be involved in the pathogenesis and progression of acute lung injury.
A model for the electronic support of practice-based research networks.
Peterson, Kevin A; Delaney, Brendan C; Arvanitis, Theodoros N; Taweel, Adel; Sandberg, Elisabeth A; Speedie, Stuart; Richard Hobbs, F D
2012-01-01
The principal goal of the electronic Primary Care Research Network (ePCRN) is to enable the development of an electronic infrastructure to support clinical research activities in primary care practice-based research networks (PBRNs). We describe the model that the ePCRN developed to enhance the growth and to expand the reach of PBRN research. Use cases and activity diagrams were developed from interviews with key informants from 11 PBRNs from the United States and United Kingdom. Discrete functions were identified and aggregated into logical components. Interaction diagrams were created, and an overall composite diagram was constructed describing the proposed software behavior. Software for each component was written and aggregated, and the resulting prototype application was pilot tested for feasibility. A practical model was then created by separating application activities into distinct software packages based on existing PBRN business rules, hardware requirements, network requirements, and security concerns. We present an information architecture that provides for essential interactions, activities, data flows, and structural elements necessary for providing support for PBRN translational research activities. The model describes research information exchange between investigators and clusters of independent data sites supported by a contracted research director. The model was designed to support recruitment for clinical trials, collection of aggregated anonymous data, and retrieval of identifiable data from previously consented patients across hundreds of practices. The proposed model advances our understanding of the fundamental roles and activities of PBRNs and defines the information exchange commonly used by PBRNs to successfully engage community health care clinicians in translational research activities. 
By describing the network architecture in a language familiar to that used by software developers, the model provides an important foundation for the development of electronic support for essential PBRN research activities.
Deficient GABAergic gliotransmission may cause broader sensory tuning in schizophrenia.
Hoshino, Osamu
2013-12-01
We examined how the depression of intracortical inhibition due to a reduction in ambient GABA concentration impairs perceptual information processing in schizophrenia. A neural network model with a gliotransmission-mediated ambient GABA regulatory mechanism was simulated. In the network, interneuron-to-glial-cell and principal-cell-to-glial-cell synaptic contacts were made. The former hyperpolarized glial cells and let their transporters import (remove) GABA from the extracellular space, thereby lowering ambient GABA concentration, reducing extrasynaptic GABAa receptor-mediated tonic inhibitory current, and thus exciting principal cells. In contrast, the latter depolarized the glial cells and let the transporters export GABA into the extracellular space, thereby elevating the ambient GABA concentration and thus inhibiting the principal cells. A reduction in ambient GABA concentration was assumed for a schizophrenia network. Multiple dynamic cell assemblies were organized as sensory feature columns. Each cell assembly responded to one specific feature stimulus. The tuning performance of the network to an applied feature stimulus was evaluated in relation to the level of ambient GABA. Transporter-deficient glial cells caused a deficit in GABAergic gliotransmission and reduced ambient GABA concentration, which markedly deteriorated the tuning performance of the network, broadening the sensory tuning. Interestingly, the GABAergic gliotransmission mechanism could regulate local ambient GABA levels: it augmented ambient GABA around stimulus-irrelevant principal cells, while reducing ambient GABA around stimulus-relevant principal cells, thereby ensuring their selective responsiveness to the applied stimulus. We suggest that a deficit in GABAergic gliotransmission may cause a reduction in ambient GABA concentration, leading to a broadening of sensory tuning in schizophrenia. 
The GABAergic gliotransmission mechanism proposed here may have an important role in the regulation of local ambient GABA levels, thereby improving the sensory tuning performance of the cortex.
Identification of vegetable diseases using neural network
NASA Astrophysics Data System (ADS)
Zhang, Jiacai; Tang, Jianjun; Li, Yao
2007-02-01
Vegetables are widely planted all over China, but they often suffer from various diseases. A method of major technical and economic importance is introduced in this paper, which explores the feasibility of fast and reliable automatic identification of vegetable diseases and their infection grades from the color and morphological features of leaves. First, leaves are plucked from the clustered plant and pictures of the leaves are taken with a CCD digital color camera. Second, color and morphological characteristics are obtained by standard image processing techniques; for example, Otsu thresholding segments the region of interest, morphological opening followed by closing removes noise, and Principal Components Analysis reduces the dimension of the original features. Then, a recently proposed boosting algorithm, AdaBoost.M2, is applied to RBF networks for disease classification based on the above features, where the kernel function of the RBF networks is a Gaussian whose argument is the Euclidean distance of the input vector from a center. Our experiments were performed on a database collected by the Chinese Academy of Agricultural Sciences; the results show that the boosted RBF networks classify the 230 cucumber leaves into 2 different diseases (downy mildew and angular leaf spot) and identify the infection grade of each disease according to the infection degree.
Karamzadeh, Razieh; Karimi-Jafari, Mohammad Hossein; Sharifi-Zarchi, Ali; Chitsaz, Hamidreza; Salekdeh, Ghasem Hosseini; Moosavi-Movahedi, Ali Akbar
2017-06-16
The human protein disulfide isomerase (hPDI) is an essential four-domain multifunctional enzyme. As a result of disulfide shuffling in its terminal domains, hPDI exists in two oxidation states with different conformational preferences, which are important for substrate binding and functional activities. Here, we address the redox-dependent conformational dynamics of hPDI through molecular dynamics (MD) simulations. Collective domain motions are identified by principal component analysis of the MD trajectories, and redox-dependent opening-closing structure variations are highlighted on projected free energy landscapes. Important structural features that exhibit considerable differences between the dynamics of the redox states are then extracted by statistical machine learning methods. Mapping the structural variations to time series of residue interaction networks also provides a holistic representation of the dynamical redox differences. Emphasizing persistent, long-lasting interactions, an approach is proposed that compiles these time-series networks into a single dynamic residue interaction network (DRIN). Differential comparison of the DRIN in the oxidized and reduced states reveals chains of residue interactions that represent potential allosteric paths between the catalytic and ligand binding sites of hPDI.
Fan, Wufeng; Zhou, Yuhan; Li, Hao
2017-01-01
In this study, we aimed to extract dysregulated pathways in human monocytes infected by Listeria monocytogenes (LM) based on a pathway interaction network (PIN), which represents the functional dependency between pathways. After genes were mapped to pathways, principal component analysis (PCA) was used to calculate the pathway activity for each pathway, followed by detection of the seed pathway. A PIN was constructed based on the gene expression profile, protein-protein interactions (PPIs), and cellular pathways. Dysregulated pathways were identified from the PIN based on the seed pathway and classification accuracy. To evaluate whether the PIN method was feasible, we compared the introduced method with standard network centrality measures. The pathway of RNA polymerase II pre-transcription events was selected as the seed pathway. Taking this seed pathway as the starting point, one pathway set (9 dysregulated pathways) with an AUC score of 1.00 was identified. Among the 5 hub pathways obtained using standard network centrality measures, 4 were common to the two methods. RNA polymerase II transcription and DNA replication contained higher numbers of pathway genes and DEGs. These dysregulated pathways work together to influence the progression of LM infection, and they may serve as biomarkers to diagnose LM infection.
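A common way to score pathway activity with PCA, as described above, is to project each sample onto the first principal component of the pathway's member genes. A minimal sketch under that assumption (the paper's exact formulation may differ):

```python
import numpy as np

def pathway_activity(expr, gene_idx):
    """Activity of one pathway: scores of the samples on the first
    principal component of the pathway's member genes.

    expr: samples x genes expression matrix; gene_idx: column indices
    of the genes belonging to the pathway."""
    sub = expr[:, gene_idx]
    sub = sub - sub.mean(axis=0)            # center each gene
    # First right singular vector = first principal component loading.
    U, s, Vt = np.linalg.svd(sub, full_matrices=False)
    return sub @ Vt[0]                      # one activity value per sample
```

Computed per pathway, these activity vectors can then be compared between infected and uninfected samples to rank pathways by classification accuracy, as in the seed-pathway search above.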
NASA Astrophysics Data System (ADS)
Lim, Hoong-Ta; Murukeshan, Vadakke Matham
2017-06-01
Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions that conventional table-top platforms would find difficult to access. A fiber bundle, made up of specially arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube from the information in each scan. Compared to other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications because motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and serve as a basis for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. A confidence ellipse was then applied to the principal components of each sample and used as the classification criterion. The results show that the applied analysis can be used to classify the spectral data acquired using the snapshot hyperspectral imaging probe.
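Confidence-ellipse classification in a two-dimensional PC score space can be sketched as follows, assuming a 95% chi-square boundary on the squared Mahalanobis distance (the helper name `ellipse_classifier` and the fixed threshold are illustrative, not the paper's exact procedure):

```python
import numpy as np

CHI2_95_2DOF = 5.991  # 95% quantile of a chi-square with 2 degrees of freedom

def ellipse_classifier(train_scores):
    """Fit a 95% confidence ellipse to each class's 2-D PC scores.
    train_scores: dict mapping class label -> (n, 2) array of PC1/PC2 scores."""
    params = {c: (s.mean(axis=0), np.linalg.inv(np.cov(s, rowvar=False)))
              for c, s in train_scores.items()}
    def classify(point):
        # squared Mahalanobis distance of the point to each class centre
        d2 = {c: float((point - mu) @ P @ (point - mu))
              for c, (mu, P) in params.items()}
        best = min(d2, key=d2.get)
        # a point outside every ellipse is left unclassified
        return best if d2[best] <= CHI2_95_2DOF else None
    return classify
```

Each class's ellipse is the level set of its fitted Gaussian at the chi-square threshold, so classification reduces to a nearest-Mahalanobis test with rejection.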
Pepper seed variety identification based on visible/near-infrared spectral technology
NASA Astrophysics Data System (ADS)
Li, Cuiling; Wang, Xiu; Meng, Zhijun; Fan, Pengfei; Cai, Jichen
2016-11-01
Pepper is an important fruit vegetable, and with the expansion of hybrid pepper planting areas, detection of pepper seed purity has become especially important. This research used visible/near-infrared (VIS/NIR) spectral technology to identify the variety of single pepper seeds, choosing the hybrid pepper seeds "Zhuo Jiao NO.3", "Zhuo Jiao NO.4" and "Zhuo Jiao NO.5" as research samples. VIS/NIR spectral data of 80 "Zhuo Jiao NO.3", 80 "Zhuo Jiao NO.4" and 80 "Zhuo Jiao NO.5" pepper seeds were collected, and the original spectral data were pretreated with the standard normal variate (SNV) transform, first derivative (FD), and Savitzky-Golay (SG) convolution smoothing methods. Principal component analysis (PCA) was adopted to reduce the dimensionality of the spectral data and extract principal components. According to the distributions of the first principal component (PC1) against the second principal component (PC2), PC1 against the third principal component (PC3), and PC2 against PC3, the distribution areas of the three varieties of pepper seeds were delineated in each two-dimensional plane, and the discriminant accuracy of PCA was tested by observing where the principal component scores of the validation-set samples fell. This study also combined PCA with linear discriminant analysis (LDA) to identify single pepper seed varieties; the results showed that with the FD preprocessing method the discriminant accuracy for the validation set was 98%, indicating that VIS/NIR spectral technology is feasible for the identification of single pepper seed varieties.
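The PCA-plus-LDA pipeline described above can be sketched generically. This is a minimal implementation under stated assumptions (Fisher LDA with nearest-centroid assignment in the discriminant space), not the authors' code:

```python
import numpy as np

def pca_lda_fit(X, y, n_pc=3):
    """Reduce X to n_pc principal components, fit Fisher LDA on the scores,
    and return a nearest-centroid predictor in the discriminant space."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W_pca = Vt[:n_pc].T                       # (features, n_pc)
    Z = Xc @ W_pca                            # PC scores
    classes = np.unique(y)
    overall = Z.mean(axis=0)
    d = Z.shape[1]
    Sw = np.zeros((d, d))                     # within-class scatter
    Sb = np.zeros((d, d))                     # between-class scatter
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc - overall, mc - overall)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-9 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1][: len(classes) - 1]
    W_lda = evecs[:, order].real              # discriminant directions
    centroids = {c: (Z @ W_lda)[y == c].mean(axis=0) for c in classes}
    def predict(Xnew):
        Znew = (Xnew - mu) @ W_pca @ W_lda
        return np.array([min(centroids, key=lambda cl: np.linalg.norm(z - centroids[cl]))
                         for z in Znew])
    return predict
```

In practice the spectra would first be SNV-, FD-, or SG-pretreated before entering the PCA step.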
Long, J.M.; Fisher, W.L.
2006-01-01
We present a method for spatial interpretation of environmental variation in a reservoir that integrates principal components analysis (PCA) of environmental data with geographic information systems (GIS). To illustrate our method, we used data from a Great Plains reservoir (Skiatook Lake, Oklahoma) with longitudinal variation in physicochemical conditions. We measured 18 physicochemical features, mapped them using GIS, and then calculated and interpreted four principal components. Principal component 1 (PC1) was readily interpreted as longitudinal variation in water chemistry, but the other principal components (PC2-4) were difficult to interpret. Site scores for PC1-4 were calculated in GIS by summing weighted overlays of the 18 measured environmental variables, with the factor loadings from the PCA as the weights. PC1-4 were then ordered into a landscape hierarchy, an emergent property of this technique, which enabled their interpretation. PC1 was interpreted as a reservoir-scale change in water chemistry, PC2 was a microhabitat variable of rip-rap substrate, PC3 identified coves/embayments, and PC4 consisted of shoreline microhabitats related to slope. The use of GIS improved our ability to interpret the more obscure principal components (PC2-4), which made the spatial variability of the reservoir environment more apparent. This method is applicable to a variety of aquatic systems, can be accomplished using commercially available software programs, and allows for improved interpretation of the geographic environmental variability of a system compared to using typical PCA plots. © Copyright by the North American Lake Management Society 2006.
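The weighted-overlay step can be expressed compactly: each PC score surface is the loading-weighted sum of standardized environmental raster layers. A sketch, with the array layout assumed (this mirrors the GIS overlay arithmetic, not any specific GIS software API):

```python
import numpy as np

def site_scores(layers, loadings):
    """GIS-style weighted overlay: PC score surface as the loading-weighted
    sum of standardized environmental layers.
    layers: (n_vars, rows, cols) raster stack; loadings: (n_vars,) for one PC."""
    z = (layers - layers.mean(axis=(1, 2), keepdims=True)) \
        / layers.std(axis=(1, 2), keepdims=True)     # standardize each layer
    return np.tensordot(loadings, z, axes=1)          # (rows, cols) score map
```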
Top-down network analysis characterizes hidden termite-termite interactions.
Campbell, Colin; Russo, Laura; Marins, Alessandra; DeSouza, Og; Schönrogge, Karsten; Mortensen, David; Tooker, John; Albert, Réka; Shea, Katriona
2016-09-01
The analysis of ecological networks is generally bottom-up, with networks established by observing interactions between individuals. Emergent network properties have been shown to reflect the dominant mode of interactions in communities, which might be mutualistic (e.g., pollination) or antagonistic (e.g., host-parasitoid communities). Many ecological communities, however, comprise species interactions that are difficult to observe directly. Here, we propose that a comparison of the emergent properties from detail-rich reference communities with known modes of interaction can inform our understanding of detail-sparse focal communities. With this top-down approach, we consider patterns of coexistence between termite species that live as guests in mounds built by other host termite species as a case in point. Termite societies are extremely sensitive to perturbations, which precludes determining the nature of their interactions through direct observations. We perform a literature review to construct two networks representing termite mound cohabitation in a Brazilian savanna and in the tropical forest of Cameroon. We contrast the properties of these cohabitation networks with a total of 197 geographically diverse mutualistic plant-pollinator and antagonistic host-parasitoid networks. We analyze network properties, perform a principal components analysis (PCA), and compute the Mahalanobis distance of the termite networks to the cloud of mutualistic and antagonistic networks to assess the extent to which the termite networks overlap with the properties of the reference networks. Both termite networks overlap more closely with the mutualistic plant-pollinator communities than with the antagonistic host-parasitoid communities, although the overlap with mutualistic communities is stronger for the Brazilian community. The analysis raises the hypothesis that termite-termite cohabitation networks may be overall mutualistic.
More broadly, this work provides support for the argument that cryptic communities may be analyzed via comparison to well-characterized communities.
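The Mahalanobis-distance comparison used above can be sketched as follows (the function name is illustrative; in the study the vectors would be the computed network properties of each community):

```python
import numpy as np

def mahalanobis_to_cloud(x, reference):
    """Mahalanobis distance of a focal network's property vector x to the
    cloud of reference networks. reference: (n_refs, n_props) array."""
    mu = reference.mean(axis=0)
    cov = np.cov(reference, rowvar=False)     # cloud shape and spread
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

A focal network closer (in this metric) to the mutualistic cloud than to the antagonistic one would be read as resembling mutualistic communities.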
Learning to Lead: The Professional Development Needs of Assistant Principals
ERIC Educational Resources Information Center
Allen, James G.; Weaver, Rosa L.
2014-01-01
The purpose of this study was to investigate the professional development needs of assistant principals in the northern Kentucky region in preparation for the launch of the Northern Kentucky Assistant Principals' Network, a unique and innovative program to support their leadership development. Using the Educational Leadership Policy Standards:…
Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J
2003-09-01
As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated them with the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle between the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components describing the morphology. This set was entered into linear regression analyses to explain the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. Together they accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. Trabecular orientation was the most determinative factor (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.
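The two-stage analysis above (PCA to obtain independent components, then linear regression on the component scores) is essentially principal component regression. A hedged numpy sketch, not the authors' statistical software:

```python
import numpy as np

def pc_regression(X, y, n_pc):
    """Principal component regression: regress y on the first n_pc PC scores
    of X and report the coefficient estimates and explained variance (R^2)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt[:n_pc].T                           # mutually independent PC scores
    design = np.column_stack([np.ones(len(y)), T]) # intercept + scores
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    yhat = design @ beta
    r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    return beta, r2
```

Adding components one at a time (as in the study: orientation first, then amount of bone) shows how much each independent component raises the explained variance.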
2017-01-01
Introduction This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements who have undergone a transition between school environments in 8 European Union member states. Methods Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire consisting of 41 questions. Information was collected on: parental involvement in their child’s transition, child involvement in transition, child autonomy, school ethos, professionals’ involvement in transition, and integrated working, such as joint assessment, cooperation and coordination between agencies. Survey questions designed on a Likert scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Results Four principal components were identified, accounting for 48.86% of the variability in the data. Principal component 1 (PC1), ‘child inclusive ethos,’ contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as whether other factors may have influenced the transition.
All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43–7.18, p<0.0001). Discussion To support a child with complex additional support requirements through the transition from special school to mainstream, governments and professionals need to ensure that children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their family, which will provide a holistic approach and remove barriers to learning. PMID:28636649
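The PCA-then-logistic-regression pipeline described above can be sketched generically. The gradient-ascent fitting and the function name are assumptions for illustration, not the study's statistical software; odds ratios here are per one standard deviation of each PC score:

```python
import numpy as np

def logistic_on_pcs(X, y, n_pc=4, iters=500, lr=0.1):
    """Reduce survey items X to n_pc PC scores, then fit a logistic
    regression of the binary outcome y on those scores by gradient ascent."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt[:n_pc].T                       # PC scores
    T = (T - T.mean(axis=0)) / T.std(axis=0)   # standardize: ORs per 1 SD
    Z = np.column_stack([np.ones(len(y)), T])  # intercept + scores
    w = np.zeros(Z.shape[1])
    for _ in range(iters):                     # gradient ascent on log-likelihood
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w += lr * Z.T @ (y - p) / len(y)
    return w, np.exp(w[1:])                    # coefficients and odds ratios
```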
Ravenscroft, John; Wazny, Kerri; Davis, John M
2017-01-01
This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements who have undergone a transition between school environments in 8 European Union member states. Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire consisting of 41 questions. Information was collected on: parental involvement in their child's transition, child involvement in transition, child autonomy, school ethos, professionals' involvement in transition, and integrated working, such as joint assessment, cooperation and coordination between agencies. Survey questions designed on a Likert scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Four principal components were identified, accounting for 48.86% of the variability in the data. Principal component 1 (PC1), 'child inclusive ethos,' contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as whether other factors may have influenced the transition. All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43-7.18, p<0.0001).
To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their families which will provide a holistic approach and remove barriers for learning.
Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch
2014-03-01
Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.
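The eigenvalue > 1 retention rule (the Kaiser criterion) used above to select the sixteen significant components can be sketched as follows, given a correlation matrix however it was assembled (Pearson, polyserial, or polychoric):

```python
import numpy as np

def significant_components(corr):
    """Kaiser criterion: keep components of a correlation matrix with
    eigenvalue > 1, and report their total variance share."""
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1]            # decreasing eigenvalue
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1.0                         # "explains more than one variable"
    return vals[keep], vecs[:, keep], vals[keep].sum() / vals.sum()
```

The retained component scores would then enter the multivariate logistic regression models as predictors.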
Principal components of wrist circumduction from electromagnetic surgical tracking.
Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E
2017-02-01
An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
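The first PCANet stage, learning a convolution filter bank from image patches, can be sketched in a few lines. Patch size and filter count are illustrative defaults, and this sketch covers only the PCA filter-learning step, not the binary hashing or block histograms:

```python
import numpy as np

def pca_filters(images, k=7, n_filters=8):
    """One PCANet stage (sketch): collect all k x k patches from the images,
    remove each patch's mean, and take the leading eigenvectors of the patch
    covariance as convolution filters."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())          # per-patch mean removal
    P = np.array(patches)
    cov = P.T @ P / len(P)                            # patch covariance
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:n_filters]] # leading eigenvectors
    return top.T.reshape(n_filters, k, k)             # filters as k x k kernels
```

A second stage would repeat the same procedure on the filter responses of the first stage.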
Introduction to uses and interpretation of principal component analyses in forest biology.
J. G. Isebrands; Thomas R. Crow
1975-01-01
The application of principal component analysis for interpretation of multivariate data sets is reviewed with emphasis on (1) reduction of the number of variables, (2) ordination of variables, and (3) applications in conjunction with multiple regression.
Principal component analysis of phenolic acid spectra
USDA-ARS?s Scientific Manuscript database
Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern formation is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimensionality of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates of the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we present experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.
NASA Astrophysics Data System (ADS)
Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen
2017-09-01
Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied on the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, and the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 map of the tumor-bearing mice were in good agreement with the actual tumor location. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from its nearby fluorescence noise of liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
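The PC-map construction described above, which treats each pixel's time course as an observation and reshapes the component scores back into images, can be sketched as follows (array shapes assumed; the claim that PC2 isolates tumor kinetics is the paper's empirical finding, not a property of the sketch):

```python
import numpy as np

def pc_maps(frames, n_pc=2):
    """Pixel-wise PCA of a dynamic image sequence.
    frames: (n_t, rows, cols) stack of fluorescence images over time.
    Returns n_pc score maps (e.g. a PC1 map of liver-like pharmacokinetics
    and a PC2 map of tumor-like kinetics)."""
    n_t, r, c = frames.shape
    X = frames.reshape(n_t, -1).T             # pixels x time points
    Xc = X - X.mean(axis=0)                   # remove the mean time course
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_pc].T                 # per-pixel PC scores
    return scores.T.reshape(n_pc, r, c)       # PC maps as images
```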
NASA Astrophysics Data System (ADS)
Ueki, Kenta; Iwamori, Hikaru
2017-10-01
In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks sampled from 17 different volcanoes in the cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure of the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These components represented 59%, 20%, and 6%, respectively, of the variance in the entire compositional range, indicating that magma mixing accounted for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.
Baresic, Mario; Salatino, Silvia; Kupr, Barbara
2014-01-01
Skeletal muscle tissue shows an extraordinary cellular plasticity, but the underlying molecular mechanisms are still poorly understood. Here, we use a combination of experimental and computational approaches to unravel the complex transcriptional network of muscle cell plasticity centered on the peroxisome proliferator-activated receptor γ coactivator 1α (PGC-1α), a regulatory nexus in endurance training adaptation. By integrating data on genome-wide binding of PGC-1α and gene expression upon PGC-1α overexpression with comprehensive computational prediction of transcription factor binding sites (TFBSs), we uncover a hitherto-underestimated number of transcription factor partners involved in mediating PGC-1α action. In particular, principal component analysis of TFBSs at PGC-1α binding regions predicts that, besides the well-known role of the estrogen-related receptor α (ERRα), the activator protein 1 complex (AP-1) plays a major role in regulating the PGC-1α-controlled gene program of the hypoxia response. Our findings thus reveal the complex transcriptional network of muscle cell plasticity controlled by PGC-1α. PMID:24912679
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Robertson, D. E.; Haines, C. L.
2009-02-01
Irrigation is important to many agricultural businesses but also has implications for catchment health. A considerable body of knowledge exists on how irrigation management affects farm business and catchment health. However, this knowledge is fragmentary; is available in many forms such as qualitative and quantitative; is dispersed in scientific literature, technical reports, and the minds of individuals; and is of varying degrees of certainty. Bayesian networks allow the integration of dispersed knowledge into quantitative systems models. This study describes the development, validation, and application of a Bayesian network model of farm irrigation in the Shepparton Irrigation Region of northern Victoria, Australia. In this first paper we describe the process used to integrate a range of sources of knowledge to develop a model of farm irrigation. We describe the principal model components and summarize the reaction to the model and its development process by local stakeholders. Subsequent papers in this series describe model validation and the application of the model to assess the regional impact of historical and future management intervention.
Winter risk estimations through infrared cameras and principal component analysis
NASA Astrophysics Data System (ADS)
Marchetti, M.; Dumoulin, J.; Ibos, L.
2012-04-01
Thermal mapping has been implemented since the late eighties to measure road pavement temperature, along with other atmospheric parameters, in order to establish a winter risk index describing the susceptibility of a road network to ice occurrence. Measurements are made using a vehicle circulating on the road network in various road weather conditions. When the dew point temperature drops below the road surface temperature, a risk of ice occurs, and therefore a loss-of-grip risk for circulating vehicles. To limit the influence of the sun, and to better observe the thermal behavior of the pavement, thermal mapping is usually done before dawn during winter, when the energy accumulated by the road during daytime has mainly been dissipated (by radiation, conduction and convection) and before the road structure starts a new cycle. This analysis is mainly done when a new road network is built, when major pavement changes are made, or when modifications in the road surroundings take place that might affect the thermal heat balance. It helps road managers install sensors to monitor road status at specific locations identified as dangerous, or simply to install specific road signs. Measurements are nevertheless time-consuming. Indeed, a whole road network can hardly be analysed at once and has to be partitioned into stretches that can be covered within the available time window, to avoid temperature artefacts due to the rising sun. The LRPC Nancy has been using a thermal mapping vehicle now equipped with two infrared cameras. Road events were logged by the operator to help the analysis of the network's thermal response. A conventional radiometer with appropriate performance was used as a reference. The objective of this work was to compare results from the radiometer and the cameras. All the atmospheric parameters measured by the different sensors, such as air temperature and relative humidity, were used as input parameters for the infrared cameras when recording thermal images.
Road thermal heterogeneities were clearly identified, while usually missed by a conventional radiometer. In the case presented here, the two lanes of the road could be properly observed. Promising prospects emerged for increasing the measurement rate. Furthermore, to cope with the climatic constraints of winter measurements and to build a dynamic winter risk index, a multivariate data analysis approach was implemented. Principal component analysis was performed and enabled the construction of a dynamic thermal signature, with good agreement between statistical results and field measurements.
A study of fuzzy logic ensemble system performance on face recognition problem
NASA Astrophysics Data System (ADS)
Polyakova, A.; Lipinskiy, L.
2017-02-01
Some problems are difficult to solve using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which is able to solve the problem on its own, but whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since the approach emphasizes the diversity of its components. A new method of intelligent information technology ensemble design is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: an artificial neural network, a support vector machine, and decision trees. These algorithms and their ensemble have been tested on face recognition problems. Principal component analysis (PCA) is used for feature selection.
Lavigne, Katie M; Woodward, Todd S
2018-04-01
Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.
A P2P Botnet detection scheme based on decision tree and adaptive multilayer neural networks.
Alauthaman, Mohammad; Aslam, Nauman; Zhang, Li; Alasem, Rafe; Hossain, M A
2018-01-01
In recent years, botnets have been adopted as a popular method for carrying and spreading malicious code on the Internet. This malicious code paves the way for many fraudulent activities, including spam mail, distributed denial-of-service attacks, and click fraud. While many botnets are set up using a centralized communication architecture, peer-to-peer (P2P) botnets can adopt a decentralized architecture, using an overlay network to exchange command-and-control data, which makes their detection even more difficult. This work presents a method of P2P bot detection based on an adaptive multilayer feed-forward neural network working in cooperation with decision trees. A classification and regression tree is applied as a feature selection technique to select relevant features. With these features, a multilayer feed-forward neural network is trained using a resilient back-propagation learning algorithm. A comparison of feature sets selected by the decision tree, principal component analysis, and the ReliefF algorithm indicated that the neural network model with decision-tree feature selection has better identification accuracy along with lower false positive rates. The usefulness of the proposed approach is demonstrated by experiments on real network traffic datasets, in which an average detection rate of 99.08% with a false positive rate of 0.75% was observed.
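The two-stage pipeline outlined above (a classification and regression tree for feature selection, followed by a feed-forward network) can be sketched as below. This is a minimal illustration on synthetic data, not the paper's traffic features; scikit-learn has no resilient back-propagation, so its default Adam optimizer stands in:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for network-traffic features (not the paper's dataset)
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: CART as a feature selector -- keep the features the tree ranks
# highest by impurity-based importance
cart = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
keep = np.argsort(cart.feature_importances_)[::-1][:10]

# Step 2: train a feed-forward network on the selected features only
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X_tr[:, keep], y_tr)
print(round(mlp.score(X_te[:, keep], y_te), 3))
```

Pruning to the tree-selected features shrinks the network's input layer, which is one reason such hybrids can train faster and generalize better than a network fed every raw feature.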
Characterization of Early Cortical Neural Network ...
We examined the development of neural network activity using microelectrode array (MEA) recordings made in multi-well MEA plates (mwMEAs) over the first 12 days in vitro (DIV). In primary cortical cultures made from postnatal rats, action potential spiking activity was essentially absent on DIV 2 and developed rapidly between DIV 5 and 12. Spiking activity was primarily sporadic and unorganized at early DIV, and became progressively more organized with time in culture, with bursting parameters, synchrony and network bursting increasing between DIV 5 and 12. We selected 12 features to describe network activity and principal components analysis using these features demonstrated a general segregation of data by age at both the well and plate levels. Using a combination of random forest classifiers and Support Vector Machines, we demonstrated that 4 features (CV of within burst ISI, CV of IBI, network spike rate and burst rate) were sufficient to predict the age (either DIV 5, 7, 9 or 12) of each well recording with >65% accuracy. When restricting the classification problem to a binary decision, we found that classification improved dramatically, e.g. 95% accuracy for discriminating DIV 5 vs DIV 12 wells. Further, we present a novel resampling approach to determine the number of wells that might be needed for conducting comparisons of different treatments using mwMEA plates. Overall, these results demonstrate that network development on mwMEA plates is similar to
Motor network efficiency and disability in multiple sclerosis
Yaldizli, Özgür; Sethi, Varun; Muhlert, Nils; Liu, Zheng; Samson, Rebecca S.; Altmann, Daniel R.; Ron, Maria A.; Wheeler-Kingshott, Claudia A.M.; Miller, David H.; Chard, Declan T.
2015-01-01
Objective: To develop a composite MRI-based measure of motor network integrity, and determine if it explains disability better than conventional MRI measures in patients with multiple sclerosis (MS). Methods: Tract density imaging and constrained spherical deconvolution tractography were used to identify motor network connections in 22 controls. Fractional anisotropy (FA), magnetization transfer ratio (MTR), and normalized volume were computed in each tract in 71 people with relapse onset MS. Principal component analysis was used to distill the FA, MTR, and tract volume data into a single metric for each tract, which in turn was used to compute a composite measure of motor network efficiency (composite NE) using graph theory. Associations were investigated between the Expanded Disability Status Scale (EDSS) and the following MRI measures: composite motor NE, NE calculated using FA alone, FA averaged in the combined motor network tracts, brain T2 lesion volume, brain parenchymal fraction, normal-appearing white matter MTR, and cervical cord cross-sectional area. Results: In univariable analysis, composite motor NE explained 58% of the variation in EDSS in the whole MS group, more than twice that of the other MRI measures investigated. In a multivariable regression model, only composite NE and disease duration were independently associated with EDSS. Conclusions: A composite MRI measure of motor NE was able to predict disability substantially better than conventional non-network-based MRI measures. PMID:26320199
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability. PMID:28903223
ERIC Educational Resources Information Center
Kronenberger, William G.; Thompson, Robert J., Jr.; Morrow, Catherine
1997-01-01
A principal components analysis of the Family Environment Scale (FES) (R. Moos and B. Moos, 1994) was performed using 113 undergraduates. Research supported 3 broad components encompassing the 10 FES subscales. These results supported previous research and the generalization of the FES to college samples. (SLD)
Time series analysis of collective motions in proteins
NASA Astrophysics Data System (ADS)
Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.
2004-01-01
The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm(-1) range, which are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions for two successive sampling times, showing the mode's tendency to stay close to the minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least partially consistent with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but between energy barriers.
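The intra-minimum model described here (a damped oscillation driven by random shocks) is, in discrete time, an AR(2) process whose coefficients encode the frequency and damping factor. A minimal sketch, fitted to a synthetic series rather than actual MD principal components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "principal component" trajectory: a damped oscillation in a
# local minimum driven by random shocks, i.e. an AR(2) process with
# coefficients a1 = 2 r cos(omega), a2 = -r^2 (illustrative parameters)
omega, r, n = 0.6, 0.95, 5000          # mode frequency, damping, length
a1, a2 = 2 * r * np.cos(omega), -r**2
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()

# Least-squares fit of the AR(2) model to the series
X = np.column_stack([x[1:-1], x[:-2]])
b1, b2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]

# Recover damping factor and frequency from the characteristic roots
root = (b1 + np.sqrt(complex(b1**2 + 4 * b2))) / 2
print(round(abs(root), 3), round(abs(np.angle(root)), 3))  # ~ (0.95, 0.6)
```

The complex root's modulus is the per-step damping and its argument is the oscillation frequency in radians per sampling interval, which is how frequency and damping can be read off a fitted ARMA model as the abstract describes.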
Selection Shapes Transcriptional Logic and Regulatory Specialization in Genetic Networks
Fogelmark, Karl; Peterson, Carsten; Troein, Carl
2016-01-01
Background Living organisms need to regulate their gene expression in response to environmental signals and internal cues. This is a computational task where genes act as logic gates that connect to form transcriptional networks, which are shaped at all scales by evolution. Large-scale mutations such as gene duplications and deletions add and remove network components, whereas smaller mutations alter the connections between them. Selection determines what mutations are accepted, but its importance for shaping the resulting networks has been debated. Methodology To investigate the effects of selection in the shaping of transcriptional networks, we derive transcriptional logic from a combinatorially powerful yet tractable model of the binding between DNA and transcription factors. By evolving the resulting networks based on their ability to function as either a simple decision system or a circadian clock, we obtain information on the regulation and logic rules encoded in functional transcriptional networks. Comparisons are made between networks evolved for different functions, as well as with structurally equivalent but non-functional (neutrally evolved) networks, and predictions are validated against the transcriptional network of E. coli. Principal Findings We find that the logic rules governing gene expression depend on the function performed by the network. Unlike the decision systems, the circadian clocks show strong cooperative binding and negative regulation, which achieves tight temporal control of gene expression. Furthermore, we find that transcription factors act preferentially as either activators or repressors, both when binding multiple sites for a single target gene and globally in the transcriptional networks. This separation into positive and negative regulators requires gene duplications, which highlights the interplay between mutation and selection in shaping the transcriptional networks. PMID:26927540
ERIC Educational Resources Information Center
Severson, John R.
2013-01-01
For this qualitative study, I explored and described how superintendents and principals interpreted and experienced a sustained professional development process focusing on instruction and student learning, a form of Elmore's Superintendents in the Classroom (SITC) Network. Specifically, I examined how the addition of principals in the SITC…
EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES
An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PCs) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment, using a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
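The mechanics described in this tutorial (orthogonal transformation, variance-ordered components, dimension reduction) translate directly to code. The tutorial itself uses R; the following is a minimal NumPy re-sketch on a comparable simulated dataset in which two latent factors drive five observed variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated dataset: five observed variables driven by two latent factors,
# so two PCs should capture most of the variance (mirrors the tutorial setup)
n = 500
f = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, 5))
X = f @ loadings + 0.1 * rng.normal(size=(n, 5))

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
eigval = eigval[::-1]                      # sort eigenvalues descending
explained = eigval / eigval.sum()          # variance explained per PC
scores = Xc @ eigvec[:, ::-1]              # linearly uncorrelated PC scores

print(np.round(explained[:2].sum(), 3))    # first two PCs dominate
```

Regressing an outcome on the first two columns of `scores` instead of the original five correlated variables is exactly the multicollinearity remedy the abstract describes.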
Complexity of free energy landscapes of peptides revealed by nonlinear principal component analysis.
Nguyen, Phuong H
2006-12-01
Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with that of standard linear principal component analysis (PCA). In particular, the two methods were compared according to (1) their ability to reduce dimensionality and (2) how efficiently they represent peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA does. For example, to achieve a similar representation error for the original beta-hairpin data in a low-dimensional space, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as functions of the first two principal components obtained from PCA, we obtained relatively well-structured free energy landscapes. In contrast, the free energy landscapes from NLPCA are much more complicated, exhibiting many states that are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study also showed that many states in the PCA maps mix several peptide conformations, while those in the NLPCA maps are purer. This finding suggests that NLPCA should be used to capture the essential features of such systems. (c) 2006 Wiley-Liss, Inc.
Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica
2016-04-19
The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm(-1)). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R(2) > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.
NASA Astrophysics Data System (ADS)
Li, Jiangtong; Luo, Yongdao; Dai, Honglin
2018-01-01
Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) has become the predominant technique; in some special cases, however, PLSR produces considerable errors. To solve this problem, the traditional principal component regression (PCR) method is improved in this paper using the principle of PLSR. The experimental results show that, for some special experimental datasets, the improved PCR performs better than PLSR. PCR and PLSR are the focus of this paper. First, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, an optimized set of principal components, carrying most of the original data information, is extracted using the principle of PLSR. Second, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components are obtained. Finally, the same water spectral dataset is analyzed with both PLSR and the improved PCR and the two results are compared: they are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in UV spectral analysis of water, but for data near the detection limit, the improved PCR gives better results than PLSR.
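Plain principal component regression, the baseline the paper improves upon, can be sketched as a PCA step feeding a linear regression. The paper's improvement (selecting components by the PLSR principle) is not reproduced here, and the synthetic single-analyte "spectra" are illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "UV spectra": 200 samples x 50 wavelengths, absorbance driven
# by one analyte concentration plus noise (not real water-quality data)
n, p = 200, 50
conc = rng.uniform(0, 1, n)
band = np.exp(-0.5 * ((np.arange(p) - 25) / 4.0) ** 2)
spectra = np.outer(conc, band) + 0.02 * rng.normal(size=(n, p))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)

# Plain PCR: regress concentration on the leading principal components
pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(X_tr, y_tr)
print(round(pcr.score(X_te, y_te), 3))
```

Note that plain PCR picks components by spectral variance alone, ignoring the response; the paper's point is that reordering or reselecting components with the response in mind (as PLSR does) helps near the detection limit, where the analyte signal contributes little variance.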
Vargas-Bello-Pérez, Einar; Toro-Mujica, Paula; Enriquez-Hidalgo, Daniel; Fellenberg, María Angélica; Gómez-Cortés, Pilar
2017-06-01
We used a multivariate chemometric approach to differentiate or associate retail bovine milks with different fat contents and non-dairy beverages, using fatty acid profiles and statistical analysis. We collected samples of bovine milk (whole, semi-skim, and skim; n = 62) and non-dairy beverages (n = 27), and we analyzed them using gas-liquid chromatography. Principal component analysis of the fatty acid data yielded 3 significant principal components, which accounted for 72% of the total variance in the data set. Principal component 1 was related to saturated fatty acids (C4:0, C6:0, C8:0, C12:0, C14:0, C17:0, and C18:0) and monounsaturated fatty acids (C14:1 cis-9, C16:1 cis-9, C17:1 cis-9, and C18:1 trans-11); whole milk samples were clearly differentiated from the rest using this principal component. Principal component 2 differentiated semi-skim milk samples by n-3 fatty acid content (C20:3n-3, C20:5n-3, and C22:6n-3). Principal component 3 was related to C18:2 trans-9,trans-12 and C20:4n-6, and its lower scores were observed in skim milk and non-dairy beverages. A cluster analysis yielded 3 groups: group 1 consisted of only whole milk samples, group 2 was represented mainly by semi-skim milks, and group 3 included skim milk and non-dairy beverages. Overall, the present study showed that a multivariate chemometric approach is a useful tool for differentiating or associating retail bovine milks and non-dairy beverages using their fatty acid profile. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Use of multivariate statistics to identify unreliable data obtained using CASA.
Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón
2013-06-01
In order to identify unreliable data in a dataset of motility parameters from a pilot study acquired by a veterinarian with experience in boar semen handling, but without experience operating a computer-assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted and then incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml and analyzed in a CASA system. After standardization of the data, Chernoff faces were drawn for each measurement, and principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of the principal components for each individual measurement of the semen samples were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects of treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained evaluations of the first two samples in each treatment, each from a different boar. With the exception of one individual measurement, all other measurements in cluster 1 were the same as those observed in abnormal Chernoff faces. The unreliable data in cluster 1 are probably related to the operator's inexperience with the CASA system. These findings could be used to objectively evaluate the skill level of a CASA operator, which may be particularly useful in the quality control of semen analysis using CASA systems.
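The core of this workflow (standardize, project onto the first two principal components, and hierarchically cluster to isolate suspect measurements) can be sketched as follows. The data are synthetic, with an artificial "operator error" offset standing in for the unreliable CASA measurements:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic motility-like data: 48 reliable measurements plus 12 shifted
# "operator error" measurements (a stand-in for the CASA dataset)
good = rng.normal(0.0, 1.0, size=(48, 8))
bad = rng.normal(4.0, 1.0, size=(12, 8))
data = np.vstack([good, bad])

# Standardize, reduce to the first two PCs, then cluster hierarchically
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(data))
labels = AgglomerativeClustering(n_clusters=2).fit_predict(scores)

# The smaller cluster collects the suspect measurements
suspect = labels == np.argmin(np.bincount(labels))
print(int(suspect.sum()))
```

In practice the number of clusters would be chosen from the dendrogram rather than fixed at two, and the flagged cluster would be cross-checked against a graphical display such as the Chernoff faces used in the study.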
Liu, Xiang; Guo, Ling-Peng; Zhang, Fei-Yun; Ma, Jie; Mu, Shu-Yong; Zhao, Xin; Li, Lan-Hai
2015-02-01
Eight physical and chemical indicators related to water quality were monitored at nineteen sampling sites along the Kunes River at the end of the snowmelt season in spring. To investigate the spatial distribution characteristics of the water's physical and chemical properties, cluster analysis (CA), discriminant analysis (DA) and principal component analysis (PCA) were employed. The cluster analysis showed that the Kunes River could be divided into three reaches according to the similarities in physical and chemical properties among sampling sites, representing the upstream, midstream and downstream sections of the river, respectively. The discriminant analysis demonstrated that the reliability of this classification was high, and that DO, Cl- and BOD5 were the significant indexes leading to it. Three principal components were extracted in the principal component analysis, with an accumulative variance contribution of 86.90%. The principal component analysis also indicated that the physical and chemical properties of the water were mostly affected by EC, ORP, NO3(-)-N, NH4(+)-N, Cl- and BOD5. The sorted principal component scores at each sampling site showed that water quality was mainly influenced by DO upstream, by pH midstream, and by the remaining indicators downstream. The order of the comprehensive principal component scores revealed that water quality degraded from upstream to downstream, i.e., the upstream had the best water quality, followed by the midstream, while the water quality downstream was the worst. This result corresponded exactly to the three reaches classified using cluster analysis. Anthropogenic activity and the accumulation of pollutants along the river were probably the main reasons for this spatial difference.
Putilov, Arcady A; Donskaya, Olga G
2016-01-01
Age-associated changes in different bandwidths of the human electroencephalographic (EEG) spectrum are well documented, but their functional significance is poorly understood. This spectrum seems to represent summation of simultaneous influences of several sleep-wake regulatory processes. Scoring of its orthogonal (uncorrelated) principal components can help in separation of the brain signatures of these processes. In particular, the opposite age-associated changes were documented for scores on the two largest (1st and 2nd) principal components of the sleep EEG spectrum. A decrease of the first score and an increase of the second score can reflect, respectively, the weakening of the sleep drive and disinhibition of the opposing wake drive with age. In order to support the suggestion of age-associated disinhibition of the wake drive from the antagonistic influence of the sleep drive, we analyzed principal component scores of the resting EEG spectra obtained in sleep deprivation experiments with 81 healthy young adults aged between 19 and 26 and 40 healthy older adults aged between 45 and 66 years. At the second day of the sleep deprivation experiments, frontal scores on the 1st principal component of the EEG spectrum demonstrated an age-associated reduction of response to eyes closed relaxation. Scores on the 2nd principal component were either initially increased during wakefulness or less responsive to such sleep-provoking conditions (frontal and occipital scores, respectively). These results are in line with the suggestion of disinhibition of the wake drive with age. They provide an explanation of why older adults are less vulnerable to sleep deprivation than young adults.
Distributed Framework for Dynamic Telescope and Instrument Control
NASA Astrophysics Data System (ADS)
Ames, Troy J.; Case, Lynne
2002-12-01
Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC) that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate to components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instruments are working with the IRC development team to develop custom components for IRC to control their instruments. These instruments include: High resolution Airborne Wideband Camera (HAWC), a first light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); Submillimeter And Far Infrared Experiment (SAFIRE), a principal investigator instrument for SOFIA; and Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). 
Most recently, we have been working with the Submillimetre High Angular Resolution Camera IInd Generation (SHARCII) at the CSO to investigate using IRC capabilities with the SHARC instrument.
Distributed Framework for Dynamic Telescope and Instrument Control
NASA Technical Reports Server (NTRS)
Ames, Troy J.; Case, Lynne
2002-01-01
Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC) that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate to components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instruments are working with the IRC development team to develop custom components for IRC to control their instruments. These instruments include: High resolution Airborne Wideband Camera (HAWC), a first light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); Submillimeter And Far Infrared Experiment (SAFIRE), a Principal Investigator instrument for SOFIA; and Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). 
Most recently, we have been working with the Submillimetre High Angular Resolution Camera IInd Generation (SHARCII) at the CSO to investigate using IRC capabilities with the SHARC instrument.
NASA Astrophysics Data System (ADS)
Wojciechowski, Adam
2017-04-01
In order to assess ecodiversity, understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which recognize the environment in a holistic way. Principal component analysis may be considered one such method, as it allows the main factors determining landscape diversity to be distinguished on the one hand, and enables the discovery of regularities shaping the relationships between various elements of the environment under study on the other. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal components analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area, with high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25,000 and maps of forest habitats. Nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), length of watercourses (km), area of water reservoirs (m2), total forest area (ha), coniferous forest habitat area (ha), deciduous forest habitat area (ha), and alder habitat area (ha). The values of the individual factors were analysed for 358 grid cells of 1 km2 each. 
Based on the principal components analysis, four major factors affecting commune ecodiversity were distinguished: hypsometric component (PC1), deciduous forest habitats component (PC2), river valleys and alder habitats component (PC3), and lakes component (PC4). The distinguished factors characterise natural qualities of postglacial area and reflect well the role of the four most important groups of environment components in shaping ecodiversity of the area under study. The map of ecodiversity of Debnica Kaszubska commune was created on the basis of the first four principal component scores and then five classes of diversity were isolated: very low, low, average, high and very high. As a result of the assessment, five commune regions of very high ecodiversity were separated. These regions are also very attractive for tourists and valuable in terms of their rich nature which include protected areas such as Slupia Valley Landscape Park. The suggested method of ecodiversity assessment with the use of principal component analysis may constitute an alternative methodological proposition to other research methods used so far. Literature Jedicke E., 2001. Biodiversität, Geodiversität, Ökodiversität. Kriterien zur Analyse der Landschaftsstruktur - ein konzeptioneller Diskussionsbeitrag. Naturschutz und Landschaftsplanung, 33(2/3), 59-68.
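The a)–d) procedure above can be sketched numerically. The following is a minimal illustration, assuming standardized factor values; the grid of 358 cells and nine factors matches the abstract, but the data, the equal-weight combination of component scores, and the quintile class boundaries are all hypothetical stand-ins for the commune's real measurements.

```python
import numpy as np

# Hypothetical stand-in for the nine environmental factors measured over
# 358 one-square-kilometre grid cells (heights, watercourse length, etc.).
rng = np.random.default_rng(42)
X = rng.normal(size=(358, 9))

# Standardize, then diagonalize the correlation matrix to get the PCs.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]          # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Scores of the first four components, one value per grid cell.
scores = Z @ eigvecs[:, :4]

# One simple way to combine the four component maps and split the cells
# into five ecodiversity classes by quintile.
index = scores.sum(axis=1)
classes = np.digitize(index, np.quantile(index, [0.2, 0.4, 0.6, 0.8]))
print(classes.min(), classes.max())   # classes run 0 (very low) .. 4 (very high)
```

Each class value could then be mapped back onto the grid to produce the resultant ecodiversity map of step d).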
Comparing development of synaptic proteins in rat visual, somatosensory, and frontal cortex.
Pinto, Joshua G A; Jones, David G; Murphy, Kathryn M
2013-01-01
Two theories have influenced our understanding of cortical development: the integrated network theory, where synaptic development is coordinated across areas; and the cascade theory, where the cortex develops in a wave-like manner from sensory to non-sensory areas. These different views on cortical development raise challenges for current studies aimed at comparing detailed maturation of the connectome among cortical areas. We have taken a different approach to compare synaptic development in rat visual, somatosensory, and frontal cortex by measuring expression of pre-synaptic (synapsin and synaptophysin) proteins that regulate vesicle cycling, and post-synaptic density (PSD-95 and Gephyrin) proteins that anchor excitatory or inhibitory (E-I) receptors. We also compared development of the balances between the pairs of pre- or post-synaptic proteins, and the overall pre- to post-synaptic balance, to address functional maturation and emergence of the E-I balance. We found that development of the individual proteins and the post-synaptic index overlapped among the three cortical areas, but the pre-synaptic index matured later in frontal cortex. Finally, we applied a neuroinformatics approach using principal component analysis and found that three components captured development of the synaptic proteins. The first component accounted for 64% of the variance in protein expression and reflected total protein expression, which overlapped among the three cortical areas. The second component was gephyrin and the E-I balance, it emerged as sequential waves starting in somatosensory, then frontal, and finally visual cortex. The third component was the balance between pre- and post-synaptic proteins, and this followed a different developmental trajectory in somatosensory cortex. 
Together, these results give the most support to an integrated network of synaptic development, but also highlight more complex patterns of development that vary in timing and end point among the cortical areas.
A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...
Rosacea assessment by erythema index and principal component analysis segmentation maps
NASA Astrophysics Data System (ADS)
Kuzmina, Ilona; Rubins, Uldis; Saknite, Inga; Spigulis, Janis
2017-12-01
RGB images of rosacea were analyzed using segmentation maps of principal component analysis (PCA) and erythema index (EI). Areas of segmented clusters were compared to Clinician's Erythema Assessment (CEA) values given by two dermatologists. The results show that visible blood vessels are segmented more precisely on maps of the erythema index and the third principal component (PC3). In many cases, the distributions of clusters on EI and PC3 maps are very similar. Mean values of clusters' areas on these maps show a decrease in the area of blood vessels and erythema and an increase in lighter skin area after therapy for patients with diagnosis CEA = 2 on the first visit and CEA = 1 on the second visit. This study shows that EI and PC3 maps are more useful than the maps of the first (PC1) and second (PC2) principal components for indicating vascular structures and erythema on the skin of rosacea patients and for therapy monitoring.
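As a rough sketch of how a PC3 map can be derived from an RGB image: treat each pixel's (R, G, B) triple as an observation and project onto the smallest-variance eigenvector. The image data, size, and quartile-based segmentation below are hypothetical stand-ins, not the authors' pipeline.

```python
import numpy as np

# Hypothetical 64x64 RGB skin image with channel values in [0, 1].
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))

# PCA over pixels: each (R, G, B) triple is one observation.
pixels = img.reshape(-1, 3).astype(float)
pixels -= pixels.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
pc3 = pixels @ eigvecs[:, 0]        # eigh is ascending: column 0 = PC3 axis

# Reshape the PC3 scores back into a map and segment it into clusters by
# simple quartile thresholds (a stand-in for the paper's segmentation maps).
pc3_map = pc3.reshape(64, 64)
clusters = np.digitize(pc3_map, np.quantile(pc3_map, [0.25, 0.5, 0.75]))
print(pc3_map.shape, int(clusters.max()))
```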
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang
2018-04-01
A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore level the flight-line difference data rather than the original AEM data directly. Pseudo tie lines are selected distributively across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines show high correlations, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, we can obtain the levelling errors of the original AEM data through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. Its effectiveness is demonstrated by the levelling results of survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
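The low-order principal-component reconstruction step can be sketched as follows. This is a minimal illustration assuming the difference data are arranged as a lines-by-samples matrix, with a synthetic correlated "levelling error" in place of real AEM data.

```python
import numpy as np

def low_order_reconstruction(D, k):
    """Keep only the first k principal components of matrix D.

    D: pseudo tie-line difference data, one row per line, one column per
    along-line sample. A highly correlated levelling error is expected
    to live in the low-order (largest-variance) components.
    """
    mean = D.mean(axis=0)
    U, s, Vt = np.linalg.svd(D - mean, full_matrices=False)
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]

# Synthetic example: every line shares one smooth error shape, scaled by
# a per-line amplitude, plus incoherent noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
error = np.sin(2 * np.pi * t)
D = np.outer(rng.normal(1.0, 0.1, 30), error) + 0.05 * rng.normal(size=(30, 200))

estimate = low_order_reconstruction(D, k=1)   # recovered levelling error
print(estimate.shape)
```

The rank-1 reconstruction retains the shared error and discards most of the incoherent noise, which is the property the method exploits before inverse differencing.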
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
[Content of mineral elements of Gastrodia elata by principal components analysis].
Li, Jin-ling; Zhao, Zhi; Liu, Hong-chang; Luo, Chun-li; Huang, Ming-jin; Luo, Fu-lai; Wang, Hua-lei
2015-03-01
To study the content of mineral elements and the principal components in Gastrodia elata, mineral elements were determined by ICP and the data were analyzed by SPSS. K had the highest content, with an average of 15.31 g x kg(-1); N was second, with an average content of 8.99 g x kg(-1). The coefficients of variation of K and N were small, while that of Mn was the largest, at 51.39%. A highly significant positive correlation was found among N, P and K. Three principal components were selected by principal components analysis to evaluate the quality of G. elata. P, B, N, K, Cu, Mn, Fe and Mg were the characteristic elements of G. elata. The content of K and N was higher and relatively stable, while the variation of Mn content was the largest. From the perspective of mineral elements, the quality of G. elata from Guizhou and Yunnan was better.
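A common way to "select" principal components as in this study is the Kaiser eigenvalue-greater-than-one rule applied to the correlation matrix. The sketch below uses hypothetical element-content data; the paper does not state its selection rule, so the criterion here is an assumption.

```python
import numpy as np

# Hypothetical element-content table: rows = G. elata samples, columns =
# the eight characteristic elements (P, B, N, K, Cu, Mn, Fe, Mg).
rng = np.random.default_rng(7)
X = rng.normal(size=(40, 8))
X[:, 2] = X[:, 3] + 0.1 * rng.normal(size=40)   # mimic the N-K correlation

# PCA on the correlation matrix; retain components with eigenvalue > 1
# (Kaiser rule) and report the variance they explain.
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]           # descending
n_keep = int((eigvals > 1.0).sum())
explained = eigvals[:n_keep].sum() / eigvals.sum()
print(n_keep, round(float(explained), 2))
```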
Visualizing Hyolaryngeal Mechanics in Swallowing Using Dynamic MRI
Pearson, William G.; Zumwalt, Ann C.
2013-01-01
Introduction Coordinates of anatomical landmarks are captured using dynamic MRI to explore whether a proposed two-sling mechanism underlies hyolaryngeal elevation in pharyngeal swallowing. A principal components analysis (PCA) is applied to coordinates to determine the covariant function of the proposed mechanism. Methods Dynamic MRI (dMRI) data were acquired from eleven healthy subjects during a repeated swallows task. Coordinates mapping the proposed mechanism are collected from each dynamic (frame) of a dMRI swallowing series of a randomly selected subject in order to demonstrate shape changes in a single subject. Coordinates representing minimum and maximum hyolaryngeal elevation of all 11 subjects were also mapped to demonstrate shape changes of the system among all subjects. MorphoJ software was used to perform PCA and determine vectors of shape change (eigenvectors) for elements of the two-sling mechanism of hyolaryngeal elevation. Results For both single subject and group PCAs, hyolaryngeal elevation accounted for the first principal component of variation. For the single subject PCA, the first principal component accounted for 81.5% of the variance. For the between subjects PCA, the first principal component accounted for 58.5% of the variance. Eigenvectors and shape changes associated with this first principal component are reported. Discussion Eigenvectors indicate that two-muscle slings and associated skeletal elements function as components of a covariant mechanism to elevate the hyolaryngeal complex. Morphological analysis is useful to model shape changes in the two-sling mechanism of hyolaryngeal elevation. PMID:25090608
Panazzolo, Diogo G; Sicuro, Fernando L; Clapauch, Ruth; Maranhão, Priscila A; Bouskela, Eliete; Kraemer-Aguiar, Luiz G
2012-11-13
We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Data from 189 female subjects (34.0 ± 15.5 years, 30.5 ± 7.1 kg/m2), who were non-smokers, non-regular drug users, without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool because it highlights common variation between variables allowing inferences about possible biological meaning of associations between them, without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and peak after 1 min of arterial occlusion (RBCV(max)), and the time taken to reach RBCV(max) (TRBCV(max)). A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements had a similar variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCV(max) varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCV(max), but in the opposite way. 
Principal component 3 was associated only with microvascular variables in the same way (functional capillary density, RBCV and RBCV(max)). Fasting plasma glucose appeared to be related to principal component 4 and did not show any association with microvascular reactivity. In non-diabetic female subjects, a multivariate scenario of associations between classic clinical variables strictly related to obesity and metabolic syndrome suggests a significant relationship between these diseases and microvascular reactivity.
The factorial reliability of the Middlesex Hospital Questionnaire in normal subjects.
Bagley, C
1980-03-01
The internal reliability of the Middlesex Hospital Questionnaire and its component subscales has been checked by means of principal components analyses of data on 256 normal subjects. The subscales (with the possible exception of Hysteria) were found to contribute to the general underlying factor of psychoneurosis. In general, the principal components analysis points to the reliability of the subscales, despite some item overlap.
ERIC Educational Resources Information Center
McCormick, Ernest J.; And Others
The study deals with the job component method of establishing compensation rates. The basic job analysis questionnaire used in the study was the Position Analysis Questionnaire (PAQ) (Form B). On the basis of a principal components analysis of PAQ data for a large sample (2,688) of jobs, a number of principal components (job dimensions) were…
Chen, Gengsheng; de las Fuentes, Lisa; Gu, Chi C; He, Jiang; Gu, Dongfeng; Kelly, Tanika; Hixson, James; Jacquish, Cashell; Rao, D C; Rice, Treva K
2015-06-20
Hypertension is a complex trait that often co-occurs with other conditions such as obesity and is affected by genetic and environmental factors. Aggregate indices such as principal components among these variables and their responses to environmental interventions may represent novel information that is potentially useful for genetic studies. In this study of families participating in the Genetic Epidemiology Network of Salt Sensitivity (GenSalt) Study, blood pressure (BP) responses to dietary sodium interventions are explored. Independent component analysis (ICA) was applied to 20 variables indexing obesity and BP measured at baseline and during low sodium, high sodium and high sodium plus potassium dietary intervention periods. A "heat map" protocol that classifies subjects based on risk for hypertension is used to interpret the extracted components. ICA and heat map suggest four components best describe the data: (1) systolic hypertension, (2) general hypertension, (3) response to sodium intervention and (4) obesity. The largest heritabilities are for the systolic (64%) and general hypertension (56%) components. There is a pattern of higher heritability for the component response to intervention (40-42%) as compared to those for the traditional intervention responses computed as delta scores (24%-40%). In summary, the present study provides intermediate phenotypes that are heritable. Using these derived components may prove useful in gene discovery applications.
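Independent component analysis differs from PCA in seeking statistically independent rather than merely uncorrelated components. Below is a minimal deflation-based FastICA in plain NumPy, a generic ICA illustration with synthetic sources, not the GenSalt analysis pipeline; the function name, sources, and mixing matrix are all invented for the demo.

```python
import numpy as np

def fastica(X, n_components, n_iter=200, seed=0):
    """Deflation-based FastICA with a tanh nonlinearity (generic sketch)."""
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (E @ np.diag(1.0 / np.sqrt(d)) @ E.T)      # whiten the data
    rng = np.random.default_rng(seed)
    W = np.zeros((n_components, Z.shape[1]))
    for i in range(n_components):
        w = rng.normal(size=Z.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            g = np.tanh(Z @ w)                          # fixed-point update
            w = (Z.T @ g) / len(Z) - (1.0 - g**2).mean() * w
            w -= W[:i].T @ (W[:i] @ w)                  # deflate vs. found rows
            w /= np.linalg.norm(w)
        W[i] = w
    return Z @ W.T                                      # estimated components

# Synthetic demo: unmix a sine and a square wave.
t = np.linspace(0, 8 * np.pi, 2000)
S = np.c_[np.sin(t), np.sign(np.sin(1.7 * t))]
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]]).T            # mixed observations
recovered = fastica(X, n_components=2)
print(recovered.shape)
```

Sign and ordering of the recovered components are arbitrary, which is why studies like this one interpret extracted components post hoc (here via the "heat map" protocol).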
ERIC Educational Resources Information Center
Faginski-Stark, Erica; Casavant, Christopher; Collins, William; McCandless, Jason; Tencza, Marilyn
2012-01-01
Recent federal and state mandates have tasked school systems to move beyond principal evaluation as a bureaucratic function and to re-imagine it as a critical component to improve principal performance and compel school renewal. This qualitative study investigated the district leaders' and principals' perceptions of the performance evaluation…
Cholinergic and perfusion brain networks in Parkinson disease dementia.
Colloby, Sean J; McKeith, Ian G; Burn, David J; Wyper, David J; O'Brien, John T; Taylor, John-Paul
2016-07-12
To investigate muscarinic M1/M4 cholinergic networks in Parkinson disease dementia (PDD) and their association with changes in Mini-Mental State Examination (MMSE) after 12 weeks of treatment with donepezil. Forty-nine participants (25 PDD and 24 elderly controls) underwent (123)I-QNB and (99m)Tc-exametazime SPECT scanning. We implemented voxel principal components (PC) analysis, producing a series of PC images of patterns of interrelated voxels across individuals. Linear regression analyses derived specific M1/M4 and perfusion spatial covariance patterns (SCPs). We found an M1/M4 SCP of relative decreased binding in basal forebrain, temporal, striatum, insula, and anterior cingulate (F1,47 = 31.9, p < 0.001) in cholinesterase inhibitor-naive patients with PDD, implicating limbic-paralimbic and salience cholinergic networks. The corresponding regional cerebral blood flow SCP showed relative decreased uptake in temporoparietal and prefrontal areas (F1,47 = 177.5, p < 0.001) and nodes of the frontoparietal and default mode networks (DMN). The M1/M4 pattern that correlated with an improvement in MMSE (r = 0.58, p = 0.005) revealed relatively preserved/increased pre/medial/orbitofrontal, parietal, and posterior cingulate areas coinciding with the DMN and frontoparietal networks. Dysfunctional limbic-paralimbic and salience cholinergic networks were associated with PDD. Established cholinergic maintenance of the DMN and frontoparietal networks may be prerequisite for cognitive remediation following cholinergic treatment in this condition. © 2016 American Academy of Neurology.
Bridging the gap between motor imagery and motor execution with a brain-robot interface.
Bauer, Robert; Fels, Meike; Vukelić, Mathias; Ziemann, Ulf; Gharabaghi, Alireza
2015-03-01
According to electrophysiological studies motor imagery and motor execution are associated with perturbations of brain oscillations over spatially similar cortical areas. By contrast, neuroimaging and lesion studies suggest that at least partially distinct cortical networks are involved in motor imagery and execution. We sought to further disentangle this relationship by studying the role of brain-robot interfaces in the context of motor imagery and motor execution networks. Twenty right-handed subjects performed several behavioral tasks as indicators for imagery and execution of movements of the left hand, i.e. kinesthetic imagery, visual imagery, visuomotor integration and tonic contraction. In addition, subjects performed motor imagery supported by haptic/proprioceptive feedback from a brain-robot-interface. Principal component analysis was applied to assess the relationship of these indicators. The respective cortical resting state networks in the α-range were investigated by electroencephalography using the phase slope index. We detected two distinct abilities and cortical networks underlying motor control: a motor imagery network connecting the left parietal and motor areas with the right prefrontal cortex and a motor execution network characterized by transmission from the left to right motor areas. We found that a brain-robot-interface might offer a way to bridge the gap between these networks, thereby opening a backdoor into the motor execution system. This knowledge might promote patient screening and may lead to novel treatment strategies, e.g. for the rehabilitation of hemiparesis after stroke. Copyright © 2014 Elsevier Inc. All rights reserved.
2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.
Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen
2017-09-19
A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides with respect to their functions or potential to become useful drugs. One level deals with the physicochemical properties of drug molecules, while the other deals with their structural fragments. The predictor has self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for providing timely clues during the process of drug development.
NASA Astrophysics Data System (ADS)
Morley, M. G.; Mihaly, S. F.; Dewey, R. K.; Jeffries, M. A.
2015-12-01
Ocean Networks Canada (ONC) operates the NEPTUNE and VENUS cabled ocean observatories to collect data on physical, chemical, biological, and geological ocean conditions over multi-year time periods. Researchers can download real-time and historical data from a large variety of instruments to study complex earth and ocean processes from their home laboratories. Ensuring that the users are receiving the most accurate data is a high priority at ONC, requiring quality assurance and quality control (QAQC) procedures to be developed for all data types. While some data types have relatively straightforward QAQC tests, such as scalar data range limits that are based on expected observed values or measurement limits of the instrument, for other data types the QAQC tests are more comprehensive. Long time series of ocean currents from Acoustic Doppler Current Profilers (ADCP), stitched together from multiple deployments over many years is one such data type where systematic data biases are more difficult to identify and correct. Data specialists at ONC are working to quantify systematic compass heading uncertainty in long-term ADCP records at each of the major study sites using the internal compass, remotely operated vehicle bearings, and more analytical tools such as principal component analysis (PCA) to estimate the optimal instrument alignments. In addition to using PCA, some work has been done to estimate the main components of the current at each site using tidal harmonic analysis. This paper describes the key challenges and presents preliminary PCA and tidal analysis approaches used by ONC to improve long-term observatory current measurements.
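Estimating the principal current axis from ADCP velocity components is a small eigen-problem: the leading eigenvector of the (u, v) covariance gives the major axis, whose bearing can be checked against the compass heading. The sketch below recovers a known synthetic bearing; all data are simulated, and this is only illustrative of the PCA alignment idea, not ONC's actual procedure.

```python
import numpy as np

# Hypothetical tidal current record: major axis 30 degrees east of north.
rng = np.random.default_rng(3)
speed = rng.normal(0.0, 0.5, 5000)
theta = np.deg2rad(30.0)
u = speed * np.sin(theta) + 0.05 * rng.normal(size=5000)  # east component
v = speed * np.cos(theta) + 0.05 * rng.normal(size=5000)  # north component

# Leading eigenvector of the velocity covariance = principal current axis.
cov = np.cov(np.vstack([u, v]))
eigvals, eigvecs = np.linalg.eigh(cov)
major = eigvecs[:, np.argmax(eigvals)]                    # (east, north)
bearing = np.degrees(np.arctan2(major[0], major[1])) % 180.0
print(round(bearing, 1))   # close to the simulated 30-degree axis
```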
Cortical subnetwork dynamics during human language tasks.
Collard, Maxwell J; Fifer, Matthew S; Benz, Heather L; McMullen, David P; Wang, Yujing; Milsap, Griffin W; Korzeniewska, Anna; Crone, Nathan E
2016-07-15
Language tasks require the coordinated activation of multiple subnetworks-groups of related cortical interactions involved in specific components of task processing. Although electrocorticography (ECoG) has sufficient temporal and spatial resolution to capture the dynamics of event-related interactions between cortical sites, it is difficult to decompose these complex spatiotemporal patterns into functionally discrete subnetworks without explicit knowledge of each subnetwork's timing. We hypothesized that subnetworks corresponding to distinct components of task-related processing could be identified as groups of interactions with co-varying strengths. In this study, five subjects implanted with ECoG grids over language areas performed word repetition and picture naming. We estimated the interaction strength between each pair of electrodes during each task using a time-varying dynamic Bayesian network (tvDBN) model constructed from the power of high gamma (70-110Hz) activity, a surrogate for population firing rates. We then reduced the dimensionality of this model using principal component analysis (PCA) to identify groups of interactions with co-varying strengths, which we term functional network components (FNCs). This data-driven technique estimates both the weight of each interaction's contribution to a particular subnetwork, and the temporal profile of each subnetwork's activation during the task. We found FNCs with temporal and anatomical features consistent with articulatory preparation in both tasks, and with auditory and visual processing in the word repetition and picture naming tasks, respectively. These FNCs were highly consistent between subjects with similar electrode placement, and were robust enough to be characterized in single trials. Furthermore, the interaction patterns uncovered by FNC analysis correlated well with recent literature suggesting important functional-anatomical distinctions between processing external and self-produced speech. 
Our results demonstrate that subnetwork decomposition of event-related cortical interactions is a powerful paradigm for interpreting the rich dynamics of large-scale, distributed cortical networks during human cognitive tasks. Copyright © 2016 Elsevier Inc. All rights reserved.
High-Need Schools in Australia: The Leadership of Two Principals
ERIC Educational Resources Information Center
Gurr, David; Drysdale, Lawrie; Clarke, Simon; Wildy, Helen
2014-01-01
In this article, we report on our initial work with the International School Leadership Development Network. In doing so, we present two cases of principals leading high-need schools, and conclude with some key observations in relation to what is distinctive about leading these schools. The first case features a principal leading a suburban school…
Ahead of the Digital Learning Curve
ERIC Educational Resources Information Center
Cook, Glenn
2013-01-01
Dwight Carter admits he was a novice at social networking when he was introduced to the 140-character world of Twitter in 2010. Now, three years later, the principal of Ohio's Gahanna Lincoln High School and one of three winners of the 2013 National Association of Secondary School Principals (NASSP) Digital Principal Award does not know what he…
Effect of noise in principal component analysis with an application to ozone pollution
NASA Astrophysics Data System (ADS)
Tsakiri, Katerina G.
This thesis analyzes the effect of independent noise in principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can be essentially different from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results about the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field for the decomposition of the components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor of ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global term component, which describes the long-term trend and the seasonal variations, and a synoptic scale component, which describes the short-term variations. Using Canonical Correlation Analysis, we show that solar radiation is the only main factor, among the atmospheric variables considered here, for the explanation and prediction of the global and synoptic scale components of ozone. The global term components are modeled by a linear regression model, while the synoptic scale components are modeled by a vector autoregressive model and the Kalman filter. 
The coefficient of determination, R2, for the prediction of the synoptic scale ozone component was found to be the highest when we consider the synoptic scale component of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
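The global/synoptic split described above can be mimicked with a simple moving-average decomposition: a smoother wide enough to retain trend and seasonality defines the global term, and the residual plays the role of the synoptic component. The series, window width, and noise level below are invented; the thesis itself uses more elaborate models.

```python
import numpy as np

# Hypothetical daily "ozone" series: trend + seasonal cycle + short-term
# synoptic fluctuations.
rng = np.random.default_rng(5)
days = np.arange(3 * 365)
series = (0.001 * days                                # long-term trend
          + 10 * np.sin(2 * np.pi * days / 365.25)    # seasonal cycle
          + rng.normal(0, 2, days.size))              # synoptic variation

window = 31                                           # one-month smoother
global_term = np.convolve(series, np.ones(window) / window, mode="same")
synoptic = series - global_term
core = synoptic[window:-window]                       # drop edge effects
print(round(float(core.std()), 2))   # roughly the injected noise level
```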
Calculating a checksum with inactive networking components in a computing system
Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T
2014-12-16
Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
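As an illustration of what a "checksum calculation engine" computes per block, here is the standard 16-bit Internet checksum (RFC 1071). The patent does not specify this particular checksum, so take it only as a representative example of a per-block calculation that could be offloaded to an inactive component.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit ones'-complement checksum over a block of data."""
    if len(data) % 2:
        data += b"\x00"                               # pad odd-length blocks
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]         # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)      # fold carries back in
    return (~total) & 0xFFFF                          # ones' complement

block = b"hello world"
print(hex(internet_checksum(block)))
```

In the scheme described above, the inactive component would run a calculation like this over the block described by the metadata, then return the result to the checksum distribution manager.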
Calculating a checksum with inactive networking components in a computing system
Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T
2015-01-27
Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.
Siettos, Constantinos; Starke, Jens
2016-09-01
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection between the microscopic scale (single neuron activity) and macroscopic behavior (emergent behavior of the collective dynamics), and vice versa, is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), Integrate and Fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of the causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
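Of the models surveyed above, the FitzHugh-Nagumo neuron is among the simplest to experiment with. A minimal forward-Euler integration is sketched below; the parameter values are common textbook choices, not taken from the review.

```python
def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    """Forward-Euler integration of the FitzHugh-Nagumo model.

    Parameter values are common textbook choices (illustrative only).
    """
    v, w = -1.0, 1.0              # fast (membrane-like) and slow (recovery) variables
    trace = []
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
```

With a constant drive of I=0.5 the fixed point sits on the unstable middle branch of the cubic nullcline, so the trajectory settles onto a relaxation-oscillation limit cycle and the voltage-like variable spikes repeatedly.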
Electronic nose for the identification of pig feeding and ripening time in Iberian hams.
Santos, J P; García, M; Aleixandre, M; Horrillo, M C; Gutiérrez, J; Sayago, I; Fernández, M J; Arés, L
2004-03-01
An electronic nose system to control the processing of dry-cured Iberian ham is presented. The sensors involved are tin oxide semiconductor thin films, prepared by RF sputtering. Some of the sensors were doped with metal catalysts such as Pt and Pd in order to improve their selectivity. The multisensor, with 16 semiconductor sensors, gave different responses to two types of dry-cured Iberian ham which differ in feeding and curing time. The data were analysed using principal component analysis (PCA) and backpropagation and probabilistic neural networks. The analysis shows that different types of Iberian ham can be discriminated and identified successfully.
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor and ozone atmospheric profile retrieval algorithm is developed for the high spectral resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. A neural network using the first-guess information is then developed to retrieve temperature, water vapor and ozone atmospheric profiles simultaneously. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres, including rare events.
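The PCA compression/de-noising step can be illustrated on synthetic data. In the sketch below the "spectra" are artificial stand-ins for sounder channels (not real radiances): projecting onto the top principal components and mapping back keeps the structured signal while discarding most of the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": each observation mixes three smooth basis shapes plus
# noise, an artificial stand-in for sounder channels (not real radiances).
x = np.linspace(0.0, 1.0, 200)
basis = np.stack([np.sin(2 * np.pi * x), np.cos(2 * np.pi * x), x])
spectra = rng.normal(size=(500, 3)) @ basis + 0.05 * rng.normal(size=(500, 200))

def pca_reconstruct(data, k):
    """Project onto the top-k principal components and map back."""
    mean = data.mean(axis=0)
    centered = data - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = vt[:k]                  # top-k principal directions
    scores = centered @ pcs.T     # compressed representation (k numbers/obs)
    return mean + scores @ pcs    # de-noised reconstruction

err1 = np.linalg.norm(spectra - pca_reconstruct(spectra, 1))
err3 = np.linalg.norm(spectra - pca_reconstruct(spectra, 3))
```

Retaining more components can only reduce the reconstruction error; with three underlying basis shapes, three components already capture essentially all of the signal.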
Desdouits, Nathan; Nilges, Michael; Blondel, Arnaud
2015-02-01
Protein conformation has been recognized as the key feature determining biological function, as it determines the position of the essential groups specifically interacting with substrates. Hence, the shape of the cavities or grooves at the protein surface appears to drive those functions. However, only a few studies describe the geometrical evolution of protein cavities during molecular dynamics simulations (MD), usually with a crude representation. To unveil the dynamics of cavity geometry evolution, we developed an approach combining cavity detection and Principal Component Analysis (PCA). This approach was applied to four systems subjected to MD (lysozyme, sperm whale myoglobin, Dengue envelope protein and EF-CaM complex). PCA on cavities allows us to perform efficient analysis and classification of the geometry diversity explored by a cavity. Additionally, it reveals correlations between the evolutions of the cavities and structures, and can even suggest how to modify the protein conformation to induce a given cavity geometry. It also helps to perform fast and consensual clustering of conformations according to cavity geometry. Finally, using this approach, we show that both carbon monoxide (CO) location and transfer among the different xenon sites of myoglobin are correlated with few cavity evolution modes of high amplitude. This correlation illustrates the link between ligand diffusion and the dynamic network of internal cavities. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
TensorCalculator: exploring the evolution of mechanical stress in the CCMV capsid
NASA Astrophysics Data System (ADS)
Kononova, Olga; Maksudov, Farkhad; Marx, Kenneth A.; Barsegov, Valeri
2018-01-01
A new computational methodology for the accurate numerical calculation of the Cauchy stress tensor, stress invariants, principal stress components, von Mises and Tresca tensors is developed. The methodology is based on the atomic stress approach which permits the calculation of stress tensors, widely used in continuum mechanics modeling of materials properties, using the output from the MD simulations of discrete atomic and C_α -based coarse-grained structural models of biological particles. The methodology mapped into the software package TensorCalculator was successfully applied to the empty cowpea chlorotic mottle virus (CCMV) shell to explore the evolution of mechanical stress in this mechanically-tested specific example of a soft virus capsid. We found an inhomogeneous stress distribution in various portions of the CCMV structure and stress transfer from one portion of the virus structure to another, which also points to the importance of entropic effects, often ignored in finite element analysis and elastic network modeling. We formulate a criterion for elastic deformation using the first principal stress components. Furthermore, we show that von Mises and Tresca stress tensors can be used to predict the onset of a viral capsid’s mechanical failure, which leads to total structural collapse. TensorCalculator can be used to study stress evolution and dynamics of defects in viral capsids and other large-size protein assemblies.
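The stress measures named above follow directly from the eigenvalues of a symmetric Cauchy stress tensor. A small generic sketch (not the TensorCalculator code):

```python
import numpy as np

def stress_measures(sigma):
    """Principal components, von Mises and Tresca measures of a symmetric
    Cauchy stress tensor (a generic sketch, not the TensorCalculator code)."""
    sigma = np.asarray(sigma, dtype=float)
    s1, s2, s3 = np.sort(np.linalg.eigvalsh(sigma))[::-1]  # descending principals
    von_mises = np.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))
    tresca = s1 - s3              # difference of extreme principal stresses
    return (s1, s2, s3), von_mises, tresca

# Example: a stress state given directly in its principal frame.
principal, vm, tr = stress_measures(np.diag([3.0, 1.0, -2.0]))
```

For this example the principal stresses are (3, 1, -2), the von Mises stress is sqrt(19), and the Tresca measure is 5 (twice the maximum shear stress).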
NASA Astrophysics Data System (ADS)
Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu
2018-02-01
The molecular level changes associated with oncogenesis precede the morphological changes in cells and tissues; hence, molecular level diagnosis would promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of the various biomolecules present in cells and tissues under different pathological conditions. The aim of this work is to develop a non-linear multi-class statistical methodology for the discrimination of normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by an Artificial Neural Network (PC-ANN), with an overall accuracy of 99%. Further, to gain insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of the major biochemicals was fitted to the measured Raman spectra of the tissues using a non-negative least squares technique. This analysis confirms the changes in major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we used Principal Component Analysis followed by Linear Discriminant Analysis (PC-LDA) to discriminate well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma with an accuracy of 94.0%. The results demonstrate that Raman spectroscopy has the potential to complement the established technique of histopathology.
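The non-negative least squares fit mentioned above can be sketched with a simple projected-gradient solver. The "reference spectra" here are random placeholders, and projected gradient is a generic method for this problem, not necessarily the solver the authors used.

```python
import numpy as np

def nnls_pg(A, b, iters=5000):
    """Non-negative least squares: min ||Ax - b||^2 subject to x >= 0,
    solved by projected gradient descent (a generic sketch)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L step for the quadratic objective
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(0.0, x - step * grad)  # gradient step, project onto x >= 0
    return x

# Hypothetical "reference spectra" (columns) and a noise-free mixed measurement.
rng = np.random.default_rng(2)
A = np.abs(rng.normal(size=(50, 3)))
true_coeffs = np.array([0.5, 0.0, 2.0])
b = A @ true_coeffs
coeffs = nnls_pg(A, b)
```

The non-negativity constraint is what makes the recovered coefficients interpretable as biochemical abundances: a component that is absent is driven to exactly zero rather than to a small negative value.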
Fractal analysis of scatter imaging signatures to distinguish breast pathologies
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.
2013-02-01
Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in the fact that it is a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has been probed in the understanding of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical model) and principal components (as a statistical model) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed by applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples yields separated ranges of fractal dimension that could help a classifier that combines the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct to surgeons.
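The box-counting technique estimates fractal dimension from how the number of occupied boxes scales with box size. A minimal sketch, sanity-checked on a trivially non-fractal test image (a filled square, whose dimension should come out near 2):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary image."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        # Partition the image into s-by-s boxes and count occupied boxes.
        m = mask[:n - n % s, :n - n % s]
        blocks = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # The dimension is the slope of log(count) against log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a non-fractal shape: a filled square has dimension 2.
dim = box_counting_dimension(np.ones((64, 64), dtype=bool))
```

The same routine applied to a thresholded scattering-power or principal-component map would return the fractal dimension used as a feature in the paper.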
NASA Astrophysics Data System (ADS)
Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.
2017-06-01
This work examined the distribution of combed wool fabrics intended for outerwear in terms of the values of durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method for multivariate/multi-dimensional data analysis, and aims to reduce, in a controlled way, the number of variables (columns) of the data matrix, ideally to two or three. Therefore, based on the information about each group/assortment of fabrics, it is desired that, instead of nine inter-correlated variables, only two or three new variables, called components, be retained. The goal of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.
Information extraction from multivariate images
NASA Technical Reports Server (NTRS)
Park, S. K.; Kegley, K. A.; Schiess, J. R.
1986-01-01
An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates a multivariate pixel value with each pixel location, scaled and quantized into a gray-level vector; the covariance between components measures the extent to which two component images are correlated. The PCT of a multiimage decorrelates the components, which reduces the multiimage's dimensionality and reveals intercomponent dependencies when some off-diagonal covariance elements are not small; for display purposes, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
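The decorrelation property of the PCT can be checked numerically. The three correlated bands below are synthetic stand-ins for a multiimage: after the transform, the covariance matrix of the component images is diagonal to numerical precision.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three correlated "bands" of a 32x32 multiimage (synthetic, illustrative).
base = rng.normal(size=(32, 32))
bands = np.stack([base,
                  0.8 * base + 0.2 * rng.normal(size=(32, 32)),
                  -0.5 * base + 0.3 * rng.normal(size=(32, 32))])

pixels = bands.reshape(3, -1)                    # one gray-level vector per pixel
centered = pixels - pixels.mean(axis=1, keepdims=True)
cov = centered @ centered.T / (centered.shape[1] - 1)
_, eigvecs = np.linalg.eigh(cov)
pct = eigvecs.T @ centered                       # principal component images

# After the PCT the component images are uncorrelated: their covariance
# matrix is (numerically) diagonal.
cov_pct = pct @ pct.T / (pct.shape[1] - 1)
off_diag = cov_pct - np.diag(np.diag(cov_pct))
```

The diagonal entries of `cov_pct` are the eigenvalues of the original band covariance; when the trailing ones are tiny, the multiimage's intrinsic component dimensionality is smaller than its nominal band count.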
Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid
2016-10-01
In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.
Principal Component Clustering Approach to Teaching Quality Discriminant Analysis
ERIC Educational Resources Information Center
Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan
2016-01-01
Teaching quality is the lifeline of higher education, and many universities have achieved effective results in evaluating it. In this paper, we establish a Students' Evaluation of Teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…
Analysis of the principal component algorithm in phase-shifting interferometry.
Vargas, J; Quiroga, J Antonio; Belenguer, T
2011-06-15
We recently presented a new asynchronous demodulation method for phase-shifting interferometry. The method is based on the principal component analysis (PCA) technique. In that work, the PCA method was derived heuristically. Here, we present an in-depth analysis of the PCA demodulation method.
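The PCA demodulation idea can be sketched as follows: after removing the temporal mean, the two dominant principal components of a stack of phase-shifted fringe patterns approximate the cosine and sine of the phase map, so their arctangent recovers the phase up to sign and a constant offset. The fringe model and phase map below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic fringe patterns I_n = a + b*cos(phi + delta_n) with unknown,
# unevenly spaced phase shifts delta_n (the asynchronous setting).
h = w = 64
yy, xx = np.mgrid[0:h, 0:w] / h
phi = 2 * np.pi * (3 * xx + 2 * yy)              # true phase map (tilted carrier)
deltas = rng.uniform(0, 2 * np.pi, 8)
frames = np.stack([1.0 + 0.8 * np.cos(phi + d) for d in deltas])

# Remove the temporal mean; the two dominant principal components then
# approximate cos(phi) and sin(phi) maps (up to sign and a common offset).
data = frames.reshape(len(deltas), -1)
data = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(data, full_matrices=False)
phase = np.arctan2(vt[1], vt[0]).reshape(h, w)   # recovered phase, wrapped
```

No knowledge of the individual phase shifts is needed, which is what makes the method asynchronous; the residual sign and piston ambiguities are inherent to the approach.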
Incremental principal component pursuit for video background modeling
Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt
2017-03-14
An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
Strain rate orientations near the Coso Geothermal Field
NASA Astrophysics Data System (ADS)
Ogasa, N. T.; Kaven, J. O.; Barbour, A. J.; von Huene, R.
2016-12-01
Many geothermal reservoirs derive their sustained capacity for heat exchange in large part from continuous deformation of preexisting faults and fractures, which allows permeability to be maintained. Similarly, enhanced geothermal systems rely on the creation of suitable permeability from fracture and fault networks to be viable. Stress measurements from boreholes or earthquake source mechanisms are commonly used to infer the tectonic conditions that drive deformation, but here we show that geodetic data can also be used. Specifically, we quantify variations in the horizontal strain rate tensor in the area surrounding the Coso Geothermal Field (CGF) by analyzing more than two decades of high-accuracy differential GPS data from a network of 14 stations from the University of Nevada Reno Geodetic Laboratory. To handle offsets in the data from equipment changes and coseismic deformation, we segment the data, perform a piecewise linear fit and take the average of each segment's strain rate to determine secular velocities at each station. With respect to North America, all stations tend to travel northwest at velocities ranging from 1 to 10 mm/yr. The nearest station to the CGF shows anomalous motion compared to regional stations, which otherwise show a coherent increase in network velocity from the northeast to the southwest. Owing to the small area of our network, we determine strain rates via linear approximation using GPS velocities in a Cartesian reference frame. Principal strain rate components derived from this inversion show maximum extensional strain rates of 30 nanostrain/a at N87W, with compressional strain rates of 37 nanostrain/a at N3E. These results generally align with previous stress measurements from borehole breakouts, which indicate the least compressive horizontal principal stress is east-west oriented, and are indicative of the Basin and Range tectonic setting.
Our results suggest that the CGF represents an anomaly in the crustal deformation field, which may be influenced by the hydrothermal anomaly and possibly by the geothermal reservoir operations as well.
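Principal strain-rate components and their orientations follow from the eigen-decomposition of the 2-D horizontal strain-rate tensor. A sketch with illustrative values loosely mimicking the magnitudes quoted above (E-W extension, N-S shortening); this is NOT the actual CGF inversion.

```python
import numpy as np

def principal_strain_rates(exx, eyy, exy):
    """Principal components and azimuths of a 2-D horizontal strain-rate tensor.

    x is east, y is north; azimuths are degrees clockwise from north.
    """
    tensor = np.array([[exx, exy], [exy, eyy]], dtype=float)
    vals, vecs = np.linalg.eigh(tensor)           # eigenvalues in ascending order
    azimuths = [np.degrees(np.arctan2(v[0], v[1])) % 180.0 for v in vecs.T]
    return vals, azimuths

# Illustrative values in nanostrain/a (E-W extension, N-S shortening).
rates, azimuths = principal_strain_rates(exx=30.0, eyy=-37.0, exy=0.0)
```

Here the shortening axis (-37 nanostrain/a) comes out at azimuth 0° (north-south) and the extension axis (30 nanostrain/a) at 90° (east-west), matching the sign convention that extension is positive.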
The North Alabama Lightning Mapping Array (LMA): A Network Overview
NASA Technical Reports Server (NTRS)
Blakeslee, R. J.; Bailey, J.; Buechler, D.; Goodman, S. J.; McCaul, E. W., Jr.; Hall, J.
2005-01-01
The North Alabama Lightning Mapping Array (LMA) is a 3-D VHF regional lightning detection system that provides on-orbit algorithm validation and instrument performance assessments for the NASA Lightning Imaging Sensor, as well as information on storm kinematics and updraft evolution that offers the potential to improve severe storm warning lead time by up to 50% and decrease the false alarm rate (for non-tornado producing storms). In support of this latter function, the LMA serves as a principal component of a severe weather test bed to infuse new science and technology into the short-term forecasting of severe and hazardous weather, principally within nearby National Weather Service forecast offices. The LMA, which became operational in November 2001, consists of VHF receivers deployed across northern Alabama and a base station located at the National Space Science and Technology Center (NSSTC), on the campus of the University of Alabama in Huntsville. The LMA system locates the sources of impulsive VHF radio signals from lightning by accurately measuring the time that the signals arrive at the different receiving stations. Each station records the magnitude and time of the peak lightning radiation signal in successive 80 ms intervals within a local unused television channel (channel 5, 76-82 MHz in our case). Typically hundreds of sources per flash can be reconstructed, which in turn produces accurate 3-dimensional lightning image maps (nominally <50 m error within 150 km range). The data are transmitted back to a base station using 2.4 GHz wireless Ethernet data links and directional parabolic grid antennas. There are four repeaters in the network topology and the links have an effective data throughput rate ranging from 600 kbit s-1 to 1.5 Mbit s-1. This presentation provides an overview of the North Alabama network, the data processing (both real-time and post-processing) and network statistics.
Dixit, Anshuman; Verkhivker, Gennady M.
2012-01-01
Deciphering functional mechanisms of the Hsp90 chaperone machinery is an important objective in cancer biology aiming to facilitate discovery of targeted anti-cancer therapies. Despite significant advances in understanding structure and function of molecular chaperones, organizing molecular principles that control the relationship between conformational diversity and functional mechanisms of the Hsp90 activity lack a sufficient quantitative characterization. We combined molecular dynamics simulations, principal component analysis, the energy landscape model and structure-functional analysis of Hsp90 regulatory interactions to systematically investigate functional dynamics of the molecular chaperone. This approach has identified a network of conserved regions common to the Hsp90 chaperones that could play a universal role in coordinating functional dynamics, principal collective motions and allosteric signaling of Hsp90. We have found that these functional motifs may be utilized by the molecular chaperone machinery to act collectively as central regulators of Hsp90 dynamics and activity, including the inter-domain communications, control of ATP hydrolysis, and protein client binding. These findings have provided support to a long-standing assertion that allosteric regulation and catalysis may have emerged via common evolutionary routes. The interaction networks regulating functional motions of Hsp90 may be determined by the inherent structural architecture of the molecular chaperone. At the same time, the thermodynamics-based “conformational selection” of functional states is likely to be activated based on the nature of the binding partner. This mechanistic model of Hsp90 dynamics and function is consistent with the notion that allosteric networks orchestrating cooperative protein motions can be formed by evolutionary conserved and sparsely connected residue clusters. 
Hence, allosteric signaling through a small network of distantly connected residue clusters may be a rather general functional requirement encoded across molecular chaperones. The obtained insights may be useful in guiding discovery of allosteric Hsp90 inhibitors targeting protein interfaces with co-chaperones and protein binding clients. PMID:22624053
A principal components model of soundscape perception.
Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta
2010-11-01
There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
Study on fast discrimination of varieties of yogurt using Vis/NIR-spectroscopy
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Deng, Xunfei; Li, Xiaoli
2006-09-01
A new approach for the discrimination of varieties of yogurt by means of Vis/NIR spectroscopy is presented in this paper. Firstly, through principal component analysis (PCA) of the spectroscopy curves of 5 typical kinds of yogurt, the clustering of yogurt varieties was processed. The analysis showed that the cumulative reliability of PC1 and PC2 (the first two principal components) was more than 98.956%, and the cumulative reliability from PC1 to PC7 (the first seven principal components) was 99.97%. Secondly, a discrimination model using an Artificial Neural Network (ANN-BP) was set up. The first seven principal components of the samples were applied as ANN-BP inputs, and the yogurt variety as the output, giving a three-layer ANN-BP model. In this model, each variety of yogurt comprised 27 samples (135 in total), and a further 25 samples were used as the prediction set. The results showed that the distinguishing rate for the five yogurt varieties was 100%, indicating that the model is reliable and practicable. A new approach for the rapid and non-destructive discrimination of yogurt varieties was thus put forward.
Colorimetric Sensor Array for White Wine Tasting.
Chung, Soo; Park, Tu San; Park, Soo Hyun; Kim, Joon Yong; Park, Seongmin; Son, Daesik; Bae, Young Min; Cho, Seong In
2015-07-24
A colorimetric sensor array was developed to characterize and quantify the taste of white wines. A charge-coupled device (CCD) camera captured images of the sensor array from 23 different white wine samples, and the change in the R, G, B color components from the control were analyzed by principal component analysis. Additionally, high performance liquid chromatography (HPLC) was used to analyze the chemical components of each wine sample responsible for its taste. A two-dimensional score plot was created with 23 data points. It revealed clusters created from the same type of grape, and trends of sweetness, sourness, and astringency were mapped. An artificial neural network model was developed to predict the degree of sweetness, sourness, and astringency of the white wines. The coefficients of determination (R2) for the HPLC results and the sweetness, sourness, and astringency were 0.96, 0.95, and 0.83, respectively. This research could provide a simple and low-cost but sensitive taste prediction system, and, by helping consumer selection, will be able to have a positive effect on the wine industry.
Rapid test for the detection of hazardous microbiological material
NASA Astrophysics Data System (ADS)
Mordmueller, Mario; Bohling, Christian; John, Andreas; Schade, Wolfgang
2009-09-01
Since the anthrax attacks committed in 2001 and thereafter around the world, the fast detection and identification of biological samples has attracted interest. A very promising method for a rapid test is Laser Induced Breakdown Spectroscopy (LIBS), an optical method which uses time-resolved or time-integrated spectral analysis of the optical plasma emission after pulsed laser excitation. Even though LIBS is well established for the determination of metals and other inorganic materials, the analysis of microbiological organisms is difficult due to their very similar stoichiometric composition. To analyze similar LIBS spectra, computer-assisted chemometrics is a very useful approach. In this paper we report first results on the development of a compact and fully automated rapid test for the detection of hazardous microbiological material. Experiments were carried out with two setups: a bulky one composed of standard laboratory components, and a compact one consisting of miniaturized industrial components. Both setups work at an excitation wavelength of λ=1064 nm (Nd:YAG). Data analysis is done by Principal Component Analysis (PCA) with an adjacent neural network for fully automated sample identification.
Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Miura, Naoki; Akitsuki, Yuko; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta
2006-10-01
Multiple brain networks may support visual self-recognition. It has been hypothesized that the left ventral occipito-temporal cortex processes one's own face as a symbol, and that the right parieto-frontal network processes self-image in association with motion-action contingency. Using functional magnetic resonance imaging, we first tested these hypotheses based on the prediction that these networks preferentially respond to a static self-face and to one's own moving whole body, respectively. Brain activation specifically related to self-image during familiarity judgment was compared across four stimulus conditions comprising a two-factorial design: the factor Motion contrasted picture (Picture) and movie (Movie), and the factor Body part contrasted face (Face) and whole body (Body). Second, we attempted to segregate self-specific networks using a principal component analysis (PCA), assuming an independent pattern of inter-subject variability in activation over the four stimulus conditions in each network. The bilateral ventral occipito-temporal and the right parietal and frontal cortices exhibited self-specific activation. The left ventral occipito-temporal cortex exhibited greater self-specific activation for Face than for Body in the Picture condition, consistent with the prediction for this region. The activation profiles of the right parietal and frontal cortices did not show the preference for the Movie-Body condition predicted by the assumed roles of these regions. The PCA extracted two cortical networks, one with its peaks in the right posterior cortices and another in the frontal cortices; their possible roles in visuo-spatial and conceptual self-representations, respectively, were suggested by previous findings. The results thus supported and provided evidence of multiple brain networks for visual self-recognition.
Ramsey, Lenny; Rengachary, Jennifer; Zinn, Kristi; Siegel, Joshua S.; Metcalf, Nicholas V.; Strube, Michael J.; Snyder, Abraham Z.; Corbetta, Maurizio; Shulman, Gordon L.
2016-01-01
Strokes often cause multiple behavioural deficits that are correlated at the population level. Here, we show that motor and attention deficits are selectively associated with abnormal patterns of resting state functional connectivity in the dorsal attention and motor networks. We measured attention and motor deficits in 44 right hemisphere-damaged patients with a first-time stroke at 1–2 weeks post-onset. The motor battery included tests that evaluated deficits in both upper and lower extremities. The attention battery assessed both spatial and non-spatial attention deficits. Summary measures for motor and attention deficits were identified through principal component analyses on the raw behavioural scores. Functional connectivity in structurally normal cortex was estimated based on the temporal correlation of blood oxygenation level-dependent signals measured at rest with functional magnetic resonance imaging. Any correlation between motor and attention deficits and between functional connectivity in the dorsal attention network and motor networks that might spuriously affect the relationship between each deficit and functional connectivity was statistically removed. We report a double dissociation between abnormal functional connectivity patterns and attention and motor deficits, respectively. Attention deficits were significantly more correlated with abnormal interhemispheric functional connectivity within the dorsal attention network than motor networks, while motor deficits were significantly more correlated with abnormal interhemispheric functional connectivity patterns within the motor networks than dorsal attention network. These findings indicate that functional connectivity patterns in structurally normal cortex following a stroke link abnormal physiology in brain networks to the corresponding behavioural deficits. PMID:27225794
Detection of a novel, integrative aging process suggests complex physiological integration.
Cohen, Alan A; Milot, Emmanuel; Li, Qing; Bergeron, Patrick; Poirier, Roxane; Dusseault-Bélanger, Francis; Fülöp, Tamàs; Leroux, Maxime; Legault, Véronique; Metter, E Jeffrey; Fried, Linda P; Ferrucci, Luigi
2015-01-01
Many studies of aging examine biomarkers one at a time, but complex systems theory and network theory suggest that interpretations of individual markers may be context-dependent. Here, we attempted to detect underlying processes governing the levels of many biomarkers simultaneously by applying principal components analysis to 43 common clinical biomarkers measured longitudinally in 3694 humans from three longitudinal cohort studies on two continents (Women's Health and Aging I & II, InCHIANTI, and the Baltimore Longitudinal Study of Aging). The first axis was associated with anemia, inflammation, and low levels of calcium and albumin. The axis structure was precisely reproduced in all three populations and in all demographic sub-populations (by sex, race, etc.); we call the process represented by the axis "integrated albunemia." Integrated albunemia increases and accelerates with age in all populations, and predicts mortality and frailty--but not chronic disease--even after controlling for age. This suggests a role in the aging process, though causality is not yet clear. Integrated albunemia behaves more stably across populations than its component biomarkers, and thus appears to represent a higher-order physiological process emerging from the structure of underlying regulatory networks. If this is correct, detection of this process has substantial implications for physiological organization more generally.
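The core analysis described above, extracting a single shared axis from many correlated biomarkers, can be sketched in a few lines of numpy. Everything below is synthetic and purely illustrative: the marker count, loadings, and noise levels are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a biomarker matrix: 200 subjects x 6 markers,
# where one latent "process" drives several markers at once
# (signs and magnitudes below are hypothetical).
latent = rng.normal(size=200)
loadings = np.array([-1.0, 1.0, -0.8, -0.9, 0.3, 0.1])
X = np.outer(latent, loadings) + rng.normal(scale=0.5, size=(200, 6))

# Standardize each marker, then PCA via SVD of the data matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt[0]                      # subject scores on the first axis
explained = S[0] ** 2 / np.sum(S ** 2)   # variance fraction of axis 1

# The first axis should recover the latent process (up to sign).
r = abs(np.corrcoef(scores, latent)[0, 1])
print(round(explained, 2), round(r, 2))
```

The point of the sketch is the stability claim in the abstract: when one process drives several markers, the first principal axis recovers it far more reliably than any single marker does.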
Role of Basal Ganglia Circuits in Resisting Interference by Distracters: A swLORETA Study
Bocquillon, Perrine; Bourriez, Jean-Louis; Palmero-Soler, Ernesto; Destée, Alain; Defebvre, Luc; Derambure, Philippe; Dujardin, Kathy
2012-01-01
Background The selection of task-relevant information requires both the focalization of attention on the task and resistance to interference from irrelevant stimuli. Both mechanisms rely on a dorsal frontoparietal network, while focalization additionally involves a ventral frontoparietal network. The role of subcortical structures in attention is less clear, despite the fact that the striatum interacts significantly with the frontal cortex via frontostriatal loops. One means of investigating the basal ganglia's contributions to attention is to examine the features of P300 components (i.e. amplitude, latency, and generators) in patients with basal ganglia damage (such as in Parkinson's disease (PD), in which attention is often impaired). Three-stimulus oddball paradigms can be used to study distracter-elicited and target-elicited P300 subcomponents. Methodology/Principal Findings In order to compare distracter- and target-elicited P300 components, high-density (128-channel) electroencephalograms were recorded during a three-stimulus visual oddball paradigm in 15 patients with early PD and 15 matched healthy controls. For each subject, the P300 sources were localized using standardized weighted low-resolution electromagnetic tomography (swLORETA). Comparative analyses (one-sample and two-sample t-tests) were performed using SPM5® software. The swLORETA analyses showed that PD patients displayed fewer dorsolateral prefrontal (DLPF) distracter-P300 generators but no significant differences in target-elicited P300 sources; this suggests dysfunction of the DLPF cortex when the executive frontostriatal loop is disrupted by basal ganglia damage. Conclusions/Significance Our results suggest that the cortical attention frontoparietal networks (mainly the dorsal one) are modulated by the basal ganglia. Disruption of this network in PD impairs resistance to distracters, which results in attention disorders. PMID:22470542
Initial experience with a radiology imaging network to newborn and intensive care units.
Witt, R M; Cohen, M D; Appledorn, C R
1991-02-01
A digital image network has been installed in the James Whitcomb Riley Hospital for Children at the Indiana University Medical Center to create a limited all-digital imaging system. The system is composed of commercial components, the Philips/AT&T CommView system (Philips Medical Systems, Shelton, CT; AT&T Bell Laboratories, West Long Beach, NJ), and connects an existing Philips Computed Radiology (PCR) system to two remote workstations that reside in the intensive care unit and the newborn nursery. The purpose of the system is to display images obtained from the PCR system on the remote workstations for direct viewing by referring clinicians, and to reduce many of their visits to the radiology reading room three floors away. The design criteria include the ability to centrally control all image management functions on the remote workstations, relieving the clinicians of any image management tasks except recalling patient images. The principal components of the system are the Philips PCR system, the acquisition module (AM), and the PCR interface to the Data Management Module (DMM). Connected to the DMM are an Enhanced Graphics Display Workstation (EGDW), an optical disk drive, and a network gateway to an ethernet link. The ethernet network is the connection to the two Results Viewing Stations (RVS); both RVSs are approximately 100 m from the gateway. The DMM acts as an image file server and an image archive device. The DMM manages the image data base and can load images to the EGDW and the two RVSs. The system has met the initial design specifications and can successfully capture images from the PCR and direct them to the RVSs. (ABSTRACT TRUNCATED AT 250 WORDS)
He, Min; Cao, Dong-Sheng; Liang, Yi-Zeng; Li, Ya-Ping; Liu, Ping-Le; Xu, Qing-Song; Huang, Ren-Bin
2013-10-01
In this study, a method was applied to evaluate pressor mechanisms through compound-protein interactions. Our method assumed that compounds with different pressor mechanisms should bind to different target proteins, and that these mechanisms could therefore be differentiated using compound-protein interactions. Twenty-six phytochemical components and 46 tested target proteins related to blood pressure (BP) elevation were collected. In silico compound-protein interaction prediction probabilities were then calculated using a random forest model, which has been implemented in a web server, and their credibility was judged against the related literature and other methods. Next, a heat map was constructed; together with the hierarchical clustering results, it clearly showed the different prediction probabilities. A compound-protein interaction network was then depicted from these results, showing the connectivity layout of phytochemical components with different target proteins within the BP elevation network, which guided the hypothesis generation of poly-pharmacology. Lastly, principal components analysis (PCA) was carried out on the prediction probabilities, and pressor targets could be divided into three large classes: neurotransmitter receptors, hormone receptors and monoamine oxidases. In addition, steroid glycosides appear to be close to the region of hormone receptors, with only a weak difference between them. This work explored the possibility of classifying pharmacological or toxicological mechanisms using compound-protein interactions. Such approaches could also be used to deduce pharmacological or toxicological mechanisms for uncharacterized compounds. Copyright © 2013 Elsevier Inc. All rights reserved.
Distinctive fingerprints of erosional regimes in terrestrial channel networks
NASA Astrophysics Data System (ADS)
Grau Galofre, A.; Jellinek, M.
2017-12-01
Satellite imagery and digital elevation maps capture the large scale morphology of channel networks attributed to long term erosional processes, such as fluvial, glacial, groundwater sapping and subglacial erosion. Characteristic morphologies associated with each of these styles of erosion have been studied in detail, but there exists a knowledge gap related to their parameterization and quantification. This knowledge gap prevents a rigorous analysis of the dominant processes that shaped a particular landscape, and a comparison across styles of erosion. To address this gap, we use previous morphological descriptions of glaciers, rivers, sapping valleys and tunnel valleys to identify and measure quantitative metrics diagnostic of these distinctive styles of erosion. From digital elevation models, we identify four geometric metrics: the minimum channel width, channel aspect ratio (longest length to channel width at the outlet), presence of undulating longitudinal profiles, and tributary junction angle. We also parameterize channel network complexity in terms of its stream order and fractal dimension. We then perform a statistical classification of the channel networks using a Principal Component Analysis on measurements of these six metrics on a dataset of 70 channelized systems. We show that rivers, glaciers, groundwater seepage and subglacial meltwater erode the landscape in rigorously distinguishable ways. Our methodology can more generally be applied to identify the contributions of different processes involved in carving a channel network. In particular, we are able to identify transitions from fluvial to glaciated landscapes or vice versa.
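The workflow described above, standardize a handful of morphometric metrics, project onto principal components, and separate erosion styles in the reduced space, can be sketched with synthetic data. The metric values and class centers below are invented for illustration only; a nearest-centroid rule stands in for whatever classifier the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical values for six metrics (width, aspect ratio, undulation,
# junction angle, stream order, fractal dimension) for two synthetic
# "styles of erosion", 35 networks each.
fluvial = rng.normal(loc=[1.0, 8.0, 0.1, 60.0, 4.0, 1.7],
                     scale=[0.2, 1.5, 0.05, 8.0, 0.8, 0.1], size=(35, 6))
glacial = rng.normal(loc=[3.0, 4.0, 0.6, 90.0, 2.0, 1.3],
                     scale=[0.5, 1.0, 0.1, 10.0, 0.5, 0.1], size=(35, 6))
X = np.vstack([fluvial, glacial])
labels = np.array([0] * 35 + [1] * 35)

# Standardize, then project onto the first two principal components.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
pc = Xs @ Vt[:2].T

# Nearest-centroid classification in PC space.
centroids = np.array([pc[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(np.linalg.norm(pc[:, None, :] - centroids, axis=2), axis=1)
accuracy = (pred == labels).mean()
print(accuracy)
```

Standardization matters here because the metrics live on wildly different scales (angles in degrees versus a fractal dimension near 1); without it the PCA would be dominated by the largest-valued metric.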
Online dimensionality reduction using competitive learning and Radial Basis Function network.
Tomenko, Vladimir
2011-06-01
The general purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing nonlinearly embedded manifolds and large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR is comprised of two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of a specific loss function and synthesis of the training algorithm for the Radial Basis Function network result in global preservation of data structures and online processing of new patterns. The RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), neural network for Sammon's projection (SAMANN) and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics. Copyright © 2011 Elsevier Ltd. All rights reserved.
Arterial spin labelling reveals an abnormal cerebral perfusion pattern in Parkinson's disease.
Melzer, Tracy R; Watts, Richard; MacAskill, Michael R; Pearson, John F; Rüeger, Sina; Pitcher, Toni L; Livingston, Leslie; Graham, Charlotte; Keenan, Ross; Shankaranarayanan, Ajit; Alsop, David C; Dalrymple-Alford, John C; Anderson, Tim J
2011-03-01
There is a need for objective imaging markers of Parkinson's disease status and progression. Positron emission tomography and single photon emission computed tomography studies have suggested patterns of abnormal cerebral perfusion in Parkinson's disease as potential functional biomarkers. This study aimed to identify an arterial spin labelling magnetic resonance-derived perfusion network as an accessible, non-invasive alternative. We used pseudo-continuous arterial spin labelling to measure cerebral grey matter perfusion in 61 subjects with Parkinson's disease with a range of motor and cognitive impairment, including patients with dementia and 29 age- and sex-matched controls. Principal component analysis was used to derive a Parkinson's disease-related perfusion network via logistic regression. Region of interest analysis of absolute perfusion values revealed that the Parkinson's disease pattern was characterized by decreased perfusion in posterior parieto-occipital cortex, precuneus and cuneus, and middle frontal gyri compared with healthy controls. Perfusion was preserved in globus pallidus, putamen, anterior cingulate and post- and pre-central gyri. Both motor and cognitive statuses were significant factors related to network score. A network approach, supported by arterial spin labelling-derived absolute perfusion values may provide a readily accessible neuroimaging method to characterize and track progression of both motor and cognitive status in Parkinson's disease.
Choi, D J; Park, H
2001-11-01
For control and automation of biological treatment processes, the lack of reliable on-line sensors to measure water quality parameters is one of the most important problems to overcome. Many parameters cannot be measured directly with on-line sensors. The accuracy of existing hardware sensors is also insufficient, and maintenance problems such as electrode fouling often cause trouble. This paper deals with the development of software sensor techniques that estimate a target water quality parameter from other parameters using the correlations between water quality parameters. We focus our attention on the preprocessing of noisy data and the selection of the model best suited to the situation. Problems of existing approaches are also discussed. We propose a hybrid neural network as a software sensor inferring wastewater quality parameters. Multivariate regression, artificial neural networks (ANN), and a hybrid technique that combines principal component analysis as a preprocessing stage are applied to data from industrial wastewater processes. The hybrid ANN technique shows enhanced prediction capability and reduces the overfitting problem of neural networks. The result shows that the hybrid ANN technique can be used to extract information from noisy data and to describe the nonlinearity of complex wastewater treatment processes.
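The hybrid idea above, compress noisy correlated sensor readings with PCA before fitting a predictive model, can be sketched as follows. The data are synthetic, and a plain linear readout stands in for the ANN stage so the example stays short; the PCA preprocessing step is the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "wastewater" data: 300 samples of 8 noisy, correlated
# on-line measurements driven by 2 latent process states; the target
# water-quality parameter depends on those same states.
latent = rng.normal(size=(300, 2))
mix = rng.normal(size=(2, 8))
X = latent @ mix + rng.normal(scale=0.3, size=(300, 8))
y = latent @ np.array([1.5, -0.7]) + rng.normal(scale=0.1, size=300)

# Stage 1: PCA compresses the 8 noisy inputs to 2 scores.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
T = Xs @ Vt[:2].T

# Stage 2: a simple linear readout stands in for the ANN here.
train, test = slice(0, 200), slice(200, 300)
A = np.column_stack([T[train], np.ones(200)])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
pred = np.column_stack([T[test], np.ones(100)]) @ coef

r2 = 1 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
print(round(r2, 2))
```

Feeding the model 2 denoised scores instead of 8 raw channels is exactly what reduces the overfitting the abstract mentions: the regression (or ANN) sees fewer, cleaner inputs.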
Das, Atanu; Mukhopadhyay, Chaitali
2007-10-28
We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide-ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, principal component analysis method was applied in order to give a new insight to protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.
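The trajectory analysis described above, build the cross-correlation matrix of positional fluctuations and read collective motions off its leading eigenvectors, is essentially PCA of the simulation frames. The sketch below uses a toy trajectory, not MD data: a block of "residues" is made to fluctuate together so the dominant mode is known in advance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a trajectory: 500 frames of 10 "residue" coordinates
# in which residues 0-4 fluctuate together (one collective mode) on top
# of independent thermal noise.
mode = rng.normal(size=500)
X = rng.normal(scale=0.5, size=(500, 10))
X[:, :5] += np.outer(mode, np.ones(5))

# Cross-correlation matrix of the fluctuations, then its eigenmodes
# (this is PCA of the trajectory).
dX = X - X.mean(axis=0)
C = np.corrcoef(dX, rowvar=False)
evals, evecs = np.linalg.eigh(C)
pc1 = evecs[:, -1]                      # largest-eigenvalue mode

# The dominant mode should load almost entirely on the correlated block.
block = np.abs(pc1[:5]).sum()
rest = np.abs(pc1[5:]).sum()
print(round(block, 2), round(rest, 2))
```

In the real analysis the coefficients of the leading components play the role of `pc1` here: large loadings flag the residues or secondary-structure fragments that move together during unfolding.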
SAS program for quantitative stratigraphic correlation by principal components
Hohn, M.E.
1985-01-01
A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections and the observational units are range limits of taxa. The program standardizes data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from scores of events on the first principal component. An option for several types of diagnostic plots is provided; these help one to determine conservative range limits or unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allows one to evaluate goodness of fit between the composite and measured data. The program is extended easily to the creation of a rank-order composite. © 1985.
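The numerical core of that SAS program, sections as variables, event range limits as observations, composite ordering from first-PC scores, translates to a few lines of numpy. The sections below are synthetic (noisy, rescaled copies of a shared event order), so the recovered composite can be checked against the known truth.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: depths of 12 taxon range limits ("events") measured
# in 4 stratigraphic sections, each a noisy, rescaled copy of a shared
# true event order.
true_order = np.sort(rng.uniform(0, 100, size=12))
sections = np.column_stack([
    a * true_order + b + rng.normal(scale=3.0, size=12)
    for a, b in [(1.0, 0.0), (0.8, 10.0), (1.3, -5.0), (0.9, 20.0)]
])

# Standardize within each section (the variables), then take scores of
# events on the first principal component as the composite section.
Z = (sections - sections.mean(axis=0)) / sections.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
composite = Z @ Vt[0]
if np.corrcoef(composite, true_order)[0, 1] < 0:
    composite = -composite           # PC sign is arbitrary

agreement = np.corrcoef(composite, true_order)[0, 1]
print(round(agreement, 2))
```

Standardizing within each section removes the differing thickness scales and datums (the `a` and `b` above), which is why the first component can act as a consensus ordering across sections.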
NASA Astrophysics Data System (ADS)
Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.
2018-05-01
An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy levels. Using a smaller TE-cooled detector reduces form factor, creating a mobile sensor. Principal component analysis has predicted principal components of spectra taken from human subjects that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.
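The regression scheme in the abstract, PCA of the spectra, then the leading component scores used as regressors for glucose concentration, is principal component regression. The sketch below fabricates spectra from a single Gaussian "glucose band" plus a baseline; the band shape, concentration range, and noise levels are all assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic spectra: 80 scans x 60 wavelengths; each spectrum is a
# glucose absorption peak scaled by concentration plus a broad baseline
# and noise. All values are hypothetical.
wl = np.linspace(0, 1, 60)
peak = np.exp(-((wl - 0.6) ** 2) / 0.005)      # stand-in glucose band
conc = rng.uniform(50, 300, size=80)            # mg/dL range, assumed
baseline = rng.normal(size=(80, 1)) * np.sin(2 * np.pi * wl)
spectra = (np.outer(conc / 300, peak) + 0.3 * baseline
           + rng.normal(scale=0.02, size=(80, 60)))

# PCA of the spectra; the leading components become regressors.
mu = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mu, full_matrices=False)
scores = (spectra - mu) @ Vt[:3].T

# Linear regression of concentration on the PC scores.
A = np.column_stack([scores, np.ones(80)])
coef, *_ = np.linalg.lstsq(A, conc, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((conc - pred) ** 2))
print(round(rmse, 1))
```

Because the baseline variation gets its own principal component, the regression can separate it from the glucose band, mirroring the paper's observation that the extracted components line up with glucose absorption peaks.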
A novel principal component analysis for spatially misaligned multivariate air pollution data.
Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A
2017-01-01
We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.
Principals' Perceptions of Collegial Support as a Component of Administrative Inservice.
ERIC Educational Resources Information Center
Daresh, John C.
To address the problem of increasing professional isolation of building administrators, the Principals' Inservice Project helps establish principals' collegial support groups across the nation. The groups are typically composed of 6 to 10 principals who meet at least once each month over a 2-year period. One collegial support group of seven…
Training the Trainers: Learning to Be a Principal Supervisor
ERIC Educational Resources Information Center
Saltzman, Amy
2017-01-01
While most principal supervisors are former principals themselves, few come to the role with specific training in how to do the job effectively. For this reason, both the Washington, D.C., and Tulsa, Oklahoma, principal supervisor programs include a strong professional development component. In this article, the author takes a look inside these…
ERIC Educational Resources Information Center
Rodrigue, Christine M.
2011-01-01
This paper presents a laboratory exercise used to teach principal components analysis (PCA) as a means of surface zonation. The lab was built around abundance data for 16 oxides and elements collected by the Mars Exploration Rover Spirit in Gusev Crater between Sol 14 and Sol 470. Students used PCA to reduce 15 of these into 3 components, which,…
NASA Astrophysics Data System (ADS)
Orellana, Laura; Yoluk, Ozge; Carrillo, Oliver; Orozco, Modesto; Lindahl, Erik
2016-08-01
Protein conformational changes are at the heart of cell functions, from signalling to ion transport. However, the transient nature of the intermediates along transition pathways hampers their experimental detection, making the underlying mechanisms elusive. Here we retrieve dynamic information on the actual transition routes from principal component analysis (PCA) of structurally-rich ensembles and, in combination with coarse-grained simulations, explore the conformational landscapes of five well-studied proteins. Modelling them as elastic networks in a hybrid elastic-network Brownian dynamics simulation (eBDIMS), we generate trajectories connecting stable end-states that spontaneously sample the crystallographic motions, predicting the structures of known intermediates along the paths. We also show that the explored non-linear routes can delimit the lowest energy passages between end-states sampled by atomistic molecular dynamics. The integrative methodology presented here provides a powerful framework to extract and expand dynamic pathway information from the Protein Data Bank, as well as to validate sampling methods in general.
A systematic study of chemogenomics of carbohydrates.
Gu, Jiangyong; Luo, Fang; Chen, Lirong; Yuan, Gu; Xu, Xiaojie
2014-03-04
Chemogenomics focuses on the interactions between biologically active molecules and protein targets for drug discovery. Carbohydrates are the most abundant compounds in natural products. Compared with other drugs, the carbohydrate drugs show weaker side effects. Searching for multi-target carbohydrate drugs can be regarded as a solution to improve therapeutic efficacy and safety. In this work, we collected 60 344 carbohydrates from the Universal Natural Products Database (UNPD) and explored the chemical space of carbohydrates by principal component analysis. We found that there is a large quantity of potential lead compounds among carbohydrates. Then we explored the potential of carbohydrates in drug discovery by using a network-based multi-target computational approach. All carbohydrates were docked to 2389 target proteins. The most potential carbohydrates for drug discovery and their indications were predicted based on a docking score-weighted prediction model. We also explored the interactions between carbohydrates and target proteins to find the pathological networks, potential drug candidates and new indications.
Multisensor system for toxic gases detection generated on indoor environments
NASA Astrophysics Data System (ADS)
Durán, C. M.; Monsalve, P. A. G.; Mosquera, C. J.
2016-11-01
This work describes a wireless multisensor system for the detection of toxic gases generated in indoor environments (e.g., underground coal mines). The artificial multisensory system proposed in this study was developed from a set of six low-cost chemical gas sensors (MQ series) with overlapping sensitivities to detect hazardous gases in the air. A statistical parameter was applied to the data set, and two pattern recognition methods, Principal Component Analysis (PCA) and Discriminant Function Analysis (DFA), were used for feature selection. The toxic gas categories were then classified with a Probabilistic Neural Network (PNN) in order to validate the results previously obtained. Tests were carried out to verify the feasibility of the application through a wireless communication model, which allowed the sensor signals to be monitored and stored for appropriate analysis. The success rate in discriminating the measurements was 100%, using an artificial neural network with leave-one-out cross validation.
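A probabilistic neural network of the kind used for the final classification step is just a Parzen-window density classifier: each class's likelihood is an average of Gaussian kernels centered on its training patterns. The sketch below uses synthetic six-channel "sensor" responses (one feature per MQ-type sensor); the class centers, noise level, and kernel width are all invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy stand-in for sensor-array responses: two gas classes, 20 training
# and 10 test "sniffs" each, 6 features (one per MQ-type sensor).
def make_class(center, n):
    return center + rng.normal(scale=0.3, size=(n, 6))

c0, c1 = np.zeros(6), np.full(6, 1.0)
Xtr = np.vstack([make_class(c0, 20), make_class(c1, 20)])
ytr = np.array([0] * 20 + [1] * 20)
Xte = np.vstack([make_class(c0, 10), make_class(c1, 10)])
yte = np.array([0] * 10 + [1] * 10)

def pnn_predict(x, sigma=0.5):
    # Parzen-window class likelihoods: average Gaussian kernel between
    # the query and each class's training patterns.
    d2 = np.sum((Xtr - x) ** 2, axis=1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    return int(k[ytr == 1].mean() > k[ytr == 0].mean())

pred = np.array([pnn_predict(x) for x in Xte])
accuracy = (pred == yte).mean()
print(accuracy)
```

A PNN has no iterative training, the training set is the model, which is part of why it is a convenient validation classifier after a PCA/DFA feature-selection stage.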
Automotive System for Remote Surface Classification.
Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail
2017-04-01
In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and the procedures of principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct classification accuracy of 95%. The obtained results demonstrate that the proposed system architecture and statistical methods allow for reliable discrimination of various road surfaces in real conditions.
Nonlinear features for classification and pose estimation of machined parts from single views
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-10-01
A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in ternary mixture, namely, Partial Least Squares (PLS) as traditional chemometric model and Artificial Neural Networks (ANN) as advanced model. PLS and ANN were applied with and without variable selection procedure (Genetic Algorithm GA) and data compression procedure (Principal Component Analysis PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form via handling the UV spectral data. A 3-factor 5-level experimental design was established resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model
Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.
2008-01-01
This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and then an AR model was applied to the transformed data to generate AR model complex coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: the backward propagating neural network (BPNN), the radial basis function-principal component analysis (RBF-PCA) approach, and the radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This cognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
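The compression step above, fit an AR model and keep only its coefficients as a feature vector, can be sketched with a least-squares AR fit. The signals below are synthetic AR(2) processes standing in for chromatograms of two species; the orders and coefficients are invented, and the inverse-Fourier preprocessing and neural-network classifiers from the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the p coefficients."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def simulate_ar(coef, n):
    p = len(coef)
    x = np.zeros(n)
    for t in range(p, n):
        past = x[t - p:t][::-1]          # [x[t-1], ..., x[t-p]]
        x[t] = coef @ past + rng.normal(scale=0.1)
    return x

# Two "classes" of signals with different AR(2) dynamics, standing in
# for chromatograms from two bacteria species.
xa = simulate_ar(np.array([1.5, -0.7]), 2000)
xb = simulate_ar(np.array([0.4, 0.3]), 2000)

# The fitted coefficients act as a compressed feature vector per signal:
# 2000 samples reduce to 2 numbers that capture the signal's dynamics.
fa, fb = fit_ar(xa, 2), fit_ar(xb, 2)
print(np.round(fa, 1), np.round(fb, 1))
```

Because the coefficients describe the signal's dynamics rather than its sample-by-sample values, they are insensitive to time shifts, which is the property the authors exploit to avoid aligning GC traces.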
A review of machine learning in obesity.
DeGregory, K W; Kuiper, P; DeSilvio, T; Pleuss, J D; Miller, R; Roginski, J W; Fisher, C B; Harness, D; Viswanath, S; Heymsfield, S B; Dungan, I; Thomas, D M
2018-05-01
Rich sources of obesity-related data arising from sensors, smartphone apps, electronic medical health records and insurance data can bring new insights for understanding, preventing and treating obesity. For such large datasets, machine learning provides sophisticated and elegant tools to describe, classify and predict obesity-related risks and outcomes. Here, we review machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning and decision tree analysis. We also review methods that describe and characterize data, such as cluster analysis, principal component analysis, network science and topological data analysis. We introduce each method with a high-level overview followed by examples of successful applications. The algorithms were then applied to the National Health and Nutrition Examination Survey to demonstrate methodology, utility and outcomes. The strengths and limitations of each method were also evaluated. This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity. © 2018 World Obesity Federation.
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
Manual feature extraction in traditional vehicle license plate recognition methods is not robust to the diversity of plate appearances, and the high dimensionality of features extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensionality of the extracted features. Finally, a Support Vector Machine (SVM) is trained on the reduced features and used for recognition. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and processing time. Compared with omitting compressive sensing, the proposed method has a lower feature dimension and correspondingly higher efficiency.
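The dimensionality-reduction step above relies on the fact that a sparse random measurement matrix approximately preserves geometry. The sketch below builds an Achlioptas-style sparse matrix (entries in {-1, 0, +1}, about two-thirds zeros) and checks that a pairwise distance survives the projection; the feature dimensions are invented stand-ins for PCANet outputs, and the PCANet and SVM stages are not implemented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical high-dimensional "PCANet features" for 100 character
# images (values synthetic); reduce d = 5000 dims to m = 200.
d, m = 5000, 200
X = rng.normal(size=(100, d))

# Sparse random measurement matrix: entries +-sqrt(s/m) with
# probability 1/(2s) each, and 0 with probability 1 - 1/s.
s = 3.0
entries = rng.choice([np.sqrt(s / m), 0.0, -np.sqrt(s / m)],
                     size=(d, m), p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
Y = X @ entries

# Johnson-Lindenstrauss flavour: pairwise distances are approximately
# preserved after the projection, so a downstream SVM sees nearly the
# same geometry in 200 dimensions as in 5000.
i, j = 3, 57
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
ratio = proj / orig
print(round(ratio, 2))
```

The sparsity is what makes this cheaper than a dense Gaussian projection: roughly two-thirds of the multiplications vanish, while the distance-preservation guarantee is essentially unchanged.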
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
ERIC Educational Resources Information Center
Ackermann, Margot Elise; Morrow, Jennifer Ann
2008-01-01
The present study describes the development and initial validation of the Coping with the College Environment Scale (CWCES). Participants included 433 college students who took an online survey. Principal Components Analysis (PCA) revealed six coping strategies: planning and self-management, seeking support from institutional resources, escaping…
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.
2015-11-01
We compare different mother wavelets used for de-noising model and experimental data, represented by absorption-spectrum profiles of exhaled air. The impact of wavelet de-noising on the quality of subsequent classification by principal component analysis is also discussed.
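A minimal numpy-only sketch of such a pipeline, assuming synthetic spectra rather than exhaled-air data: one-level Haar wavelet soft-threshold de-noising followed by PCA via SVD (the Haar wavelet here stands in for whichever mother wavelet is being compared).

```python
# Hedged sketch: one-level Haar soft-threshold de-noising, then PCA.
import numpy as np

def haar_denoise(signal, thresh):
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft thresholding
    out = np.empty_like(signal)
    out[0::2] = (a + d) / np.sqrt(2)                      # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
spectra = np.sin(np.linspace(0, 6, 128)) + 0.1 * rng.normal(size=(20, 128))
clean = np.array([haar_denoise(s, thresh=0.1) for s in spectra])

# PCA via SVD of the mean-centred de-noised spectra
centred = clean - clean.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:2].T                               # first two principal components
print(scores.shape)
```

Swapping the transform for a different mother wavelet and comparing the resulting PCA class separation is the kind of comparison the abstract describes.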
Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis
NASA Astrophysics Data System (ADS)
Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.
2013-06-01
Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450-950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.
2007-01-01
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2012 CFR
2012-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2014 CFR
2014-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2011 CFR
2011-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
40 CFR 60.1580 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the model rule? 60.1580 Section 60.1580 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines..., 1999 Use of Model Rule § 60.1580 What are the principal components of the model rule? The model rule...
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2013 CFR
2013-07-01
... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...
Students' Perceptions of Teaching and Learning Practices: A Principal Component Approach
ERIC Educational Resources Information Center
Mukorera, Sophia; Nyatanga, Phocenah
2017-01-01
Students' attendance and engagement with teaching and learning practices is perceived as a critical element for academic performance. Even with stipulated attendance policies, students still choose not to engage. The study employed a principal component analysis to analyze first- and second-year students' perceptions of the importance of the 12…
ERIC Educational Resources Information Center
Hunley-Jenkins, Keisha Janine
2012-01-01
This qualitative study explores large, urban, mid-western principal perspectives about cyberbullying and the policy components and practices that they have found effective and ineffective at reducing its occurrence and/or negative effect on their schools' learning environments. More specifically, the researcher was interested in learning more…
Principal Component Analysis: Resources for an Essential Application of Linear Algebra
ERIC Educational Resources Information Center
Pankavich, Stephen; Swanson, Rebecca
2015-01-01
Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
Applications of Nonlinear Principal Components Analysis to Behavioral Data.
ERIC Educational Resources Information Center
Hicks, Marilyn Maginley
1981-01-01
An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)
ERIC Educational Resources Information Center
Hendrix, Dean
2010-01-01
This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…
Principal component analysis for protein folding dynamics.
Maisuradze, Gia G; Liwo, Adam; Scheraga, Harold A
2009-01-09
Protein folding is considered here by studying the dynamics of the folding of the triple beta-strand WW domain from the Formin-binding protein 28. Starting from the unfolded state and ending either in the native or nonnative conformational states, trajectories are generated with the coarse-grained united residue (UNRES) force field. The effectiveness of principal components analysis (PCA), an already established mathematical technique for finding global, correlated motions in atomic simulations of proteins, is evaluated here for coarse-grained trajectories. The problems related to PCA and their solutions are discussed. The folding and nonfolding of proteins are examined with free-energy landscapes. Detailed analyses of many folding and nonfolding trajectories at different temperatures show that PCA is very efficient for characterizing the general folding and nonfolding features of proteins. It is shown that the first principal component captures and describes in detail the dynamics of a system. Anomalous diffusion in the folding/nonfolding dynamics is examined by the mean-square displacement (MSD) and the fractional diffusion and fractional kinetic equations. The collisionless (or ballistic) behavior of a polypeptide undergoing Brownian motion along the first few principal components is accounted for.
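The projection-and-MSD analysis described above can be sketched as below. The trajectory here is a synthetic random walk, not UNRES output; the point is only to show projecting onto the first principal component and computing the mean-square displacement whose log-log slope distinguishes ballistic from diffusive behavior.

```python
# Hedged sketch: PC1 projection of a trajectory and its mean-square displacement.
import numpy as np

rng = np.random.default_rng(2)
traj = np.cumsum(rng.normal(size=(1000, 30)), axis=0)    # stand-in trajectory, 30 coordinates

centred = traj - traj.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]                                    # projection on first PC

def msd(x, max_lag):
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in range(1, max_lag + 1)])

m = msd(pc1, 100)
# MSD ~ t^2 indicates ballistic motion, MSD ~ t normal diffusion;
# the log-log slope estimates the diffusion exponent.
slope = np.polyfit(np.log(np.arange(1, 101)), np.log(m), 1)[0]
print(round(slope, 2))
```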
Dynamic of consumer groups and response of commodity markets by principal component analysis
NASA Astrophysics Data System (ADS)
Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo
2017-09-01
This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in the 6-month timeframe display similar values during 2010-2011, but decline after 2012. A sharp drop in an eigenvalue implies a significant change in the market state. Three commodity sectors, energy, metals and agriculture, are projected into a two-dimensional space consisting of the first two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled with one another and scattered during severe crises, such as the European sovereign debt crisis. We observe notable changes in the positions of the groups in the two-dimensional space during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and the change of state of one group can be used as a warning for other groups.
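The moving-window eigenvalue analysis can be sketched as below, with synthetic correlated returns standing in for commodity futures data; the window length and factor structure are illustrative assumptions.

```python
# Hedged sketch: largest eigenvalue of the return correlation matrix per window.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_assets = 500, 10
common = rng.normal(size=(n_days, 1))
returns = 0.5 * common + rng.normal(size=(n_days, n_assets))  # one common factor

window = 126  # roughly 6 months of trading days
top_eigs = []
for start in range(0, n_days - window + 1, window):
    corr = np.corrcoef(returns[start:start + window].T)       # cross-correlation matrix
    top_eigs.append(np.linalg.eigvalsh(corr)[-1])             # largest eigenvalue

# A sharp drop in the largest eigenvalue between windows would signal
# a change of market state, as the abstract discusses.
print(len(top_eigs))
```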
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were detected in N. roborowskii, while V could not be detected. Na, K and Ca showed high concentrations. Ti showed the maximum content variance, while K showed the minimum. Four principal components were extracted from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics of N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. These results provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
Lü, Gui-Cai; Zhao, Wei-Hong; Wang, Jiang-Tao
2011-01-01
Identification techniques for 10 species of red tide algae commonly found in the coastal areas of China were developed by combining the three-dimensional fluorescence spectra of fluorescent dissolved organic matter (FDOM) from cultured red tide algae with principal component analysis. Based on the results of the principal component analysis, the first principal component loading spectrum of the three-dimensional fluorescence spectrum was chosen as the identification characteristic spectrum for red tide algae, and the phytoplankton fluorescence characteristic spectrum band was established. The 10 algal species were then tested using Bayesian discriminant analysis, with a correct identification rate of more than 92% for Pyrrophyta at the species level and more than 75% for Bacillariophyta at the genus level, within which the correct identification rates were more than 90% for Phaeodactylum and Chaetoceros. The results showed that identification techniques for the 10 species of red tide algae based on the three-dimensional fluorescence spectra of FDOM from cultured red tide algae and principal component analysis work well.
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients can be modeled as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
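The matrix-based PCA step can be sketched as below. This is a generic two-directional two-dimensional PCA on synthetic matrices, not the authors' SW2D2PCA pipeline: the stationary-wavelet construction of the multi-scale matrices is omitted, and all dimensions are illustrative.

```python
# Hedged sketch: two-directional 2D-PCA operating on matrices, not flattened vectors.
import numpy as np

rng = np.random.default_rng(4)
samples = rng.normal(size=(50, 16, 24))          # 50 matrices, e.g. scales x time
centred = samples - samples.mean(axis=0)

# Column-direction covariance: average of A^T A over samples
G_col = np.einsum('nij,nik->jk', centred, centred) / len(samples)
# Row-direction covariance: average of A A^T over samples
G_row = np.einsum('nij,nkj->ik', centred, centred) / len(samples)

_, X = np.linalg.eigh(G_col)                     # eigenvectors in ascending order
_, Z = np.linalg.eigh(G_row)
q, p = 4, 3
X_q = X[:, -q:]                                  # right projection (24 -> 4)
Z_p = Z[:, -p:]                                  # left projection (16 -> 3)

# Feature matrix for each sample: Z_p^T A X_q
features = np.einsum('ip,nij,jq->npq', Z_p, centred, X_q)
print(features.shape)
```

Projecting from both directions keeps the two-dimensional structure of each sample while shrinking both axes, which is the dimension-reduction idea the abstract contrasts with vectorized PCA.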
Hyperspectral optical imaging of human iris in vivo: characteristics of reflectance spectra
NASA Astrophysics Data System (ADS)
Medina, José M.; Pereira, Luís M.; Correia, Hélder T.; Nascimento, Sérgio M. C.
2011-07-01
We report a hyperspectral imaging system to measure the reflectance spectra of real human irises with high spatial resolution. A set of ocular prostheses was used as the control condition. Reflectance data were decorrelated by principal-component analysis. The main conclusion is that the spectral complexity of the human iris is considerable: between 9 and 11 principal components are necessary to account for 99% of the cumulative variance in human irises. Correcting image misalignments associated with spontaneous ocular movements did not influence this result. The data also suggest a correlation between the first principal component and the different levels of melanin present in the irises. It was also found that although the spectral characteristics of the first five principal components were not affected by the radial and angular position of the selected iridal areas, higher-order components were affected, suggesting a possible influence of iris texture. The results show that hyperspectral imaging of the iris, together with adequate spectroscopic analyses, provides more information than conventional colorimetric methods, making hyperspectral imaging suitable for the characterization of melanin and the noninvasive diagnosis of ocular diseases and iris color.
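The "components needed for 99% cumulative variance" criterion can be sketched as below on synthetic low-rank spectra (12 latent factors stand in for the iris data; the numbers are illustrative, not the paper's).

```python
# Hedged sketch: count principal components reaching 99% cumulative variance.
import numpy as np

rng = np.random.default_rng(5)
# Stand-in for reflectance spectra: 300 pixels x 101 wavelength bands,
# generated from 12 latent spectral factors plus a little noise.
basis = rng.normal(size=(12, 101))
spectra = rng.normal(size=(300, 12)) @ basis + 0.01 * rng.normal(size=(300, 101))

centred = spectra - spectra.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)           # cumulative variance fraction
n_components = int(np.searchsorted(explained, 0.99) + 1)
print(n_components)
```

Applied to real iris spectra, this count is what the abstract reports as falling between 9 and 11.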
Seeing wholes: The concept of systems thinking and its implementation in school leadership
NASA Astrophysics Data System (ADS)
Shaked, Haim; Schechter, Chen
2013-12-01
Systems thinking (ST) is an approach advocating thinking about any given issue as a whole, emphasising the interrelationships between its components rather than the components themselves. This article aims to link ST and school leadership, claiming that ST may enable school principals to develop highly performing schools that can cope successfully with current challenges, which are more complex than ever before in today's era of accountability and high expectations. The article presents the concept of ST - its definition, components, history and applications. Thereafter, its connection to education and its contribution to school management are described. The article concludes by discussing practical processes including screening for ST-skilled principal candidates and developing ST skills among prospective and currently performing school principals, pinpointing three opportunities for skills acquisition: during preparatory programmes; during their first years on the job, supported by veteran school principals as mentors; and throughout their entire career. Such opportunities may not only provide school principals with ST skills but also improve their functioning throughout the aforementioned stages of professional development.
A modified procedure for mixture-model clustering of regional geochemical data
Ellefsen, Karl J.; Smith, David B.; Horton, John D.
2014-01-01
A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
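The transform-then-cluster chain can be sketched as below on synthetic compositions. Note the simplifications: a classical PCA stands in for the robust principal component transformation the paper uses, and the mixture model is a plain Gaussian mixture; the ilr transform is the standard one for compositional data.

```python
# Hedged sketch: ilr transform -> PCA -> mixture-model clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def ilr(compositions):
    """Isometric log-ratio transform of rows summing to 1 (D parts -> D-1 coords)."""
    logc = np.log(compositions)
    n, D = compositions.shape
    out = np.empty((n, D - 1))
    for j in range(1, D):
        g = logc[:, :j].mean(axis=1)              # log geometric mean of first j parts
        out[:, j - 1] = np.sqrt(j / (j + 1)) * (g - logc[:, j])
    return out

rng = np.random.default_rng(6)
raw = rng.lognormal(size=(200, 8))
comp = raw / raw.sum(axis=1, keepdims=True)       # closed compositions (8 "elements")

coords = ilr(comp)                                # 8 parts -> 7 ilr coordinates
scores = PCA(n_components=3).fit_transform(coords)
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(scores)
print(coords.shape, len(set(labels)))
```

As the abstract notes, the number of clusters and of retained components are subjective choices; here 4 and 3 are arbitrary.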
Temporal evolution of financial-market correlations.
Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S
2011-08-01
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
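The random-matrix comparison can be sketched as below: eigenvalues of an empirical correlation matrix are tested against the Marchenko-Pastur upper edge expected for uncorrelated returns. The data are synthetic (one common "market" factor), not the asset prices the paper analyzes.

```python
# Hedged sketch: correlation-matrix eigenvalues vs. the Marchenko-Pastur bound.
import numpy as np

rng = np.random.default_rng(7)
T, N = 1000, 50
market = rng.normal(size=(T, 1))
returns = 0.3 * market + rng.normal(size=(T, N))   # one common factor + noise

corr = np.corrcoef(returns.T)
eigs = np.linalg.eigvalsh(corr)

q = N / T
lambda_max = (1 + np.sqrt(q)) ** 2                 # Marchenko-Pastur upper edge
n_signal = int(np.sum(eigs > lambda_max))          # eigenvalues carrying real structure
print(n_signal)
```

Eigenvalues above `lambda_max` are incompatible with uncorrelated random price changes, which is the structure test the abstract describes.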
Takagi, Daisuke; Ikeda, Ken'ichi; Kawachi, Ichiro
2012-11-01
Crime is an important determinant of public health outcomes, including quality of life, mental well-being, and health behavior. A body of research has documented the association between community social capital and crime victimization. The association between social capital and crime victimization has been examined at multiple levels of spatial aggregation, ranging from entire countries, to states, metropolitan areas, counties, and neighborhoods. In multilevel analysis, the spatial boundaries at level 2 are most often drawn from administrative boundaries (e.g., Census tracts in the U.S.). One problem with adopting administrative definitions of neighborhoods is that it ignores spatial spillover. We conducted a study of social capital and crime victimization in one ward of Tokyo city, using a spatial Durbin model with an inverse-distance weighting matrix that assigned each respondent a unique level of "exposure" to social capital based on all other residents' perceptions. The study is based on a postal questionnaire sent to residents of Arakawa Ward, Tokyo, aged 20-69 years. The response rate was 43.7%. We examined the contextual influence of generalized trust, perceptions of reciprocity, two types of social network variables, as well as two principal components of social capital (constructed from the above four variables). Our outcome measure was self-reported crime victimization in the last five years. In the spatial Durbin model, we found that neighborhood generalized trust, reciprocity, supportive networks and two principal components of social capital were each inversely associated with crime victimization. By contrast, a multilevel regression performed with the same data (using administrative neighborhood boundaries) found generally null associations between neighborhood social capital and crime. Spatial regression methods may be more appropriate for investigating the contextual influence of social capital in homogeneous cultural settings such as Japan.
Copyright © 2012 Elsevier Ltd. All rights reserved.
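The inverse-distance "exposure" construction can be sketched as below, with synthetic coordinates and trust scores standing in for the survey data: each respondent's contextual exposure is the distance-weighted average of all other residents' scores.

```python
# Hedged sketch: row-standardized inverse-distance weights and contextual exposure.
import numpy as np

rng = np.random.default_rng(8)
n = 100
xy = rng.uniform(0, 10, size=(n, 2))          # respondent locations (arbitrary units)
trust = rng.normal(size=n)                    # individual generalized-trust scores

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
W = np.where(d > 0, 1.0 / np.maximum(d, 1e-9), 0.0)   # inverse distance, zero diagonal
W /= W.sum(axis=1, keepdims=True)             # row-standardize

exposure = W @ trust                          # unique contextual exposure per respondent
print(exposure.shape)
```

In the spatial Durbin model, `exposure` (the spatially lagged covariate) enters the regression alongside each respondent's own score, which is what distinguishes it from a multilevel model with administrative boundaries.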
NASA Technical Reports Server (NTRS)
Storrie-Lombardi, Michael C.; Hoover, Richard B.
2005-01-01
Last year we presented techniques for the detection of fossils during robotic missions to Mars using both structural and chemical signatures [Storrie-Lombardi and Hoover, 2004]. Analyses included lossless compression of photographic images to estimate the relative complexity of a putative fossil compared to the rock matrix [Corsetti and Storrie-Lombardi, 2003] and elemental abundance distributions to provide mineralogical classification of the rock matrix [Storrie-Lombardi and Fisk, 2004]. We presented a classification strategy employing two exploratory classification algorithms (Principal Component Analysis and Hierarchical Cluster Analysis) and a non-linear stochastic neural network to produce a Bayesian estimate of classification accuracy. We now present an extension of our previous experiments exploring putative fossil forms morphologically resembling cyanobacteria discovered in the Orgueil meteorite. Elemental abundances (C6, N7, O8, Na11, Mg12, Al13, Si14, P15, S16, Cl17, K19, Ca20, Fe26) obtained for both extant cyanobacteria and fossil trilobites produce signatures readily distinguishing them from meteorite targets. When compared to elemental abundance signatures for extant cyanobacteria, Orgueil structures exhibit decreased abundances of C6, N7, Na11, Al13, P15, Cl17, K19 and Ca20, and increases in Mg12, S16 and Fe26. Diatoms and silicified portions of cyanobacterial sheaths exhibiting high levels of silicon and correspondingly low levels of carbon cluster more closely with terrestrial fossils than with extant cyanobacteria. Compression indices verify that variations in random and redundant textural patterns between perceived forms and the background matrix contribute significantly to morphological visual identification. The results provide a quantitative probabilistic methodology for discriminating putative fossils from the surrounding rock matrix and from extant organisms using both structural and chemical information.
The techniques described appear applicable to the geobiological analysis of meteoritic samples or in situ exploration of the Mars regolith. Keywords: cyanobacteria, microfossils, Mars, elemental abundances, complexity analysis, multifactor analysis, principal component analysis, hierarchical cluster analysis, artificial neural networks, paleo-biosignatures
A linkage analysis toolkit for studying allosteric networks in ion channels
2013-01-01
A thermodynamic approach to studying allosterically regulated ion channels such as the large-conductance voltage- and Ca2+-dependent (BK) channel is presented, drawing from principles originally introduced to describe linkage phenomena in hemoglobin. In this paper, linkage between a principal channel component and secondary elements is derived from a four-state thermodynamic cycle. One set of parallel legs in the cycle describes the “work function,” or the free energy required to activate the principal component. The second are “lever operations” activating linked elements. The experimental embodiment of this linkage cycle is a plot of work function versus secondary force, whose asymptotes are a function of the parameters (displacements and interaction energies) of an allosteric network. Two essential work functions play a role in evaluating data from voltage-clamp experiments. The first is the conductance Hill energy WH[g], which is a “local” work function for pore activation, and is defined as kT times the Hill transform of the conductance (G-V) curve. The second is the electrical capacitance energy WC[q], representing “global” gating charge displacement, and is equal to the product of total gating charge per channel times the first moment (VM) of normalized capacitance (slope of the Q-V curve). Plots of WH[g] and WC[q] versus voltage and Ca2+ potential can be used to measure thermodynamic parameters in a model-independent fashion for the core gating constituents (pore, voltage sensor, and Ca2+-binding domain) of the BK channel. The method is easily generalized for use in studying other allosterically regulated ion channels. The feasibility of performing linkage analysis from patch-clamp data was explored by simulating gating and ionic currents of a 17-particle model BK channel in response to a slow voltage ramp, which yielded interaction energies deviating from their given values by 1.3% to 7.2%. PMID:23250867
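The conductance Hill energy can be sketched as below for a simple two-state channel. The Boltzmann G-V curve and its parameters (gating charge z, half-activation voltage Vh) are illustrative stand-ins for measured data, not values from the paper; for this two-state case WH[g] reduces analytically to z(V − Vh), so its slope recovers z.

```python
# Hedged sketch: WH[g] = kT * Hill transform of a normalized G-V curve.
import numpy as np

kT = 25.3            # thermal energy in meV at ~room temperature
z, Vh = 1.5, -20.0   # illustrative gating charge (e0) and half-activation voltage (mV)

V = np.linspace(-100.0, 60.0, 9)
G = 1.0 / (1.0 + np.exp(-z * (V - Vh) / kT))     # two-state Boltzmann G-V curve

W_H = kT * np.log(G / (1.0 - G))                 # Hill transform -> work function (meV)

# For a two-state channel W_H is linear in V with slope equal to z:
slope = np.polyfit(V, W_H, 1)[0]
print(round(slope, 3))                           # 1.5
```

For a real allosteric channel W_H is not linear, and it is precisely the asymptotes of such plots that carry the linkage parameters.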
Introduction to stream network habitat analysis
Bartholow, John M.; Waddle, Terry J.
1986-01-01
Increasing demands on stream resources by a variety of users have resulted in an increased emphasis on studies that evaluate the cumulative effects of basinwide water management programs. Network habitat analysis refers to the evaluation of an entire river basin (or network) by predicting its habitat response to alternative management regimes. The analysis principally focuses on the biological and hydrological components of the river basin, which include both micro- and macrohabitat. (The terms micro- and macrohabitat are further defined and discussed later in this document.) Both conceptual and analytic models are frequently used for simplifying and integrating the various components of the basin. The model predictions can be used in developing management recommendations to preserve, restore, or enhance instream fish habitat. A network habitat analysis should begin with a clear and concise statement of the study objectives and a thorough understanding of the institutional setting in which the study results will be applied. This includes the legal, social, and political considerations inherent in any water management setting. The institutional environment may dictate the focus and level of detail required of the study to a far greater extent than the technical considerations. After the study objectives, including species of interest, and institutional setting are collectively defined, the technical aspects should be scoped to determine the spatial and temporal requirements of the analysis. A macro-level approach should be taken first to identify critical biological elements and requirements. Next, habitat availability is quantified much as in a "standard" river segment analysis, with the likely incorporation of some macrohabitat components, such as stream temperature. Individual river segments may be aggregated to represent the networkwide habitat response to alternative water management schemes.
Things learned about problems caused or opportunities generated may be fed back to the design of new alternatives, which themselves may be similarly tested. One may get as sophisticated an analysis as the decisionmaking process demands. Figure 1 shows a decision point that asks whether the results from the micro- or macrohabitat models display cumulative or synergistic effects. If they do, then network habitat analysis is the appropriate tool. We are left, however, in a difficult bind. We may not know a priori whether the effects are cumulative or synergistic unless some network-type questions are investigated as part of the scoping process. The next several sections raise issues designed to alert the modeler to relevant questions necessary to address this paradox.
Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A
2008-12-01
It is vital to forecast gas and particulate matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network would result in the reduction of the model structure complexity, minimize computation cost, and eliminate model overfitting problems. The obtained results of RBF network prediction were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. The good results indicated the RBF network could be trained to model these highly nonlinear relationships. Thus, the RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
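The PCA-then-RBF chain can be sketched as below. Two substitutions to note: the data are synthetic (not barn measurements), and kernel ridge regression with an RBF kernel stands in for the RBF neural network; all parameter values are illustrative.

```python
# Hedged sketch: PCA dimension reduction followed by an RBF-kernel regressor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 4))                    # e.g. temperatures, animal units, ventilation
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)

pca = PCA(n_components=2).fit(X)                 # two factors explained >94% in the study
Z = pca.transform(X)

model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(Z, y)
r = np.corrcoef(y, model.predict(Z))[0, 1]       # correlation coefficient, training data
print(Z.shape)
```

Feeding the two uncorrelated PCA factors, rather than all raw inputs, is what the abstract credits with reducing model complexity and overfitting.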
Xiao, Keke; Chen, Yun; Jiang, Xie; Zhou, Yan
2017-03-01
An investigation was conducted for 20 different types of sludge in order to identify the key organic compounds in extracellular polymeric substances (EPS) that are important in assessing variations of sludge filterability. The different types of sludge varied in initial total solids (TS) content, organic composition and pre-treatment methods. For instance, some of the sludges were pre-treated by acid, ultrasonic, thermal, alkaline, or advanced oxidation technique. The Pearson's correlation results showed significant correlations between sludge filterability and zeta potential, pH, dissolved organic carbon, protein and polysaccharide in soluble EPS (SB EPS), loosely bound EPS (LB EPS) and tightly bound EPS (TB EPS). The principal component analysis (PCA) method was used to further explore correlations between variables and similarities among EPS fractions of different types of sludge. Two principal components were extracted: principal component 1 accounted for 59.24% of total EPS variations, while principal component 2 accounted for 25.46% of total EPS variations. Dissolved organic carbon, protein and polysaccharide in LB EPS showed higher eigenvector projection values than the corresponding compounds in SB EPS and TB EPS in principal component 1. Further characterization of fractionized key organic compounds in LB EPS was conducted with size-exclusion chromatography-organic carbon detection-organic nitrogen detection (LC-OCD-OND). A numerical multiple linear regression model was established to describe the relationship between organic compounds in LB EPS and sludge filterability. Copyright © 2016 Elsevier Ltd. All rights reserved.
QSAR modeling of flotation collectors using principal components extracted from topological indices.
Natarajan, R; Nirdosh, Inderjit; Basak, Subhash C; Mills, Denise R
2002-01-01
Several topological indices were calculated for substituted-cupferrons that were tested as collectors for the froth flotation of uranium. Principal component analysis (PCA) was used for data reduction. Seven principal components (PC) were found to account for 98.6% of the variance among the computed indices. The principal components thus extracted were used in stepwise regression analyses to construct regression models for the prediction of separation efficiencies (Es) of the collectors. A two-parameter model with a correlation coefficient of 0.889 and a three-parameter model with a correlation coefficient of 0.913 were formed. PCs were found to be better than the partition coefficient to form regression equations, and inclusion of an electronic parameter such as Hammett sigma or quantum mechanically derived electronic charges on the chelating atoms did not improve the correlation coefficient significantly. The method was extended to model the separation efficiencies of mercaptobenzothiazoles (MBT) and aminothiophenols (ATP) used in the flotation of lead and zinc ores, respectively. Five principal components were found to explain 99% of the data variability in each series. A three-parameter equation with correlation coefficient of 0.985 and a two-parameter equation with correlation coefficient of 0.926 were obtained for MBT and ATP, respectively. The amenability of separation efficiencies of chelating collectors to QSAR modeling using PCs based on topological indices might lead to the selection of collectors for synthesis and testing from a virtual database.
Akbari, Hamed; Macyszyn, Luke; Da, Xiao; Wolf, Ronald L.; Bilello, Michel; Verma, Ragini; O’Rourke, Donald M.
2014-01-01
Purpose To augment the analysis of dynamic susceptibility contrast material–enhanced magnetic resonance (MR) images to uncover unique tissue characteristics that could potentially facilitate treatment planning through a better understanding of the peritumoral region in patients with glioblastoma. Materials and Methods Institutional review board approval was obtained for this study, with waiver of informed consent for retrospective review of medical records. Dynamic susceptibility contrast-enhanced MR imaging data were obtained for 79 patients, and principal component analysis was applied to the perfusion signal intensity. The first six principal components were sufficient to characterize more than 99% of variance in the temporal dynamics of blood perfusion in all regions of interest. The principal components were subsequently used in conjunction with a support vector machine classifier to create a map of heterogeneity within the peritumoral region, and the variance of this map served as the heterogeneity score. Results The calculated principal components allowed near-perfect separability of tissue that was likely highly infiltrated with tumor and tissue that was unlikely infiltrated with tumor. The heterogeneity map created by using the principal components showed a clear relationship between voxels judged by the support vector machine to be highly infiltrated and subsequent recurrence. The results demonstrated a significant correlation (r = 0.46, P < .0001) between the heterogeneity score and patient survival. The hazard ratio was 2.23 (95% confidence interval: 1.4, 3.6; P < .01) between patients with high and low heterogeneity scores on the basis of the median heterogeneity score. Conclusion Analysis of dynamic susceptibility contrast-enhanced MR imaging data by using principal component analysis can help identify imaging variables that can be subsequently used to evaluate the peritumoral region in glioblastoma. 
These variables are potentially indicative of tumor infiltration and may become useful tools in guiding therapy, as well as individualized prognostication. © RSNA, 2014 PMID:24955928
Sharing Craft Knowledge: The Soul of Principal Peer Assessment.
ERIC Educational Resources Information Center
Abbott, James E.
1996-01-01
Describes the implementation of a peer assessment process for school principals using a New Skills Profile of essential craft skills: teaching methods, budgetary competence, networking, technological literacy, communication, leadership, conflict resolution, diversity, systems thinking, and Total Quality Management principles. Participating…
Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R
2010-01-01
The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR(msr)) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads. 
The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
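The PC-selection idea above, reconstructing spectra from an increasing number of principal components and watching a quality score, can be illustrated with synthetic data. This is not the paper's SNR(msr) estimator: because the data here are simulated, the clean signal is known, so reconstructions are scored against it directly; the three-band "spectra" and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "Raman-like" spectra: three smooth bands with random weights
# plus noise. The clean signal is kept so reconstructions can be scored.
x = np.linspace(0, 1, 300)
bands = np.stack([np.exp(-((x - c) / 0.04) ** 2) for c in (0.2, 0.5, 0.8)])
weights = rng.normal(size=(120, 3))
clean = weights @ bands
spectra = clean + 0.05 * rng.normal(size=clean.shape)

mean_spec = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)

# Reconstruct from the first k PCs and score signal-to-error.
snr_vs_k = []
for k in range(1, 11):
    recon = mean_spec + (U[:, :k] * s[:k]) @ Vt[:k]
    snr_vs_k.append(np.linalg.norm(clean) / np.linalg.norm(recon - clean))

best_k = int(np.argmax(snr_vs_k)) + 1   # PCs beyond this add mostly noise
```

The score rises steeply while PCs add signal (here, up to the three true bands) and falls once further PCs mainly reintroduce noise, which is the cutoff behavior the paper exploits.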
Principal component reconstruction (PCR) for cine CBCT with motion learning from 2D fluoroscopy.
Gao, Hao; Zhang, Yawei; Ren, Lei; Yin, Fang-Fang
2018-01-01
This work aims to generate cine CT images (i.e., 4D images with high-temporal resolution) based on a novel principal component reconstruction (PCR) technique with motion learning from 2D fluoroscopic training images. In the proposed PCR method, the matrix factorization is utilized as an explicit low-rank regularization of 4D images that are represented as a product of spatial principal components and temporal motion coefficients. The key hypothesis of PCR is that temporal coefficients from 4D images can be reasonably approximated by temporal coefficients learned from 2D fluoroscopic training projections. For this purpose, we can acquire fluoroscopic training projections for a few breathing periods at fixed gantry angles that are free from geometric distortion due to gantry rotation, that is, fluoroscopy-based motion learning. Such training projections can provide an effective characterization of the breathing motion. The temporal coefficients can be extracted from these training projections and used as priors for PCR, even though principal components from training projections are certainly not the same for these 4D images to be reconstructed. For this purpose, training data are synchronized with reconstruction data using identical real-time breathing position intervals for projection binning. In terms of image reconstruction, with a priori temporal coefficients, the data fidelity for PCR changes from nonlinear to linear, and consequently, the PCR method is robust and can be solved efficiently. PCR is formulated as a convex optimization problem with the sum of linear data fidelity with respect to spatial principal components and spatiotemporal total variation regularization imposed on 4D image phases. The solution algorithm of PCR is developed based on the alternating direction method of multipliers. The implementation is fully parallelized on GPU with the NVIDIA CUDA toolbox and each reconstruction takes a few minutes.
The proposed PCR method is validated and compared with a state-of-the-art method, that is, PICCS, using both simulation and experimental data with the on-board cone-beam CT setting. The results demonstrated the feasibility of PCR for cine CBCT and a significantly improved reconstruction quality over PICCS. With a priori estimated temporal motion coefficients using fluoroscopic training projections, the PCR method can accurately reconstruct spatial principal components, and then generate cine CT images as a product of temporal motion coefficients and spatial principal components. © 2017 American Association of Physicists in Medicine.
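The central linearization in PCR, fixing the temporal coefficients a priori so that the spatial components are found by ordinary least squares, can be sketched on a toy low-rank "movie". This is a minimal sketch under assumed sizes and signals; it omits the projection operator, TV regularization, and ADMM solver of the actual method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy analogue of a 4D image sequence: X = spatial components x temporal
# coefficients, plus noise. All names and sizes are illustrative.
n_pix, n_frames, rank = 400, 60, 3
U_true = rng.normal(size=(n_pix, rank))
t = np.linspace(0, 4 * np.pi, n_frames)
V = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)])      # rank x n_frames
X = U_true @ V + 0.1 * rng.normal(size=(n_pix, n_frames))

# PCR's key step: with temporal coefficients V fixed a priori (in the paper
# they are learned from fluoroscopic training projections), solving for the
# spatial components U is an ordinary linear least-squares problem.
U_hat = np.linalg.lstsq(V.T, X.T, rcond=None)[0].T       # n_pix x rank
X_hat = U_hat @ V                                         # reconstruction

# Relative error against the known clean movie (available only because
# the data here are synthetic).
rel_err = np.linalg.norm(X_hat - U_true @ V) / np.linalg.norm(U_true @ V)
```

Because the fit is linear in `U_hat` once `V` is known, the reconstruction is both fast and robust, which is the efficiency argument made in the abstract.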
Goekoop, Rutger; Goekoop, Jaap G.; Scholte, H. Steven
2012-01-01
Introduction Human personality is described preferentially in terms of factors (dimensions) found using factor analysis. An alternative and highly related method is network analysis, which may have several advantages over factor analytic methods. Aim To directly compare the ability of network community detection (NCD) and principal component factor analysis (PCA) to examine modularity in multidimensional datasets such as the neuroticism-extraversion-openness personality inventory revised (NEO-PI-R). Methods 434 healthy subjects were tested on the NEO-PI-R. PCA was performed to extract factor structures (FS) of the current dataset using both item scores and facet scores. Correlational network graphs were constructed from univariate correlation matrices of interactions between both items and facets. These networks were pruned in a link-by-link fashion while calculating the network community structure (NCS) of each resulting network using the Wakita Tsurumi clustering algorithm. NCSs were matched against FS and networks of best matches were kept for further analysis. Results At facet level, NCS showed a best match (96.2%) with a ‘confirmatory’ 5-FS. At item level, NCS showed a best match (80%) with the standard 5-FS and involved a total of 6 network clusters. Lesser matches were found with ‘confirmatory’ 5-FS and ‘exploratory’ 6-FS of the current dataset. Network analysis did not identify facets as a separate level of organization in between items and clusters. A small-world network structure was found in both item- and facet level networks. Conclusion We present the first optimized network graph of personality traits according to the NEO-PI-R: a ‘Personality Web’. Such a web may represent the possible routes that subjects can take during personality development. NCD outperforms PCA by producing plausible modularity at item level in non-standard datasets, and can identify the key roles of individual items and clusters in the network. PMID:23284713
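The contrast the study draws, recovering modules from a pruned correlation network rather than from factor loadings, can be sketched on simulated questionnaire data. This is a much-simplified stand-in: links below a correlation threshold are pruned in one step and communities are read off as connected components, not via the link-by-link pruning with Wakita-Tsurumi clustering used in the paper; the 3-cluster item structure is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated questionnaire: 12 items loading on 3 latent "traits" (4 each).
n_subj, n_items = 434, 12
latent = rng.normal(size=(n_subj, 3))
loadings = np.zeros((3, n_items))
for k in range(3):
    loadings[k, 4 * k:4 * k + 4] = 1.0
scores = latent @ loadings + 0.8 * rng.normal(size=(n_subj, n_items))

R = np.corrcoef(scores.T)                     # item correlation network

# Prune weak links, then take communities as connected components
# (a crude stand-in for network community detection).
adj = (np.abs(R) > 0.4) & ~np.eye(n_items, dtype=bool)

def components(adj):
    """Connected components of a boolean adjacency matrix."""
    seen, comps = set(), []
    for start in range(len(adj)):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(np.flatnonzero(adj[v]))
        seen |= comp
        comps.append(sorted(comp))
    return comps

clusters = components(adj)                    # item-level "modules"
```

With well-separated latent traits the pruned network decomposes into the three planted item clusters, the kind of item-level modularity the authors compare against PCA factor structures.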
ERIC Educational Resources Information Center
Lin, Mind-Dih
2012-01-01
Improving principal leadership is a vital component to the success of educational reform initiatives that seek to improve whole-school performance, as principal leadership often exercises positive but indirect effects on student learning. Because of the importance of principals within the field of school improvement, this article focuses on…
ERIC Educational Resources Information Center
Herrmann, Mariesa; Ross, Christine
2016-01-01
States and districts across the country are implementing new principal evaluation systems that include measures of the quality of principals' school leadership practices and measures of student achievement growth. Because these evaluation systems will be used for high-stakes decisions, it is important that the component measures of the evaluation…
ERIC Educational Resources Information Center
Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann; Mette, Ian M.
2015-01-01
This study examined the perspectives of novice and late career principals concerning instructional and organizational leadership within their performance evaluations. An online survey was sent to 251 principals with a return rate of 49%. Instructional leadership components of the evaluation that were most important to all principals were:…
Women-Only (Homophilous) Networks Supporting Women Leaders in Education
ERIC Educational Resources Information Center
Coleman, Marianne
2010-01-01
Purpose: This paper aims to consider what all-women networks have, and might offer, in terms of support and development of women in educational leadership. Design/methodology/approach: The study draws on two case studies of such networks in education in England, the first, a regional network for women secondary school principals, and the other…
1987-10-01
Instrumentation for scientific computing in neural networks, information science, artificial intelligence, and applied mathematics. An instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen
NASA Astrophysics Data System (ADS)
Wirtz, Hanna; Schäfer, Sarah; Hoberg, Claudius; Havenith, Martina
2018-03-01
We have recorded the THz spectra of the peptides NALA and NAGA as well as the amino acid leucine as model systems for hydrophobic and hydrophilic hydration. The spectra were recorded as a function of temperature and concentration and were analyzed in terms of a principal component analysis approach. NAGA shows positive absorptions with an increasing effective absorption coefficient for increasing concentrations. We conclude that NAGA due to its polar and hydrophilic structure does not have a significant influence on the surrounding water network, but is instead integrated into the water network forming a supramolecular complex. In contrast, for NALA, one hydrogen atom is substituted by a hydrophobic iso-butyl chain. We observe for NALA a decrease in absorption below 1.5 THz and a nonlinearity with a turning point around 0.75 M. Our measurements indicate that the first hydration shell of NALA is still intact at 0.75 M (corresponding to 65 water molecules per NALA). However, for larger concentrations the hydration shells can overlap, which explains the nonlinearity. For leucine, the changes in the spectrum occur at smaller concentrations. This might indicate that leucine exhibits a long-range effect on the solvating water network.
Gaussian Graphical Models Identify Networks of Dietary Intake in a German Adult Population.
Iqbal, Khalid; Buijsse, Brian; Wirth, Janine; Schulze, Matthias B; Floegel, Anna; Boeing, Heiner
2016-03-01
Data-reduction methods such as principal component analysis are often used to derive dietary patterns. However, such methods do not assess how foods are consumed in relation to each other. Gaussian graphical models (GGMs) are a set of novel methods that can address this issue. We sought to apply GGMs to derive sex-specific dietary intake networks representing consumption patterns in a German adult population. Dietary intake data from 10,780 men and 16,340 women of the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam cohort were cross-sectionally analyzed to construct dietary intake networks. Food intake for each participant was estimated using a 148-item food-frequency questionnaire that captured the intake of 49 food groups. GGMs were applied to log-transformed intakes (grams per day) of 49 food groups to construct sex-specific food networks. Semiparametric Gaussian copula graphical models (SGCGMs) were used to confirm GGM results. In men, GGMs identified 1 major dietary network that consisted of intakes of red meat, processed meat, cooked vegetables, sauces, potatoes, cabbage, poultry, legumes, mushrooms, soup, and whole-grain and refined breads. For women, a similar network was identified with the addition of fried potatoes. Other identified networks consisted of dairy products and sweet food groups. SGCGMs yielded results comparable to those of GGMs. GGMs are a powerful exploratory method that can be used to construct dietary networks representing dietary intake patterns that reveal how foods are consumed in relation to each other. GGMs indicated an apparent major role of red meat intake in a consumption pattern in the studied population. In the future, identified networks might be transformed into pattern scores for investigating their associations with health outcomes. © 2016 American Society for Nutrition.
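A Gaussian graphical model connects variables whose partial correlation, the correlation remaining after conditioning on all other variables, is nonzero. The sketch below uses a plain partial-correlation threshold on synthetic data as a stand-in for the penalized GGM and SGCGM estimators used in the study; the five "food groups" and the chain structure among the first three are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for log-transformed intakes of 5 food groups:
# groups 0 -> 1 -> 2 form a chain; groups 3 and 4 are unrelated.
n = 2000
a = rng.normal(size=n)
b = 0.8 * a + 0.6 * rng.normal(size=n)
c = 0.8 * b + 0.6 * rng.normal(size=n)
X = np.column_stack([a, b, c, rng.normal(size=n), rng.normal(size=n)])

# Partial correlations from the precision (inverse covariance) matrix.
P = np.linalg.inv(np.cov(X.T))
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 1.0)

# GGM edges: pairs still associated after conditioning on all others.
edges = {(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(partial[i, j]) > 0.1}
```

Note that groups 0 and 2 are marginally correlated but get no edge: conditioning on group 1 explains their association away, which is exactly the "how foods are consumed in relation to each other" information that marginal-correlation methods like PCA do not expose.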
Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A
2018-06-01
This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
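Two ingredients of the method, taking the largest principal component of the networked time series as the information state and scoring each sensor by an information-theoretic comparison against that state, can be sketched as follows. Mutual information between symbolized signals stands in for the paper's x-Markov conditional-entropy difference, and the three "sensors" are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Three passive "sensors": two observe a common target signal with
# different gains; the third records only noise. Illustrative data.
T = 5000
target = np.sin(np.linspace(0, 60 * np.pi, T))
sensors = np.vstack([1.0 * target, 0.7 * target, np.zeros(T)])
sensors += 0.5 * rng.normal(size=sensors.shape)

# Network information state: largest principal component of the
# time series collected across the network.
Xc = sensors - sensors.mean(axis=1, keepdims=True)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
state = Vt[0]

def symbolize(x, bins=4):
    """Coarse-grain a signal into equal-occupancy symbols."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_info(xs, ys):
    """Mutual information (bits) between two symbol sequences; used here
    in place of the paper's x-Markov conditional-entropy difference."""
    joint, _, _ = np.histogram2d(xs, ys, bins=(xs.max() + 1, ys.max() + 1))
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

state_sym = symbolize(state)
contrib = [mutual_info(symbolize(ch), state_sym) for ch in sensors]
```

Sensors that actually see the target share information with the network state, while the noise-only channel contributes essentially nothing, the kind of per-sensor ranking the paper formalizes.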
Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)
NASA Astrophysics Data System (ADS)
Crowell, B. W.; Bock, Y.; Squibb, M. B.
2010-12-01
Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
ERIC Educational Resources Information Center
Chou, Yeh-Tai; Wang, Wen-Chung
2010-01-01
Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that a first eigenvalue greater than 1.5 signifies a violation of unidimensionality when there…
ERIC Educational Resources Information Center
Brusco, Michael J.; Singh, Renu; Steinley, Douglas
2009-01-01
The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…
Relaxation mode analysis of a peptide system: comparison with principal component analysis.
Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi
2011-10-28
This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, the principal component analysis is a well-known method for analyzing the static properties of fluctuations of structures obtained by a simulation and classifying the structures into some groups. On the other hand, the relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.
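The difference between the two analyses can be made concrete: PCA diagonalizes the equal-time covariance C(0), while relaxation mode analysis solves the generalized eigenproblem C(tau) f = lambda C(0) f, whose eigenvalues estimate exp(-tau/t_relax) and thus separate modes by time scale. Below is a sketch on a linear two-mode toy process, not Met-enkephalin; all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy dynamics with two relaxation times: slow and fast AR(1) modes,
# linearly mixed into observed coordinates (a stand-in for a peptide's
# fluctuating internal coordinates).
T, phi = 50000, np.array([0.99, 0.80])          # per-step decay factors
S = np.zeros((2, T))
for i in range(1, T):
    S[:, i] = phi * S[:, i - 1] + rng.normal(size=2)
A = np.array([[1.0, 0.5], [0.3, 1.0]])          # mixing of the two modes
X = A @ S
Xc = X - X.mean(axis=1, keepdims=True)

# Relaxation mode analysis: C(tau) f = lambda C(0) f, with eigenvalues
# estimating exp(-tau / t_relax) for each relaxation mode.
tau = 10
C0 = Xc[:, :-tau] @ Xc[:, :-tau].T / (T - tau)
Ct = Xc[:, tau:] @ Xc[:, :-tau].T / (T - tau)
Ct = (Ct + Ct.T) / 2                            # symmetrize the estimate
evals = np.sort(np.linalg.eigvals(np.linalg.solve(C0, Ct)).real)[::-1]
t_relax = -tau / np.log(evals)                  # relaxation times (steps)
```

With decay factors 0.99 and 0.80, the true relaxation times are roughly 100 and 4.5 steps; the eigenvalues near 0.99**10 and 0.80**10 recover this separation, whereas PCA of C(0) alone would rank modes only by variance.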
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.
1984-01-01
A field of measured anomalies of some physical variable relative to their time averages is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.
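The partition-and-join idea can be sketched numerically: compute local EOFs on each spatial partition, project the field onto the joined local basis, and diagonalize in that reduced basis to approximate the global EOFs. The field below is a synthetic low-rank space-time anomaly field, not the Nimbus-5 ESMR data; sizes and the number of retained local EOFs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic anomaly field: 40 grid points x 500 times, 3 modes + noise.
space = rng.normal(size=(40, 3))
times = rng.normal(size=(3, 500))
F = space @ times + 0.2 * rng.normal(size=(40, 500))

# Full EOFs from the unpartitioned covariance matrix (for reference).
C_full = F @ F.T / F.shape[1]
w_full, E_full = np.linalg.eigh(C_full)         # ascending eigenvalues

# Partition the spatial domain in two, compute local EOFs, and join.
k = 3                                           # local EOFs kept per part
basis = []
for i, H in enumerate([F[:20], F[20:]]):
    _, E = np.linalg.eigh(H @ H.T / H.shape[1])
    B = np.zeros((40, k))
    B[20 * i:20 * (i + 1)] = E[:, -k:]          # embed local EOFs
    basis.append(B)
B = np.hstack(basis)                            # 40 x 2k joined basis

# Diagonalize the field in the joined basis to approximate global EOFs.
G = B.T @ F
_, V = np.linalg.eigh(G @ G.T / G.shape[1])
E_join = B @ V

# Alignment of the leading joined EOF with the leading full EOF.
align = abs(E_join[:, -1] @ E_full[:, -1]) / np.linalg.norm(E_join[:, -1])
```

Because each 20-point partition's top EOFs capture the local signal subspace, the joined 6-vector basis spans nearly the same space as the leading full EOFs, at the cost of diagonalizing only 20x20 and 6x6 matrices instead of 40x40.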
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data without being sensitive to noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
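The contrast between average-based stacking and PCA-based stacking can be sketched on a synthetic gather: the coherent reflection across traces is the gather's rank-1 (principal) component, so the stack can be read off the leading SVD term. This is an illustrative toy, not the paper's fast PCA algorithm; wavelet, noise level, and gather size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# A gather of NMO-corrected traces: the same reflection wavelet on every
# trace, buried in noise.
t = np.linspace(0, 1, 500)
wavelet = np.exp(-((t - 0.5) / 0.1) ** 2) * np.cos(2 * np.pi * 10 * (t - 0.5))
traces = wavelet[None, :] + 0.3 * rng.normal(size=(30, 500))

# Conventional average-based stack.
mean_stack = traces.mean(axis=0)

# PCA stack: rank-1 SVD term of the gather; averaging the per-trace
# amplitudes U[:, 0] fixes the overall scale and sign.
U, s, Vt = np.linalg.svd(traces, full_matrices=False)
pca_stack = (U[:, 0].mean() * s[0]) * Vt[0]

corr_mean = np.corrcoef(mean_stack, wavelet)[0, 1]
corr_pca = np.corrcoef(pca_stack, wavelet)[0, 1]
```

Unlike the plain mean, the rank-1 projection weights traces by their coherence with the gather's principal component, which is why PCA stacking degrades more gracefully as noise grows or trace amplitudes vary.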
Lu, Wei-Zhen; Wang, Wen-Jian; Wang, Xie-Kang; Yan, Sui-Hang; Lam, Joseph C
2004-09-01
The forecasting of air pollutant trends has received much attention in recent years. It is an important and popular topic in environmental science, as concerns have been raised about the health impacts caused by unacceptable ambient air pollutant levels. Of greatest concern are metropolitan cities like Hong Kong. In Hong Kong, respirable suspended particulates (RSP), nitrogen oxides (NOx), and nitrogen dioxide (NO2) are major air pollutants due to the dominant usage of diesel fuel by commercial vehicles and buses. Hence, the study of the influence and the trends relating to these pollutants is extremely significant to the public health and the image of the city. The use of neural network techniques to predict trends relating to air pollutants is regarded as a reliable and cost-effective method for the task of prediction. The work reported here involves developing an improved neural network model that combines both the principal component analysis technique and the radial basis function network and forecasts pollutant tendencies based on a recorded database. Compared with general neural network models, the proposed model features a simpler network architecture, a faster training speed, and a more satisfactory prediction performance. The improved model was evaluated with hourly time series of RSP, NOx and NO2 concentrations monitored at the Mong Kok Roadside Gaseous Monitoring Station in Hong Kong during the year 2000 and proved to be effective. The model developed is a potential tool for forecasting air quality parameters and is superior to traditional neural network methods.
Wongchai, C; Chaidee, A; Pfeiffer, W
2012-01-01
Global warming increases plant salt stress via evaporation after irrigation, but how plant cells sense salt stress remains unknown. Here, we searched for correlation-based targets of salt stress sensing in Chenopodium rubrum cell suspension cultures. We proposed a linkage between the sensing of salt stress and the sensing of distinct metabolites. Consequently, we analysed various extracellular pH signals in autotroph and heterotroph cell suspensions. Our search included signals after 52 treatments: salt and osmotic stress, ion channel inhibitors (amiloride, quinidine), salt-sensing modulators (proline), amino acids, carboxylic acids and regulators (salicylic acid, 2,4-dichlorophenoxyacetic acid). Multivariate analyses revealed hierarchical clusters of signals and five principal components of extracellular proton flux. The principal component correlated with salt stress was an antagonism of γ-aminobutyric and salicylic acid, confirming involvement of acid-sensing ion channels (ASICs) in salt stress sensing. Proline, short non-substituted mono-carboxylic acids (C2-C6), lactic acid and amiloride characterised the four uncorrelated principal components of proton flux. The proline-associated principal component included an antagonism of 2,4-dichlorophenoxyacetic acid and a set of amino acids (hydrophobic, polar, acidic, basic). The five principal components captured 100% of variance of extracellular proton flux. Thus, a bias-free, functional high-throughput screening was established to extract new clusters of response elements and potential signalling pathways, and to serve as a core for quantitative meta-analysis in plant biology. The eigenvectors reorient research, associating proline with development instead of salt stress, and the proof of existence of multiple components of proton flux can help to resolve controversy about the acid growth theory. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
Independent components of neural activity carry information on individual populations.
Głąbska, Helena; Potworowski, Jan; Łęski, Szymon; Wójcik, Daniel K
2014-01-01
Local field potential (LFP), the low-frequency part of the potential recorded extracellularly in the brain, reflects neural activity at the population level. The interpretation of LFP is complicated because it can mix activity from remote cells, on the order of millimeters from the electrode. To better understand the relation between the recordings and the local activity of cells, we used a large-scale thalamocortical network model to compute simultaneous LFP, transmembrane currents, and spiking activity. We used this model to study the information contained in independent components obtained from the reconstructed Current Source Density (CSD), which smooths transmembrane currents, decomposed further with Independent Component Analysis (ICA). We found that the three most robust components matched well the activity of the two dominating cell populations: superficial pyramidal cells in layer 2/3 (rhythmic spiking) and tufted pyramids from layer 5 (intrinsically bursting). The pyramidal population from layer 2/3 could not be well described as a product of a spatial profile and a temporal activation, but rather by a sum of two such products, which we recovered in two of the ICA components in our analysis; these correspond to the first two principal components of the PCA decomposition of the layer 2/3 population activity. At low noise one more cell population could be discerned, but it is unlikely that it could be recovered in experiment given typical noise ranges. PMID:25153730
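The claim that the layer 2/3 population needs a sum of two space-time products, and hence two principal components, can be illustrated with a toy rank argument. The spatial and temporal profiles below are random synthetic stand-ins, not the model's CSD.

```python
import numpy as np

# Two spatial profiles (e.g. 30 recording depths) and two temporal
# activations (200 samples) -- synthetic, for illustration only.
rng = np.random.default_rng(1)
space = rng.normal(size=(2, 30))
time = rng.normal(size=(2, 200))

# A population whose activity is a sum of two outer products has rank 2:
csd = np.outer(space[0], time[0]) + np.outer(space[1], time[1])

s = np.linalg.svd(csd, compute_uv=False)
var = s**2 / np.sum(s**2)

one_component = float(var[0])           # a single product is not enough
two_components = float(var[:2].sum())   # two principal components suffice
```

A single spatial-profile-times-activation product is a rank-1 matrix, so any population whose singular value spectrum has two dominant values needs two components, which is what the ICA/PCA analysis above recovers.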
Pradervand, Sylvain; Maurya, Mano R; Subramaniam, Shankar
2006-01-01
Background: Release of immuno-regulatory cytokines and chemokines during the inflammatory response is mediated by a complex signaling network. Multiple stimuli produce different signals that generate different cytokine responses. Current knowledge does not provide a complete picture of these signaling pathways. However, using specific markers of signaling pathways, such as signaling proteins, it is possible to develop a 'coarse-grained network' map that can help understand common regulatory modules for various cytokine responses and help differentiate between the causes of their release. Results: Using a systematic profiling of signaling responses and cytokine release in RAW 264.7 macrophages made available by the Alliance for Cellular Signaling, an analysis strategy is presented that integrates principal component regression and exhaustive search-based model reduction to identify the signaling factors necessary and sufficient to predict the release of seven cytokines (G-CSF, IL-1α, IL-6, IL-10, MIP-1α, RANTES, and TNFα) in response to selected ligands. This study provides a model-based quantitative estimate of cytokine release and identifies ten signaling components involved in cytokine production. The models identified capture many of the known signaling pathways involved in cytokine release and predict potentially important novel signaling components, such as p38 MAPK for G-CSF release, IFNγ- and IL-4-specific pathways for IL-1α release, and an M-CSF-specific pathway for TNFα release. Conclusion: Using an integrative approach, we have identified the pathways responsible for the differential regulation of cytokine release in RAW 264.7 macrophages. Our results demonstrate the power of using heterogeneous cellular data to qualitatively and quantitatively map intermediate cellular phenotypes. PMID:16507166
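Principal component regression, the core of the strategy above, can be sketched in a few lines. The data below are simulated with a three-factor structure; the dimensions, noise levels, and the "cytokine" framing are illustrative assumptions, not the AfCS measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 100, 20, 3   # samples, correlated signalling markers, PCs kept

# Simulated data: the markers share three latent factors, and the response
# (think: release of one cytokine) depends on those factors.
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
y = latent[:, 0] - 2.0 * latent[:, 1] + 0.05 * rng.normal(size=n)

# PCR: project the centred predictors onto the first k principal
# components, then regress the response on those low-dimensional scores.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T
beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
y_hat = scores @ beta + y.mean()

r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The point of the reduction is that the regression is fitted on k well-conditioned scores instead of p collinear predictors, which is what makes the subsequent exhaustive model reduction tractable.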
Rapid tooling for functional prototyping of metal mold processes. CRADA final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacharia, T.; Ludtka, G.M.; Bjerke, M.A.
1997-12-01
The overall scope of this endeavor was to develop an integrated computer system, running on a network of heterogeneous computers, that would allow the rapid development of tool designs, and then use process models to determine whether the initial tooling would have characteristics which produce the prototype parts. The major thrust of this program for ORNL was the definition of the requirements for the development of the integrated die design system with the functional purpose to link part design, tool design, and component fabrication through a seamless software environment. The principal product would be a system control program that would coordinate the various application programs and implement the data transfer so that any networked workstation would be usable. The overall system control architecture was required to easily facilitate any changes, upgrades, or replacements of the model from either the manufacturing end or the design criteria standpoint. The initial design of such a program is described in the section labeled "Control Program Design". A critical aspect of this research was the design of the system flow chart showing the exact system components and the data to be transferred. All of the major system components would have been configured to ensure data file compatibility and transferability across the Internet. The intent was to use commercially available packages to model the various manufacturing processes for creating the die and die inserts in addition to modeling the processes for which these parts were to be used. In order to meet all of these requirements, investigative research was conducted to determine the system flow features and software components within the various organizations contributing to this project. This research is summarized.
Surzhikov, V D; Surzhikov, D V
2014-01-01
The search for and measurement of causal relationships between exposure to air pollution and the health status of the population are based on systems analysis and risk assessment, to improve the quality of research. For this purpose, modern statistical analysis was applied, using tests of independence, principal component analysis and discriminant function analysis. As a result of the analysis, four main components were separated from the set of atmospheric pollutants: for diseases of the circulatory system, the main principal component is associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component is closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and charcoal black. The discriminant function was shown to be usable as a measure of the level of air pollution.
Priority of VHS Development Based in Potential Area using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Meirawan, D.; Ana, A.; Saripudin, S.
2018-02-01
The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on regional potential using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative analysis of secondary data, reduced to principal components. The method used is Principal Component Analysis (PCA) with the Minitab statistics software. The results indicate that the lowest-scoring areas are the priorities for constructing new VHS, with majors matched to the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value of 416.92 in area 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value.
Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S
2015-10-09
A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. Genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism (SNP) markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to apply these dimensionality-reduction methods to GWS of carcass traits in an F2 (Piau x commercial line) pig population. The results showed similarities between the principal and independent component methods, which provided the most accurate genomic breeding value estimates for most carcass traits in pigs.
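A hedged sketch of the p >> n setting above: more SNP markers than genotyped animals, correlated markers, and a principal-component reduction before regression. The genotypes, effect sizes, and dimensions are simulated assumptions, not the F2 population data.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, k = 60, 500, 30   # genotyped animals, SNP markers (p >> n), PCs kept

# Simulated genotypes coded 0/1/2 and a trait controlled by ten causal
# markers (all sizes here are illustrative assumptions).
M = rng.integers(0, 3, size=(n, p)).astype(float)
effects = np.zeros(p)
effects[:10] = rng.normal(size=10)
g = M @ effects                            # true genetic values
y = g + rng.normal(scale=0.5, size=n)      # observed phenotypes

# Dimensionality reduction before regression: keep k principal components
# of the marker matrix (at most n are available), then fit least squares.
Mc = M - M.mean(axis=0)
U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
scores = Mc @ Vt[:k].T
beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
gebv = scores @ beta                       # genomic predictions

accuracy = float(np.corrcoef(gebv, g - g.mean())[0, 1])
```

An ordinary regression of y on all 500 markers would be singular with 60 animals; projecting onto a few principal components is one of the workarounds the abstract compares (independent component regression and partial least squares replace the projection step with different decompositions).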
Social Media as a Professional Tool
ERIC Educational Resources Information Center
Principal, 2011
2011-01-01
Social networking is more than catching up with family and long-lost friends; it's turned into a professional resource for educators to exchange ideas and expand their professional learning network (PLN). According to the report "School Principals and Social Networking in Education: Practices, Policies, and Realities in 2010," most responding…
Bondi, Mark W; Serody, Adam B; Chan, Agnes S; Eberson-Shumate, Sonja C; Delis, Dean C; Hansen, Lawrence A; Salmon, David P
2002-07-01
The Stroop Color-Word Test (SCWT; C. Golden, 1978) was examined in 59 patients with probable Alzheimer's disease (AD) and in 51 demographically comparable normal control (NC) participants. AD patients produced significantly larger Stroop interference effects than NC participants, and level of dementia severity significantly influenced SCWT performance. Principal-components analyses demonstrated a dissociation in the factor structure of the Stroop trials between NC participants and AD patients, suggesting that disruption of semantic knowledge and speeded verbal processing in AD may be a major contributor to impairment on the incongruent trial. Results of clinicopathologic correlations in an autopsy-confirmed AD subgroup further suggest the invocation of a broad network of integrated cortical regions and executive and language processes underlying successful SCWT performance.
Energy Savings in Cellular Networks Based on Space-Time Structure of Traffic Loads
NASA Astrophysics Data System (ADS)
Sun, Jingbo; Wang, Yue; Yuan, Jian; Shan, Xiuming
Since most of the energy consumed by telecommunication infrastructure is due to Base Transceiver Stations (BTSs), switching off BTSs when traffic load is low has been recognized as an effective way of saving energy. In this letter, an energy-saving scheme is proposed to minimize the number of active BTSs based on the space-time structure of traffic loads, as determined by principal component analysis. Compared to existing methods, our approach models traffic loads more accurately and has a much smaller input size. As it is implemented in an off-line manner, our scheme also avoids excessive communication and computing overheads. Simulation results show that the proposed method has comparable performance in energy savings.
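The space-time idea above, that many base stations share a few daily traffic patterns so PCA yields a compact model, can be sketched with synthetic loads. The two patterns, the station count, and the noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24)

# Two assumed temporal patterns shared by all base stations: a daytime
# peak and its complementary night-time profile.
day = np.sin(np.pi * hours / 24.0)
night = 1.0 - day

# 200 BTSs, each a random mix of the two patterns, plus measurement noise.
weights = rng.uniform(0.2, 1.0, size=(200, 2))
traffic = weights @ np.vstack([day, night])
traffic += 0.01 * rng.normal(size=traffic.shape)

# PCA of the centred load matrix: two components summarise the space-time
# structure, so each station reduces to a two-number code instead of a
# 24-hour trace -- the "much smaller input size" the letter refers to.
Tc = traffic - traffic.mean(axis=0)
s = np.linalg.svd(Tc, compute_uv=False)
frac2 = float((s[:2] ** 2).sum() / (s ** 2).sum())
```

A switching controller built on such a model would then decide which stations to deactivate from the component scores rather than from raw per-hour measurements.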
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1982-01-01
The present investigation is concerned with an important class of power conditioning networks, taking into account self-oscillating dc-to-square-wave transistor inverters. The considered circuits are widely used both as the principal power converting and processing means in many systems and as low-power analog-to-discrete-time converters for controlling the switching of the output-stage semiconductors in a variety of power conditioning systems. Aspects of piecewise-linear modeling are discussed, taking into consideration component models, and an equivalent-circuit model. Questions of singular point analysis and state plane representation are also investigated, giving attention to limit cycles, starting circuits, the region of attraction, a hard oscillator, and a soft oscillator.
Disciplinary differences of the impact of altmetric.
Ortega, José Luis
2018-04-01
The main objective of this work was to group altmetric indicators according to their relationships and detect disciplinary differences with regard to altmetric impact in a set of 3793 research articles published in 2013. Three of the most representative altmetric providers (Altmetric, PlumX and Crossref Event Data) and Scopus were used to extract information about these publications and their metrics. Principal component analysis was used to summarize the information on these metrics and detect groups of indicators. The results show that these metrics can be grouped into three components: social media, gathering metrics from social networks and online media; usage, including metrics on downloads and views; and citations and saves, grouping metrics related to research impact and saves in bookmarking sites. With regard to disciplinary differences, articles in the General category attract more attention from social media, Social Sciences articles have higher usage than Physical Sciences, and General articles are more cited and saved than Health Sciences and Social Sciences articles.
ERIC Educational Resources Information Center
National Association of Secondary School Principals, Reston, VA.
Preparation programs for principals should have excellent academic and performance-based components. In examining the nature of performance-based principal preparation, this report finds that school administration programs must bridge the gap between conceptual learning in the classroom and the requirements of professional practice. A number of…
Principal component greenness transformation in multitemporal agricultural Landsat data
NASA Technical Reports Server (NTRS)
Abotteen, R. A.
1978-01-01
A data compression technique for multitemporal Landsat imagery which extracts phenological growth pattern information for agricultural crops is described. The principal component greenness transformation was applied to multitemporal agricultural Landsat data for information retrieval. The transformation was favorable for applications in agricultural Landsat data analysis because of its physical interpretability and its relation to the phenological growth of crops. It was also found that the first and second greenness eigenvector components define a temporal small-grain trajectory and nonsmall-grain trajectory, respectively.
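A hedged sketch of the multitemporal idea above: stack a pixel's greenness values from several acquisition dates into one vector and take the leading eigenvector of the multitemporal covariance, which then traces the crop's temporal trajectory. The growth profile, pixel count, and noise level below are invented for illustration, not Landsat data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed greenness growth curve over four acquisition dates for a
# small-grain crop (synthetic values).
profile = np.array([0.1, 0.6, 0.9, 0.3])

# 1000 pixels: each follows the curve with its own vigour, plus noise.
amplitude = rng.uniform(0.5, 1.5, size=1000)
pixels = np.outer(amplitude, profile) + 0.02 * rng.normal(size=(1000, 4))

# Eigenvectors of the multitemporal covariance; the dominant one defines
# the temporal trajectory onto which each pixel is projected.
cov = np.cov(pixels, rowvar=False)
vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
first = vecs[:, -1]                       # eigenvector of largest eigenvalue

# The leading eigenvector aligns with the phenological profile, and the
# projection compresses four dates into one physically interpretable score.
alignment = float(abs(first @ profile) / np.linalg.norm(profile))
scores = (pixels - pixels.mean(axis=0)) @ first
```

This is the sense in which the transformation is interpretable: the eigenvector components define a temporal trajectory, so pixels whose greenness follows the crop's growth curve score high on the first component.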