Sample records for Bayesian graph clustering

  1. Quantitative comparison of alternative methods for coarse-graining biological networks

    PubMed Central

    Bowman, Gregory R.; Meng, Luming; Huang, Xuhui

    2013-01-01

    Markov models and master equations are a powerful means of modeling dynamic processes like protein conformational changes. However, these models are often difficult to understand because of the enormous number of components and connections between them. Therefore, a variety of methods have been developed to facilitate understanding by coarse-graining these complex models. Here, we employ Bayesian model comparison to determine which of these coarse-graining methods provides the models that are most faithful to the original set of states. We find that the Bayesian agglomerative clustering engine and the hierarchical Nyström expansion graph (HNEG) typically provide the best performance. Surprisingly, the original Perron cluster cluster analysis (PCCA) method often provides the next best results, outperforming the newer PCCA+ method and the most probable paths algorithm. We also show that the differences between the models are qualitatively significant, rather than being minor shifts in the boundaries between states. The performance of the methods correlates well with the entropy of the resulting coarse-grainings, suggesting that finding states with more similar populations (i.e., avoiding low population states that may just be noise) gives better results. PMID:24089717
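
    The entropy correlation noted at the end of this abstract is straightforward to compute. Below is a minimal sketch (ours, not the authors' code) of the Shannon entropy of a coarse-graining's macrostate populations; the example population vectors are illustrative.

      import numpy as np

      def coarse_graining_entropy(populations):
          # Shannon entropy of the (normalized) macrostate populations;
          # higher entropy means more evenly populated states.
          p = np.asarray(populations, dtype=float)
          p = p / p.sum()
          p = p[p > 0]
          return -(p * np.log(p)).sum()

      print(coarse_graining_entropy([0.30, 0.25, 0.25, 0.20]))  # near-uniform: high entropy
      print(coarse_graining_entropy([0.96, 0.02, 0.01, 0.01]))  # dominated by one state: low entropy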

  2. Understanding the Scalability of Bayesian Network Inference Using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.

    2010-01-01

    One of the main approaches to performing computation in Bayesian networks (BNs) is clique tree clustering and propagation. The clique tree approach consists of propagation in a clique tree compiled from a Bayesian network, and while it was introduced in the 1980s, there is still a lack of understanding of how clique tree computation time depends on variations in BN size and structure. In this article, we improve this understanding by developing an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, and (ii) the expected number of moral edges in their moral graphs. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for the total size of each set. For the special case of bipartite BNs, there are two sets and two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, where random bipartite BNs generated using the BPART algorithm are studied, we systematically increase the out-degree of the root nodes in bipartite Bayesian networks by increasing the number of leaf nodes. Surprisingly, root clique growth is well-approximated by Gompertz growth curves, an S-shaped family of curves that has previously been used to describe growth processes in biology, medicine, and neuroscience. We believe that this research improves the understanding of the scaling behavior of clique tree clustering for a certain class of Bayesian networks; presents an aid for trade-off studies of clique tree clustering using growth curves; and ultimately provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms.
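
    Gompertz curves like those used in this record (and in its 2009 companion below) are easy to fit with standard tools. A minimal sketch with SciPy, assuming the common three-parameter form a*exp(-b*exp(-c*x)) and synthetic data standing in for measured root-clique sizes; all names and values are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(x, a, b, c):
          # a: asymptote; b: horizontal shift; c: growth rate.
          return a * np.exp(-b * np.exp(-c * x))

      rng = np.random.default_rng(0)
      x = np.linspace(0, 20, 50)                                    # e.g. root out-degree
      y = gompertz(x, 100.0, 5.0, 0.4) + rng.normal(0, 2, x.size)   # noisy clique sizes

      params, _ = curve_fit(gompertz, x, y, p0=(y.max(), 1.0, 0.1))
      print("fitted (a, b, c):", params)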

  3. Bayesian exponential random graph modelling of interhospital patient referral networks.

    PubMed

    Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro

    2017-08-15

    Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.

  4. A Weight-Adaptive Laplacian Embedding for Graph-Based Clustering.

    PubMed

    Cheng, De; Nie, Feiping; Sun, Jiande; Gong, Yihong

    2017-07-01

    Graph-based clustering methods perform clustering on a fixed input data graph. Such clustering results are therefore sensitive to the particular graph construction: if the initial construction is of low quality, the resulting clustering may also be of low quality. We address this drawback by allowing the data graph itself to be adaptively adjusted during the clustering procedure. In particular, our proposed weight-adaptive Laplacian (WAL) method learns a new data similarity matrix that can adaptively adjust the initial graph according to the similarity weights in the input data graph. We develop three versions of this method, based on the L2-norm, a fuzzy entropy regularizer, and an exponential-based weight strategy, which yield three new graph-based clustering objectives, and we derive optimization algorithms to solve them. Experimental results on synthetic data sets and real-world benchmark data sets demonstrate the effectiveness of these new graph-based clustering methods.

  5. A local search for a graph clustering problem

    NASA Astrophysics Data System (ADS)

    Navrotskaya, Anna; Il'ev, Victor

    2016-10-01

    In clustering problems one has to partition a given set of objects (a data set) into subsets (called clusters) taking into consideration only the similarity of the objects. One of the most visual formalizations of clustering is graph clustering, that is, grouping the vertices of a graph into clusters taking into consideration the edge structure of the graph, whose vertices are objects and whose edges represent similarities between the objects. In the graph k-clustering problem the number of clusters does not exceed k, and the goal is to minimize the number of edges between clusters plus the number of missing edges within clusters. This problem is NP-hard for any k ≥ 2. We propose a polynomial-time (2k-1)-approximation algorithm for graph k-clustering, apply a local search procedure to the feasible solution found by this algorithm, and report an experimental study of the resulting heuristics.
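
    The k-clustering objective described above is simple to evaluate directly. A minimal sketch with NetworkX (the graph and the 2-clustering are illustrative; the paper's approximation algorithm and local search are not reproduced here):

      import itertools
      import networkx as nx

      def clustering_cost(G, clusters):
          # Inter-cluster edges plus missing intra-cluster edges,
          # as in the graph k-clustering objective above.
          label = {v: i for i, c in enumerate(clusters) for v in c}
          between = sum(1 for u, v in G.edges if label[u] != label[v])
          missing = sum(1 for c in clusters
                        for u, v in itertools.combinations(c, 2)
                        if not G.has_edge(u, v))
          return between + missing

      G = nx.karate_club_graph()
      clusters = [set(range(0, 17)), set(range(17, 34))]  # an arbitrary 2-clustering
      print(clustering_cost(G, clusters))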

  6. Understanding the Scalability of Bayesian Network Inference using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob

    2009-01-01

    Bayesian networks (BNs) are used to represent and efficiently compute with multi-variate probability distributions in a wide range of disciplines. One of the main approaches to performing computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. It is believed that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms, and presents an aid for analytical trade-off studies of clique tree clustering using growth curves.

  7. Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad

    2016-02-01

    Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.

  8. Key-Node-Separated Graph Clustering and Layouts for Human Relationship Graph Visualization.

    PubMed

    Itoh, Takayuki; Klein, Karsten

    2015-01-01

    Many graph-drawing methods apply node-clustering techniques based on the density of edges to find tightly connected subgraphs and then hierarchically visualize the clustered graphs. For some applications, however, users may want to focus on important nodes and their connections to groups of other nodes. For this purpose, it is effective to separately visualize the key nodes detected based on adjacency and attributes of the nodes. This article presents a graph visualization technique for attribute-embedded graphs that applies a graph-clustering algorithm accounting for the combination of connections and attributes. The graph clustering step divides the nodes according to the commonality of connected nodes and the similarity of feature value vectors. It then calculates the distances between arbitrary pairs of clusters according to the number of connecting edges and the similarity of feature value vectors, and finally places the clusters based on the distances. Consequently, the technique separates important nodes that have connections to multiple large clusters and improves the visibility of such nodes' connections. To test this technique, this article presents examples with human relationship graph datasets, including a coauthorship network and a Twitter communication network.

  9. Clustering execution in a processing system to increase power savings

    DOEpatents

    Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.; Vega, Augusto J.

    2018-03-20

    Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.

  10. Some Applications of Graph Theory to Clustering

    ERIC Educational Resources Information Center

    Hubert, Lawrence J.

    1974-01-01

    The connection between graph theory and clustering is reviewed and extended. Major emphasis is on restating, in a graph-theoretic context, selected past work in clustering, and conversely, developing alternative strategies from several standard concepts used in graph theory per se. (Author/RC)

  11. Clustering execution in a processing system to increase power savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.

    Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.

  12. Bayesian Decision Theoretical Framework for Clustering

    ERIC Educational Resources Information Center

    Chen, Mo

    2011-01-01

    In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…

  13. Adaptation of Chain Event Graphs for use with Case-Control Studies in Epidemiology.

    PubMed

    Keeble, Claire; Thwaites, Peter Adam; Barber, Stuart; Law, Graham Richard; Baxter, Paul David

    2017-09-26

    Case-control studies are used in epidemiology to try to uncover the causes of diseases, but they are a retrospective study design known to suffer from non-participation and recall bias, which may explain their decreased popularity in recent years. Traditional analyses usually report only the odds ratio for given exposures and the binary disease status. Chain event graphs are a graphical representation of a statistical model derived from event trees; they have been developed in artificial intelligence and statistics, and only recently introduced to the epidemiology literature. They are a modern Bayesian technique that enables prior knowledge to be incorporated into the data analysis using the agglomerative hierarchical clustering algorithm, which is used to form a suitable chain event graph. Additionally, they can account for missing data and be used to explore missingness mechanisms. Here we adapt the chain event graph framework to suit scenarios often encountered in case-control studies, to strengthen this time- and cost-efficient study design. We demonstrate eight adaptations to the graphs: two suitable for full case-control study analysis, four that can be used in interim analyses to explore biases, and two that aim to improve the ease and accuracy of analyses. The adaptations are illustrated with complete, reproducible, fully interpreted examples, including the event tree and chain event graph. Chain event graphs are used here for the first time to summarise non-participation, data collection techniques, data reliability, and disease severity in case-control studies. We demonstrate how these features of a case-control study can be incorporated into the analysis to provide further insight, which can help to identify potential biases and lead to more accurate study results.

  14. Subspace Clustering via Learning an Adaptive Low-Rank Graph.

    PubMed

    Yin, Ming; Xie, Shengli; Wu, Zongze; Zhang, Yun; Gao, Junbin

    2018-08-01

    By using a sparse representation or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built from the representation coefficients are not the exact ones that the traditional deterministic definition prescribes, because the two steps of representation and clustering are conducted independently; an overall optimal result therefore cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph: the graph parameters, i.e., the weights on edges, have to be artificially pre-specified, and it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering method via learning an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several well-known databases demonstrate that the proposed method performs better than the state-of-the-art approaches in clustering.

  15. A genetic graph-based approach for partitional clustering.

    PubMed

    Menéndez, Héctor D; Barrero, David F; Camacho, David

    2014-05-01

    Clustering is one of the most versatile tools for data analysis. In recent years, clustering that seeks the continuity of data (in opposition to classical centroid-based approaches) has attracted increasing research interest. It is a challenging problem with remarkable practical interest. The most popular continuity clustering method is the spectral clustering (SC) algorithm, which is based on graph cut: it initially generates a similarity graph using a distance measure and then studies its graph spectrum to find the best cut. This approach is sensitive to the parameters of the metric, and a correct parameter choice is critical to the quality of the clusters. This work proposes a new algorithm, inspired by SC, that reduces the parameter dependency while maintaining the quality of the solution. The new algorithm, named genetic graph-based clustering (GGC), takes an evolutionary approach, introducing a genetic algorithm (GA) to cluster the similarity graph. The experimental validation shows that GGC increases the robustness of SC and has competitive performance in comparison with classical clustering methods, at least on the synthetic and real datasets used in the experiments.

  16. Weighted graph cuts without eigenvectors: a multilevel approach.

    PubMed

    Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian

    2007-11-01

    A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
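
    The equivalence above is between weighted kernel k-means and weighted graph cut objectives. For reference, a minimal unweighted kernel k-means sketch (the paper's algorithm additionally carries per-point weights and a multilevel coarsening scheme, which are omitted here):

      import numpy as np

      def kernel_kmeans(K, k, n_iter=50, seed=0):
          # K: n x n kernel (Gram) matrix; returns cluster labels.
          n = K.shape[0]
          labels = np.random.default_rng(seed).integers(k, size=n)
          for _ in range(n_iter):
              dist = np.full((n, k), np.inf)
              for c in range(k):
                  idx = labels == c
                  m = idx.sum()
                  if m == 0:
                      continue
                  # ||phi(x) - mu_c||^2 expanded purely in kernel terms.
                  dist[:, c] = (np.diag(K)
                                - 2.0 * K[:, idx].sum(axis=1) / m
                                + K[np.ix_(idx, idx)].sum() / m**2)
              new = dist.argmin(axis=1)
              if (new == labels).all():
                  break
              labels = new
          return labels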

  17. Local Higher-Order Graph Clustering

    PubMed Central

    Yin, Hao; Benson, Austin R.; Leskovec, Jure; Gleich, David F.

    2018-01-01

    Local graph clustering methods aim to find a cluster of nodes by exploring a small region of the graph. These methods are attractive because they enable targeted clustering around a given seed node and are faster than traditional global graph clustering methods because their runtime does not depend on the size of the input graph. However, current local graph partitioning methods are not designed to account for the higher-order structures crucial to the network, nor can they effectively handle directed networks. Here we introduce a new class of local graph clustering methods that address these issues by incorporating higher-order network information captured by small subgraphs, also called network motifs. We develop the Motif-based Approximate Personalized PageRank (MAPPR) algorithm that finds clusters containing a seed node with minimal motif conductance, a generalization of the conductance metric for network motifs. We generalize existing theory to prove the fast running time (independent of the size of the graph) and obtain theoretical guarantees on the cluster quality (in terms of motif conductance). We also develop a theory of node neighborhoods for finding sets that have small motif conductance, and apply these results to the case of finding good seed nodes to use as input to the MAPPR algorithm. Experimental validation on community detection tasks in both synthetic and real-world networks shows that our new framework MAPPR outperforms the current edge-based personalized PageRank methodology. PMID:29770258
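
    Motif conductance for the triangle motif can be checked by direct enumeration on small graphs. A sketch of the quantity being minimized (MAPPR itself works on a motif-weighted graph with personalized PageRank rather than brute-force enumeration):

      import itertools
      import networkx as nx

      def triangle_conductance(G, S):
          # Triangles cut by the partition, divided by the smaller
          # triangle volume (triangle endpoints) of the two sides.
          S = set(S)
          cut = vol_S = vol_rest = 0
          for a, b, c in itertools.combinations(G.nodes, 3):
              if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c):
                  inside = (a in S) + (b in S) + (c in S)
                  vol_S += inside
                  vol_rest += 3 - inside
                  if 0 < inside < 3:
                      cut += 1
          return cut / min(vol_S, vol_rest)

      G = nx.karate_club_graph()
      print(triangle_conductance(G, range(17)))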

  18. On the modification Highly Connected Subgraphs (HCS) algorithm in graph clustering for weighted graph

    NASA Astrophysics Data System (ADS)

    Albirri, E. R.; Sugeng, K. A.; Aldila, D.

    2018-04-01

    As transportation technology and highway construction have progressed, almost all cities in the world have become connected and the various places of the world have become easier to visit. Cities and the connections between them can be represented by a graph, and graph clustering is one way to answer problems posed on such graphs. Among the graph clustering methods is the Highly Connected Subgraphs (HCS) method. HCS identifies clusters using graph connectivity: a graph G with n vertices is called highly connected, and is taken as a cluster, if its edge connectivity satisfies k(G) > n/2. This research uses a literature review completed with a program simulation. We modified the HCS algorithm to handle weighted graphs; the modification is located in the Process Phase, which cuts the connected graph G into two subgraphs H and H̄. We implemented the algorithm in the software Octave-401 and applied it to flight routes mapping data of one of the airlines in Indonesia.
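
    The unweighted HCS recursion that this record modifies is compact. A minimal sketch with NetworkX (min_size is an illustrative guard against tiny clusters; the record's weighted variant changes how the cut in the Process Phase is computed):

      import networkx as nx

      def hcs(G, min_size=3):
          # A subgraph is "highly connected" (a cluster) when its edge
          # connectivity exceeds n/2; otherwise split along a minimum
          # edge cut and recurse on the resulting parts.
          n = G.number_of_nodes()
          if n < min_size:
              return [set(G.nodes)]
          if not nx.is_connected(G):
              return [c for part in nx.connected_components(G)
                      for c in hcs(G.subgraph(part).copy(), min_size)]
          if nx.edge_connectivity(G) > n / 2:
              return [set(G.nodes)]
          H = G.copy()
          H.remove_edges_from(nx.minimum_edge_cut(G))
          return [c for part in nx.connected_components(H)
                  for c in hcs(G.subgraph(part).copy(), min_size)]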

  19. Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.

    PubMed

    Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T

    2013-01-01

    Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.

  20. Graph-Based Object Class Discovery

    NASA Astrophysics Data System (ADS)

    Xia, Shengping; Hancock, Edwin R.

    We are interested in the problem of discovering the set of object classes present in a database of images using a weakly supervised graph-based framework. Rather than making use of the "Bag-of-Features (BoF)" approach widely used in current work on object recognition, we represent each image by a graph using a group of selected local invariant features. Using local feature matching and iterative Procrustes alignment, we perform graph matching and compute a similarity measure. Borrowing the idea of query expansion, we develop a similarity propagation based graph clustering (SPGC) method. Using this method, class-specific clusters of the graphs can be obtained. Such a cluster can be generally represented by using a higher-level graph model whose vertices are the clustered graphs, and whose edge weights are determined by the pairwise similarity measure. Experiments are performed on a dataset in which the number of images increases from 1 to 50K and the number of objects increases from 1 to over 500. Some objects have been discovered with total recall and a precision of 1 in a single cluster.

  21. Co-clustering directed graphs to discover asymmetries and directional communities

    PubMed Central

    Rohe, Karl; Qin, Tai; Yu, Bin

    2016-01-01

    In directed graphs, relationships are asymmetric and these asymmetries contain essential structural information about the graph. Directed relationships lead to a new type of clustering that is not feasible in undirected graphs. We propose a spectral co-clustering algorithm called di-sim for asymmetry discovery and directional clustering. A Stochastic co-Blockmodel is introduced to show favorable properties of di-sim. To account for the sparse and highly heterogeneous nature of directed networks, di-sim uses the regularized graph Laplacian and projects the rows of the eigenvector matrix onto the sphere. A nodewise asymmetry score and di-sim are used to analyze the clustering asymmetries in the networks of Enron emails, political blogs, and the Caenorhabditis elegans chemical connectome. In each example, a subset of nodes have clustering asymmetries; these nodes send edges to one cluster, but receive edges from another cluster. Such nodes yield insightful information (e.g., communication bottlenecks) about directed networks, but are missed if the analysis ignores edge direction. PMID:27791058

  22. Co-clustering directed graphs to discover asymmetries and directional communities.

    PubMed

    Rohe, Karl; Qin, Tai; Yu, Bin

    2016-10-21

    In directed graphs, relationships are asymmetric and these asymmetries contain essential structural information about the graph. Directed relationships lead to a new type of clustering that is not feasible in undirected graphs. We propose a spectral co-clustering algorithm called di-sim for asymmetry discovery and directional clustering. A Stochastic co-Blockmodel is introduced to show favorable properties of di-sim. To account for the sparse and highly heterogeneous nature of directed networks, di-sim uses the regularized graph Laplacian and projects the rows of the eigenvector matrix onto the sphere. A nodewise asymmetry score and di-sim are used to analyze the clustering asymmetries in the networks of Enron emails, political blogs, and the Caenorhabditis elegans chemical connectome. In each example, a subset of nodes have clustering asymmetries; these nodes send edges to one cluster, but receive edges from another cluster. Such nodes yield insightful information (e.g., communication bottlenecks) about directed networks, but are missed if the analysis ignores edge direction.
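
    The ingredients described above (regularized degrees, SVD of a normalized adjacency, rows projected onto the sphere, k-means on each side) can be sketched in a few lines. This is our simplified reading of the recipe, not the authors' implementation; tau is the regularization constant (mean degree by default):

      import numpy as np
      from sklearn.cluster import KMeans

      def spectral_cocluster(A, k, tau=None, seed=0):
          # A: n x n directed adjacency matrix (rows send, columns receive).
          n = A.shape[0]
          tau = A.sum() / n if tau is None else tau
          d_out = A.sum(axis=1) + tau
          d_in = A.sum(axis=0) + tau
          L = A / np.sqrt(np.outer(d_out, d_in))   # regularized normalization
          U, s, Vt = np.linalg.svd(L)
          X_send = U[:, :k].copy()
          X_recv = Vt[:k].T.copy()
          # Project rows onto the unit sphere before k-means.
          X_send /= np.linalg.norm(X_send, axis=1, keepdims=True)
          X_recv /= np.linalg.norm(X_recv, axis=1, keepdims=True)
          send = KMeans(k, n_init=10, random_state=seed).fit_predict(X_send)
          recv = KMeans(k, n_init=10, random_state=seed).fit_predict(X_recv)
          return send, recv   # differing send/receive labels flag asymmetric nodes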

  23. A proximity-based graph clustering method for the identification and application of transcription factor clusters.

    PubMed

    Spadafore, Maxwell; Najarian, Kayvan; Boyle, Alan P

    2017-11-29

    Transcription factors (TFs) form a complex regulatory network within the cell that is crucial to cell functioning and human health. While methods to establish where a TF binds to DNA are well established, these methods provide no information describing how TFs interact with one another when they do bind. TFs tend to bind the genome in clusters, and current methods to identify these clusters are either limited in scope, unable to detect relationships beyond motif similarity, or not applied to TF-TF interactions. Here, we present a proximity-based graph clustering approach to identify TF clusters using either ChIP-seq or motif search data. We use TF co-occurrence to construct a filtered, normalized adjacency matrix and use the Markov Clustering Algorithm to partition the graph while maintaining TF-cluster and cluster-cluster interactions. We then apply our graph structure beyond clustering, using it to increase the accuracy of motif-based transcription factor binding site (TFBS) searching for an example TF. We show that our method produces small, manageable clusters that encapsulate many known, experimentally validated transcription factor interactions and that our method is capable of capturing interactions that motif similarity methods might miss. Our graph structure is able to significantly increase the accuracy of motif TFBS searching, demonstrating that the TF-TF connections within the graph correlate with biological TF-TF interactions. The interactions identified by our method correspond to biological reality and allow for fast exploration of TF clustering and regulatory dynamics.
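
    The Markov Clustering Algorithm at the heart of this pipeline alternates expansion and inflation on a column-stochastic flow matrix. A compact dense-matrix sketch (real uses, such as the filtered TF co-occurrence matrix above, would employ sparse matrices):

      import numpy as np

      def mcl(A, inflation=2.0, n_iter=100, tol=1e-8):
          # A: symmetric adjacency/co-occurrence matrix.
          M = A.astype(float) + np.eye(A.shape[0])   # self-loops stabilize MCL
          M /= M.sum(axis=0, keepdims=True)          # column-stochastic
          for _ in range(n_iter):
              nxt = M @ M                            # expansion
              nxt = nxt ** inflation                 # inflation...
              nxt /= nxt.sum(axis=0, keepdims=True)  # ...with renormalization
              done = np.abs(nxt - M).max() < tol
              M = nxt
              if done:
                  break
          # Attractor rows (rows that kept mass) define the clusters.
          clusters = {frozenset(np.flatnonzero(row > 1e-6))
                      for row in M if row.sum() > 1e-6}
          return [set(c) for c in clusters]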

  24. Learning in networks

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1995-01-01

    Intelligent systems require software incorporating probabilistic reasoning, and often learning as well. Networks provide a framework and methodology for creating this kind of software. This paper introduces network models based on chain graphs with deterministic nodes, where chain graphs are defined as a hierarchical combination of Bayesian and Markov networks. To model learning, plates are introduced on chain graphs to represent independent samples. The paper concludes by discussing various operations that can be performed on chain graphs with plates, either as a simplification process or to generate learning algorithms.

  25. Functional grouping of similar genes using eigenanalysis on minimum spanning tree based neighborhood graph.

    PubMed

    Jothi, R; Mohanty, Sraban Kumar; Ojha, Aparajita

    2016-04-01

    Gene expression data clustering is an important biological process in DNA microarray analysis. Although there have been many clustering algorithms for gene expression analysis, finding a suitable and effective clustering algorithm is always a challenging problem due to the heterogeneous nature of gene profiles. Minimum Spanning Tree (MST) based clustering algorithms have been successfully employed to detect clusters of varying shapes and sizes. This paper proposes a novel clustering algorithm using eigenanalysis on a minimum spanning tree based neighborhood graph (E-MST). As the MST of a set of points reflects the similarity of the points with their neighborhood, the proposed algorithm employs a similarity graph obtained from k′ rounds of MST (the k′-MST neighborhood graph). By studying the spectral properties of the similarity matrix obtained from the k′-MST graph, the proposed algorithm achieves improved clustering results. We demonstrate the efficacy of the proposed algorithm on 12 gene expression datasets. Experimental results show that the proposed algorithm performs better than the standard clustering algorithms. Copyright © 2016 Elsevier Ltd. All rights reserved.
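
    The k′-MST neighborhood graph can be assembled by repeatedly extracting spanning trees from the complete distance graph. A sketch of that construction (the eigenanalysis step of E-MST is not reproduced; the data and k_rounds are illustrative):

      import networkx as nx
      from scipy.spatial.distance import pdist, squareform

      def kmst_neighborhood_graph(X, k_rounds=3):
          # X: n x d data matrix. Take the MST of the complete distance
          # graph, move its edges into the neighborhood graph, remove
          # them, and repeat k' times.
          D = squareform(pdist(X))
          complete = nx.from_numpy_array(D)
          neighborhood = nx.Graph()
          neighborhood.add_nodes_from(complete)
          for _ in range(k_rounds):
              mst = nx.minimum_spanning_tree(complete)
              neighborhood.add_edges_from(mst.edges(data=True))
              complete.remove_edges_from(list(mst.edges()))
          return neighborhood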

  26. Beyond Low-Rank Representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering.

    PubMed

    Wang, Yang; Wu, Lin

    2018-07-01

    Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for Multi-view spectral clustering, which elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, to yield a better graph partition than their single-view counterparts. In this paper we revisit it with a fundamentally different perspective by discovering LRR as essentially a latent clustered orthogonal projection based representation winged with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis orthogonal to others to indicate its members, which intuitively projects the view-specific feature representation to be the one spanned by all orthogonal basis to characterize the cluster structures. Upon this finding, we propose our technique with the following: (1) We decompose LRR into latent clustered orthogonal representation via low-rank matrix factorization, to encode the more flexible cluster structures than LRR over primal data objects; (2) We convert the problem of LRR into that of simultaneously learning orthogonal clustered representation and optimized local graph structure for each view; (3) The learned orthogonal clustered representations and local graph structures enjoy the same magnitude for multi-view, so that the ideal multi-view consensus can be readily achieved. The experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  27. A clustering-based graph Laplacian framework for value function approximation in reinforcement learning.

    PubMed

    Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold

    2014-12-01

    In order to deal with the sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximation policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions and the learning control performance can be improved for a variety of parameter settings.
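
    A minimal sketch of the clustering-based construction as we read it (subsample with K-means, connect the centers with a Gaussian similarity graph, take the smoothest Laplacian eigenvectors as basis functions; sigma and the counts are illustrative):

      import numpy as np
      from scipy.spatial.distance import cdist
      from sklearn.cluster import KMeans

      def laplacian_basis(states, n_centers=50, n_basis=10, sigma=1.0, seed=0):
          # states: n x d samples from a continuous state space.
          km = KMeans(n_centers, n_init=10, random_state=seed).fit(states)
          centers = km.cluster_centers_
          W = np.exp(-cdist(centers, centers) ** 2 / (2 * sigma ** 2))
          np.fill_diagonal(W, 0.0)
          L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
          _, eigvecs = np.linalg.eigh(L)
          basis = eigvecs[:, :n_basis]            # smoothest eigenvectors
          # Feature vector of a state: the basis row of its nearest center.
          nearest = cdist(states, centers).argmin(axis=1)
          return basis[nearest]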

  28. Automatic classification of protein structures relying on similarities between alignments

    PubMed Central

    2012-01-01

    Background: Identification of protein structural cores requires isolation of sets of proteins all sharing a same subset of structural motifs. In the context of an ever-growing number of available 3D protein structures, standard and automatic clustering algorithms require adaptations so as to allow for efficient identification of such sets of proteins. Results: When considering a pair of 3D structures, they are stated as similar or not according to the local similarities of their matching substructures in a structural alignment. This binary relation can be represented in a graph of similarities where a node represents a 3D protein structure and an edge states that two 3D protein structures are similar. Therefore, classifying proteins into structural families can be viewed as a graph clustering task. Unfortunately, because such a graph encodes only pairwise similarity information, clustering algorithms may include in the same cluster a subset of 3D structures that do not share a common substructure. In order to overcome this drawback we first define a ternary similarity on a triple of 3D structures as a constraint to be satisfied by the graph of similarities. Such a ternary constraint takes into account similarities between pairwise alignments, so as to ensure that the three involved protein structures do have some common substructure. We propose a modification algorithm that eliminates edges from the original graph of similarities and gives a reduced graph in which no ternary constraints are violated. Our approach is thus first to build a graph of similarities, then to reduce the graph according to the modification algorithm, and finally to apply to the reduced graph a standard graph clustering algorithm. This method was used for classifying ASTRAL-40 non-redundant protein domains, identifying significant pairwise similarities with Yakusa, a program devised for rapid 3D structure alignments. Conclusions: We show that filtering similarities prior to the standard graph-based clustering process by applying ternary similarity constraints (i) improves the separation of proteins of different classes and consequently (ii) improves the classification quality of standard graph-based clustering algorithms according to the reference classification SCOP. PMID:22974051

  29. Brain white matter fiber estimation and tractography using Q-ball imaging and Bayesian model.

    PubMed

    Lu, Meng

    2015-01-01

    Diffusion tensor imaging (DTI) allows for the non-invasive in vivo mapping of brain tractography. However, fiber bundles have complex structures such as fiber crossings, fiber branchings and fibers with large curvatures that DTI cannot accurately handle. This study presents a novel brain white matter tractography method using Q-ball imaging (QBI) as the data source instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings in a single voxel using an orientation distribution function (ODF). The presented method also uses graph theory to construct a Bayesian model-based graph, so that the fiber tracking between two voxels can be represented as the shortest path in the graph. Our experiments showed that the new method can accurately handle brain white matter fiber crossings and branchings, and reconstruct brain tractography in both phantom and real brain data.

  30. Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.

    PubMed

    Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin

    2017-04-01

    Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors into their objective functions to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and a complex convex problem must be solved. In this paper, we present a novel method that eliminates the effects of the errors from the projection space (representation) rather than from the input space. We first prove that ℓ1-, ℓ2-, ℓ∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are then developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation, and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least squares regression, sparse subspace clustering, and locally linear representation.
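
    The L2-graph construction itself reduces to a ridge-regularized self-coding of each sample over the others. A minimal sketch (lam is an illustrative regularization weight; the paper's subsequent clustering and learning steps are omitted):

      import numpy as np

      def l2_graph(X, lam=0.1):
          # X: d x n data matrix, columns are samples. Code sample i over
          # all other samples with l2-regularized least squares and use
          # the coefficient magnitudes as edge weights.
          d, n = X.shape
          W = np.zeros((n, n))
          for i in range(n):
              idx = [j for j in range(n) if j != i]
              D = X[:, idx]
              c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
              W[i, idx] = np.abs(c)
          return (W + W.T) / 2   # symmetrize for spectral clustering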

  31. Risk Assessment for Mobile Systems Through a Multilayered Hierarchical Bayesian Network.

    PubMed

    Li, Shancang; Tryfonas, Theo; Russell, Gordon; Andriotis, Panagiotis

    2016-08-01

    Mobile systems are facing a number of application vulnerabilities that can be combined together and utilized to penetrate systems with devastating impact. When assessing the overall security of a mobile system, it is important to assess the security risks posed by each mobile application (app), thus gaining a stronger understanding of any vulnerabilities present. This paper aims at developing a three-layer framework that assesses the potential risks that apps introduce within Android mobile systems. A Bayesian risk graphical model is proposed to evaluate risk propagation in a layered risk architecture. By integrating static analysis, dynamic analysis, and behavior analysis in a hierarchical framework, the risks and their propagation through each layer are well modeled by the Bayesian risk graph, which can quantitatively analyze the risks faced by both apps and mobile systems. The proposed hierarchical Bayesian risk graph model offers a novel way to investigate the security risks in a mobile environment and enables users and administrators to evaluate the potential risks. This strategy makes it possible to strengthen both app security and the security of the entire system.

  32. Exploratory Item Classification Via Spectral Graph Clustering

    PubMed Central

    Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2017-01-01

    Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476

  33. NeAT: a toolbox for the analysis of biological networks, clusters, classes and pathways.

    PubMed

    Brohée, Sylvain; Faust, Karoline; Lima-Mendez, Gipsi; Sand, Olivier; Janky, Rekin's; Vanderstocken, Gilles; Deville, Yves; van Helden, Jacques

    2008-07-01

    The network analysis tools (NeAT) (http://rsat.ulb.ac.be/neat/) provide user-friendly web access to a collection of modular tools for the analysis of networks (graphs) and clusters (e.g. microarray clusters, functional classes, etc.). A first set of tools supports basic operations on graphs (comparison between two graphs, neighborhood of a set of input nodes, path finding and graph randomization). Another set of programs makes the connection between networks and clusters (graph-based clustering, cliques discovery and mapping of clusters onto a network). The toolbox also includes programs for detecting significant intersections between clusters/classes (e.g. clusters of co-expression versus functional classes of genes). NeAT are designed to cope with large datasets and provide a flexible toolbox for analyzing biological networks stored in various databases (protein interactions, regulation and metabolism) or obtained from high-throughput experiments (two-hybrid, mass-spectrometry and microarrays). The web interface interconnects the programs in predefined analysis flows, enabling users to address a series of questions about networks of interest. Each tool can also be used separately by entering custom data for a specific analysis. NeAT can also be used as web services (SOAP/WSDL interface), in order to design programmatic workflows and integrate them with other available resources.

  34. Library fingerprints: a novel approach to the screening of virtual libraries.

    PubMed

    Klon, Anthony E; Diller, David J

    2007-01-01

    We propose a novel method to prioritize libraries for combinatorial synthesis and high-throughput screening that assesses the viability of a particular library on the basis of the aggregate physical-chemical properties of the compounds using a naïve Bayesian classifier. This approach prioritizes collections of related compounds according to the aggregate values of their physical-chemical parameters in contrast to single-compound screening. The method is also shown to be useful in screening existing noncombinatorial libraries when the compounds in these libraries have been previously clustered according to their molecular graphs. We show that the method used here is comparable or superior to the single-compound virtual screening of combinatorial libraries and noncombinatorial libraries and is superior to the pairwise Tanimoto similarity searching of a collection of combinatorial libraries.
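
    A toy sketch of the idea of scoring whole libraries by aggregate physical-chemical properties with a naive Bayes classifier. Here a Gaussian naive Bayes stands in for the paper's classifier, and the descriptors, values, and class structure are invented for illustration:

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)
      # Each row summarizes one library: mean MW, logP, H-bond donors, acceptors.
      good = rng.normal([350, 2.5, 2.0, 5.0], [40, 0.5, 0.5, 1.0], size=(100, 4))
      poor = rng.normal([450, 4.0, 1.0, 7.0], [60, 0.8, 0.5, 2.0], size=(100, 4))
      X = np.vstack([good, poor])
      y = np.array([1] * 100 + [0] * 100)

      clf = GaussianNB().fit(X, y)
      candidate = np.array([[360, 2.7, 2.0, 5.0]])   # a new library's aggregate profile
      print("P(library viable):", clf.predict_proba(candidate)[0, 1])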

  35. Accelerating semantic graph databases on commodity clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Castellana, Vito G.; Haglin, David J.

    We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL to C++ compiler, a library of parallel graph methods and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.

  36. Graph partitions and cluster synchronization in networks of oscillators

    PubMed Central

    Schaub, Michael T.; O’Clery, Neave; Billeh, Yazan N.; Delvenne, Jean-Charles; Lambiotte, Renaud; Barahona, Mauricio

    2017-01-01

    Synchronization over networks depends strongly on the structure of the coupling between the oscillators. When the coupling presents certain regularities, the dynamics can be coarse-grained into clusters by means of External Equitable Partitions of the network graph and their associated quotient graphs. We exploit this graph-theoretical concept to study the phenomenon of cluster synchronization, in which different groups of nodes converge to distinct behaviors. We derive conditions and properties of networks in which such clustered behavior emerges, and show that the ensuing dynamics is the result of the localization of the eigenvectors of the associated graph Laplacians linked to the existence of invariant subspaces. The framework is applied to both linear and non-linear models, first for the standard case of networks with positive edges, before being generalized to the case of signed networks with both positive and negative interactions. We illustrate our results with examples of both signed and unsigned graphs for consensus dynamics and for partial synchronization of oscillator networks under the master stability function as well as Kuramoto oscillators. PMID:27781454

  37. EClerize: A customized force-directed graph drawing algorithm for biological graphs with EC attributes.

    PubMed

    Danaci, Hasan Fehmi; Cetin-Atalay, Rengul; Atalay, Volkan

    2018-03-26

    Visualizing large-scale data produced by high-throughput experiments as a biological graph leads to better understanding and analysis. This study describes a customized force-directed layout algorithm, EClerize, for biological graphs that represent pathways in which the nodes are associated with Enzyme Commission (EC) attributes. Nodes with the same EC class numbers are treated as members of the same cluster. Positions of nodes are then determined based on both the biological similarity and the connection structure. EClerize minimizes the intra-cluster distance, that is, the distance between nodes of the same EC cluster, and maximizes the inter-cluster distance, that is, the distance between two distinct EC clusters. EClerize is tested on a number of biological pathways and the improvement over the original force-directed algorithm is presented. EClerize is available as a plug-in to Cytoscape (http://apps.cytoscape.org/apps/eclerize).

  38. Locating sources within a dense sensor array using graph clustering

    NASA Astrophysics Data System (ADS)

    Gerstoft, P.; Riahi, N.

    2017-12-01

    We develop a model-free technique to identify weak sources within dense sensor arrays using graph clustering. No knowledge about the propagation medium is needed except that signal strengths decay to insignificant levels within a scale that is shorter than the aperture. We reinterpret the spatial coherence matrix of a wave field as a matrix whose support is the connectivity matrix of a graph with sensors as vertices. In a dense network, well-separated sources induce clusters in this graph, and the geographic spread of these clusters can serve to localize the sources. The support of the covariance matrix is estimated from limited-time data using a hypothesis test with a robust phase-only coherence test statistic combined with a physical distance criterion. The latter criterion ensures graph sparsity and thus prevents clusters from forming by chance. We verify the approach and quantify its reliability on a simulated dataset. The method is then applied to data from a dense 5200-element geophone array that blanketed the city of Long Beach, CA. The analysis exposes a helicopter traversing the array and oil production facilities.
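
    The recipe above (threshold the coherence matrix, enforce a physical distance cutoff, read off clusters) is a short graph computation. A sketch with illustrative thresholds; a real analysis would use the paper's hypothesis test rather than a fixed coherence cutoff:

      import networkx as nx
      import numpy as np

      def coherence_clusters(C, positions, coh_min=0.6, dist_max=500.0, min_size=5):
          # C: n x n estimated coherence matrix; positions: n x 2 sensor
          # coordinates. Edges require high coherence AND physical
          # proximity, which keeps the graph sparse.
          n = C.shape[0]
          G = nx.Graph()
          G.add_nodes_from(range(n))
          for i in range(n):
              for j in range(i + 1, n):
                  near = np.linalg.norm(positions[i] - positions[j]) < dist_max
                  if near and abs(C[i, j]) > coh_min:
                      G.add_edge(i, j)
          clusters = [c for c in nx.connected_components(G) if len(c) >= min_size]
          # Localize each detected source at its sensor cluster's centroid.
          return [positions[list(c)].mean(axis=0) for c in clusters]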

  39. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.

    PubMed

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2013-10-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, the molecule synthetic network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.

  40. A Hierarchical Bayesian Procedure for Two-Mode Cluster Analysis

    ERIC Educational Resources Information Center

    DeSarbo, Wayne S.; Fong, Duncan K. H.; Liechty, John; Saxton, M. Kim

    2004-01-01

    This manuscript introduces a new Bayesian finite mixture methodology for the joint clustering of row and column stimuli/objects associated with two-mode asymmetric proximity, dominance, or profile data. That is, common clusters are derived which partition both the row and column stimuli/objects simultaneously into the same derived set of clusters.…

  41. Scaling Semantic Graph Databases in Size and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Castellana, Vito G.; Villa, Oreste

    In this paper we present SGEM, a full software system for accelerating large-scale semantic graph databases on commodity clusters. Unlike current approaches, SGEM addresses semantic graph databases by employing only graph methods at all levels of the stack. On one hand, this allows exploiting the space efficiency of graph data structures and the inherent parallelism of graph algorithms. These features adapt well to the increasing system memory and core counts of modern commodity clusters. On the other hand, however, these systems are optimized for regular computation and batched data transfers, while graph methods usually are irregular and generate fine-grained data accesses with poor spatial and temporal locality. Our framework comprises a SPARQL to data-parallel C compiler, a library of parallel graph methods and a custom, multithreaded runtime system. We introduce our stack, motivate its advantages with respect to other solutions and show how we solved the challenges posed by irregular behaviors. We present the results of our software stack on the Berlin SPARQL benchmarks with datasets up to 10 billion triples (a triple corresponds to a graph edge), demonstrating scaling in dataset size and in performance as more nodes are added to the cluster.

  42. A Novel Clustering Methodology Based on Modularity Optimisation for Detecting Authorship Affinities in Shakespearean Era Plays

    PubMed Central

    Craig, Hugh; Berretta, Regina; Moscato, Pablo

    2016-01-01

    In this study we propose a novel, unsupervised clustering methodology for analyzing large datasets. This new, efficient methodology converts the general clustering problem into the community detection problem in graphs by using the Jensen-Shannon distance, a dissimilarity measure originating in information theory. Moreover, we use graph-theoretic concepts for the generation and analysis of proximity graphs. Our methodology is based on a newly proposed memetic algorithm (iMA-Net) for discovering clusters of data elements by maximizing the modularity function in proximity graphs of literary works. To test the effectiveness of this general methodology, we apply it to a text corpus dataset, which contains frequencies of approximately 55,114 unique words across all 168 plays written in the Shakespearean era (16th and 17th centuries), to analyze and detect clusters of similar plays. Experimental results and comparison with state-of-the-art clustering methods demonstrate the remarkable performance of our new method for identifying high quality clusters which reflect the commonalities in the literary style of the plays. PMID:27571416
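
    A compact sketch of the pipeline described above (Jensen-Shannon distances between word-frequency profiles, a thresholded proximity graph, then modularity maximization). Greedy modularity optimization from NetworkX stands in for the paper's memetic algorithm iMA-Net, and the threshold is illustrative:

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities
      from scipy.spatial.distance import jensenshannon

      def js_proximity_clusters(word_freqs, threshold=0.5):
          # word_freqs: list of word-frequency vectors, one per document.
          n = len(word_freqs)
          G = nx.Graph()
          G.add_nodes_from(range(n))
          for i in range(n):
              for j in range(i + 1, n):
                  d = jensenshannon(word_freqs[i], word_freqs[j])
                  if d < threshold:
                      G.add_edge(i, j, weight=1.0 - d)
          return [set(c) for c in greedy_modularity_communities(G, weight="weight")]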

  43. PuReD-MCL: a graph-based PubMed document clustering methodology.

    PubMed

    Theodosiou, T; Darzentas, N; Angelis, L; Ouzounis, C A

    2008-09-01

    Biomedical literature is the principal repository of biomedical knowledge, with PubMed being the most complete database collecting, organizing and analyzing such textual knowledge. There are numerous efforts that attempt to exploit this information by using text mining and machine learning techniques. We developed a novel approach, called PuReD-MCL (PubMed Related Documents-MCL), which is based on the graph clustering algorithm MCL and relevant resources from PubMed. PuReD-MCL avoids using natural language processing (NLP) techniques directly; instead, it takes advantage of existing resources available from PubMed. PuReD-MCL then clusters documents efficiently using the MCL graph clustering algorithm, which is based on graph flow simulation. This process allows users to analyse the results by highlighting important clues, and finally to visualize the clusters and all relevant information using an interactive graph layout algorithm, for instance BioLayout Express 3D. The methodology was applied to two different datasets, previously used for the validation of the document clustering tool TextQuest. The first dataset involves the organisms Escherichia coli and yeast, whereas the second is related to Drosophila development. PuReD-MCL successfully reproduces the annotated results obtained from TextQuest, while at the same time providing additional insights into the clusters and the corresponding documents. Source code in Perl and R is available from http://tartara.csd.auth.gr/~theodos/

  4. An effective trust-based recommendation method using a novel graph clustering algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin

    2015-10-01

    Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues, such as the data sparsity and cold start problems, caused by there being too few ratings relative to the unknowns that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolving these problems. In this paper, we present a model-based collaborative filtering method that applies a novel graph clustering algorithm and also considers trust statements. In the proposed method, the problem space is first represented as a graph, and a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate user/item clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.

  5. PANDA: Protein function prediction using domain architecture and affinity propagation.

    PubMed

    Wang, Zheng; Zhao, Chenguang; Wang, Yiheng; Sun, Zheng; Wang, Nan

    2018-02-22

    We developed PANDA (Propagation of Affinity and Domain Architecture) to predict protein functions in the format of Gene Ontology (GO) terms. PANDA first executes a profile-profile alignment algorithm to search against the PfamA, KOG, COG, and SwissProt databases, and then launches PSI-BLAST against UniProt for homologue search. PANDA integrates a domain architecture inference algorithm based on Bayesian statistics that calculates the probability of having a GO term. All the candidate GO terms are pooled and filtered based on Z-score. After that, the remaining GO terms are clustered using an affinity propagation algorithm based on the GO directed acyclic graph, followed by a second round of filtering on the clusters of GO terms. We benchmarked the performance of all the baseline predictors that PANDA integrates, and also of every pooling and filtering step of PANDA. PANDA achieves better performance in terms of the area under the precision-recall curve than the baseline predictors. PANDA can be accessed from http://dna.cs.miami.edu/PANDA/ .
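
    As a rough illustration of the clustering step, the sketch below runs affinity propagation on a precomputed similarity matrix, as one might derive from proximity of GO terms in the GO directed acyclic graph. The similarity values are made up for illustration; this is a generic scikit-learn usage sketch, not PANDA's code.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        # Hypothetical pairwise similarities between five candidate GO terms.
        similarity = np.array([[1.0, 0.9, 0.8, 0.1, 0.2],
                               [0.9, 1.0, 0.7, 0.2, 0.1],
                               [0.8, 0.7, 1.0, 0.1, 0.1],
                               [0.1, 0.2, 0.1, 1.0, 0.8],
                               [0.2, 0.1, 0.1, 0.8, 1.0]])

        ap = AffinityPropagation(affinity="precomputed", random_state=0)
        labels = ap.fit_predict(similarity)
        print(labels)                       # cluster label per GO term
        print(ap.cluster_centers_indices_)  # one exemplar term per cluster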

  6. Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs

    NASA Astrophysics Data System (ADS)

    van der Hofstad, Remco; Kliem, Sandra; van Leeuwaarden, Johan S. H.

    2018-04-01

    Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299-2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3, 4), the sequence of clusters ordered in decreasing size and multiplied through by n^{-(τ-2)/(τ-1)} converges as n → ∞ to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237-269, 2001) for the Erdős-Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.

  7. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.

  8. Prediction in Health Domain Using Bayesian Networks Optimization Based on Induction Learning Techniques

    NASA Astrophysics Data System (ADS)

    Felgaer, Pablo; Britos, Paola; García-Martínez, Ramón

    A Bayesian network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency; Bayesian networks provide a compact form for representing knowledge and flexible methods of reasoning. Obtaining one from data is a learning process that is divided into two steps: structural learning and parametric learning. In this paper we define an automatic learning method that optimizes Bayesian networks for classification, using a hybrid learning method that combines the advantages of decision-tree induction techniques (TDIDT-C4.5) with those of Bayesian networks. The resulting method is applied to prediction in the health domain.
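
    The factorization described here is easy to make concrete. The sketch below encodes a toy three-node network (all names and probabilities are hypothetical) and evaluates a joint probability as the product of each node's probability given its parents.

        # Toy Bayesian network (hypothetical names/probabilities):
        # Age -> Disease <- Smoker, so the joint factorizes as
        # P(age, smoker, disease) = P(age) P(smoker) P(disease | age, smoker).
        P_age = {"young": 0.6, "old": 0.4}
        P_smoker = {True: 0.3, False: 0.7}
        P_disease = {("young", True): 0.10, ("young", False): 0.02,
                     ("old", True): 0.30, ("old", False): 0.08}

        def joint(age, smoker, disease):
            p = P_disease[(age, smoker)]
            return P_age[age] * P_smoker[smoker] * (p if disease else 1.0 - p)

        print(joint("old", True, True))   # 0.4 * 0.3 * 0.30 = 0.036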

  9. Quantum Graphical Models and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leifer, M. S.; Poulin, D.

    Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.

  10. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    USGS Publications Warehouse

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness to detect genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  11. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
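
    The target object of Cluster Editing, a disjoint union of cliques (a "cluster graph"), is simple to test for. Below is a small networkx sketch, an illustration rather than a parameterized algorithm from the paper: a graph is a cluster graph exactly when every connected component is a clique (equivalently, when it contains no induced path on three vertices).

        import networkx as nx
        from itertools import combinations

        def is_cluster_graph(G):
            # A cluster graph is a disjoint union of cliques: every connected
            # component must be a clique (equivalently, no induced P3).
            for comp in nx.connected_components(G):
                if any(not G.has_edge(u, v) for u, v in combinations(comp, 2)):
                    return False
            return True

        G = nx.Graph([(1, 2), (2, 3)])   # a path P3: not yet a cluster graph
        print(is_cluster_graph(G))        # False
        G.add_edge(1, 3)                  # one edge modification (k = 1)
        print(is_cluster_graph(G))        # True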

  12. Slicing cluster mass functions with a Bayesian razor

    NASA Astrophysics Data System (ADS)

    Sealfon, C. D.

    2010-08-01

    We apply a Bayesian "razor" to forecast Bayes factors between different parameterizations of the galaxy cluster mass function. To demonstrate this approach, we calculate the minimum size N-body simulation needed for strong evidence favoring a two-parameter mass function over one-parameter mass functions and vice versa, as a function of the minimum cluster mass.

  13. A Scalable Distributed Syntactic, Semantic, and Lexical Language Model

    DTIC Science & Technology

    2012-09-01

    Here pa(τ) denotes the set of parent states of τ. If the recursive factorization refers to a graph, then we have a Bayesian network (Lauritzen 1996) ... Broadly speaking, however, the recursive factorization can refer to a representation more complicated than a graph with a fixed set of nodes and edges ... the factored language (FL) model (Bilmes and Kirchhoff 2003) is close to the smoothing technique we propose here; the major difference is that FL ...

  14. A Bayesian method for inferring transmission chains in a partially observed epidemic.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Ray, Jaideep

    2008-10-01

    We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.
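
    To make the generative model concrete, the sketch below samples a social network from two binomial (Erdős-Rényi-style) link probabilities, intra- and inter-compound, and draws Poisson-process transmission waiting times on the links. All numerical values are assumptions for illustration; in the paper the link probabilities and rates are objects of inference, not fixed inputs.

        import numpy as np

        rng = np.random.default_rng(1)
        n, n_compounds = 30, 6
        compound = rng.integers(0, n_compounds, size=n)   # compound membership

        # Assumed link probabilities; in the paper these are inferred.
        p_intra, p_inter = 0.4, 0.02
        same = compound[:, None] == compound[None, :]
        P = np.where(same, p_intra, p_inter)

        # Symmetric binomial random graph for the social network.
        upper = np.triu(rng.random((n, n)) < P, k=1)
        A = upper | upper.T

        # Transmission over each existing link as a Poisson process:
        # exponential waiting time with an assumed rate (per link per day).
        rate = 0.1
        waiting = rng.exponential(1.0 / rate, size=(n, n))
        print(A.sum() // 2, "social links sampled")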

  15. A novel alignment-free method to classify protein folding types by combining spectral graph clustering with Chou's pseudo amino acid composition.

    PubMed

    Tripathi, Pooja; Pandey, Paras N

    2017-07-07

    The present work employs pseudo amino acid composition (PseAAC) for encoding protein sequences in numeric form. These encodings are then arranged in a similarity matrix, which serves as input for the spectral graph clustering method. Spectral methods have previously been used for clustering protein sequences, but they use pairwise alignment scores in the similarity matrix. Since alignment scores depend on sequence length, clustering short and long sequences together may not be a good idea; this motivates combining PseAAC with a spectral clustering algorithm. We extensively tested our method and compared its performance with other existing machine learning methods. It is consistently observed that the number of clusters we obtain for a given set of proteins is close to the number of superfamilies in that set, and that PseAAC combined with spectral graph clustering shows the best classification results. Copyright © 2017 Elsevier Ltd. All rights reserved.
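
    A hedged sketch of the pipeline's shape: encode sequences as fixed-length numeric vectors, build a similarity matrix, and feed it to a spectral clustering routine. The PseAAC vectors below are fabricated stand-ins (real PseAAC encodings have 20+ components), and the RBF similarity is one plausible choice, not necessarily the measure used in the paper.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        # Fabricated stand-ins for PseAAC vectors of six sequences.
        pseaac = np.array([[0.90, 0.10, 0.00], [0.80, 0.20, 0.10],
                           [0.85, 0.15, 0.05], [0.10, 0.90, 0.20],
                           [0.20, 0.80, 0.30], [0.15, 0.85, 0.25]])

        # Alignment-free similarity: RBF kernel on vector distances, so it
        # does not depend on sequence length.
        sq = ((pseaac[:, None, :] - pseaac[None, :, :]) ** 2).sum(-1)
        similarity = np.exp(-sq / sq.mean())

        labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                    random_state=0).fit_predict(similarity)
        print(labels)   # two superfamily-like groups expected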

  16. Big Data Clustering via Community Detection and Hyperbolic Network Embedding in IoT Applications.

    PubMed

    Karyotis, Vasileios; Tsitseklis, Konstantinos; Sotiropoulos, Konstantinos; Papavassiliou, Symeon

    2018-04-15

    In this paper, we present a novel data clustering framework for big sensory data produced by IoT applications. Based on a network representation of the relations among multi-dimensional data, data clustering is mapped to node clustering over the produced data graphs. To address the potential very large scale of such datasets/graphs that test the limits of state-of-the-art approaches, we map the problem of data clustering to a community detection one over the corresponding data graphs. Specifically, we propose a novel computational approach for enhancing the traditional Girvan-Newman (GN) community detection algorithm via hyperbolic network embedding. The data dependency graph is embedded in the hyperbolic space via Rigel embedding, allowing more efficient computation of edge-betweenness centrality needed in the GN algorithm. This allows for more efficient clustering of the nodes of the data graph in terms of modularity, without sacrificing considerable accuracy. In order to study the operation of our approach with respect to enhancing GN community detection, we employ various representative types of artificial complex networks, such as scale-free, small-world and random geometric topologies, and frequently-employed benchmark datasets for demonstrating its efficacy in terms of data clustering via community detection. Furthermore, we provide a proof-of-concept evaluation by applying the proposed framework over multi-dimensional datasets obtained from an operational smart-city/building IoT infrastructure provided by the Federated Interoperable Semantic IoT/cloud Testbeds and Applications (FIESTA-IoT) testbed federation. It is shown that the proposed framework can indeed be used for community detection/data clustering and exploited in various other IoT applications, such as performing more energy-efficient smart-city/building sensing.
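
    The baseline being accelerated is the classical Girvan-Newman algorithm, which repeatedly removes the edges with the highest betweenness centrality. The sketch below shows the plain networkx version on a toy graph; the paper's hyperbolic (Rigel) embedding speedup of the betweenness computation is not part of this call.

        import networkx as nx
        from networkx.algorithms.community import girvan_newman, modularity

        # Toy stand-in for a data dependency graph: two dense groups, one bridge.
        G = nx.barbell_graph(5, 0)

        # Each step removes the highest edge-betweenness edges; the first
        # split should separate the two dense groups.
        first_split = next(girvan_newman(G))
        print([sorted(c) for c in first_split])
        print(round(modularity(G, first_split), 3))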

  17. Big Data Clustering via Community Detection and Hyperbolic Network Embedding in IoT Applications

    PubMed Central

    Sotiropoulos, Konstantinos

    2018-01-01

    In this paper, we present a novel data clustering framework for big sensory data produced by IoT applications. Based on a network representation of the relations among multi-dimensional data, data clustering is mapped to node clustering over the produced data graphs. To address the potential very large scale of such datasets/graphs that test the limits of state-of-the-art approaches, we map the problem of data clustering to a community detection one over the corresponding data graphs. Specifically, we propose a novel computational approach for enhancing the traditional Girvan–Newman (GN) community detection algorithm via hyperbolic network embedding. The data dependency graph is embedded in the hyperbolic space via Rigel embedding, allowing more efficient computation of edge-betweenness centrality needed in the GN algorithm. This allows for more efficient clustering of the nodes of the data graph in terms of modularity, without sacrificing considerable accuracy. In order to study the operation of our approach with respect to enhancing GN community detection, we employ various representative types of artificial complex networks, such as scale-free, small-world and random geometric topologies, and frequently-employed benchmark datasets for demonstrating its efficacy in terms of data clustering via community detection. Furthermore, we provide a proof-of-concept evaluation by applying the proposed framework over multi-dimensional datasets obtained from an operational smart-city/building IoT infrastructure provided by the Federated Interoperable Semantic IoT/cloud Testbeds and Applications (FIESTA-IoT) testbed federation. It is shown that the proposed framework can indeed be used for community detection/data clustering and exploited in various other IoT applications, such as performing more energy-efficient smart-city/building sensing. PMID:29662043

  18. Graph-based analysis of kinetics on multidimensional potential-energy surfaces.

    PubMed

    Okushima, T; Niiyama, T; Ikeda, K S; Shimizu, Y

    2009-09-01

    The aim of this paper is twofold: one is to give a detailed description of an alternative graph-based analysis method, which we call saddle connectivity graph, for analyzing the global topography and the dynamical properties of many-dimensional potential-energy landscapes and the other is to give examples of applications of this method in the analysis of the kinetics of realistic systems. A Dijkstra-type shortest path algorithm is proposed to extract dynamically dominant transition pathways by kinetically defining transition costs. The applicability of this approach is first confirmed by an illustrative example of a low-dimensional random potential. We then show that a coarse-graining procedure tailored for saddle connectivity graphs can be used to obtain the kinetic properties of 13- and 38-atom Lennard-Jones clusters. The coarse-graining method not only reduces the complexity of the graphs, but also, with iterative use, reveals a self-similar hierarchical structure in these clusters. We also propose that the self-similarity is common to many-atom Lennard-Jones clusters.
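
    The Dijkstra-type extraction of dominant pathways can be illustrated with networkx if each saddle is given a kinetic cost; one common convention (assumed here, with made-up rates) is cost = -log(rate), so that the shortest additive path corresponds to the pathway with the largest product of rates.

        import math
        import networkx as nx

        # Minima as nodes, saddles as edges; rates are fabricated.
        rates = {("A", "B"): 0.5, ("B", "C"): 0.4, ("A", "D"): 0.9,
                 ("D", "C"): 0.05, ("B", "D"): 0.3}
        G = nx.Graph()
        for (u, v), k in rates.items():
            G.add_edge(u, v, cost=-math.log(k))  # additive kinetic cost

        # Dominant pathway from minimum A to minimum C.
        print(nx.dijkstra_path(G, "A", "C", weight="cost"))  # ['A', 'B', 'C']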

  19. A Random Walk Approach to Query Informative Constraints for Clustering.

    PubMed

    Abin, Ahmad Ali

    2017-08-09

    This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method benefits from the commute time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
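
    Commute times can be computed in closed form from the Moore-Penrose pseudoinverse of the graph Laplacian, C(i, j) = vol(G) (L+_ii + L+_jj - 2 L+_ij). The sketch below does exactly that on a toy graph; it illustrates the quantity itself, not the paper's querying procedure.

        import numpy as np
        import networkx as nx

        def commute_times(G):
            # C(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), with L+ the
            # Moore-Penrose pseudoinverse of the graph Laplacian.
            L = nx.laplacian_matrix(G).toarray().astype(float)
            Lp = np.linalg.pinv(L)
            d = np.diag(Lp)
            vol = 2 * G.number_of_edges()   # sum of degrees
            return vol * (d[:, None] + d[None, :] - 2 * Lp)

        G = nx.barbell_graph(4, 2)   # two cliques joined by a short path
        C = commute_times(G)
        print(round(C[0, 1], 1), "<", round(C[0, 9], 1))  # within vs. across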

  20. GraphCrunch 2: Software tool for network modeling, alignment and clustering.

    PubMed

    Kuchaiev, Oleksii; Stevanović, Aleksandar; Hayes, Wayne; Pržulj, Nataša

    2011-01-19

    Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarities far larger than any other existing tool. Finally, GraphCrunch 2 implements an algorithm for clustering nodes within a network based solely on their topological similarities. Using GraphCrunch 2, we demonstrate that eukaryotic and viral PPI networks may belong to different graph model families and show that topology-based clustering can reveal important functional similarities between proteins within yeast and human PPI networks. GraphCrunch 2 is a software tool that implements the latest research on biological network analysis. It parallelizes computationally intensive tasks to fully utilize the potential of modern multi-core CPUs. It is open-source and freely available for research use. It runs under the Windows and Linux platforms.

  1. The improved business valuation model for RFID company based on the community mining method.

    PubMed

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

    Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a huge number of papers have addressed the topic of business valuation models based on statistical methods or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company's net profit as well as estimate the company value. Three improvements are made in the proposed valuation model. Firstly, our model figures out the credibility of each node's membership in each community and clusters the network according to an evolutionary Bayesian method. Secondly, the improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuing tasks for the real-life simulation of RFID companies.

  2. The improved business valuation model for RFID company based on the community mining method

    PubMed Central

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

    Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a huge number of papers have addressed the topic of business valuation models based on statistical methods or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company's net profit as well as estimate the company value. Three improvements are made in the proposed valuation model. Firstly, our model figures out the credibility of each node's membership in each community and clusters the network according to an evolutionary Bayesian method. Secondly, the improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuing tasks for the real-life simulation of RFID companies. PMID:28459815

  3. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    This viewgraph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  4. Clustering in complex directed networks

    NASA Astrophysics Data System (ADS)

    Fagiolo, Giorgio

    2007-08-01

    Many empirical networks display an inherent tendency to cluster, i.e., to form circles of connected nodes. This feature is typically measured by the clustering coefficient (CC). The CC, originally introduced for binary, undirected graphs, has been recently generalized to weighted, undirected networks. Here we extend the CC to the case of (binary and weighted) directed networks and we compute its expected value for random graphs. We distinguish between CCs that count all directed triangles in the graph (independently of the direction of their edges) and CCs that only consider particular types of directed triangles (e.g., cycles). The main concepts are illustrated by employing empirical data on world-trade flows.
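
    networkx's clustering coefficient for directed graphs follows this directed generalization (its documentation credits Fagiolo's definition), which makes a quick check easy; the toy digraph below is an arbitrary example, not data from the paper.

        import networkx as nx

        # Arbitrary toy digraph: a 3-cycle (1->2->3->1) plus a "middleman"
        # pattern through node 4.
        D = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 4), (4, 3)])

        # Directed clustering coefficients, counting all directed triangles.
        print(nx.clustering(D))

        # Weighted variant: geometric-mean weights of directed triangles.
        nx.set_edge_attributes(D, 2.0, "weight")
        print(nx.clustering(D, weight="weight"))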

  5. Spectral-clustering approach to Lagrangian vortex detection.

    PubMed

    Hadjighasem, Alireza; Karrasch, Daniel; Teramoto, Hiroshi; Haller, George

    2016-06-01

    One of the ubiquitous features of real-life turbulent flows is the existence and persistence of coherent vortices. Here we show that such coherent vortices can be extracted as clusters of Lagrangian trajectories. We carry out the clustering on a weighted graph, with the weights measuring pairwise distances of fluid trajectories in the extended phase space of positions and time. We then extract coherent vortices from the graph using tools from spectral graph theory. Our method locates all coherent vortices in the flow simultaneously, thereby showing high potential for automated vortex tracking. We illustrate the performance of this technique by identifying coherent Lagrangian vortices in several two- and three-dimensional flows.

  6. A Bayesian hierarchical model for mortality data from cluster-sampling household surveys in humanitarian crises.

    PubMed

    Heudtlass, Peter; Guha-Sapir, Debarati; Speybroeck, Niko

    2018-05-31

    The crude death rate (CDR) is one of the defining indicators of humanitarian emergencies. When data from vital registration systems are not available, it is common practice to estimate the CDR from household surveys with cluster-sampling design. However, sample sizes are often too small to compare mortality estimates to emergency thresholds, at least in a frequentist framework. Several authors have proposed Bayesian methods for health surveys in humanitarian crises. Here, we develop an approach specifically for mortality data and cluster-sampling surveys. We describe a Bayesian hierarchical Poisson-Gamma mixture model with generic (weakly informative) priors that could be used as a default in the absence of any specific prior knowledge, and compare Bayesian and frequentist CDR estimates using five different mortality datasets. We provide an interpretation of the Bayesian estimates in the context of an emergency threshold and demonstrate how to interpret parameters at the cluster level and ways in which informative priors can be introduced. With the same set of weakly informative priors, Bayesian CDR estimates are equivalent to frequentist estimates, for all practical purposes. The probability that the CDR surpasses the emergency threshold can be derived directly from the posterior of the mean of the mixing distribution. All observations in the datasets contribute to the cluster-level estimates, through the hierarchical structure of the model. In a context of sparse data, Bayesian mortality assessments have advantages over frequentist ones even when using only weakly informative priors. More informative priors offer a formal and transparent way of combining new data with existing data and expert knowledge, and can help to improve decision-making in humanitarian crises by complementing frequentist estimates.
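
    The conjugate core of a Poisson-Gamma model can be sketched in a few lines; the version below pools clusters and uses invented counts and an illustrative weakly informative prior, whereas the paper's model is hierarchical with cluster-level parameters. It shows how the posterior probability of exceeding an emergency threshold falls directly out of the posterior distribution.

        import numpy as np
        from scipy import stats

        a0, b0 = 0.5, 0.0001                 # illustrative weakly informative prior
        deaths = np.array([2, 0, 1, 3, 1])   # deaths observed per cluster (made up)
        exposure = np.array([9000., 8500., 10000., 9500., 8000.])  # person-days

        # Conjugate update: rate ~ Gamma(a0 + sum(deaths), b0 + sum(exposure)).
        post = stats.gamma(a=a0 + deaths.sum(), scale=1.0 / (b0 + exposure.sum()))
        threshold = 1.0 / 10_000             # emergency CDR: 1 death/10,000/day
        print("posterior mean CDR per 10,000/day:", round(post.mean() * 10_000, 2))
        print("P(CDR > threshold):", round(1 - post.cdf(threshold), 3))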

  7. An efficient and scalable graph modeling approach for capturing information at different levels in next generation sequencing reads

    PubMed Central

    2013-01-01

    Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333

  8. Applications of graph theory in protein structure identification

    PubMed Central

    2011-01-01

    There is a growing interest in the identification of proteins on the proteome-wide scale. Among different kinds of protein structure identification methods, graph-theoretic methods are particularly powerful. Due to their lower costs, higher effectiveness and many other advantages, they have drawn more and more researchers' attention nowadays. Specifically, graph-theoretic methods have been widely used in homology identification, side-chain cluster identification, peptide sequencing and so on. This paper reviews several methods for solving protein structure identification problems using graph theory. We mainly introduce classical methods and mathematical models, including homology modeling based on clique finding, identification of side-chain clusters in protein structures based on the graph spectrum, and de novo peptide sequencing via tandem mass spectrometry using the spectrum graph model. In addition, concluding remarks and future priorities for each method are given. PMID:22165974

  9. Toward the optimization of normalized graph Laplacian.

    PubMed

    Xie, Bo; Wang, Meng; Tao, Dacheng

    2011-04-01

    Normalized graph Laplacian has been widely used in many practical machine learning algorithms, e.g., spectral clustering and semisupervised learning. However, all of them use the Euclidean distance to construct the graph Laplacian, which does not necessarily reflect the inherent distribution of the data. In this brief, we propose a method to directly optimize the normalized graph Laplacian by using pairwise constraints. The learned graph is consistent with equivalence and nonequivalence pairwise relationships, and thus it can better represent similarity between samples. Meanwhile, our approach, unlike metric learning, automatically determines the scale factor during the optimization. The learned normalized Laplacian matrix can be directly applied in spectral clustering and semisupervised learning algorithms. Comprehensive experiments demonstrate the effectiveness of the proposed approach.
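
    For reference, the object being optimized is the symmetric normalized Laplacian, L = I - D^{-1/2} A D^{-1/2}. The sketch below builds it from an RBF-on-Euclidean-distance affinity, the default construction whose limitations motivate the paper; the toy points are arbitrary.

        import numpy as np

        def normalized_laplacian(A):
            # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
            deg = A.sum(axis=1)
            d = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
            return np.eye(len(A)) - d[:, None] * A * d[None, :]

        # Affinity from Euclidean distances via an RBF kernel.
        X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
        sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
        A = np.exp(-sq) - np.eye(len(X))   # zero the self-affinities
        print(np.round(normalized_laplacian(A), 3))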

  10. A unified framework for building high performance DVEs

    NASA Astrophysics Data System (ADS)

    Lei, Kaibin; Ma, Zhixia; Xiong, Hua

    2011-10-01

    A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.

  11. Application of dynamic uncertain causality graph in spacecraft fault diagnosis: Logic cycle

    NASA Astrophysics Data System (ADS)

    Yao, Quanying; Zhang, Qin; Liu, Peng; Yang, Ping; Zhu, Ma; Wang, Xiaochen

    2017-04-01

    Intelligent diagnosis systems are applied to fault diagnosis in spacecraft. The Dynamic Uncertain Causality Graph (DUCG) is a new probabilistic graphical model with many advantages. In the knowledge expression of spacecraft fault diagnosis, feedback among variables is frequently encountered, which may cause directed cyclic graphs (DCGs). Probabilistic graphical models (PGMs) such as the Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning, but BNs do not allow DCGs. In this paper, DUCG is applied to fault diagnosis in spacecraft, introducing an inference algorithm for the DUCG that deals with feedback. DUCG has now been tested on 16 typical faults with 100% diagnosis accuracy.

  12. Network discovery with DCM

    PubMed Central

    Friston, Karl J.; Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E.

    2011-01-01

    This paper is about inferring or discovering the functional architecture of distributed systems using Dynamic Causal Modelling (DCM). We describe a scheme that recovers the (dynamic) Bayesian dependency graph (connections in a network) using observed network activity. This network discovery uses Bayesian model selection to identify the sparsity structure (absence of edges or connections) in a graph that best explains observed time-series. The implicit adjacency matrix specifies the form of the network (e.g., cyclic or acyclic) and its graph-theoretical attributes (e.g., degree distribution). The scheme is illustrated using functional magnetic resonance imaging (fMRI) time series to discover functional brain networks. Crucially, it can be applied to experimentally evoked responses (activation studies) or endogenous activity in task-free (resting state) fMRI studies. Unlike conventional approaches to network discovery, DCM permits the analysis of directed and cyclic graphs. Furthermore, it eschews (implausible) Markovian assumptions about the serial independence of random fluctuations. The scheme furnishes a network description of distributed activity in the brain that is optimal in the sense of having the greatest conditional probability, relative to other networks. The networks are characterised in terms of their connectivity or adjacency matrices and conditional distributions over the directed (and reciprocal) effective connectivity between connected nodes or regions. We envisage that this approach will provide a useful complement to current analyses of functional connectivity for both activation and resting-state studies. PMID:21182971

  13. Focus-based filtering + clustering technique for power-law networks with small world phenomenon

    NASA Astrophysics Data System (ADS)

    Boutin, François; Thièvre, Jérôme; Hascoët, Mountaz

    2006-01-01

    Realistic interaction networks usually present two main properties: a power-law degree distribution and small-world behavior. Few nodes are linked to many nodes, and adjacent nodes are likely to share common neighbors. Moreover, the graph structure usually presents a dense core that is difficult to explore with classical filtering and clustering techniques. In this paper, we propose a new filtering technique accounting for a user focus. This technique extracts a tree-like graph that also has a power-law degree distribution and small-world behavior. The resulting structure is easily drawn with classical force-directed drawing algorithms. It is also quickly clustered and displayed as a multi-level silhouette tree (MuSi-Tree) from any user focus. We built a new graph filtering + clustering + drawing API and report a case study.

  14. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.

  15. Advanced Cyber Attack Modeling Analysis and Visualization

    DTIC Science & Technology

    2010-03-01

    [Only fragments of this DTIC record are recoverable: a figure caption ("Figure 8. TVA attack graphs for ...") listing data sources (web logs, NetFlow data, TCP dump data, system logs) and security management functions, plus two references: "Clustered Graphs," Proceedings of the Symposium on Graph Drawing, September 1996; and K. Lakkaraju, W. Yurcik, A. Lee, "NVisionIP: NetFlow ..."]

  16. A Multilevel Gamma-Clustering Layout Algorithm for Visualization of Biological Networks

    PubMed Central

    Hruz, Tomas; Lucas, Christoph; Laule, Oliver; Zimmermann, Philip

    2013-01-01

    Visualization of large complex networks has become an indispensable part of systems biology, where organisms need to be considered as one complex system. The visualization of the corresponding network is challenging due to the size and density of edges. In many cases, the use of standard visualization algorithms can lead to high running times and poorly readable visualizations due to many edge crossings. We suggest an approach that analyzes the structure of the graph first and then generates a new graph which contains specific semantic symbols for regular substructures like dense clusters. We propose a multilevel gamma-clustering layout visualization algorithm (MLGA) which proceeds in three subsequent steps: (i) a multilevel γ-clustering is used to identify the structure of the underlying network, (ii) the network is transformed to a tree, and (iii) finally, the resulting tree which shows the network structure is drawn using a variation of a force-directed algorithm. The algorithm has a potential to visualize very large networks because it uses modern clustering heuristics which are optimized for large graphs. Moreover, most of the edges are removed from the visual representation which allows keeping the overview over complex graphs with dense subgraphs. PMID:23864855

  17. Evaluation of the procedure 1A component of the 1980 US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Chapman, G. M. (Principal Investigator); Carnes, J. G.

    1981-01-01

    Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative-count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayesian Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE, because it provides the greatest precision.

  18. Hybrid coexpression link similarity graph clustering for mining biological modules from multiple gene expression datasets.

    PubMed

    Salem, Saeed; Ozcaglar, Cagri

    2014-01-01

    Advances in genomic technologies have enabled the accumulation of vast amounts of genomic data, including gene expression data for multiple species under various biological and environmental conditions. Integration of these gene expression datasets is a promising strategy to alleviate the challenges of protein functional annotation and biological module discovery based on a single gene expression dataset, which suffers from spurious coexpression. We propose a joint mining algorithm that constructs a weighted hybrid similarity graph whose nodes are the coexpression links. The weight of an edge between two coexpression links in this hybrid graph is a linear combination of the topological similarities and co-appearance similarities of the corresponding two coexpression links. Clustering the weighted hybrid similarity graph yields recurrent coexpression link clusters (modules). Experimental results on human gene expression datasets show that the reported modules are functionally homogeneous, as evident from their enrichment with biological process GO terms and KEGG pathways.

  19. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.

  20. A Machine Learning Concept for DTN Routing

    NASA Technical Reports Server (NTRS)

    Dudukovich, Rachel; Hylton, Alan; Papachristou, Christos

    2017-01-01

    This paper discusses the concept and architecture of a machine learning based router for delay-tolerant space networks. The techniques of reinforcement learning and Bayesian learning are used to supplement the routing decisions of the popular Contact Graph Routing algorithm. An introduction to the concepts of Contact Graph Routing, Q-routing and Naive Bayes classification is given. The development of an architecture for a cross-layer feedback framework for DTN (Delay-Tolerant Networking) protocols is discussed. Finally, the initial simulation setup and results are given.

  1. Scalable Static and Dynamic Community Detection Using Grappolo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halappanavar, Mahantesh; Lu, Hao; Kalyanaraman, Anantharaman

    Graph clustering, popularly known as community detection, is a fundamental kernel for several applications of relevance to the Defense Advanced Research Projects Agency's (DARPA) Hierarchical Identify Verify Exploit (HIVE) Program. Clusters or communities represent natural divisions within a network that are densely connected within a cluster and sparsely connected to the rest of the network. The need to compute clustering on large-scale data necessitates the development of efficient algorithms that can exploit modern architectures that are fundamentally parallel in nature. However, due to their irregular and inherently sequential nature, many of the current algorithms for community detection are challenging to parallelize. In response to the HIVE Graph Challenge, we present several parallelization heuristics for fast community detection using the Louvain method as the serial template. We implement all the heuristics in a software library called Grappolo. Using the inputs from the HIVE Challenge, we demonstrate superior performance and high quality solutions based on four parallelization heuristics. We use Grappolo on static graphs as the first step towards community detection on streaming graphs.
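
    The serial template here is the standard Louvain method, which is available in networkx (2.8 and later); the sketch below shows that serial baseline on a small built-in graph. Grappolo's parallel heuristics are not represented in this call.

        import networkx as nx

        # Serial Louvain baseline; Grappolo parallelizes this method with
        # the heuristics described above.
        G = nx.les_miserables_graph()
        comms = nx.community.louvain_communities(G, weight="weight", seed=42)
        print(len(comms), "communities")
        print("modularity:",
              round(nx.community.modularity(G, comms, weight="weight"), 3))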

  2. Tensor Spectral Clustering for Partitioning Higher-order Network Structures.

    PubMed

    Benson, Austin R; Gleich, David F; Leskovec, Jure

    2015-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.

  3. Tensor Spectral Clustering for Partitioning Higher-order Network Structures

    PubMed Central

    Benson, Austin R.; Gleich, David F.; Leskovec, Jure

    2016-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms. PMID:27812399

  4. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-07-17

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. These data are modelled as bi-partite graphs, where nodes correspond to the different data points, mutations and diseases for instance, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet-lab investigations. Generally, our software can be applied to any kind of data that can be modelled as bi-partite graphs. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.

  5. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-06-01

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. These data are modelled as bi-partite graphs, where nodes correspond to the different data points, mutations and diseases for instance, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet-lab investigations. Generally, our software can be applied to any kind of data that can be modelled as bi-partite graphs. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.

  6. Elastic K-means using posterior probability.

    PubMed

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft capability whereby each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
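
    The soft-assignment idea can be sketched with a softmax posterior over squared distances. The update below is a minimal illustration of fractional memberships (with an assumed stiffness parameter beta and synthetic data), not the paper's exact EKM objective or its Normalized Cut integration.

        import numpy as np

        def soft_assign(X, centers, beta=5.0):
            # Softmax posterior over squared distances: fractional memberships.
            sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            logits = -beta * sq
            logits -= logits.max(axis=1, keepdims=True)   # numerical stability
            R = np.exp(logits)
            return R / R.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 0.4, (20, 2)), rng.normal(3, 0.4, (20, 2))])
        centers = X[rng.choice(len(X), 2, replace=False)]
        for _ in range(10):                   # EM-style alternation
            R = soft_assign(X, centers)
            centers = (R.T @ X) / R.sum(axis=0)[:, None]
        print(np.round(R[:3], 3))             # fractional cluster memberships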

  7. A Dynamic Bayesian Network Model for the Production and Inventory Control

    NASA Astrophysics Data System (ADS)

    Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol

    In general, production quantities and delivered goods vary randomly, and the total stock therefore also varies randomly. This paper deals with production and inventory control using a Dynamic Bayesian Network. A Bayesian network is a probabilistic model which represents the qualitative dependences between two or more random variables by the graph structure, and indicates the quantitative relations between individual variables by conditional probabilities. The probability distribution of the total stock is calculated through the propagation of probabilities on the network. Moreover, an adjustment rule for the production quantities that maintains the probabilities of the total stock reaching a lower limit or a ceiling at specified values is presented.
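
    To make the propagation step concrete, here is a minimal discrete sketch in Python: the total-stock distribution is pushed through several time slices by exact enumeration. The production and delivery distributions are hypothetical, and enumeration stands in for general Bayesian-network propagation.

    ```python
    # Hypothetical daily distributions (value: probability).
    production = {8: 0.2, 10: 0.6, 12: 0.2}
    delivery = {7: 0.3, 9: 0.5, 11: 0.2}

    def step(stock_dist):
        """One time slice: stock' = stock + production - delivery."""
        out = {}
        for s, ps in stock_dist.items():
            for p, pp in production.items():
                for d, pd in delivery.items():
                    out[s + p - d] = out.get(s + p - d, 0.0) + ps * pp * pd
        return out

    stock = {20: 1.0}          # known initial stock
    for _ in range(5):         # propagate five time slices
        stock = step(stock)
    print(sum(pr for s, pr in stock.items() if s < 15))  # P(stock below a lower limit)
    ```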

  8. Application of Multiple Imputation for Missing Values in Three-Way Three-Mode Multi-Environment Trial Data

    PubMed Central

    Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.

    2015-01-01

    It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach based on hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the observation with missing values. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher estimation accuracy than those using non-Bayesian analysis, but were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance. PMID:26689369
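
    A minimal sketch of the nearest-neighbour imputation idea (random attribute subsets, values copied from the closest complete case) may help; this is an illustrative reconstruction under stated assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def nn_impute_once(X, rng):
        """One imputation: fill each incomplete row from its nearest complete
        neighbour, with distance measured on a random subset of the row's
        observed attributes."""
        Xi = X.copy()
        complete = ~np.isnan(X).any(axis=1)
        donors = X[complete]
        for i in np.where(~complete)[0]:
            miss = np.isnan(X[i])
            obs = np.where(~miss)[0]
            if len(obs) == 0:
                continue  # nothing observed in this row; skip
            sub = rng.choice(obs, size=max(1, len(obs) // 2), replace=False)
            d = ((donors[:, sub] - X[i, sub]) ** 2).sum(axis=1)
            Xi[i, miss] = donors[np.argmin(d), miss]
        return Xi

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    X[rng.random(X.shape) < 0.1] = np.nan                      # 10% missing
    imputations = [nn_impute_once(X, rng) for _ in range(5)]   # M = 5 draws
    ```

    The random attribute subsets make the M imputations differ, which is what lets multiple imputation reflect uncertainty about the missing entries.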

  9. Application of Multiple Imputation for Missing Values in Three-Way Three-Mode Multi-Environment Trial Data.

    PubMed

    Tian, Ting; McLachlan, Geoffrey J; Dieters, Mark J; Basford, Kaye E

    2015-01-01

    It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach based on hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the observation with missing values. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher estimation accuracy than those using non-Bayesian analysis, but were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance.

  10. Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets

    DTIC Science & Technology

    2017-07-01

    Developed a principled methodology for two-sample graph testing; designed a provably almost-surely perfect vertex clustering algorithm for block-model graphs; and developed semi-supervised clustering and robust hypothesis testing methodology. Embedding graphs in Euclidean space allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed.

  11. Shortest-path constraints for 3D multiobject semiautomatic segmentation via clustering and Graph Cut.

    PubMed

    Kéchichian, Razmig; Valette, Sébastien; Desvignes, Michel; Prost, Rémy

    2013-11-01

    We derive shortest-path constraints from graph models of structure adjacency relations and introduce them in a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization, capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses and comparison with a Potts prior-based approach and our previous contribution on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement, while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria, furthermore allowing the trade-off to be controlled. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process by up to an order of magnitude, without compromising the quality of the result.

  12. The identification of credit card encoders by hierarchical cluster analysis of the jitters of magnetic stripes.

    PubMed

    Leung, S C; Fung, W K; Wong, K H

    1999-01-01

    The relative bit density variation graphs of 207 specimen credit cards processed by 12 encoding machines were examined first visually, and then classified by means of hierarchical cluster analysis. Twenty-nine credit cards being treated as 'questioned' samples were tested by way of cluster analysis against 'controls' derived from known encoders. It was found that hierarchical cluster analysis provided a high accuracy of identification with all 29 'questioned' samples classified correctly. On the other hand, although visual comparison of jitter graphs was less discriminating, it was nevertheless capable of giving a reasonably accurate result.
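
    The classification workflow is easy to sketch with SciPy; the synthetic "control" jitter profiles and the cluster count below are placeholders, not the study's data.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(0)
    # Hypothetical relative bit-density (jitter) profiles, one row per card.
    controls = rng.normal(loc=np.linspace(0, 1, 32), scale=0.05, size=(24, 32))

    Z = linkage(controls, method="average", metric="euclidean")
    labels = fcluster(Z, t=4, criterion="maxclust")  # cut the tree into 4 groups

    # A 'questioned' card is assigned the cluster of its nearest control.
    questioned = rng.normal(loc=np.linspace(0, 1, 32), scale=0.05, size=(1, 32))
    print(labels[cdist(questioned, controls).argmin()])
    ```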

  13. Thinking graphically: Connecting vision and cognition during graph comprehension.

    PubMed

    Ratwani, Raj M; Trafton, J Gregory; Boehm-Davis, Deborah A

    2008-03-01

    Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described.

  14. Hybrid coexpression link similarity graph clustering for mining biological modules from multiple gene expression datasets

    PubMed Central

    2014-01-01

    Background Advances in genomic technologies have enabled the accumulation of vast amounts of genomic data, including gene expression data for multiple species under various biological and environmental conditions. Integration of these gene expression datasets is a promising strategy to alleviate the challenges of protein functional annotation and biological module discovery based on a single gene expression dataset, which suffers from spurious coexpression. Results We propose a joint mining algorithm that constructs a weighted hybrid similarity graph whose nodes are the coexpression links. The weight of an edge between two coexpression links in this hybrid graph is a linear combination of the topological similarities and co-appearance similarities of the corresponding two coexpression links. Clustering the weighted hybrid similarity graph yields recurrent coexpression link clusters (modules). Experimental results on human gene expression datasets show that the reported modules are functionally homogeneous, as evident from their enrichment with biological process GO terms and KEGG pathways. PMID:25221624
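
    A toy sketch of the hybrid edge weight described above: a linear combination of a topological (shared-neighbourhood) term and a co-appearance (shared-dataset) term. The data structures and the coefficient alpha are illustrative assumptions.

    ```python
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def hybrid_similarity(l1, l2, neighbours, appears_in, alpha=0.5):
        """Weight between two coexpression links: alpha * topological
        similarity + (1 - alpha) * co-appearance similarity."""
        topo = jaccard(neighbours[l1[0]] | neighbours[l1[1]],
                       neighbours[l2[0]] | neighbours[l2[1]])
        coapp = jaccard(appears_in[l1], appears_in[l2])
        return alpha * topo + (1 - alpha) * coapp

    # Toy coexpression graph and per-link dataset memberships.
    neighbours = {"g1": {"g2", "g3"}, "g2": {"g1", "g3"},
                  "g3": {"g1", "g2", "g4"}, "g4": {"g3"}}
    appears_in = {("g1", "g2"): {0, 1, 2}, ("g3", "g4"): {1, 2, 3}}
    print(hybrid_similarity(("g1", "g2"), ("g3", "g4"), neighbours, appears_in))
    ```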

  15. Elastic K-means using posterior probability

    PubMed Central

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft capability whereby each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model. PMID:29240756

  16. Learning a Health Knowledge Graph from Electronic Medical Records.

    PubMed

    Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David

    2017-07-20

    Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high-quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records, and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, a naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters, and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high-quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high-quality knowledge graph, reaching a precision of 0.85 at a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
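
    The noisy OR gate mentioned above has a simple closed form: each present disease independently fails to produce the symptom with probability 1 - p_i, and a leak term covers causes outside the model. A sketch with made-up parameters:

    ```python
    def noisy_or(present_diseases, cause_prob, leak=0.01):
        """P(symptom present | set of present diseases) under noisy OR."""
        p_not = 1.0 - leak
        for d in present_diseases:
            p_not *= 1.0 - cause_prob.get(d, 0.0)
        return 1.0 - p_not

    # Hypothetical P(fever is caused by each disease when present).
    cause_prob = {"flu": 0.8, "pneumonia": 0.6}
    print(noisy_or({"flu"}, cause_prob))               # 0.802
    print(noisy_or({"flu", "pneumonia"}, cause_prob))  # 0.9208
    ```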

  17. Graph-Theoretic Analysis of Monomethyl Phosphate Clustering in Ionic Solutions.

    PubMed

    Han, Kyungreem; Venable, Richard M; Bryant, Anne-Marie; Legacy, Christopher J; Shen, Rong; Li, Hui; Roux, Benoît; Gericke, Arne; Pastor, Richard W

    2018-02-01

    All-atom molecular dynamics simulations combined with graph-theoretic analysis reveal that clustering of the monomethyl phosphate dianion (MMP²⁻) is strongly influenced by the types and combinations of cations in the aqueous solution. Although Ca²⁺ promotes the formation of stable and large MMP²⁻ clusters, K⁺ alone does not. Nonetheless, clusters are larger and their link lifetimes are longer in mixtures of K⁺ and Ca²⁺. This "synergistic" effect depends sensitively on the Lennard-Jones interaction parameters between Ca²⁺ and the phosphorus oxygen and correlates with the hydration of the clusters. The pronounced MMP²⁻ clustering effect of Ca²⁺ in the presence of K⁺ is confirmed by Fourier transform infrared spectroscopy. The characterization of the cation-dependent clustering of MMP²⁻ provides a starting point for understanding cation-dependent clustering of phosphoinositides in cell membranes.

  18. Information jet: Handling noisy big data from weakly disconnected network

    NASA Astrophysics Data System (ADS)

    Aurongzeb, Deeder

    Sudden aggregation (an information jet) of a large amount of data is ubiquitous around connected social networks, driven by sudden interacting and non-interacting events, network security threat attacks, online sales channels, etc. Clustering of information jets based on time series analysis and graph theory is not new, but little work has been done to connect them with particle jet statistics. We show that pre-clustering based on context can eliminate soft networks or networks of information, which is critical to minimizing the time to calculate results from noisy big data. We show the difference between stochastic gradient boosting and time series-graph clustering. For disconnected higher-dimensional information jets, we use the Kallenberg representation theorem (Kallenberg, 2005, arXiv:1401.1137) to identify and eliminate jet similarities from dense or sparse graphs.

  19. Fuzzy Intervals for Designing Structural Signature: An Application to Graphic Symbol Recognition

    NASA Astrophysics Data System (ADS)

    Luqman, Muhammad Muzzamil; Delalandre, Mathieu; Brouard, Thierry; Ramel, Jean-Yves; Lladós, Josep

    The motivation behind our work is to present a new methodology for symbol recognition. The proposed method employs a structural approach for representing visual associations in symbols and a statistical classifier for recognition. We vectorize a graphic symbol, encode its topological and geometrical information by an attributed relational graph, and compute a signature from this structural graph. We address the sensitivity of structural representations to noise by using data-adapted fuzzy intervals. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set. The Bayesian network is deployed in a supervised learning scenario for recognizing query symbols. The method has been evaluated for robustness against degradations and deformations on pre-segmented 2D linear architectural and electronic symbols from the GREC databases, and for its recognition abilities on symbols with context noise, i.e., cropped symbols.

  20. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.

  1. Statistical mechanics of high-density bond percolation

    NASA Astrophysics Data System (ADS)

    Timonin, P. N.

    2018-05-01

    High-density (HD) percolation describes the percolation of specific κ-clusters, which are compact sets of sites each connected to at least κ nearest filled sites. It takes place in the classical patterns of independently distributed sites or bonds in which the ordinary percolation transition also exists. Hence, the study of a series of κ-type HD percolations amounts to a description of the structure of classical clusters, for which κ-clusters constitute κ-cores nested one into another. Such data are needed for the description of a number of physical, biological, and information properties of complex systems on random lattices, graphs, and networks. These range from magnetic properties of semiconductor alloys to anomalies in supercooled water and clustering in biological and social networks. Here we present a statistical mechanics approach to study HD bond percolation on an arbitrary graph. It is shown that the generating function for the κ-clusters' size distribution can be obtained from the partition function of a specific q-state Potts-Ising model in the q → 1 limit. Using this approach we find exact κ-clusters' size distributions for the Bethe lattice and the Erdős-Rényi graph. The application of the method to Euclidean lattices is also discussed.
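
    The nested κ-core picture can also be explored numerically. The sketch below bond-percolates an Erdős-Rényi graph and measures κ-core component sizes with networkx, as a rough empirical companion to the exact results; all parameters are illustrative.

    ```python
    import random
    import networkx as nx

    random.seed(0)
    n, mean_deg, p_bond = 10000, 4, 0.6
    G = nx.gnp_random_graph(n, mean_deg / n, seed=42)

    # Bond percolation: keep each edge independently with probability p_bond.
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(e for e in G.edges if random.random() < p_bond)

    for kappa in (1, 2, 3):
        core = nx.k_core(H, k=kappa)  # every surviving site keeps >= kappa links
        sizes = sorted((len(c) for c in nx.connected_components(core)), reverse=True)
        print(kappa, sizes[:3])       # largest kappa-cluster sizes
    ```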

  2. A coherent graph-based semantic clustering and summarization approach for biomedical literature and a new summarization evaluation method.

    PubMed

    Yoo, Illhoi; Hu, Xiaohua; Song, Il-Yeol

    2007-11-27

    A huge amount of biomedical textual information has been produced and collected in MEDLINE for decades. In order to easily utilize biomedical information in free text, document clustering and text summarization together are used as a solution to the text information overload problem. In this paper, we introduce a coherent graph-based semantic clustering and summarization approach for biomedical literature. Our extensive experimental results show that the approach achieves a 45% cluster quality improvement and a 72% clustering reliability improvement, in terms of the misclassification index, over Bisecting K-means, a leading document clustering approach. In addition, our approach provides concise but rich text summaries in key concepts and sentences. Our coherent biomedical literature clustering and summarization approach, which takes advantage of ontology-enriched graphical representations, significantly improves the quality of document clusters and the understandability of documents through summaries.

  3. A coherent graph-based semantic clustering and summarization approach for biomedical literature and a new summarization evaluation method

    PubMed Central

    Yoo, Illhoi; Hu, Xiaohua; Song, Il-Yeol

    2007-01-01

    Background A huge amount of biomedical textual information has been produced and collected in MEDLINE for decades. In order to easily utilize biomedical information in free text, document clustering and text summarization together are used as a solution to the text information overload problem. In this paper, we introduce a coherent graph-based semantic clustering and summarization approach for biomedical literature. Results Our extensive experimental results show that the approach achieves a 45% cluster quality improvement and a 72% clustering reliability improvement, in terms of the misclassification index, over Bisecting K-means, a leading document clustering approach. In addition, our approach provides concise but rich text summaries in key concepts and sentences. Conclusion Our coherent biomedical literature clustering and summarization approach, which takes advantage of ontology-enriched graphical representations, significantly improves the quality of document clusters and the understandability of documents through summaries. PMID:18047705

  4. How mutation affects evolutionary games on graphs

    PubMed Central

    Allen, Benjamin; Traulsen, Arne; Tarnita, Corina E.; Nowak, Martin A.

    2011-01-01

    Evolutionary dynamics are affected by population structure, mutation rates and update rules. Spatial or network structure facilitates the clustering of strategies, which represents a mechanism for the evolution of cooperation. Mutation dilutes this effect. Here we analyze how mutation influences evolutionary clustering on graphs. We introduce new mathematical methods to evolutionary game theory, specifically the analysis of coalescing random walks via generating functions. These techniques allow us to derive exact identity-by-descent (IBD) probabilities, which characterize spatial assortment on lattices and Cayley trees. From these IBD probabilities we obtain exact conditions for the evolution of cooperation and other game strategies, showing the dual effects of graph topology and mutation rate. High mutation rates diminish the clustering of cooperators, hindering their evolutionary success. Our model can represent either genetic evolution with mutation, or social imitation processes with random strategy exploration. PMID:21473871

  5. Bayesian hierarchical models for cost-effectiveness analyses that use data from cluster randomized trials.

    PubMed

    Grieve, Richard; Nixon, Richard; Thompson, Simon G

    2010-01-01

    Cost-effectiveness analyses (CEA) may be undertaken alongside cluster randomized trials (CRTs) where randomization is at the level of the cluster (for example, the hospital or primary care provider) rather than the individual. Costs (and outcomes) within clusters may be correlated so that the assumption made by standard bivariate regression models, that observations are independent, is incorrect. This study develops a flexible modeling framework to acknowledge the clustering in CEA that use CRTs. The authors extend previous Bayesian bivariate models for CEA of multicenter trials to recognize the specific form of clustering in CRTs. They develop new Bayesian hierarchical models (BHMs) that allow mean costs and outcomes, and also variances, to differ across clusters. They illustrate how each model can be applied using data from a large (1732 cases, 70 primary care providers) CRT evaluating alternative interventions for reducing postnatal depression. The analyses compare cost-effectiveness estimates from BHMs with standard bivariate regression models that ignore the data hierarchy. The BHMs show high levels of cost heterogeneity across clusters (intracluster correlation coefficient, 0.17). Compared with standard regression models, the BHMs yield substantially increased uncertainty surrounding the cost-effectiveness estimates, and altered point estimates. The authors conclude that ignoring clustering can lead to incorrect inferences. The BHMs that they present offer a flexible modeling framework that can be applied more generally to CEA that use CRTs.
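
    The role of the intracluster correlation is easy to see in a toy simulation: costs share a cluster-level random effect, and the ICC is the share of variance sitting between clusters. The variance parameters below are chosen only so that the theoretical ICC lands near the 0.17 reported above; everything else is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_clusters, n_per = 70, 25
    sigma_between, sigma_within = 200.0, 450.0

    cluster_effect = rng.normal(0, sigma_between, n_clusters)
    costs = 1000 + cluster_effect[:, None] \
        + rng.normal(0, sigma_within, (n_clusters, n_per))

    icc = sigma_between**2 / (sigma_between**2 + sigma_within**2)
    print(f"theoretical ICC = {icc:.2f}")   # ~0.17

    # Empirical ICC via the one-way ANOVA estimator.
    m = costs.mean(axis=1)
    msb = n_per * m.var(ddof=1)
    msw = costs.var(axis=1, ddof=1).mean()
    print((msb - msw) / (msb + (n_per - 1) * msw))
    ```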

  6. Graph Theory and Its Application in Educational Research: A Review and Integration.

    ERIC Educational Resources Information Center

    Tatsuoka, Maurice M.

    1986-01-01

    A nontechnical exposition of graph theory is presented, followed by survey of the literature on applications of graph theory in research in education and related disciplines. Applications include order-theoretic studies of the dimensionality of data sets, the investigation of hierarchical structures in various domains, and cluster analysis.…

  7. Solving the scalability issue in quantum-based refinement: Q|R#1.

    PubMed

    Zheng, Min; Moriarty, Nigel W; Xu, Yanting; Reimers, Jeffrey R; Afonine, Pavel V; Waller, Mark P

    2017-12-01

    Accurately refining biomacromolecules using a quantum-chemical method is challenging because the cost of a quantum-chemical calculation scales approximately as n^m, where n is the number of atoms and m (≥3) is based on the quantum method of choice. This fundamental problem means that quantum-chemical calculations become intractable when the size of the system requires more computational resources than are available. In the development of the software package called Q|R, this issue is referred to as Q|R#1. A divide-and-conquer approach has been developed that fragments the atomic model into small manageable pieces in order to solve Q|R#1. Firstly, the atomic model of a crystal structure is analyzed to detect noncovalent interactions between residues, and the results of the analysis are represented as an interaction graph. Secondly, a graph-clustering algorithm is used to partition the interaction graph into a set of clusters in such a way as to minimize disruption to the noncovalent interaction network. Thirdly, the environment surrounding each individual cluster is analyzed and any residue that is interacting with a particular cluster is assigned to the buffer region of that particular cluster. A fragment is defined as a cluster plus its buffer region. The gradients for all atoms from each of the fragments are computed, and only the gradients from each cluster are combined to create the total gradients. A quantum-based refinement is carried out using the total gradients as chemical restraints. In order to validate this interaction graph-based fragmentation approach in Q|R, the entire atomic model of an amyloid cross-β spine crystal structure (PDB entry 2oNA) was refined.
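
    A rough sketch of the cluster-plus-buffer fragmentation idea, using networkx community detection as a stand-in for the paper's graph-clustering step and a toy geometric graph in place of a real residue-interaction graph:

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.random_geometric_graph(200, 0.12, seed=3)  # residues and contacts

    clusters = [set(c) for c in greedy_modularity_communities(G)]
    fragments = []
    for c in clusters:
        # Buffer region: residues directly interacting with the cluster.
        buffer = {n for v in c for n in G.neighbors(v)} - c
        fragments.append(c | buffer)  # fragment = cluster + buffer

    print(len(fragments), [len(f) for f in fragments[:5]])
    ```

    Gradients would then be computed per fragment but kept only for cluster atoms, which is what keeps the pieces consistent when they are recombined.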

  8. Nearest neighbor-density-based clustering methods for large hyperspectral images

    NASA Astrophysics Data System (ADS)

    Cariou, Claude; Chehdi, Kacem

    2017-10-01

    We address the problem of hyperspectral image (HSI) pixel partitioning using nearest-neighbor density-based (NN-DB) clustering methods. NN-DB methods are able to cluster objects without specifying the number of clusters to be found. Within the NN-DB approach, we focus on deterministic methods, e.g. ModeSeek, knnClust, and GWENN (standing for Graph WatershEd using Nearest Neighbors). These methods only require the availability of a k-nearest neighbor (kNN) graph based on a given distance metric. Recently, a new DB clustering method, called Density Peak Clustering (DPC), has received much attention, and kNN versions of it have quickly followed and shown their efficiency. However, NN-DB methods still suffer from the difficulty of obtaining the kNN graph due to the quadratic complexity with respect to the number of pixels. This is why GWENN was embedded into a multiresolution (MR) scheme to bypass the computation of the full kNN graph over the image pixels. In this communication, we propose to extend the MR-GWENN scheme in three respects. Firstly, similarly to knnClust, the original labeling rule of GWENN is modified to account for local density values, in addition to the labels of previously processed objects. Secondly, we set up a modified NN search procedure within the MR scheme, in order to stabilize the number of clusters found from the coarsest to the finest spatial resolution. Finally, we show that these extensions can be easily adapted to the three other NN-DB methods (ModeSeek, knnClust, knnDPC) for pixel clustering in large HSIs. Experiments are conducted to compare the four NN-DB methods for pixel clustering in HSIs. We show that NN-DB methods can outperform a classical clustering method such as fuzzy c-means (FCM), in terms of classification accuracy, relevance of found clusters, and clustering speed. Finally, we demonstrate the feasibility and evaluate the performance of NN-DB methods on a very large image acquired by our AISA Eagle hyperspectral imaging sensor.
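
    For intuition, a compact mode-seeking NN-DB clustering can be written from a kNN graph alone: estimate a density per point, link each point to the densest point among its neighbours, and follow the links to their fixed points. This is loosely modelled on the ModeSeek-style rule described above, not the authors' code; k and the data are illustrative.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_modeseek(X, k=20):
        """Cluster without a preset cluster count: each point links to the
        highest-density point among its kNN; link fixed points are modes."""
        dist, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
        density = 1.0 / (dist[:, -1] + 1e-12)  # inverse k-th NN distance
        parent = idx[np.arange(len(X)), np.argmax(density[idx], axis=1)]
        for _ in range(50):                    # follow links to their modes
            parent = parent[parent]
        modes, labels = np.unique(parent, return_inverse=True)
        return labels, modes

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
    labels, modes = knn_modeseek(X)
    print(len(modes))  # number of clusters found, not specified in advance
    ```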

  9. An overview of the essential differences and similarities of system identification techniques

    NASA Technical Reports Server (NTRS)

    Mehra, Raman K.

    1991-01-01

    Information is given in the form of outlines, graphs, tables and charts. Topics include system identification, Bayesian statistical decision theory, Maximum Likelihood Estimation, identification methods, structural mode identification using a stochastic realization algorithm, and identification results regarding membrane simulations and X-29 flutter flight test data.

  10. Wedge sampling for computing clustering coefficients and triangle counts on large graphs

    DOE PAGES

    Seshadhri, C.; Pinar, Ali; Kolda, Tamara G.

    2014-05-08

    Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Despite the importance of these triadic measures, algorithms to compute them can be extremely expensive. We discuss the method of wedge sampling. This versatile technique allows for the fast and accurate approximation of various types of clustering coefficients and triangle counts. Furthermore, these techniques are extensible to counting directed triangles in digraphs. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state of the art, while providing nearly the accuracy of full enumeration.
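
    The core of wedge sampling is compact enough to sketch: pick wedge centres with probability proportional to their wedge counts, pick a random neighbour pair, and report the fraction of sampled wedges that are closed, which estimates the global clustering coefficient. The graph and sample size below are illustrative.

    ```python
    import random
    import networkx as nx

    def wedge_sampling_gcc(G, n_samples=5000, seed=0):
        """Estimate the global clustering coefficient by wedge sampling."""
        rng = random.Random(seed)
        nodes = [v for v in G if G.degree(v) >= 2]
        weights = [G.degree(v) * (G.degree(v) - 1) // 2 for v in nodes]
        closed = 0
        for _ in range(n_samples):
            v = rng.choices(nodes, weights=weights)[0]  # centre of a wedge
            u, w = rng.sample(list(G.neighbors(v)), 2)  # the wedge's endpoints
            closed += G.has_edge(u, w)                  # is the wedge closed?
        return closed / n_samples

    G = nx.watts_strogatz_graph(5000, 10, 0.05, seed=1)
    print(wedge_sampling_gcc(G))  # sampled estimate
    print(nx.transitivity(G))     # exact value, for comparison
    ```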

  11. Potential of SNP markers for the characterization of Brazilian cassava germplasm.

    PubMed

    de Oliveira, Eder Jorge; Ferreira, Cláudia Fortes; da Silva Santos, Vanderlei; de Jesus, Onildo Nunes; Oliveira, Gilmara Alvarenga Fachardo; da Silva, Maiane Suzarte

    2014-06-01

    High-throughput markers, such as SNPs, along with different methodologies were used to evaluate the applicability of the Bayesian approach and multivariate analysis for structuring the genetic diversity in cassava. The objective of the present work was to evaluate the diversity and genetic structure of the largest cassava germplasm bank in Brazil. Complementary methodological approaches such as discriminant analysis of principal components (DAPC), Bayesian analysis and analysis of molecular variance (AMOVA) were used to understand the structure and diversity of 1,280 accessions genotyped using 402 single nucleotide polymorphism markers. The genetic diversity (0.327) and the average observed heterozygosity (0.322) were high considering the bi-allelic markers. At the population level, a complex genetic structure was observed, indicating the formation of 30 clusters by DAPC and 34 clusters by Bayesian analysis. Both methodologies presented difficulties and controversies in terms of the allocation of some accessions to specific clusters. However, the clusters suggested by the DAPC analysis seemed to be more consistent, as they presented a higher probability of allocation of the accessions within the clusters. Prior information related to breeding patterns and geographic origins of the accessions was not sufficient to provide clear differentiation between the clusters according to the AMOVA. In contrast, the F_ST was maximized when considering the clusters suggested by the Bayesian and DAPC analyses. The high frequency of germplasm exchange between producers and the subsequent renaming of the same material may be one of the causes of the low association between genetic diversity and geographic origin. The results of this study may benefit cassava germplasm conservation programs and contribute to the maximization of genetic gains in breeding programs.

  12. Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.

    ERIC Educational Resources Information Center

    Goetschel, Roy; Voxman, William

    1987-01-01

    Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
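
    A small networkx sketch of the first method: build a maximal weighted spanning forest, drop forest edges below a threshold, and read the partition off the surviving components. The edge weights and threshold are made up.

    ```python
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([
        ("a", "b", 0.9), ("b", "c", 0.8), ("c", "d", 0.2),
        ("d", "e", 0.85), ("e", "f", 0.7), ("a", "c", 0.3),
    ])
    forest = nx.maximum_spanning_tree(G)  # maximal weighted spanning forest
    threshold = 0.5
    forest.remove_edges_from([(u, v) for u, v, w in forest.edges(data="weight")
                              if w < threshold])
    # Expected partition: {'a','b','c'} and {'d','e','f'}.
    print(list(nx.connected_components(forest)))
    ```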

  13. Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis

    DTIC Science & Technology

    2015-01-01

    The graph-based clustering algorithms we propose improve time efficiency significantly for large-scale datasets. In the last chapter, we also propose an incremental reseeding approach; applications include plume detection in hyperspectral video data.

  14. Bayesian networks improve causal environmental assessments for evidence-based policy.

    EPA Pesticide Factsheets

    Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, sources of uncertainty in conventional weight of evidence approaches are ignored that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improve the ability of models to incorporate strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on valued ecological resources.

  15. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  16. Unsupervised active learning based on hierarchical graph-theoretic clustering.

    PubMed

    Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve

    2009-10-01

    Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples.

  17. Algebraic approach to small-world network models

    NASA Astrophysics Data System (ADS)

    Rudolph-Lilith, Michelle; Muller, Lyle E.

    2014-01-01

    We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
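
    As a numerical companion to the analytic treatment (which handles the directed case), the undirected clustering coefficient and mean path length of a Watts-Strogatz graph can be checked directly with networkx; the sizes and parameters here are arbitrary.

    ```python
    import networkx as nx

    G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)
    R = nx.gnm_random_graph(1000, G.number_of_edges(), seed=0)
    R = R.subgraph(max(nx.connected_components(R), key=len))

    for name, H in (("small-world", G), ("random", R)):
        print(name,
              round(nx.average_clustering(H), 3),
              round(nx.average_shortest_path_length(H), 3))
    ```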

  18. Detecting cancer clusters in a regional population with local cluster tests and Bayesian smoothing methods: a simulation study

    PubMed Central

    2013-01-01

    Background There is a rising public and political demand for prospective cancer cluster monitoring. But there is little empirical evidence on the performance of established cluster detection tests under conditions of small and heterogeneous sample sizes and varying spatial scales, such as are the case for most existing population-based cancer registries. Therefore this simulation study aims to evaluate different cluster detection methods, implemented in the open source environment R, in their ability to identify clusters of lung cancer using real-life data from an epidemiological cancer registry in Germany. Methods Risk surfaces were constructed with two different spatial cluster types, representing a relative risk of RR = 2.0 or of RR = 4.0, in relation to the overall background incidence of lung cancer, separately for men and women. Lung cancer cases were sampled from this risk surface as geocodes using an inhomogeneous Poisson process. The realisations of the cancer cases were analysed within small spatial scales (census tracts, N = 1983) and within aggregated large spatial scales (communities, N = 78). Subsequently, they were submitted to the cluster detection methods. The test accuracy for cluster location was determined in terms of detection rates (DR), false-positive (FP) rates and positive predictive values. The Bayesian smoothing models were evaluated using ROC curves. Results With a moderate risk increase (RR = 2.0), local cluster tests showed better DR (for both spatial aggregation scales > 0.90) and lower FP rates (both < 0.05) than the Bayesian smoothing methods. When the cluster RR was raised four-fold, the local cluster tests showed better DR with lower FPs only for the small spatial scale. At the large spatial scale, the Bayesian smoothing methods, especially those implementing a spatial neighbourhood, showed a substantially lower FP rate than the cluster tests. However, the risk increases at this scale were mostly diluted by data aggregation. Conclusion High-resolution spatial scales seem more appropriate as a data basis for cancer cluster testing and monitoring than the commonly used aggregated scales. We suggest the development of a two-stage approach that combines methods with high detection rates as a first-line screening with methods of higher predictive ability at the second stage. PMID:24314148

  19. Confidence of compliance: a Bayesian approach for percentile standards.

    PubMed

    McBride, G B; Ellis, J C

    2001-04-01

    Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
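
    The Bayesian calculation itself is short: with a Beta(a, b) prior on the true exceedance probability and k exceedances observed in n samples, the posterior is Beta(a + k, b + n - k), and the confidence of compliance is the posterior mass below the allowed exceedance rate. A sketch with Jeffreys' prior (a = b = 0.5) and made-up numbers:

    ```python
    from scipy.stats import beta

    def confidence_of_compliance(k, n, p0=0.05, a=0.5, b=0.5):
        """P(exceedance probability <= p0 | k exceedances in n samples)."""
        return beta.cdf(p0, a + k, b + n - k)

    # e.g. a 95th-percentile standard (p0 = 0.05), 2 exceedances in 50 samples
    print(confidence_of_compliance(k=2, n=50))
    ```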

  20. GraphTeams: a method for discovering spatial gene clusters in Hi-C sequencing data.

    PubMed

    Schulz, Tizian; Stoye, Jens; Doerr, Daniel

    2018-05-08

    Hi-C sequencing offers novel, cost-effective means to study the spatial conformation of chromosomes. We use data obtained from Hi-C experiments to provide new evidence for the existence of spatial gene clusters. These are sets of genes with associated functionality that exhibit close proximity to each other in the spatial conformation of chromosomes across several related species. We present the first gene cluster model capable of handling spatial data. Our model generalizes a popular computational model for gene cluster prediction, called δ-teams, from sequences to graphs. Following previous lines of research, we subsequently extend our model to allow for several vertices being associated with the same label. The resulting model, called δ-teams with families, is particularly suitable for our application as it enables the handling of gene duplicates. We develop algorithmic solutions for both models. We implemented the algorithm for discovering δ-teams with families and integrated it into a fully automated workflow for discovering gene clusters in Hi-C data, called GraphTeams. We applied it to human and mouse data to find intra- and interchromosomal gene cluster candidates. The results include intrachromosomal clusters that seem to exhibit a closer proximity in space than on their chromosomal DNA sequence. We further discovered interchromosomal gene clusters that contain genes from different chromosomes within the human genome, but are located on a single chromosome in mouse. By identifying δ-teams with families, we provide a flexible model to discover gene cluster candidates in Hi-C data. Our analysis of Hi-C data from human and mouse reveals several known gene clusters (thus validating our approach), but also a few sparsely studied or possibly unknown gene cluster candidates that could be the source of further experimental investigation.

  1. Assessment of semantic similarity between proteins using information content and topological properties of the Gene Ontology graph.

    PubMed

    Dutta, Pritha; Basu, Subhadip; Kundu, Mahantapas

    2017-03-31

    The semantic similarity between two interacting proteins can be estimated by combining the similarity scores of the GO terms associated with the proteins. A greater number of similar GO annotations between two proteins indicates greater interaction affinity. Existing semantic similarity measures make use of the GO graph structure, the information content of GO terms, or a combination of both. In this paper, we present a hybrid approach which utilizes both the topological features of the GO graph and the information contents of the GO terms. More specifically, we 1) consider a fuzzy clustering of the GO graph based on the level of association of the GO terms, 2) estimate the GO term memberships to each cluster center based on the respective shortest path lengths, and 3) assign weights to GO term pairs on the basis of their dissimilarity with respect to the cluster centers. We test the performance of our semantic similarity measure against seven other previously published similarity measures using benchmark protein-protein interaction datasets of Homo sapiens and Saccharomyces cerevisiae, based on sequence similarity, Pfam similarity, area under the ROC curve and the F1 measure.

  2. A Single Session of rTMS Enhances Small-Worldness in Writer's Cramp: Evidence from Simultaneous EEG-fMRI Multi-Modal Brain Graph.

    PubMed

    Bharath, Rose D; Panda, Rajanikant; Reddam, Venkateswara Reddy; Bhaskar, M V; Gohel, Suril; Bhardwaj, Sujas; Prajapati, Arvind; Pal, Pramod Kumar

    2017-01-01

    Background and Purpose: Repetitive transcranial magnetic stimulation (rTMS) induces widespread changes in brain connectivity. As the network topology differences induced by a single session of rTMS are less known, we undertook this study to ascertain whether the network alterations had a small-world morphology, using multi-modal graph theory analysis of simultaneous EEG-fMRI. Method: Simultaneous EEG-fMRI was acquired in duplicate before (R1) and after (R2) a single session of rTMS in 14 patients with Writer's Cramp (WC). Whole-brain neuronal and hemodynamic network connectivity were explored using graph theory measures, and the clustering coefficient, path length and small-world index were calculated for EEG and resting-state fMRI (rsfMRI). Multi-modal graph theory analysis was used to evaluate the correlation of EEG and fMRI clustering coefficients. Result: A single session of rTMS was found to increase the clustering coefficient and small-worldness significantly in both EEG and fMRI (p < 0.05). Multi-modal graph theory analysis revealed significant modulations in the fronto-parietal regions immediately after rTMS. The rsfMRI revealed additional modulations in several deep brain regions including the cerebellum, insula and medial frontal lobe. Conclusion: Multi-modal graph theory analysis of simultaneous EEG-fMRI can supplement motor physiology methods in understanding the neurobiology of rTMS in vivo. Coinciding evidence from EEG and rsfMRI shows small-world morphology for the acute-phase network hyper-connectivity, indicating that the changes ensuing from low-frequency rTMS are probably not "noise".
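
    For reference, the small-world index used in studies of this kind is commonly sigma = (C / C_rand) / (L / L_rand), with values above 1 indicating small-world topology. A minimal estimate against a single density-matched random graph (networkx also ships a fuller nx.sigma) might look like this; the stand-in graph is illustrative:

    ```python
    import networkx as nx

    def small_world_index(G, seed=0):
        """Sigma from one density-matched random reference graph."""
        R = nx.gnm_random_graph(len(G), G.number_of_edges(), seed=seed)
        R = R.subgraph(max(nx.connected_components(R), key=len))
        C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)
        Cr, Lr = nx.average_clustering(R), nx.average_shortest_path_length(R)
        return (C / Cr) / (L / Lr)

    G = nx.connected_watts_strogatz_graph(200, 8, 0.1, seed=1)  # stand-in network
    print(small_world_index(G))
    ```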

  3. GrouseFlocks: steerable exploration of graph hierarchy space.

    PubMed

    Archambault, Daniel; Munzner, Tamara; Auber, David

    2008-01-01

    Several previous systems allow users to interactively explore a large input graph through cuts of a superimposed hierarchy. This hierarchy is often created using clustering algorithms or topological features present in the graph. However, many graphs have domain-specific attributes associated with the nodes and edges, which could be used to create many possible hierarchies providing unique views of the input graph. GrouseFlocks is a system for the exploration of this graph hierarchy space. By allowing users to see several different possible hierarchies on the same graph, the system helps users investigate graph hierarchy space instead of a single fixed hierarchy. GrouseFlocks provides a simple set of operations so that users can create and modify their graph hierarchies based on selections. These selections can be made manually or based on patterns in the attribute data provided with the graph. It provides feedback to the user within seconds, allowing interactive exploration of this space.

  4. Network Analysis Tools: from biological networks to clusters and pathways.

    PubMed

    Brohée, Sylvain; Faust, Karoline; Lima-Mendez, Gipsi; Vanderstocken, Gilles; van Helden, Jacques

    2008-01-01

    Network Analysis Tools (NeAT) is a suite of computer tools that integrate various algorithms for the analysis of biological networks: comparison between graphs, between clusters, or between graphs and clusters; network randomization; analysis of degree distribution; network-based clustering and path finding. The tools are interconnected to enable a stepwise analysis of the network through a complete analytical workflow. In this protocol, we present a typical case of utilization, where the tasks above are combined to decipher a protein-protein interaction network retrieved from the STRING database. The results returned by NeAT are typically subnetworks, networks enriched with additional information (i.e., clusters or paths) or tables displaying statistics. Typical networks comprising several thousands of nodes and arcs can be analyzed within a few minutes. The complete protocol can be read and executed in approximately 1 h.

  5. Impaired cerebral blood flow networks in temporal lobe epilepsy with hippocampal sclerosis: A graph theoretical approach.

    PubMed

    Sone, Daichi; Matsuda, Hiroshi; Ota, Miho; Maikusa, Norihide; Kimura, Yukio; Sumida, Kaoru; Yokoyama, Kota; Imabayashi, Etsuko; Watanabe, Masako; Watanabe, Yutaka; Okazaki, Mitsutoshi; Sato, Noriko

    2016-09-01

    Graph theory is an emerging method to investigate brain networks. Altered cerebral blood flow (CBF) has frequently been reported in temporal lobe epilepsy (TLE), but graph theoretical findings on CBF are poorly understood. Here, we explored graph theoretical networks of CBF in TLE using arterial spin labeling imaging. We recruited patients with TLE and unilateral hippocampal sclerosis (HS) (19 patients with left TLE, and 21 with right TLE) and 20 gender- and age-matched healthy control subjects. We obtained all participants' CBF maps using pseudo-continuous arterial spin labeling and analyzed them using the Graph Analysis Toolbox (GAT) software program. As a result, compared to the controls, the patients with left TLE showed a significantly lower clustering coefficient (p=0.024), local efficiency (p=0.001), and global efficiency (p=0.010), and higher transitivity (p=0.015), whereas the patients with right TLE showed significantly higher assortativity (p=0.046) and transitivity (p=0.011). The group with right TLE also had high characteristic path length values (p=0.085), low global efficiency (p=0.078), and low resilience to targeted attack (p=0.101) at a trend level. A lower normalized clustering coefficient (p=0.081) in the left TLE group and a higher normalized characteristic path length (p=0.089) in the right TLE group were also found at a trend level. Both the patients with left and right TLE showed significantly decreased clustering in similar areas, i.e., the cingulate gyri, precuneus, and occipital lobe. Our findings revealed differing left-right network metrics, suggesting an inefficient CBF network in left TLE and vulnerability to irritation in right TLE. The left-right common finding of regionally decreased clustering might reflect impaired default-mode networks in TLE.

  6. Learning Bayesian Networks from Correlated Data

    NASA Astrophysics Data System (ADS)

    Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola

    2016-05-01

    Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.

  7. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.

  8. SpectralNET – an application for spectral graph analysis and visualization

    PubMed Central

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-01-01

    Background Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request. PMID:16236170

  9. SpectralNET--an application for spectral graph analysis and visualization.

    PubMed

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-10-19

    Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.
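
    The "Laplacian eigenvectors" view described above is straightforward to reproduce: embed the graph using the low nontrivial eigenvectors of its normalized Laplacian. A sketch, with a stock networkx example graph standing in for an uploaded network:

    ```python
    import numpy as np
    import networkx as nx

    G = nx.karate_club_graph()                  # stand-in for an uploaded network
    L = nx.normalized_laplacian_matrix(G).toarray()
    evals, evecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    coords = evecs[:, 1:3]                      # skip the trivial eigenvector
    print(coords[:5])                           # 2-D spectral embedding

    # Per-vertex metrics of the kind such tools report:
    print(dict(G.degree())[0], nx.clustering(G, 0))
    ```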

  10. Bayesian Networks Improve Causal Environmental Assessments for Evidence-Based Policy.

    PubMed

    Carriger, John F; Barron, Mace G; Newman, Michael C

    2016-12-20

    Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, sources of uncertainty in conventional weight of evidence approaches are ignored that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improve the ability of models to incorporate strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on valued ecological resources. These aspects are demonstrated through hypothetical problem scenarios that explore some major benefits of using Bayesian networks for reasoning and making inferences in evidence-based policy.

  11. Efficient computation of k-Nearest Neighbour Graphs for large high-dimensional data sets on GPU clusters.

    PubMed

    Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M

    2013-01-01

    This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large, high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensionality into the realm of practical possibility.
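
    As a rough illustration of what k-NNG construction involves, here is a minimal single-machine sketch in Python/NumPy (names are hypothetical); the paper's actual contribution, the multi-GPU and cluster-level parallelization, is not reproduced here.

    ```python
    # Brute-force exact k-NNG on one CPU: a baseline sketch, not the paper's
    # multi-GPU implementation.
    import numpy as np

    def knn_graph(X, k):
        """Return an (n, k) array of nearest-neighbour indices for each row of X."""
        # Squared Euclidean distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2*a.b
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
        np.fill_diagonal(d2, np.inf)               # exclude self-matches
        # argpartition selects the k smallest distances per row without a full sort
        return np.argpartition(d2, k, axis=1)[:, :k]

    X = np.random.rand(1000, 64)                   # toy stand-in for image features
    neighbours = knn_graph(X, k=10)
    ```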

  12. SOMBI: Bayesian identification of parameter relations in unstructured cosmological data

    NASA Astrophysics Data System (ADS)

    Frank, Philipp; Jasche, Jens; Enßlin, Torsten A.

    2016-11-01

    This work describes the implementation and application of a correlation determination method based on self-organizing maps and Bayesian inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by identifying data clusters in high-dimensional datasets via the self-organizing map neural network algorithm. Parameter relations are then revealed by means of Bayesian inference within the respective identified data clusters. Specifically, such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach results in a posterior probability distribution function for the respective polynomial coefficients. To decide which polynomial order suffices to describe the correlation structures in the data, we add a method for model selection, the Bayesian information criterion, to the analysis. The performance of the SOMBI algorithm is tested with mock data. As an illustration we also provide applications of our method to cosmological data. In particular, we present results of a correlation analysis between galaxy and active galactic nucleus (AGN) properties provided by the SDSS catalog with the cosmic large-scale structure (LSS). The results indicate that the combined galaxy and LSS dataset indeed is clustered into several sub-samples of data with different average properties (for example different stellar masses or web-type classifications). The majority of data clusters appear to have a similar correlation structure between galaxy properties and the LSS. In particular, we revealed a positive and linear dependency between the stellar mass, the absolute magnitude, and the color of a galaxy and the corresponding cosmic density field. A remaining subset of data shows inverted correlations, which might be an artifact of non-linear redshift distortions.
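
    The model-selection step lends itself to a small worked sketch. The following Python snippet (hypothetical variable names; SOMBI's SOM clustering and full Bayesian posterior are not reproduced) fits polynomials of increasing order and keeps the one minimizing the Bayesian information criterion.

    ```python
    # BIC-based polynomial order selection: a simplified stand-in for the
    # model-selection step described above.
    import numpy as np

    def bic_poly_order(x, y, max_order=5):
        n = len(x)
        best = None
        for m in range(max_order + 1):
            coeffs = np.polyfit(x, y, m)
            resid = y - np.polyval(coeffs, x)
            sigma2 = np.mean(resid**2)
            # BIC = n ln(RSS/n) + k ln(n), with k = m + 1 coefficients
            bic = n * np.log(sigma2) + (m + 1) * np.log(n)
            if best is None or bic < best[0]:
                best = (bic, m, coeffs)
        return best

    x = np.linspace(0.0, 1.0, 200)
    y = 2.0 * x - 1.0 + 0.1 * np.random.randn(200)   # mock linear relation
    bic, order, coeffs = bic_poly_order(x, y)        # should prefer order 1
    ```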

  13. Individual classification of Alzheimer's disease with diffusion magnetic resonance imaging.

    PubMed

    Schouten, Tijn M; Koini, Marisa; Vos, Frank de; Seiler, Stephan; Rooij, Mark de; Lechner, Anita; Schmidt, Reinhold; Heuvel, Martijn van den; Grond, Jeroen van der; Rombouts, Serge A R B

    2017-05-15

    Diffusion magnetic resonance imaging (MRI) is a powerful non-invasive method to study white matter integrity, and is sensitive to differences in Alzheimer's disease (AD) patients. Diffusion MRI may be able to contribute towards reliable diagnosis of AD. We used diffusion MRI to classify AD patients (N=77) and controls (N=173). We used different methods to extract information from the diffusion MRI data. First, we used the voxel-wise diffusion tensor measures that had been skeletonised using tract-based spatial statistics. Second, we clustered the voxel-wise diffusion measures with independent component analysis (ICA), and extracted the mixing weights. Third, we determined structural connectivity between Harvard-Oxford atlas regions with probabilistic tractography, as well as graph measures based on these structural connectivity graphs. Classification performance for voxel-wise measures ranged between an AUC of 0.888 and 0.902. The ICA-clustered measures ranged between an AUC of 0.893 and 0.920. The AUC for the structural connectivity graph was 0.900, while graph measures based upon this graph ranged between an AUC of 0.531 and 0.840. All measures combined with a sparse group lasso resulted in an AUC of 0.896. Overall, fractional anisotropy clustered into ICA components was the best performing measure. These findings may be useful for future incorporation of diffusion MRI into protocols for AD classification, or as a starting point for early detection of AD using diffusion MRI. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Local dependence in random graph models: characterization, properties and statistical inference

    PubMed Central

    Schweinberger, Michael; Handcock, Mark S.

    2015-01-01

    Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142

  15. DTS: Building custom, intelligent schedulers

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1994-01-01

    DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.

  16. Decision net, directed graph, and neural net processing of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki; Barnard, Etienne

    1989-01-01

    A decision-net solution involving a novel hierarchical classifier and a set of multiple directed graphs, as well as a neural-net solution, are respectively presented for large-class problem and mixture problem treatments of imaging spectrometer data. The clustering method for hierarchical classifier design, when used with multiple directed graphs, yields an efficient decision net. New directed-graph rules for reducing local maxima as well as the number of perturbations required, and the new starting-node rules for extending the reachability and reducing the search time of the graphs, are noted to yield superior results, as indicated by an illustrative 500-class imaging spectrometer problem.

  17. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Abstract Summary Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch Supplementary information Supplementary data are available online at Bioinformatics. PMID:29028902

  18. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
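
    To make the simplest kernel family named above concrete, here is a hedged Python/NumPy sketch of a label histogram kernel: each graph is reduced to a histogram of its vertex labels, and the kernel between two graphs is the dot product of their histograms. This illustrates the idea only; it is not the graphkernels API.

    ```python
    # Label histogram kernel: the baseline graph kernel, sketched from scratch.
    import numpy as np

    def label_histogram(labels, n_labels):
        h = np.zeros(n_labels)
        for l in labels:
            h[l] += 1
        return h

    def label_histogram_kernel(graph_labels, n_labels):
        # Gram matrix over all pairs of graphs
        H = np.array([label_histogram(ls, n_labels) for ls in graph_labels])
        return H @ H.T

    # Three toy graphs, each given as a list of integer vertex labels
    graphs = [[0, 0, 1, 2], [0, 1, 1, 1], [2, 2, 0]]
    K = label_histogram_kernel(graphs, n_labels=3)
    ```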

  19. Graph configuration model based evaluation of the education-occupation match

    PubMed Central

    2018-01-01

    To study education-occupation matchings we developed a bipartite network model of the education-to-work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlighted the hierarchical and clustered structure of the career paths based on the multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers in fine-tuning the fragmented program structure of higher education. PMID:29509783

  20. Graph configuration model based evaluation of the education-occupation match.

    PubMed

    Gadar, Laszlo; Abonyi, Janos

    2018-01-01

    To study education-occupation matchings we developed a bipartite network model of the education-to-work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlighted the hierarchical and clustered structure of the career paths based on the multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers in fine-tuning the fragmented program structure of higher education.
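
    The configuration model behind both the metric and the modularity analysis has a compact algebraic core: the expected number of edges between nodes i and j with degrees k_i and k_j is k_i k_j / 2m. A minimal sketch of this null-model comparison in its standard unipartite form follows (the paper's bipartite variant differs in detail):

    ```python
    # Modularity matrix B = A - k k^T / 2m: observed adjacency minus the
    # configuration-model expectation. Illustrative only.
    import numpy as np

    def modularity_matrix(A):
        k = A.sum(axis=1)                    # degree sequence
        two_m = k.sum()                      # 2m = total degree
        return A - np.outer(k, k) / two_m

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    B = modularity_matrix(A)                 # positive entries: denser than expected
    ```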

  1. Determining open cluster membership. A Bayesian framework for quantitative member classification

    NASA Astrophysics Data System (ADS)

    Stott, Jonathan J.

    2018-01-01

    Aims: My goal is to develop a quantitative algorithm for assessing open cluster membership probabilities. The algorithm is designed to work with single-epoch observations. In its simplest form, only one set of program images and one set of reference images are required. Methods: The algorithm is based on a two-stage joint astrometric and photometric assessment of cluster membership probabilities. The probabilities were computed within a Bayesian framework using any available prior information. Where possible, the algorithm emphasizes simplicity over mathematical sophistication. Results: The algorithm was implemented and tested against three observational fields using published survey data. M 67 and NGC 654 were selected as cluster examples while a third, cluster-free, field was used for the final test data set. The algorithm shows good quantitative agreement with the existing surveys and has a false-positive rate significantly lower than the astrometric or photometric methods used individually.
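
    The two-stage Bayesian assessment rests on Bayes' rule for the membership probability, P(member | x) proportional to P(member) p(x | member). A minimal one-dimensional sketch, hypothetically assuming Gaussian cluster and field-star distributions (the paper's joint astrometric and photometric model is richer):

    ```python
    # Bayes-rule membership probability for a single observable x.
    from scipy.stats import norm

    def membership_prob(x, prior_member, mu_c, sd_c, mu_f, sd_f):
        like_c = norm.pdf(x, mu_c, sd_c)     # cluster likelihood
        like_f = norm.pdf(x, mu_f, sd_f)     # field likelihood
        num = prior_member * like_c
        return num / (num + (1.0 - prior_member) * like_f)

    # e.g. a proper-motion value near the cluster mean, with a 30% prior
    p = membership_prob(x=1.2, prior_member=0.3,
                        mu_c=1.0, sd_c=0.3, mu_f=0.0, sd_f=2.0)
    ```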

  2. Associating clinical archetypes through UMLS Metathesaurus term clusters.

    PubMed

    Lezcano, Leonardo; Sánchez-Alonso, Salvador; Sicilia, Miguel-Angel

    2012-06-01

    Clinical archetypes are modular definitions of clinical data, expressed using standard or open constraint-based data models such as CEN EN13606 and openEHR. There is an increasing archetype specification activity that raises the need for techniques to associate archetypes, to support better management and user navigation in archetype repositories. This paper reports on a computational technique to generate tentative archetype associations by mapping them through term clusters obtained from the UMLS Metathesaurus. The terms are used to build a bipartite graph model, and graph connectivity measures can be used for deriving associations.

  3. Buried landmine detection using multivariate normal clustering

    NASA Astrophysics Data System (ADS)

    Duston, Brian M.

    2001-10-01

    A Bayesian classification algorithm is presented for discriminating buried land mines from buried and surface clutter in Ground Penetrating Radar (GPR) signals. This algorithm is based on multivariate normal (MVN) clustering, where feature vectors are used to identify populations (clusters) of mines and clutter objects. The features are extracted from two-dimensional images created from ground penetrating radar scans. MVN clustering is used to determine the number of clusters in the data and to create probability density models for target and clutter populations, producing the MVN clustering classifier (MVNCC). The Bayesian Information Criteria (BIC) is used to evaluate each model to determine the number of clusters in the data. An extension of the MVNCC allows the model to adapt to local clutter distributions by treating each of the MVN cluster components as a Poisson process and adaptively estimating the intensity parameters. The algorithm is developed using data collected by the Mine Hunter/Killer Close-In Detector (MH/K CID) at prepared mine lanes. The Mine Hunter/Killer is a prototype mine detecting and neutralizing vehicle developed for the U.S. Army to clear roads of anti-tank mines.
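
    A compact stand-in for MVN clustering with BIC-based model selection can be written with scikit-learn's EM-based GaussianMixture (the feature vectors below are random placeholders for the GPR image features, and EM is used here rather than the paper's exact procedure):

    ```python
    # Fit Gaussian mixtures of increasing size and pick the one with lowest BIC.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    X = np.random.randn(500, 4)                       # placeholder feature vectors
    models = [GaussianMixture(n_components=k, covariance_type="full").fit(X)
              for k in range(1, 7)]
    bics = [m.bic(X) for m in models]                 # BIC per candidate model
    best = models[int(np.argmin(bics))]               # number of clusters chosen by BIC
    labels = best.predict(X)                          # cluster assignments
    ```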

  4. Intuitive color-based visualization of multimedia content as large graphs

    NASA Astrophysics Data System (ADS)

    Delest, Maylis; Don, Anthony; Benois-Pineau, Jenny

    2004-06-01

    Data visualization techniques are penetrating in various technological areas. In the field of multimedia such as information search and retrieval in multimedia archives, or digital media production and post-production, data visualization methodologies based on large graphs give an exciting alternative to conventional storyboard visualization. In this paper we develop a new approach to visualization of multimedia (video) documents based both on large graph clustering and preliminary video segmenting and indexing.

  5. Scale-free Graphs for General Aviation Flight Schedules

    NASA Technical Reports Server (NTRS)

    Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.

    2003-01-01

    In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution, and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. When taken together, these three properties (small-world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. This manuscript gives an overview of the salient characteristics of scale-free networks. Computational experience will be provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience will be given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
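
    The three properties are easy to probe empirically on a grown graph. A short networkx sketch using a Barabasi-Albert style growth mechanism (one of several possible growth models; the numbers are purely illustrative):

    ```python
    # Grow a preferential-attachment graph and check the small-world properties.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=2000, m=3, seed=1)
    cc = nx.average_clustering(G)                     # clustering coefficient
    apl = nx.average_shortest_path_length(G)          # should scale like log n
    degrees = sorted((d for _, d in G.degree()), reverse=True)
    print(cc, apl, degrees[:5])                       # heavy-tailed degree sequence
    ```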

  6. DENBRAN: A basic program for a significance test for multivariate normality of clusters from branching patterns in dendrograms

    NASA Astrophysics Data System (ADS)

    Sneath, P. H. A.

    A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
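
    The core comparison is a one-sample Kolmogorov-Smirnov test of observed branch levels against a reference distribution. A hedged scipy sketch with placeholder data follows; note that estimating the reference parameters from the same data makes the test only approximate (the Lilliefors correction would be needed for exact levels).

    ```python
    # KS-type test of dendrogram branch levels against a fitted normal reference.
    import numpy as np
    from scipy import stats

    branch_levels = np.random.randn(40) * 0.5 + 2.0   # placeholder fusion levels
    mu, sd = branch_levels.mean(), branch_levels.std(ddof=1)
    stat, p = stats.kstest(branch_levels, "norm", args=(mu, sd))
    ```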

  7. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  8. Time-dependence of graph theory metrics in functional connectivity analysis

    PubMed Central

    Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J.; Haneef, Zulfi; Stern, John M.

    2016-01-01

    Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. PMID:26518632

  9. Time-dependence of graph theory metrics in functional connectivity analysis.

    PubMed

    Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J; Haneef, Zulfi; Stern, John M

    2016-01-15

    Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. Copyright © 2015 Elsevier Inc. All rights reserved.
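
    A sketch of the kind of pre-processing such an analysis rests on, computing a graph-theory measure inside sliding windows of a regional time series, is given below (placeholder data, window length, and threshold; the Bayesian HMM fitted on top of these metric sequences is not reproduced):

    ```python
    # Sliding-window functional connectivity -> time series of a graph metric.
    import numpy as np
    import networkx as nx

    def windowed_clustering(ts, win=30, step=5, thresh=0.3):
        """ts: (time, regions) array -> list of clustering coefficients."""
        out = []
        for start in range(0, ts.shape[0] - win + 1, step):
            C = np.corrcoef(ts[start:start + win].T)   # windowed connectivity
            A = (np.abs(C) > thresh).astype(int)       # binarize by threshold
            np.fill_diagonal(A, 0)
            out.append(nx.average_clustering(nx.from_numpy_array(A)))
        return out

    ts = np.random.randn(300, 20)                      # placeholder regional signals
    metric_series = windowed_clustering(ts)            # input for an HMM-type model
    ```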

  10. Exploratory Visualization of Graphs Based on Community Structure

    ERIC Educational Resources Information Center

    Liu, Yujie

    2013-01-01

    Communities, also called clusters or modules, are groups of nodes which probably share common properties and/or play similar roles within a graph. They widely exist in real networks such as biological, social, and information networks. Allowing users to interactively browse and explore the community structure, which is essential for understanding…

  11. Improved graph clustering

    DTIC Science & Technology

    2013-01-01

    [No abstract recovered; the record preserves only reference-list fragments, citing work on the statistical properties of community structure in large social networks (Leskovec, Lang, Dasgupta, and Mahoney) and on graph spectra and the detectability of community structure in networks (Nadakuditi and Newman, Phys. Rev. Lett. 108).]

  12. Matched Filter Stochastic Background Characterization for Hyperspectral Target Detection

    DTIC Science & Technology

    2005-09-30

    [No abstract recovered; the record preserves only table-of-contents and figure-list fragments, covering pre-clustering MVN tests, pre-clustering detection results and target influence, statistical distance exclusion, and an ROC-curve comparison of RX, K-Means, and Bayesian pre-clustering applied to anomaly detection (Ashton, 1998).]

  13. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.

  14. Understanding spatial connectivity of individuals with non-uniform population density.

    PubMed

    Wang, Pu; González, Marta C

    2009-08-28

    We construct a two-dimensional geometric graph connecting individuals placed in space within a given contact distance. The individuals are distributed using a measured country's density of population. We observe that while large clusters (group of individuals connected) emerge within some regions, they are trapped in detached urban areas owing to the low population density of the regions bordering them. To understand the emergence of a giant cluster that connects the entire population, we compare the empirical geometric graph with the one generated by placing the same number of individuals randomly in space. We find that, for small contact distances, the empirical distribution of population dominates the growth of connected components, but no critical percolation transition is observed in contrast to the graph generated by a random distribution of population. Our results show that contact distances from real-world situations as for WIFI and Bluetooth connections drop in a zone where a fully connected cluster is not observed, hinting that human mobility must play a crucial role in contact-based diseases and wireless viruses' large-scale spreading.
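
    The basic construction and the giant-cluster question can be reproduced in a few lines with networkx, here with uniformly random placement rather than the measured population density used in the paper:

    ```python
    # Fraction of nodes in the largest connected component of a random
    # geometric graph, as the contact distance grows.
    import networkx as nx

    for r in (0.02, 0.05, 0.1):
        G = nx.random_geometric_graph(1000, radius=r, seed=0)  # uniform placement
        giant = max(nx.connected_components(G), key=len)
        print(r, len(giant) / G.number_of_nodes())             # giant-cluster fraction
    ```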

  15. Dynamic Uncertain Causality Graph for Knowledge Representation and Probabilistic Reasoning: Directed Cyclic Graph and Joint Probability Distribution.

    PubMed

    Zhang, Qin

    2015-07-01

    Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology.

  16. Massive Social Network Analysis: Mining Twitter for Social Good

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ediger, David; Jiang, Karl; Riedy, Edward J.

    Social networks produce an enormous quantity of data. Facebook consists of over 400 million active users sharing over 5 billion pieces of information each month. Analyzing this vast quantity of unstructured data presents challenges for software and hardware. We present GraphCT, a Graph Characterization Toolkit for massive graphs representing social network data. On a 128-processor Cray XMT, GraphCT estimates the betweenness centrality of an artificially generated (R-MAT) 537 million vertex, 8.6 billion edge graph in 55 minutes. We use GraphCT to analyze public data from Twitter, a microblogging network. Twitter's message connections appear primarily tree-structured as a news dissemination system. Within the public data, however, are clusters of conversations. Using GraphCT, we can rank actors within these conversations and help analysts focus attention on a much smaller data subset.

  17. Convergence of the Graph Allen-Cahn Scheme

    NASA Astrophysics Data System (ADS)

    Luo, Xiyang; Bertozzi, Andrea L.

    2017-05-01

    The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
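
    The objects the scheme operates on are the graph Laplacian L = D - A and its leading eigenvectors (the spectral truncation mentioned above keeps only the first few of these). A minimal sketch:

    ```python
    # Graph Laplacian and its spectrum; the second eigenvector (Fiedler vector)
    # already suggests a two-way cut of the graph.
    import numpy as np

    def graph_laplacian(A):
        D = np.diag(A.sum(axis=1))      # degree matrix
        return D - A

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = graph_laplacian(A)
    evals, evecs = np.linalg.eigh(L)    # truncation keeps the first few columns
    ```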

  18. Overlapping clusters for distributed computation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al, Nature 2009; Mishra et al. WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decrease.

  19. On Learning Cluster Coefficient of Private Networks

    PubMed Central

    Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang

    2013-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient or modularity often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce the differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach using the clustering coefficient, which is a popular statistic used in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide and conquer approach outperforms the direct approach. PMID:24429843
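
    The building block of the divide-and-conquer approach is the Laplace mechanism: each unit computation f_i is perturbed with noise scaled to sensitivity_i / epsilon_i before the results are recombined. A minimal sketch (the numbers are hypothetical):

    ```python
    # Laplace mechanism: noise calibrated to sensitivity / epsilon.
    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon,
                          rng=np.random.default_rng()):
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # e.g. a count-style unit computation with global sensitivity 1,
    # under a privacy budget share of epsilon_i = 0.5
    noisy = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
    ```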

  20. Functional connectivity and graph theory in preclinical Alzheimer's disease.

    PubMed

    Brier, Matthew R; Thomas, Jewell B; Fagan, Anne M; Hassenstab, Jason; Holtzman, David M; Benzinger, Tammie L; Morris, John C; Ances, Beau M

    2014-04-01

    Alzheimer's disease (AD) has a long preclinical phase in which amyloid and tau cerebral pathology accumulate without producing cognitive symptoms. Resting state functional connectivity magnetic resonance imaging has demonstrated that brain networks degrade during symptomatic AD. It is unclear to what extent these degradations exist before symptomatic onset. In this study, we investigated graph theory metrics of functional integration (path length), functional segregation (clustering coefficient), and functional distinctness (modularity) as a function of disease severity. Further, we assessed whether these graph metrics were affected in cognitively normal participants with cerebrospinal fluid evidence of preclinical AD. Clustering coefficient and modularity, but not path length, were reduced in AD. Cognitively normal participants who harbored AD biomarker pathology also showed reduced values in these graph measures, demonstrating brain changes similar to, but smaller than, symptomatic AD. Only modularity was significantly affected by age. We also demonstrate that AD has a particular effect on hub-like regions in the brain. We conclude that AD causes large-scale disconnection that is present before onset of symptoms. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Mining chemical reactions using neighborhood behavior and condensed graphs of reactions approaches.

    PubMed

    de Luca, Aurélie; Horvath, Dragos; Marcou, Gilles; Solov'ev, Vitaly; Varnek, Alexandre

    2012-09-24

    This work addresses the problem of similarity search and classification of chemical reactions using Neighborhood Behavior (NB) and Condensed Graphs of Reaction (CGR) approaches. The CGR formalism represents chemical reactions as a classical molecular graph with dynamic bonds, enabling descriptor calculations on this graph. Different types of the ISIDA fragment descriptors generated for CGRs in combination with two metrics--Tanimoto and Euclidean--were considered as chemical spaces, to serve for reaction dissimilarity scoring. The NB method has been used to select an optimal combination of descriptors which distinguish different types of chemical reactions in a database containing 8544 reactions of 9 classes. Relevance of NB analysis has been validated in generic (multiclass) similarity search and in clustering with Self-Organizing Maps (SOM). NB-compliant sets of descriptors were shown to display enhanced mapping propensities, allowing the construction of better Self-Organizing Maps and similarity searches (NB and classical similarity search criteria--AUC ROC--correlate at a level of 0.7). The analysis of the SOM clusters proved chemically meaningful CGR substructures representing specific reaction signatures.
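
    The two metrics can be sketched directly on fragment-count vectors; the continuous Tanimoto form a.b / (||a||^2 + ||b||^2 - a.b) is a common choice for count descriptors. The vectors below are toy values, not actual ISIDA output:

    ```python
    # Tanimoto and Euclidean dissimilarities on fragment-count vectors.
    import numpy as np

    def tanimoto_dissimilarity(a, b):
        ab = np.dot(a, b)
        return 1.0 - ab / (np.dot(a, a) + np.dot(b, b) - ab)

    def euclidean_dissimilarity(a, b):
        return float(np.linalg.norm(a - b))

    a = np.array([3, 0, 1, 2], dtype=float)   # toy fragment counts, reaction A
    b = np.array([2, 1, 1, 0], dtype=float)   # toy fragment counts, reaction B
    d_t = tanimoto_dissimilarity(a, b)
    d_e = euclidean_dissimilarity(a, b)
    ```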

  2. Consensus, Polarization and Clustering of Opinions in Social Networks

    DTIC Science & Technology

    2013-06-01

    [Abstract garbled in extraction; the surviving fragments describe clustering of opinions at small values of the threshold τ and consensus at larger values, with a figure (Fig. 6) comparing phase transitions in normalized algebraic connectivity versus τ for three network configurations: a random geometric graph (RGG), an Erdos-Renyi graph generated uniformly at random over all graphs with n = 50 nodes and M = 120 edges, and a small-world network.]

  3. Corona graphs as a model of small-world networks

    NASA Astrophysics Data System (ADS)

    Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi

    2015-11-01

    We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as Kirchhoff index. Furthermore, we study the spectra for the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all the quantities of the recursive corona graphs, which are similar to those observed in real-life networks.

  4. Spatial cluster detection using dynamic programming.

    PubMed

    Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F

    2012-03-25

    The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. We conclude that the dynamic programming algorithm performs on-par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm.

  5. Spatial cluster detection using dynamic programming

    PubMed Central

    2012-01-01

    Background The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. Methods We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. Results When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. Conclusions We conclude that the dynamic programming algorithm performs on-par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm. PMID:22443103
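
    For contrast with the dynamic programming method, a brute-force grid scan with a Kulldorff-style Poisson likelihood ratio can be sketched in a few lines. This is a plainly different, baseline technique, shown only to make the task concrete; the counts and baselines are toy values, with baselines playing the role of expected counts.

    ```python
    # Brute-force spatial scan over all axis-aligned rectangles of a grid.
    import numpy as np

    def scan_best_region(counts, baselines):
        """counts, baselines: 2D arrays; returns (best LLR, best rectangle)."""
        n, m = counts.shape
        C, B = counts.sum(), baselines.sum()
        best, best_region = 0.0, None
        for i0 in range(n):
            for i1 in range(i0 + 1, n + 1):
                for j0 in range(m):
                    for j1 in range(j0 + 1, m + 1):
                        c = counts[i0:i1, j0:j1].sum()
                        b = baselines[i0:i1, j0:j1].sum()
                        if c <= b or c >= C or b >= B:
                            continue              # only proper over-dense regions
                        llr = (c * np.log(c / b)
                               + (C - c) * np.log((C - c) / (B - b))
                               - C * np.log(C / B))
                        if llr > best:
                            best, best_region = llr, (i0, i1, j0, j1)
        return best, best_region

    rng = np.random.default_rng(0)
    counts = rng.poisson(5.0, size=(8, 8))
    counts[2:4, 2:4] += 10                        # injected toy outbreak
    baselines = np.full((8, 8), 5.0)
    score, region = scan_best_region(counts, baselines)
    ```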

  6. Conformational Transition Pathways of Epidermal Growth Factor Receptor Kinase Domain from Multiple Molecular Dynamics Simulations and Bayesian Clustering.

    PubMed

    Li, Yan; Li, Xiang; Ma, Weiya; Dong, Zigang

    2014-08-12

    The epidermal growth factor receptor (EGFR) is aberrantly activated in various cancer cells and an important target for cancer treatment. Deep understanding of EGFR conformational changes between the active and inactive states is of pharmaceutical interest. Here we present a strategy combining multiple targeted molecular dynamics simulations, unbiased molecular dynamics simulations, and Bayesian clustering to investigate transition pathways during the activation/inactivation process of the EGFR kinase domain. Two distinct pathways between the active and inactive forms are designed, explored, and compared. Based on Bayesian clustering and rough two-dimensional free energy surfaces, the energy-favorable pathway is recognized, though the DFG-flip happens in both pathways. In addition, another pathway with different intermediate states appears in our simulations. Comparison of the distinct pathways also indicates that disruption of the Lys745-Glu762 interaction is critically important for the DFG-flip, while movement of the A-loop significantly facilitates the conformational change. Our simulations yield new insights into EGFR conformational transitions. Moreover, our results verify that this approach is valid and efficient for sampling protein conformational changes and comparing distinct pathways.

  7. A model of language inflection graphs

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk; Farzad, Babak; Cao, Yi

    2014-01-01

    Inflection graphs are highly complex networks representing relationships between inflectional forms of words in human languages. For so-called synthetic languages, such as Latin or Polish, they have a particularly interesting structure due to the abundance of inflectional forms. We construct the simplest form of inflection graphs, namely a bipartite graph in which one group of vertices corresponds to dictionary headwords and the other group to inflected forms encountered in a given text. We then study the projection of this graph onto the set of headwords. The projection decomposes into a large number of connected components, to be called word groups. The distribution of word-group sizes exhibits some remarkable properties, resembling the cluster distribution in lattice percolation near the critical point. We propose a simple model which produces graphs of this type, reproducing the desired component distribution and other topological features.
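
    The construction and projection are straightforward to reproduce with networkx; the toy graph below (a few hypothetical Latin headwords and forms) shows the headword projection and the resulting word-group size distribution:

    ```python
    # Bipartite headword/inflected-form graph, projected onto headwords.
    import networkx as nx
    from networkx.algorithms import bipartite
    from collections import Counter

    B = nx.Graph()
    B.add_edges_from([("amo", "amat"), ("amo", "amant"),
                      ("amor", "amant"), ("video", "videt")])   # toy data
    headwords = {"amo", "amor", "video"}
    P = bipartite.projected_graph(B, headwords)    # headword projection
    # Word groups = connected components of the projection
    sizes = Counter(len(c) for c in nx.connected_components(P))
    ```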

  8. In-Memory Graph Databases for Web-Scale Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Morari, Alessandro; Weaver, Jesse R.

    RDF databases have emerged as one of the most relevant ways of organizing, integrating, and managing exponentially growing, often heterogeneous, and not rigidly structured data for a variety of scientific and commercial fields. In this paper we discuss the solutions integrated in GEMS (Graph database Engine for Multithreaded Systems), a software framework for implementing RDF databases on commodity, distributed-memory high-performance clusters. Unlike the majority of current RDF databases, GEMS has been designed from the ground up to primarily employ graph-based methods. This is reflected in all the layers of its stack. The GEMS framework is composed of: a SPARQL-to-C++ compiler, a library of data structures and related methods to access and modify them, and a custom runtime providing lightweight software multithreading, network message aggregation, and a partitioned global address space. We provide an overview of the framework, detailing its components and how they have been closely designed and customized to address issues of graph methods applied to large-scale datasets on clusters. We discuss in detail the principles that enable automatic translation of the queries (expressed in SPARQL, the query language of choice for RDF databases) to graph methods, and identify differences with respect to other RDF databases.

  9. Prediction of community prevalence of human onchocerciasis in the Amazonian onchocerciasis focus: Bayesian approach.

    PubMed Central

    Carabin, Hélène; Escalona, Marisela; Marshall, Clare; Vivas-Martínez, Sarai; Botto, Carlos; Joseph, Lawrence; Basáñez, María-Gloria

    2003-01-01

    OBJECTIVE: To develop a Bayesian hierarchical model for human onchocerciasis with which to explore the factors that influence prevalence of microfilariae in the Amazonian focus of onchocerciasis and predict the probability of any community being at least mesoendemic (>20% prevalence of microfilariae), and thus in need of priority ivermectin treatment. METHODS: Models were developed with data from 732 individuals aged > or =15 years who lived in 29 Yanomami communities along four rivers of the south Venezuelan Orinoco basin. The models' abilities to predict prevalences of microfilariae in communities were compared. The deviance information criterion, Bayesian P-values, and residual values were used to select the best model with an approximate cross-validation procedure. FINDINGS: A three-level model that acknowledged clustering of infection within communities performed best, with host age and sex included at the individual level, a river-dependent altitude effect at the community level, and additional clustering of communities along rivers. This model correctly classified 25/29 (86%) villages with respect to their need for priority ivermectin treatment. CONCLUSION: Bayesian methods are a flexible and useful approach for public health research and control planning. Our model acknowledges the clustering of infection within communities, allows investigation of links between individual- or community-specific characteristics and infection, incorporates additional uncertainty due to missing covariate data, and informs policy decisions by predicting the probability that a new community is at least mesoendemic. PMID:12973640

  10. Prediction of community prevalence of human onchocerciasis in the Amazonian onchocerciasis focus: Bayesian approach.

    PubMed

    Carabin, Hélène; Escalona, Marisela; Marshall, Clare; Vivas-Martínez, Sarai; Botto, Carlos; Joseph, Lawrence; Basáñez, María-Gloria

    2003-01-01

    To develop a Bayesian hierarchical model for human onchocerciasis with which to explore the factors that influence prevalence of microfilariae in the Amazonian focus of onchocerciasis and predict the probability of any community being at least mesoendemic (>20% prevalence of microfilariae), and thus in need of priority ivermectin treatment. Models were developed with data from 732 individuals aged > or =15 years who lived in 29 Yanomami communities along four rivers of the south Venezuelan Orinoco basin. The models' abilities to predict prevalences of microfilariae in communities were compared. The deviance information criterion, Bayesian P-values, and residual values were used to select the best model with an approximate cross-validation procedure. A three-level model that acknowledged clustering of infection within communities performed best, with host age and sex included at the individual level, a river-dependent altitude effect at the community level, and additional clustering of communities along rivers. This model correctly classified 25/29 (86%) villages with respect to their need for priority ivermectin treatment. Bayesian methods are a flexible and useful approach for public health research and control planning. Our model acknowledges the clustering of infection within communities, allows investigation of links between individual- or community-specific characteristics and infection, incorporates additional uncertainty due to missing covariate data, and informs policy decisions by predicting the probability that a new community is at least mesoendemic.

  11. Dynamic Uncertain Causality Graph for Knowledge Representation and Reasoning: Utilization of Statistical Data and Domain Knowledge in Complex Cases.

    PubMed

    Zhang, Qin; Yao, Quanying

    2018-05-01

    The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnoses of large, complex industrial systems, and to disease diagnoses. This paper extends the DUCG to model more complex cases than what could previously be modeled, e.g., the case in which statistical data are in different groups with or without overlap, and some domain knowledge and actions (new variables with uncertain causalities) are introduced. In other words, this paper proposes to use -mode, -mode, and -mode of the DUCG to model such complex cases and then transform them into either the standard -mode or the standard -mode. In the former situation, if no directed cyclic graph is involved, the transformed result is simply a Bayesian network (BN), and existing inference methods for BNs can be applied. In the latter situation, an inference method based on the DUCG is proposed. Examples are provided to illustrate the methodology.

  12. FUSE: a profit maximization approach for functional summarization of biological networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes; Yu, Hanry

    2012-03-21

    The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher-level organization and modularity within the protein interaction network (PPI) using graph-theoretic analysis. Despite recent progress, systems-level analysis of PPIs remains a daunting task, as it is challenging to make sense of the deluge of high-dimensional interaction data. Specifically, techniques that automatically abstract and summarize PPIs at multiple resolutions to provide high-level views of their functional landscape are still lacking. We present a novel data-driven and generic algorithm called FUSE (Functional Summary Generator) that generates functional maps of a PPI at different levels of organization, from broad process-process level interactions to in-depth complex-complex level interactions, through a profit maximization approach that exploits the Minimum Description Length (MDL) principle to maximize the information gain of the summary graph while satisfying the level-of-detail constraint. We evaluate the performance of FUSE on several real-world PPIs. We also compare FUSE to state-of-the-art graph clustering methods with GO term enrichment by constructing the biological process landscape of the PPIs. Using the AD network as our case study, we further demonstrate the ability of FUSE to quickly summarize the network and identify the many different processes and complexes that regulate it. Finally, we study the higher-order connectivity of the human PPI. By simultaneously evaluating interaction and annotation data, FUSE abstracts higher-order interaction maps by reducing the details of the underlying PPI to form a functional summary graph of interconnected functional clusters. Our results demonstrate its effectiveness and superiority over state-of-the-art graph clustering methods with GO term enrichment.

  13. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
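
    The procedure the record describes, fitting finite mixtures of multivariate normals and comparing non-nested models by the Bayesian information criterion, can be reproduced in a few lines. Below is a hedged scikit-learn sketch; the data array X is a placeholder, not the study's data.

      # Model-based clustering: pick the component count by lowest BIC
      import numpy as np
      from sklearn.mixture import GaussianMixture

      X = np.random.randn(300, 4)                      # placeholder multivariate data
      bic = {k: GaussianMixture(n_components=k, covariance_type="full",
                                random_state=0).fit(X).bic(X)
             for k in range(1, 7)}
      best_k = min(bic, key=bic.get)                   # lowest BIC wins
      labels = GaussianMixture(n_components=best_k,
                               random_state=0).fit_predict(X)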

  14. Bayesian Multi-Trait Analysis Reveals a Useful Tool to Increase Oil Concentration and to Decrease Toxicity in Jatropha curcas L.

    PubMed Central

    Silva Junqueira, Vinícius; de Azevedo Peixoto, Leonardo; Galvêas Laviola, Bruno; Lopes Bhering, Leonardo; Mendonça, Simone; Agostini Costa, Tania da Silveira; Antoniassi, Rosemar

    2016-01-01

    The biggest challenge for jatropha breeding is to identify superior genotypes that present high seed yield and seed oil content with reduced toxicity levels. Therefore, the objective of this study was to estimate genetic parameters for three important traits (100-seed weight, seed oil content, and phorbol ester concentration) and to select superior genotypes to be used as progenitors in jatropha breeding. Additionally, the genotypic values and the genetic parameters estimated under the Bayesian multi-trait approach were used to evaluate different selection index scenarios for 179 half-sib families. Three different scenarios and economic weights were considered. It was possible to simultaneously reduce toxicity and increase seed oil content and 100-seed weight by using index selection based on genotypic values estimated by the Bayesian multi-trait approach. Indeed, we identified two families that present these characteristics by evaluating genetic diversity using the Ward clustering method, which suggested nine homogeneous clusters. Future research should integrate Bayesian multi-trait methods with the realized relationship matrix, aiming to build accurate selection index models. PMID:27281340
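
    Ward's method, used here to group families into nine homogeneous clusters, is available directly in SciPy. A minimal sketch, with a random matrix standing in for the estimated genotypic values:

      # Ward hierarchical clustering cut into nine groups
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      G = np.random.randn(179, 3)                      # 179 families x 3 traits (toy)
      Z = linkage(G, method="ward")
      groups = fcluster(Z, t=9, criterion="maxclust")  # nine clusters, as in the study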

  15. Revealing the ISO/IEC 9126-1 Clique Tree for COTS Software Evaluation

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    2007-01-01

    Previous research has shown that acyclic dependency models, if they exist, can be extracted from software quality standards and that these models can be used to assess software safety and product quality. In the case of commercial off-the-shelf (COTS) software, the extracted dependency model can be used in a probabilistic Bayesian network context for COTS software evaluation. Furthermore, while experts typically employ Bayesian networks to encode domain knowledge, secondary structures (clique trees) from Bayesian network graphs can be used to determine the probabilistic distribution of any software variable (attribute) using any clique that contains that variable. Secondary structures, therefore, provide insight into the fundamental nature of graphical networks. This paper will apply secondary structure calculations to reveal the clique tree of the acyclic dependency model extracted from the ISO/IEC 9126-1 software quality standard. Suggestions will be provided to describe how the clique tree may be exploited to aid efficient transformation of an evaluation model.
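
    The secondary-structure computation the paper describes (moralize the directed model, triangulate, and join the maximal cliques into a tree) can be demonstrated with NetworkX, assuming a recent release that provides moral_graph and junction_tree; the attribute names below are hypothetical, not those of ISO/IEC 9126-1.

      # From an acyclic dependency model to its clique tree (sketch)
      import networkx as nx

      bn = nx.DiGraph([("maintainability", "quality"), ("reliability", "quality"),
                       ("usability", "quality"), ("quality", "evaluation")])
      moral = nx.moral_graph(bn)    # marry co-parents, drop edge directions
      jt = nx.junction_tree(bn)     # cliques of a triangulation joined by separators
      print(list(jt.nodes))         # each node is a clique (or separator) of variables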

  16. A sparse structure learning algorithm for Gaussian Bayesian Network identification from high-dimensional data.

    PubMed

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2013-06-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph--a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.

  17. A Sparse Structure Learning Algorithm for Gaussian Bayesian Network Identification from High-Dimensional Data

    PubMed Central

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2014-01-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG)—a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer’s disease (AD) and reveal findings that could lead to advancements in AD research. PMID:22665720

  18. Co-Clustering by Bipartite Spectral Graph Partitioning for Out-of-Tutor Prediction

    ERIC Educational Resources Information Center

    Trivedi, Shubhendu; Pardos, Zachary A.; Sarkozy, Gabor N.; Heffernan, Neil T.

    2012-01-01

    Learning a more distributed representation of the input feature space is a powerful method to boost the performance of a given predictor. Often this is accomplished by partitioning the data into homogeneous groups by clustering so that separate models could be trained on each cluster. Intuitively each such predictor is a better representative of…
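
    Dhillon-style bipartite spectral graph partitioning, the technique named in this record's title, is implemented in scikit-learn as SpectralCoclustering. A hedged sketch with a random matrix in place of the tutor data:

      # Co-cluster rows (e.g., students) and columns (e.g., items) simultaneously
      import numpy as np
      from sklearn.cluster import SpectralCoclustering

      A = np.random.rand(50, 30)                    # toy response matrix
      model = SpectralCoclustering(n_clusters=4, random_state=0).fit(A)
      row_groups = model.row_labels_                # cluster index per row
      col_groups = model.column_labels_             # cluster index per column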

  19. Science - Image in Action

    NASA Astrophysics Data System (ADS)

    Zavidovique, Bertrand; Lo Bosco, Giosue'

    pt. A. Information: data organization and communication. Statistical information: a Bayesian perspective / R. B. Stern, C. A. de B. Pereira. Multi-a(ge)nt graph patrolling and partitioning / Y. Elor, A. M. Bruckstein. The role of noise in brain function / S. Roy, R. Llinas. F-granulation, generalized rough entropy and image analysis / S. K. Pal. Fast redshift clustering with the Baire (ultra) metric / F. Murtagh, P. Contreras. Interactive classification oriented superresolution of multispectral images / P. Ruiz ... [et al.]. Blind processing in astrophysical data analysis / E. Salerno, L. Bedini. The extinction map of the Orion molecular cloud / G. Scandariato (best student's paper), I. Pagano, M. Robberto -- pt. B. System: structure and behaviour. Common grounds: the role of perception in science and the nature of transitions / G. Bernroider. Looking the world from inside: intrinsic geometry of complex systems / L. Boi. The butterfly and the photon: new perspectives on unpredictability, and the notion of causal reality, in quantum physics / T. N. Palmer. Self-replicated wave patterns in neural networks with complex threshold / V. I. Nekorkin. A local explication of causation / G. Boniolo, R. Faraldo, A. Saggion. Evolving complexity, cognition, and consciousness / H. Liljenstrom. Self-assembly, modularity and physical complexity / S. E. Ahnert. The category of topological thermodynamics / R. M. Kiehn. Anti-phase spiking patterns / M. P. Igaev, A. S. Dmitrichev, V. I. Nekorkin -- pt. C. Data/system representation. Reality, models and representations: the case of galaxies, intelligence and avatars / J-C. Heudin. Astronomical images and data mining in the international virtual observatory context / F. Pasian, M. Brescia, G. Longo. Dame: a web oriented infrastructure for scientific data mining and exploration / S. Cavuoti ... [et al.]. Galactic phase spaces / D. Chakrabarty. From data to images: a shape based approach for fluorescence tomography / O. Dorn, K. E. Prieto. The influence of texture symmetry in marker pointing: experimenting with humans and algorithms / M. Cardaci, M. E. Tabacchi. A multiscale autocorrelation function for anisotropy studies / M. Scuderi ... [et al.]. A multiscale, lacunarity and neural network method for [symbol]/h discrimination in extensive air showers / A. Pagliaro, F. D'anna, G. D'ali Staiti. Bayesian semi-parametric curve-fitting and clustering in SDSS data / S. Mukkhopadhyay, S. Roy, S. Bhattacharya.

  20. Predicting overlapping protein complexes from weighted protein interaction graphs by gradually expanding dense neighborhoods.

    PubMed

    Dimitrakopoulos, Christos; Theofilatos, Konstantinos; Pegkas, Andreas; Likothanassis, Spiros; Mavroudi, Seferina

    2016-07-01

    Proteins are vital biological molecules driving many fundamental cellular processes. They rarely act alone, but form interacting groups called protein complexes. The study of protein complexes is a key goal in systems biology. Recently, large protein-protein interaction (PPI) datasets have been published and a plethora of computational methods that provide new ideas for the prediction of protein complexes have been implemented. However, most of the methods suffer from two major limitations: First, they do not account for proteins participating in multiple functions and second, they are unable to handle weighted PPI graphs. Moreover, the problem remains open as existing algorithms and tools are insufficient in terms of predictive metrics. In the present paper, we propose gradually expanding neighborhoods with adjustment (GENA), a new algorithm that gradually expands neighborhoods in a graph starting from highly informative "seed" nodes. GENA considers proteins as multifunctional molecules allowing them to participate in more than one protein complex. In addition, GENA accepts weighted PPI graphs by using a weighted evaluation function for each cluster. In experiments with datasets from Saccharomyces cerevisiae and human, GENA outperformed Markov clustering, restricted neighborhood search and clustering with overlapping neighborhood expansion, three state-of-the-art methods for computationally predicting protein complexes. Seven PPI networks and seven evaluation datasets were used in total. GENA outperformed existing methods in 16 out of 18 experiments achieving an average improvement of 5.5% when the maximum matching ratio metric was used. Our method was able to discover functionally homogeneous protein clusters and uncover important network modules in a Parkinson expression dataset. When used on the human networks, around 47% of the detected clusters were enriched in gene ontology (GO) terms with depth higher than five in the GO hierarchy. In the present manuscript, we introduce a new method for the computational prediction of protein complexes by making the realistic assumption that proteins participate in multiple protein complexes and cellular functions. Our method can detect accurate and functionally homogeneous clusters. Copyright © 2016 Elsevier B.V. All rights reserved.
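
    The core idea, greedily growing a cluster from an informative seed while a weighted density score keeps improving, can be sketched as follows. This is an illustrative simplification, not the published GENA algorithm; the evaluation function is an assumption.

      # Greedy neighborhood expansion on a weighted graph (sketch)
      import networkx as nx

      def weighted_density(G, nodes):
          """Total internal edge weight over the number of possible pairs."""
          sub = G.subgraph(nodes)
          n = len(nodes)
          w = sum(d.get("weight", 1.0) for _, _, d in sub.edges(data=True))
          return 2.0 * w / (n * (n - 1)) if n > 1 else 0.0

      def expand_from_seed(G, seed):
          cluster = {seed}
          improved = True
          while improved:
              improved = False
              frontier = {v for u in cluster for v in G.neighbors(u)} - cluster
              for v in frontier:
                  if weighted_density(G, cluster | {v}) > weighted_density(G, cluster):
                      cluster.add(v)    # keep the vertex only if density improves
                      improved = True
          return cluster

    Because each seed is expanded independently, a vertex may end up in several clusters, matching the multifunctional-protein assumption.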

  1. Hierarchical structure of the Sicilian goats revealed by Bayesian analyses of microsatellite information.

    PubMed

    Siwek, M; Finocchiaro, R; Curik, I; Portolano, B

    2011-02-01

    Genetic structure and relationships amongst the main goat populations in Sicily (Girgentana, Derivata di Siria, Maltese and Messinese) were analysed using information from 19 microsatellite markers genotyped on 173 individuals. A Bayesian approach implemented in the program STRUCTURE revealed a hierarchical structure with two clusters at the first level (Girgentana vs. Messinese, Derivata di Siria and Maltese), explaining 4.8% of variation (AMOVA Φ(ST) estimate). Seven clusters nested within these first two clusters (further differentiations of Girgentana, Derivata di Siria and Maltese), explaining 8.5% of variation (AMOVA Φ(SC) estimate). The analyses and methods applied in this study indicate their power to detect subtle population structure. © 2010 The Authors, Animal Genetics © 2010 Stichting International Foundation for Animal Genetics.

  2. Graph characterization via Ihara coefficients.

    PubMed

    Ren, Peng; Wilson, Richard C; Hancock, Edwin R

    2011-02-01

    The novel contributions of this paper are twofold. First, we demonstrate how to characterize unweighted graphs in a permutation-invariant manner using the polynomial coefficients from the Ihara zeta function, i.e., the Ihara coefficients. Second, we generalize the definition of the Ihara coefficients to edge-weighted graphs. For an unweighted graph, the Ihara zeta function is the reciprocal of a quasi characteristic polynomial of the adjacency matrix of the associated oriented line graph. Since the Ihara zeta function has poles that give rise to infinities, the most convenient numerically stable representation is to work with the coefficients of the quasi characteristic polynomial. Moreover, the polynomial coefficients are invariant to vertex order permutations and also convey information concerning the cycle structure of the graph. To generalize the representation to edge-weighted graphs, we make use of the reduced Bartholdi zeta function. We prove that the computation of the Ihara coefficients for unweighted graphs is a special case of our proposed method for unit edge weights. We also present a spectral analysis of the Ihara coefficients and indicate their advantages over other graph spectral methods. We apply the proposed graph characterization method to capturing graph-class structure and clustering graphs. Experimental results reveal that the Ihara coefficients are more effective than methods based on Laplacian spectra.
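
    Since the Ihara zeta function of an unweighted graph is the reciprocal of a characteristic-style polynomial of the oriented line graph's adjacency (non-backtracking) matrix, the coefficients can be computed numerically for small graphs. A sketch, noting that numpy.poly returns the coefficients of det(uI - T), which match the Ihara coefficients only up to sign and ordering conventions:

      # Ihara-style coefficients from the non-backtracking matrix (sketch)
      import numpy as np
      import networkx as nx

      def ihara_coefficients(G):
          arcs = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
          idx = {a: i for i, a in enumerate(arcs)}
          T = np.zeros((len(arcs), len(arcs)))
          for (a, b) in arcs:
              for c in G.neighbors(b):
                  if c != a:                        # non-backtracking step b -> c
                      T[idx[(a, b)], idx[(b, c)]] = 1.0
          return np.poly(T)                         # leading-first polynomial coefficients

      coeffs = ihara_coefficients(nx.cycle_graph(5))

    Feature vectors built from such coefficients are invariant to vertex order permutations, which is what makes them usable for clustering graphs.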

  3. Bayesian pedigree inference with small numbers of single nucleotide polymorphisms via a factor-graph representation.

    PubMed

    Anderson, Eric C; Ng, Thomas C

    2016-02-01

    We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.

  4. Software and package applicating for network meta-analysis: A usage-based comparative study.

    PubMed

    Xu, Chang; Niu, Yuming; Wu, Junyi; Gu, Huiyun; Zhang, Chao

    2017-12-21

    To compare and analyze the characteristics and functions of software applications for network meta-analysis (NMA). PubMed, EMbase, The Cochrane Library, the official websites of Bayesian inference Using Gibbs Sampling (BUGS), Stata and R, and Google were searched to collect the software and packages for performing NMA; software and packages published up to March 2016 were included. After collecting the software, packages, and their user guides, we used the software and packages to calculate a typical example. All characteristics, functions, and computed results were compared and analyzed. Ten types of software were included, including programming and non-programming software. They were developed mainly based on Bayesian or frequentist theory. Most types of software have the characteristics of easy operation, easy mastery, exact calculation, or excellent graphing. However, there was no single software that performed accurate calculations with superior graphing; this could only be achieved through the combination of two or more types of software. This study suggests that the user should choose the appropriate software according to personal programming basis, operational habits, and financial ability. Then, the choice of the combination of BUGS and R (or Stata) software to perform the NMA is considered. © 2017 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  5. Identifying Key Drivers of Return Reversal with Dynamical Bayesian Factor Graph.

    PubMed

    Zhao, Shuai; Tong, Yunhai; Wang, Zitian; Tan, Shaohua

    2016-01-01

    In the stock market, return reversal occurs when investors sell overbought stocks and buy oversold stocks, reversing the stocks' price trends. In this paper, we develop a new method to identify key drivers of return reversal by incorporating a comprehensive set of factors derived from different economic theories into one unified dynamical Bayesian factor graph. We then use the model to depict factor relationships and their dynamics, from which we make some interesting discoveries about the mechanism behind return reversals. Through extensive experiments on the US stock market, we conclude that among the various factors, the liquidity factors consistently emerge as key drivers of return reversal, which supports the theory of the liquidity effect. Specifically, we find that stocks with high turnover rates or high Amihud illiquidity measures have a greater probability of experiencing return reversals. Apart from these consistent drivers, we find other drivers of return reversal that generally change from year to year, and they serve as important characteristics for evaluating the trends of stock returns. We also identify some seldom-discussed yet enlightening inter-factor relationships, one of which shows that stocks in the Finance and Insurance industry are more likely to have high Amihud illiquidity measures in comparison with those in other industries. These conclusions are robust for return reversals under different thresholds.

  6. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets

    PubMed Central

    Plaku, Erion; Kavraki, Lydia E.

    2009-01-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
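
    The building block being distributed here is the standard knn-graph construction, which scikit-learn exposes directly on a single machine. A hedged sketch; the data and parameters are placeholders, and the comment outlines one plausible partitioning scheme rather than the paper's message-passing framework.

      # knn graph: connect each point to its k closest points
      import numpy as np
      from sklearn.neighbors import kneighbors_graph

      X = np.random.rand(10000, 60)        # large high-dimensional point set (toy)
      A = kneighbors_graph(X, n_neighbors=8, mode="distance", metric="euclidean")
      # A crude distribution scheme in the spirit of the paper: partition the
      # query points across workers; each worker computes its rows of A against
      # the full data set, and the rows are concatenated afterwards.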

  7. Enhancing Community Detection By Affinity-based Edge Weighting Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Andy; Sanders, Geoffrey; Henson, Van

    Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs that offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and therefore, it is ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.
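
    The scheme can be imitated with any neighborhood-overlap score standing in for the paper's vertex affinity. A sketch using the Jaccard coefficient as that stand-in, followed by weighted community detection (louvain_communities assumes NetworkX 2.8 or later):

      # Affinity-style edge weighting before community detection (sketch)
      import networkx as nx

      G = nx.karate_club_graph()
      for u, v in G.edges():
          # neighborhood overlap as a proxy for clustering strength
          G[u][v]["weight"] = next(iter(nx.jaccard_coefficient(G, [(u, v)])))[2]
      communities = nx.community.louvain_communities(G, weight="weight", seed=0)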

  8. Graph edit distance from spectral seriation.

    PubMed

    Robles-Kelly, Antonio; Hancock, Edwin R

    2005-03-01

    This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.
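
    The seriation step is easy to reproduce: order the vertices by the magnitude of the leading adjacency eigenvector, then align the resulting sequences. The sketch below uses uniform edit costs over degree sequences instead of the paper's eigenvector- and edge-density-derived costs, so it is an illustration of the pipeline, not the published method.

      # Spectral seriation plus a uniform-cost edit distance (sketch)
      import numpy as np
      import networkx as nx

      def seriation(G):
          A = nx.to_numpy_array(G)
          vals, vecs = np.linalg.eigh(A)            # symmetric adjacency matrix
          lead = np.abs(vecs[:, np.argmax(vals)])   # leading eigenvector
          return list(np.argsort(-lead))            # vertex order by component size

      def edit_distance(s, t):
          D = np.zeros((len(s) + 1, len(t) + 1))
          D[:, 0], D[0, :] = np.arange(len(s) + 1), np.arange(len(t) + 1)
          for i in range(1, len(s) + 1):
              for j in range(1, len(t) + 1):
                  D[i, j] = min(D[i - 1, j] + 1, D[i, j - 1] + 1,
                                D[i - 1, j - 1] + (s[i - 1] != t[j - 1]))
          return D[-1, -1]

      g1, g2 = nx.path_graph(6), nx.cycle_graph(6)
      s = [g1.degree(v) for v in seriation(g1)]     # degrees along the serial order
      t = [g2.degree(v) for v in seriation(g2)]
      d = edit_distance(s, t)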

  9. Modular analysis of the probabilistic genetic interaction network.

    PubMed

    Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting

    2011-03-15

    Epistatic Miniarray Profiles (EMAP) has enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited since the data are usually interpreted as a discrete network based on an arbitrary hard threshold. To address such limitations, we adopted a mixture modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated as an effective soft-threshold technique for EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched by gold standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical clustering and Markov clustering. The experimental results show that the Bayesian approach outperforms the others in efficiently recovering biologically significant modules.

  10. The Beta Cell in Its Cluster: Stochastic Graphs of Beta Cell Connectivity in the Islets of Langerhans

    PubMed Central

    Striegel, Deborah A.; Hara, Manami; Periwal, Vipul

    2015-01-01

    Pancreatic islets of Langerhans consist of endocrine cells, primarily α, β and δ cells, which secrete glucagon, insulin, and somatostatin, respectively, to regulate plasma glucose. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Due to the central functional significance of this local connectivity in the placement of β cells in an islet, it is important to characterize it quantitatively. However, quantification of the seemingly stochastic cytoarchitecture of β cells in an islet requires mathematical methods that can capture topological connectivity in the entire β-cell population in an islet. Graph theory provides such a framework. Using large-scale imaging data for thousands of islets containing hundreds of thousands of cells in human organ donor pancreata, we show that quantitative graph characteristics differ between control and type 2 diabetic islets. Further insight into the processes that shape and maintain this architecture is obtained by formulating a stochastic theory of β-cell rearrangement in whole islets, just as the normal equilibrium distribution of the Ornstein-Uhlenbeck process can be viewed as the result of the interplay between a random walk and a linear restoring force. Requiring that rearrangements maintain the observed quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that β-cell rearrangement is dependent on its connectivity in order to maintain an optimal cluster size in both normal and T2D islets. PMID:26266953

  11. The Beta Cell in Its Cluster: Stochastic Graphs of Beta Cell Connectivity in the Islets of Langerhans.

    PubMed

    Striegel, Deborah A; Hara, Manami; Periwal, Vipul

    2015-08-01

    Pancreatic islets of Langerhans consist of endocrine cells, primarily α, β and δ cells, which secrete glucagon, insulin, and somatostatin, respectively, to regulate plasma glucose. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Due to the central functional significance of this local connectivity in the placement of β cells in an islet, it is important to characterize it quantitatively. However, quantification of the seemingly stochastic cytoarchitecture of β cells in an islet requires mathematical methods that can capture topological connectivity in the entire β-cell population in an islet. Graph theory provides such a framework. Using large-scale imaging data for thousands of islets containing hundreds of thousands of cells in human organ donor pancreata, we show that quantitative graph characteristics differ between control and type 2 diabetic islets. Further insight into the processes that shape and maintain this architecture is obtained by formulating a stochastic theory of β-cell rearrangement in whole islets, just as the normal equilibrium distribution of the Ornstein-Uhlenbeck process can be viewed as the result of the interplay between a random walk and a linear restoring force. Requiring that rearrangements maintain the observed quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that β-cell rearrangement is dependent on its connectivity in order to maintain an optimal cluster size in both normal and T2D islets.

  12. Assessment of Overlap of Phylogenetic Transmission Clusters and Communities in Simple Sexual Contact Networks: Applications to HIV-1

    PubMed Central

    Villandre, Luc; Günthard, Huldrych F.; Kouyos, Roger; Stadler, Tanja

    2016-01-01

    Background: Transmission patterns of sexually-transmitted infections (STIs) could relate to the structure of the underlying sexual contact network, whose features are therefore of interest to clinicians. Conventionally, we represent sexual contacts in a population with a graph, that can reveal the existence of communities. Phylogenetic methods help infer the history of an epidemic and incidentally, may help detecting communities. In particular, phylogenetic analyses of HIV-1 epidemics among men who have sex with men (MSM) have revealed the existence of large transmission clusters, possibly resulting from within-community transmissions. Past studies have explored the association between contact networks and phylogenies, including transmission clusters, producing conflicting conclusions about whether network features significantly affect observed transmission history. As far as we know however, none of them thoroughly investigated the role of communities, defined with respect to the network graph, in the observation of clusters. Methods: The present study investigates, through simulations, community detection from phylogenies. We simulate a large number of epidemics over both unweighted and weighted, undirected random interconnected-islands networks, with islands corresponding to communities. We use weighting to modulate distance between islands. We translate each epidemic into a phylogeny, that lets us partition our samples of infected subjects into transmission clusters, based on several common definitions from the literature. We measure similarity between subjects’ island membership indices and transmission cluster membership indices with the adjusted Rand index. Results and Conclusion: Analyses reveal modest mean correspondence between communities in graphs and phylogenetic transmission clusters. We conclude that common methods often have limited success in detecting contact network communities from phylogenies. The rarely-fulfilled requirement that network communities correspond to clades in the phylogeny is their main drawback. Understanding the link between transmission clusters and communities in sexual contact networks could help inform policymaking to curb HIV incidence in MSMs. PMID:26863322
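
    The overlap measure used here is a one-liner in scikit-learn: the adjusted Rand index between island (community) membership and transmission-cluster membership. A toy sketch with made-up labels:

      # Adjusted Rand index between community and cluster labelings
      from sklearn.metrics import adjusted_rand_score

      island  = [0, 0, 0, 1, 1, 1, 2, 2, 2]   # contact-network community per subject
      cluster = [0, 0, 1, 1, 1, 1, 2, 2, 0]   # phylogenetic transmission cluster
      print(adjusted_rand_score(island, cluster))  # 1.0 would mean perfect agreement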

  13. Preserving Differential Privacy in Degree-Correlation based Graph Generation

    PubMed Central

    Wang, Yue; Wu, Xintao

    2014-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
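
    The overall recipe (derive dK statistics, perturb them, regenerate) can be illustrated for the dK-2 series. The sketch below adds Laplace noise with a fixed placeholder scale; calibrating that scale via smooth sensitivity, as the paper does, is the substantive part omitted here.

      # Laplace-perturbed dK-2 statistics (illustrative only)
      import numpy as np
      import networkx as nx
      from collections import Counter

      G = nx.barabasi_albert_graph(500, 3, seed=0)
      deg = dict(G.degree())
      dk2 = Counter(tuple(sorted((deg[u], deg[v]))) for u, v in G.edges())
      epsilon, sensitivity = 1.0, 4.0          # sensitivity is a placeholder value
      rng = np.random.default_rng(0)
      noisy = {pair: max(0.0, count + rng.laplace(0, sensitivity / epsilon))
               for pair, count in dk2.items()}
      # a dK-graph generator would then be driven by the noisy statistics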

  14. Reduced graphs and their applications in chemoinformatics.

    PubMed

    Birchall, Kristian; Gillet, Valerie J

    2011-01-01

    Reduced graphs provide summary representations of chemical structures by collapsing groups of connected atoms into single nodes while preserving the topology of the original structures. This chapter reviews the extensive work that has been carried out on reduced graphs at The University of Sheffield and includes discussion of their application to the representation and search of Markush structures in patents, the varied approaches that have been implemented for similarity searching, their use in cluster representation, the different ways in which they have been applied to extract structure-activity relationships and their use in encoding bioisosteres.

  15. Anomaly clustering in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.

    2009-05-01

    The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In the construction of the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying, and integrating, clusters of anomalous pixels via a graph theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.

  16. Weakly supervised image semantic segmentation based on clustering superpixels

    NASA Astrophysics Data System (ADS)

    Yan, Xiong; Liu, Xiaohua

    2018-04-01

    In this paper, we propose an image semantic segmentation model which is trained from image-level labeled images. The proposed model starts with superpixel segmentation, and features of the superpixels are extracted by a trained CNN. We introduce a superpixel-based graph, followed by applying a graph partition method to group correlated superpixels into clusters. To acquire the inter-label correlations between the image-level labels in the dataset, we utilize not only label co-occurrence statistics but also visual contextual cues. Finally, we formulate the task of mapping appropriate image-level labels to the detected clusters as a convex minimization problem. Experimental results on the MSRC-21 and LabelMe datasets show that the proposed method performs better than most weakly supervised methods and is even comparable to fully supervised methods.

  17. Classification of ligand molecules in PDB with graph match-based structural superposition.

    PubMed

    Shionyu-Mitsuyama, Clara; Hijikata, Atsushi; Tsuji, Toshiyuki; Shirai, Tsuyoshi

    2016-12-01

    The fast heuristic graph match algorithm for small molecules, COMPLIG, was improved by adding a structural superposition process to verify the atom-atom matching. The modified method was used to classify the small-molecule ligands in the Protein Data Bank (PDB) by their three-dimensional structures, and 16,660 types of ligands in the PDB were classified into 7561 clusters. In contrast, a classification by the previous method (without structure superposition) generated 3371 clusters from the same ligand set. The characteristic feature of the current classification system is the increased number of singleton clusters, which contain only one ligand molecule each. Inspection of the singletons present in the current classification system but not in the previous one implied that the major factors for their isolation were differences in chirality, cyclic conformations, separation of substructures, and bond length. Comparisons between the current and previous classification systems revealed that the superposition-based classification was effective in clustering functionally related ligands, such as drugs targeted to specific biological processes, owing to the strictness of the atom-atom matching.

  18. Sensor-based monitoring and inspection of surface morphology in ultraprecision manufacturing processes

    NASA Astrophysics Data System (ADS)

    Rao, Prahalad Krishna

    This research proposes approaches for monitoring and inspection of surface morphology with respect to two ultraprecision/nanomanufacturing processes, namely, ultraprecision machining (UPM) and chemical mechanical planarization (CMP). The methods illustrated in this dissertation are motivated from the compelling need for in situ process monitoring in nanomanufacturing and invoke concepts from diverse scientific backgrounds, such as artificial neural networks, Bayesian learning, and algebraic graph theory. From an engineering perspective, this work has the following contributions: 1. A combined neural network and Bayesian learning approach for early detection of UPM process anomalies by integrating data from multiple heterogeneous in situ sensors (force, vibration, and acoustic emission) is developed. The approach captures process drifts in UPM of aluminum 6061 discs within 15 milliseconds of their inception and is therefore valuable for minimizing yield losses. 2. CMP process dynamics are mathematically represented using a deterministic multi-scale hierarchical nonlinear differential equation model. This process-machine interaction (PMI) model is evocative of the various physio-mechanical aspects in CMP and closely emulates experimentally acquired vibration signal patterns, including complex nonlinear dynamics manifest in the process. By combining the PMI model predictions with features gathered from wirelessly acquired CMP vibration signal patterns, CMP process anomalies, such as pad wear, and drifts in polishing were identified in their nascent stage with high fidelity (R2 ~ 75%). 3. An algebraic graph theoretic approach for quantifying nano-surface morphology from optical micrograph images is developed. The approach enables a parsimonious representation of the topological relationships between heterogeneous nano-surface features, which are enshrined in graph theoretic entities, namely, the similarity, degree, and Laplacian matrices. Topological invariant measures (e.g., Fiedler number, Kirchhoff index) extracted from these matrices are shown to be sensitive to evolving nano-surface morphology. For instance, we observed that prominent nanoscale morphological changes on CMP processed Cu wafers, although discernible visually, could not be tractably quantified using statistical metrology parameters, such as arithmetic average roughness (Sa), root mean square roughness (Sq), etc. In contrast, CMP induced nanoscale surface variations were captured on invoking graph theoretic topological invariants. Consequently, the graph theoretic approach can enable timely, non-contact, and in situ metrology of semiconductor wafers by obviating the need for reticent profile mapping techniques (e.g., AFM, SEM, etc.), and thereby prevent the propagation of yield losses over long production runs.
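
    The third contribution translates into code naturally: build a weighted similarity graph over surface features and read off spectral invariants such as the Fiedler number. A hedged sketch with random feature vectors standing in for micrograph patches:

      # Fiedler number of a similarity graph over surface patches (sketch)
      import numpy as np
      import networkx as nx

      patches = np.random.rand(64, 16)             # toy surface-patch feature vectors
      D2 = np.square(patches[:, None] - patches[None, :]).sum(-1)
      S = np.exp(-D2)                              # Gaussian similarity matrix
      np.fill_diagonal(S, 0.0)                     # no self-loops
      G = nx.from_numpy_array(S)
      fiedler = nx.algebraic_connectivity(G, weight="weight")

    A drift in surface morphology shows up as a shift in such invariants, which is what makes them candidates for in situ metrology.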

  19. Fast graph-based relaxed clustering for large data sets using minimal enclosing ball.

    PubMed

    Qian, Pengjiang; Chung, Fu-Lai; Wang, Shitong; Deng, Zhaohong

    2012-06-01

    Although graph-based relaxed clustering (GRC) is one of the spectral clustering algorithms with straightforwardness and self-adaptability, it is sensitive to the parameters of the adopted similarity measure and also has high time complexity O(N^3) which severely weakens its usefulness for large data sets. In order to overcome these shortcomings, after introducing certain constraints for GRC, an enhanced version of GRC [constrained GRC (CGRC)] is proposed to increase the robustness of GRC to the parameters of the adopted similarity measure, and accordingly, a novel algorithm called fast GRC (FGRC) based on CGRC is developed in this paper by using the core-set-based minimal enclosing ball approximation. A distinctive advantage of FGRC is that its asymptotic time complexity is linear with the data set size N. At the same time, FGRC also inherits the straightforwardness and self-adaptability from GRC, making the proposed FGRC a fast and effective clustering algorithm for large data sets. The advantages of FGRC are validated by various benchmarking and real data sets.

  20. A Technique of Two-Stage Clustering Applied to Environmental and Civil Engineering and Related Methods of Citation Analysis.

    ERIC Educational Resources Information Center

    Miyamoto, S.; Nakayama, K.

    1983-01-01

    A method of two-stage clustering of literature based on citation frequency is applied to 5,065 articles from 57 journals in environmental and civil engineering. Results of related methods of citation analysis (hierarchical graph, clustering of journals, multidimensional scaling) applied to same set of articles are compared. Ten references are…

  1. Graph-based segmentation for RGB-D data using 3-D geometry enhanced superpixels.

    PubMed

    Yang, Jingyu; Gan, Ziqiao; Li, Kun; Hou, Chunping

    2015-05-01

    With the advances of depth sensing technologies, color image plus depth information (referred to as RGB-D data hereafter) is more and more popular for comprehensive description of 3-D scenes. This paper proposes a two-stage segmentation method for RGB-D data: 1) oversegmentation by 3-D geometry enhanced superpixels and 2) graph-based merging with label cost from superpixels. In the oversegmentation stage, 3-D geometrical information is reconstructed from the depth map. Then, a K-means-like clustering method is applied to the RGB-D data for oversegmentation using an 8-D distance metric constructed from both color and 3-D geometrical information. In the merging stage, treating each superpixel as a node, a graph-based model is set up to relabel the superpixels into semantically-coherent segments. In the graph-based model, RGB-D proximity, texture similarity, and boundary continuity are incorporated into the smoothness term to exploit the correlations of neighboring superpixels. To obtain a compact labeling, the label term is designed to penalize labels linking to similar superpixels that likely belong to the same object. Both the proposed 3-D geometry enhanced superpixel clustering method and the graph-based merging method from superpixels are evaluated by qualitative and quantitative results. By the fusion of color and depth information, the proposed method achieves superior segmentation performance over several state-of-the-art algorithms.

  2. Change of Brain Functional Connectivity in Patients With Spinal Cord Injury: Graph Theory Based Approach.

    PubMed

    Min, Yu-Sun; Chang, Yongmin; Park, Jang Woo; Lee, Jong-Min; Cha, Jungho; Yang, Jin-Ju; Kim, Chul-Hyun; Hwang, Jong-Moon; Yoo, Ji-Na; Jung, Tae-Du

    2015-06-01

    To investigate the global functional reorganization of the brain following spinal cord injury with a graph theory based approach, by creating whole-brain functional connectivity networks from resting-state functional magnetic resonance imaging (rs-fMRI), characterizing the reorganization of these networks using graph theoretical metrics, and comparing these metrics between patients with spinal cord injury (SCI) and age-matched controls. Twenty patients with incomplete cervical SCI (14 males, 6 females; age, 55±14.1 years) and 20 healthy subjects (10 males, 10 females; age, 52.9±13.6 years) participated in this study. To analyze the characteristics of the whole-brain network constructed with functional connectivity using rs-fMRI, graph theoretical measures were calculated, including clustering coefficient, characteristic path length, global efficiency, and small-worldness. Clustering coefficient, global efficiency, and small-worldness did not show any difference between controls and SCI patients across all density ranges. The characteristic path length normalized to random networks was higher in SCI patients than in controls and reached statistical significance at 12%-13% of density (p<0.05, uncorrected). The graph theoretical approach in brain functional connectivity might be helpful to reveal the information processing after SCI. These findings imply that patients with SCI can build on preserved competent brain control. Further analyses, such as topological rearrangement and hub region identification, will be needed for better understanding of neuroplasticity in patients with SCI.
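
    The metrics involved are all standard. The sketch below computes them for a toy graph and normalizes against a density-matched random graph, assuming both graphs are connected; a Watts-Strogatz graph stands in for a thresholded connectivity network.

      # Clustering, path length, and small-worldness versus a random reference
      import networkx as nx

      G = nx.watts_strogatz_graph(90, 10, 0.1, seed=0)           # toy stand-in graph
      R = nx.gnm_random_graph(90, G.number_of_edges(), seed=0)   # density-matched random
      C, Cr = nx.average_clustering(G), nx.average_clustering(R)
      L, Lr = (nx.average_shortest_path_length(H) for H in (G, R))
      sigma = (C / Cr) / (L / Lr)   # small-worldness; sigma >> 1 suggests small-world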

  3. Generation of Individual Whole-Brain Atlases With Resting-State fMRI Data Using Simultaneous Graph Computation and Parcellation.

    PubMed

    Wang, J; Hao, Z; Wang, H

    2018-01-01

    The human brain can be characterized as functional networks. Therefore, it is important to subdivide the brain appropriately in order to construct reliable networks. Resting-state functional connectivity-based parcellation is a commonly used technique to fulfill this goal. Here we propose a novel individual subject-level parcellation approach based on whole-brain resting-state functional magnetic resonance imaging (fMRI) data. We first used a supervoxel method known as simple linear iterative clustering directly on resting-state fMRI time series to generate supervoxels, and then combined similar supervoxels to generate clusters using a clustering method known as graph-without-cut (GWC). The GWC approach incorporates spatial information and multiple features of the supervoxels by energy minimization, simultaneously yielding an optimal graph and brain parcellation. Meanwhile, it theoretically guarantees that the actual cluster number is exactly equal to the initialized cluster number. By comparing the results of the GWC approach and those of the random GWC approach, we demonstrated that GWC does not rely heavily on spatial structures, thus avoiding the challenges encountered in some previous whole-brain parcellation approaches. In addition, by comparing the GWC approach to two competing approaches, we showed that GWC achieved better parcellation performances in terms of different evaluation metrics. The proposed approach can be used to generate individualized brain atlases for applications related to cognition, development, aging, disease, personalized medicine, etc. The major source codes of this study have been made publicly available at https://github.com/yuzhounh/GWC.

  4. A Poisson nonnegative matrix factorization method with parameter subspace clustering constraint for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei

    2017-06-01

    A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) has been presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC could enlarge differences among ground objects and helps finding endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The problem is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented to make comparisons with the performance of PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square-Error (RMSE), and especially it could identify good endmembers for ground objects with smaller spectrum divergences.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagberg, Aric; Swart, Pieter; S Chult, Daniel

    NetworkX is a Python language package for exploration and analysis of networks and network algorithms. The core package provides data structures for representing many types of networks, or graphs, including simple graphs, directed graphs, and graphs with parallel edges and self loops. The nodes in NetworkX graphs can be any (hashable) Python object and edges can contain arbitrary data; this flexibility makes NetworkX ideal for representing networks found in many different scientific fields. In addition to the basic data structures, many graph algorithms are implemented for calculating network properties and structure measures: shortest paths, betweenness centrality, clustering, degree distribution, and many more. NetworkX can read and write various graph formats for easy exchange with existing data, and provides generators for many classic graphs and popular graph models, such as the Erdős-Rényi, Small World, and Barabási-Albert models. The ease-of-use and flexibility of the Python programming language together with connection to the SciPy tools make NetworkX a powerful tool for scientific computations. We discuss some of our recent work studying synchronization of coupled oscillators to demonstrate how NetworkX enables research in the field of computational networks.
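
    A few lines suffice to show the package's flavor; these are standard NetworkX calls.

      # Typical NetworkX usage: generators plus structure measures
      import networkx as nx

      G = nx.erdos_renyi_graph(100, 0.05, seed=42)    # classic random-graph generator
      print(nx.average_clustering(G))                 # clustering
      print(nx.degree_histogram(G)[:10])              # degree distribution (head)
      bc = nx.betweenness_centrality(G)               # betweenness centrality
      dist = nx.shortest_path_length(G, source=0)     # shortest paths from node 0
      H = nx.watts_strogatz_graph(100, 6, 0.1)        # Small World model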

  6. Efficient Extraction of High Centrality Vertices in Distributed Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Frincu, Marc; Raghavendra, Cauligi S.

    2014-09-09

    Betweenness centrality (BC) is an important measure for identifying high-value or critical vertices in graphs, in a variety of domains such as communication networks, road networks, and social graphs. However, calculating betweenness values is prohibitively expensive and, more often, domain experts are interested only in the vertices with the highest centrality values. In this paper, we first propose a partition-centric algorithm (MS-BC) to calculate BC for a large distributed graph that optimizes resource utilization and improves overall performance. Further, we extend the notion of approximate BC by pruning the graph and removing a subset of edges and vertices that contribute the least to the betweenness values of other vertices (MSL-BC), which further improves the runtime performance. We evaluate the proposed algorithms using a mix of real-world and synthetic graphs on an HPC cluster and analyze their strengths and weaknesses. The experimental results show an improvement in performance of up to 12x for large sparse graphs as compared to the state-of-the-art, and at the same time highlight the need for better partitioning methods to enable a balanced workload across partitions for unbalanced graphs such as small-world or power-law graphs.
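
    For the common case where only the top-ranked vertices matter, NetworkX already supports a single-machine version of the trade-off: passing k to betweenness_centrality estimates BC from k sampled source vertices instead of all of them. This is pivot sampling, not the paper's MS-BC/MSL-BC algorithms.

      # Approximate betweenness centrality via k sampled pivots
      import networkx as nx

      G = nx.barabasi_albert_graph(2000, 3, seed=1)
      bc = nx.betweenness_centrality(G, k=100, seed=1)   # sampled, much cheaper
      top10 = sorted(bc, key=bc.get, reverse=True)[:10]  # highest-centrality vertices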

  7. MISAGA: An Algorithm for Mining Interesting Subgraphs in Attributed Graphs.

    PubMed

    He, Tiantian; Chan, Keith C C

    2018-05-01

    An attributed graph contains vertices that are associated with a set of attribute values. Mining clusters or communities, which are interesting subgraphs in the attributed graph, is one of the most important tasks of graph analytics. Many problems can be defined as the mining of interesting subgraphs in attributed graphs. Algorithms that discover subgraphs based on predefined topologies cannot be used to tackle these problems. To discover interesting subgraphs in the attributed graph, we propose an algorithm called the mining interesting subgraphs in attributed graph algorithm (MISAGA). MISAGA performs its tasks by first using a probabilistic measure to determine whether the strength of association between a pair of attribute values is strong enough to be interesting. Given the interesting pairs of attribute values, the degree of association is then computed for each pair of vertices using an information-theoretic measure. Based on the edge structure and degree of association between each pair of vertices, MISAGA identifies interesting subgraphs by formulating the task as a constrained optimization problem and solves it by identifying the optimal affiliation of subgraphs for the vertices in the attributed graph. MISAGA has been tested with several large-sized real graphs and is found to be potentially very useful for various applications.

  8. Small-world bias of correlation networks: From brain to climate

    NASA Astrophysics Data System (ADS)

    Hlinka, Jaroslav; Hartman, David; Jajcay, Nikola; Tomeček, David; Tintěra, Jaroslav; Paluš, Milan

    2017-03-01

    Complex systems are commonly characterized by the properties of their graph representation. Dynamical complex systems are then typically represented by a graph of temporal dependencies between time series of state variables of their subunits. It has been shown recently that graphs constructed in this way tend to have relatively clustered structure, potentially leading to spurious detection of small-world properties even in the case of systems with no or randomly distributed true interactions. However, the strength of this bias depends heavily on a range of parameters and its relevance for real-world data has not yet been established. In this work, we assess the relevance of the bias using two examples of multivariate time series recorded in natural complex systems. The first is the time series of local brain activity as measured by functional magnetic resonance imaging in resting healthy human subjects, and the second is the time series of average monthly surface air temperature coming from a large reanalysis of climatological data over the period 1948-2012. In both cases, the clustering in the thresholded correlation graph is substantially higher compared with a realization of a density-matched random graph, while the shortest paths are relatively short, showing thus distinguishing features of small-world structure. However, comparable or even stronger small-world properties were reproduced in correlation graphs of model processes with randomly scrambled interconnections. This suggests that the small-world properties of the correlation matrices of these real-world systems indeed do not reflect genuinely the properties of the underlying interaction structure, but rather result from the inherent properties of correlation matrix.

  10. Detection of protein complex from protein-protein interaction network using Markov clustering

    NASA Astrophysics Data System (ADS)

    Ochieng, P. J.; Kusuma, W. A.; Haryanto, T.

    2017-05-01

    Detection of complexes, or groups of functionally related proteins, is an important challenge in analysing biological networks. However, existing algorithms to identify protein complexes are insufficient when applied to dense networks of experimentally derived interaction data. Therefore, we introduce a graph clustering method based on the Markov clustering algorithm to identify protein complexes within highly interconnected protein-protein interaction networks. A protein-protein interaction network was first constructed to develop a geometrical network; the network was then partitioned using Markov clustering to detect protein complexes. The interest of the proposed method is illustrated by its application to human proteins associated with type II diabetes mellitus. Flow simulation of the MCL algorithm was initially performed, and topological properties of the resultant network were analysed for detection of protein complexes. The results indicate that the proposed method successfully detected a total of 34 complexes, with 11 complexes consisting of overlapping modules and 20 of non-overlapping modules. The major complex consisted of 102 proteins and 521 interactions, with cluster modularity and density of 0.745 and 0.101, respectively. The comparison analysis revealed that MCL outperforms the AP, MCODE and SCPS algorithms, with a high clustering coefficient (0.751), network density and modularity index (0.630). This demonstrates that MCL is the most reliable and efficient of the compared graph clustering algorithms for detection of protein complexes from PPI networks.
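    For readers unfamiliar with MCL's flow simulation, the following minimal numpy sketch (not the authors' pipeline, and with illustrative inflation and expansion parameters) alternates expansion and inflation on a column-stochastic matrix and reads clusters off the attractor rows.

```python
# Hedged sketch of Markov clustering (MCL) on an adjacency matrix.
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, max_iter=100, tol=1e-6):
    A = adjacency.astype(float) + np.eye(len(adjacency))   # add self-loops
    M = A / A.sum(axis=0)                                  # column-stochastic
    for _ in range(max_iter):
        M_prev = M
        M = np.linalg.matrix_power(M, expansion)           # expansion step
        M = M ** inflation                                 # inflation step
        M /= M.sum(axis=0)
        if np.abs(M - M_prev).max() < tol:
            break
    # Rows with surviving mass are attractors; their nonzero columns form
    # (possibly overlapping) clusters.
    return {tuple(np.nonzero(row > 1e-8)[0]) for row in M if row.max() > 1e-8}

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(mcl(A))          # expected: two clusters, (0, 1, 2) and (3, 4, 5)
```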

  11. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented, which leads to an interesting segmentation of the Austrian labour market.

  12. Bipartite graphs as models of population structures in evolutionary multiplayer games.

    PubMed

    Peña, Jorge; Rochat, Yannick

    2012-01-01

    By combining evolutionary game theory and graph theory, "games on graphs" study the evolutionary dynamics of frequency-dependent selection in population structures modeled as geographical or social networks. Networks are usually represented by means of unipartite graphs, and social interactions by two-person games such as the famous prisoner's dilemma. Unipartite graphs have also been used for modeling interactions going beyond pairwise interactions. In this paper, we argue that bipartite graphs are a better alternative to unipartite graphs for describing population structures in evolutionary multiplayer games. To illustrate this point, we make use of bipartite graphs to investigate, by means of computer simulations, the evolution of cooperation under the conventional and the distributed N-person prisoner's dilemma. We show that several implicit assumptions arising from the standard approach based on unipartite graphs (such as the definition of replacement neighborhoods, the intertwining of individual and group diversity, and the large overlap of interaction neighborhoods) can have a large impact on the resulting evolutionary dynamics. Our work provides a clear example of the importance of construction procedures in games on graphs, of the suitability of bigraphs and hypergraphs for computational modeling, and of the importance of concepts from social network analysis such as centrality, centralization and bipartite clustering for the understanding of dynamical processes occurring on networked population structures.

  13. Phase transition in the parametric natural visibility graph.

    PubMed

    Snarskii, A A; Bezsudnov, I V

    2016-10-01

    We investigate time series by mapping them to complex networks using a parametric natural visibility graph (PNVG) algorithm that generates graphs depending on an arbitrary continuous parameter, the angle of view. We study the behavior of the relative number of clusters in the PNVG near the critical value of the angle of view. Artificial and experimental time series of different natures are used for numerical PNVG investigations to find critical exponents above and below the critical point, as well as the exponent in the finite-size scaling regime. Altogether, they allow us to find the critical exponent of the correlation length for the PNVG. The set of calculated critical exponents satisfies the basic Widom relation. The PNVG is found to demonstrate scaling behavior. Our results reveal the similarity between the behavior of the relative number of clusters in the PNVG and the order parameter in the theory of second-order phase transitions. We show that the PNVG is another example of a system (in addition to magnetic, percolation, superconductivity, etc.) with an observed second-order phase transition.
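    The construction is compact enough to sketch. The code below builds a natural visibility graph and keeps an edge only when the slope angle of the line of sight does not exceed the angle of view; the exact PNVG edge-selection convention in the paper may differ, so the sign convention here is an assumption. Counting connected components then gives the relative number of clusters the abstract studies.

```python
# Hedged sketch of a parametric natural visibility graph (PNVG).
import numpy as np
import networkx as nx

def pnvg(series, alpha_deg):
    t = np.arange(len(series))
    G = nx.Graph()
    G.add_nodes_from(t)
    for a in range(len(series)):
        for b in range(a + 1, len(series)):
            # natural visibility: no intermediate point blocks the a-b line
            line = series[b] + (series[a] - series[b]) * (t[a + 1:b] - b) / (a - b)
            if np.all(series[a + 1:b] < line):
                angle = np.degrees(np.arctan2(series[b] - series[a], b - a))
                if angle <= alpha_deg:       # assumed angle-of-view rule
                    G.add_edge(a, b)
    return G

y = np.random.default_rng(0).standard_normal(200)
for alpha in (-60, 0, 60, 90):
    print(f"alpha = {alpha:4d} deg -> clusters:",
          nx.number_connected_components(pnvg(y, alpha)))
```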

  14. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below the percolation threshold on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  15. Two-Phase and Graph-Based Clustering Methods for Accurate and Efficient Segmentation of Large Mass Spectrometry Images.

    PubMed

    Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine

    2017-11-07

    Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU- and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method to overcome this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance, acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments, and we evaluate these results by comparison with histopathology.

  16. Verification of Bayesian Clustering in Travel Behaviour Research – First Step to Macroanalysis of Travel Behaviour

    NASA Astrophysics Data System (ADS)

    Satra, P.; Carsky, J.

    2018-04-01

    Our research looks at travel behaviour from a macroscopic view, taking one municipality as the basic unit. The travel behaviour of one municipality as a whole becomes one piece of data in the research of travel behaviour over a larger area, perhaps a country. Data pre-processing is used to cluster the municipalities into groups that show similarities in their travel behaviour. Such groups can then be researched for the reasons behind their prevailing pattern of travel behaviour, without distortion caused by municipalities with a different pattern. This paper deals with the actual settings of the clustering process, which is based on Bayesian statistics, particularly the mixture model. An optimization of the setting parameters based on the correlation of pointer model parameters and the relative number of data in clusters is a helpful, however not fully reliable, method. Thus, a method for graphic representation of clusters needs to be developed in order to check their quality. Training the setting parameters in 2D has proven to be a beneficial method, because it allows visual control of the produced clusters. The clustering is better applied to separate groups of municipalities, where competition of only identical transport modes can be found.

  17. Application of bayesian networks to real-time flood risk estimation

    NASA Astrophysics Data System (ADS)

    Garrote, L.; Molina, M.; Blasco, G.

    2003-04-01

    This paper presents the application of a computational paradigm taken from the field of artificial intelligence - the Bayesian network - to model the behaviour of hydrologic basins during floods. The final goal of this research is to develop representation techniques for hydrologic simulation models in order to define, develop and validate a mechanism, supported by a software environment, oriented to build decision models for the prediction and management of river floods in real time. The emphasis is placed on providing decision makers with tools to incorporate their knowledge of basin behaviour, usually formulated in terms of rainfall-runoff models, in the process of real-time decision making during floods. A rainfall-runoff model is only a step in the process of decision making. If a reliable rainfall forecast is available and the rainfall-runoff model is well calibrated, decisions can be based mainly on model results. However, in most practical situations, uncertainties in rainfall forecasts or model performance have to be incorporated in the decision process. The computational paradigm adopted for the simulation of hydrologic processes is the Bayesian network. A Bayesian network is a directed acyclic graph that represents causal influences between linked variables. Under this representation, uncertain qualitative variables are related through causal relations quantified with conditional probabilities. The solution algorithm allows the computation of the expected probability distribution of unknown variables conditioned on the observations. An approach to represent hydrologic processes by Bayesian networks with temporal and spatial extensions is presented in this paper, together with a methodology for the development of Bayesian models using results produced by deterministic hydrologic simulation models.
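    As a concrete miniature of the paradigm (not the authors' basin model), consider a three-node chain rainfall -> runoff -> flood with made-up conditional probability tables; the conditional probability of flooding follows by summing over the hidden runoff state.

```python
# Hedged sketch: exact inference by enumeration in a tiny Bayesian network.
p_runoff = {"heavy": {"high": 0.8, "low": 0.2},     # P(runoff | rainfall)
            "light": {"high": 0.1, "low": 0.9}}
p_flood = {"high": 0.6, "low": 0.05}                # P(flood=yes | runoff)

def p_flood_given_rain(rain):
    # Marginalize the hidden runoff state along the chain.
    return sum(p_runoff[rain][ro] * p_flood[ro] for ro in ("high", "low"))

for rain in ("heavy", "light"):
    print(f"P(flood | rainfall={rain}) = {p_flood_given_rain(rain):.3f}")
```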

  18. An advanced method for classifying atmospheric circulation types based on prototypes connectivity graph

    NASA Astrophysics Data System (ADS)

    Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros

    2012-11-01

    Classification of weather maps at various isobaric levels has been used for many years as a methodological tool in problems related to meteorology, climatology, atmospheric pollution and other fields. Initially the classification was performed manually. The criteria used by the person performing the classification are features of isobars or isopleths of geopotential height, depending on the type of maps to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable qualities of the meteorology specialists involved, they are typically subjective and time consuming. In recent years, different automated methods for atmospheric circulation classification have been proposed, which produce so-called objective classifications. In this paper a new method of atmospheric circulation classification of isobaric maps is presented. The method is based on graph theory. It starts with an intelligent prototype selection using an over-partitioning mode of the fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset, and produces the clusters based on the contemporary dominant sets clustering method. Graph theory allows a more efficient representation of spatially correlated data compared with the classical Euclidean-space representations used in conventional classification methods. The method has been applied to the classification of the 850 hPa atmospheric circulation over the Eastern Mediterranean. The evaluation of the automated methods is performed using statistical indexes; results indicate that the classification is adequately comparable with other state-of-the-art automated map classification methods for a variable number of clusters.

  19. Robust Bayesian hypocentre and uncertainty region estimation: the effect of heavy-tailed distributions and prior information in cases with poor, inconsistent and insufficient arrival times

    NASA Astrophysics Data System (ADS)

    Martinsson, J.

    2013-03-01

    We propose methods for robust Bayesian inference of the hypocentre in the presence of poor, inconsistent and insufficient phase arrival times. The objectives are to increase the robustness, the accuracy and the precision by introducing heavy-tailed distributions and an informative prior distribution of the seismicity. The effects of the proposed distributions are studied under real measurement conditions in two underground mine networks and validated using 53 blasts with known hypocentres. To increase the robustness against poor, inconsistent or insufficient arrivals, a Gaussian Mixture Model is used as a hypocentre prior distribution to describe the seismically active areas, where the parameters are estimated based on previously located events in the region. The prior is truncated to constrain the solution to valid geometries, for example below the ground surface, excluding known cavities, voids and fractured zones. To reduce the sensitivity to outliers, different heavy-tailed distributions are evaluated to model the likelihood distribution of the arrivals given the hypocentre and the origin time. Among these distributions, the multivariate t-distribution is shown to produce the overall best performance, where the tail-mass adapts to the observed data. Hypocentre and uncertainty region estimates are based on simulations from the posterior distribution using Markov Chain Monte Carlo techniques. Velocity graphs (equivalent to traveltime graphs) are estimated using blasts from known locations, and applied to reduce the main uncertainties and thereby the final estimation error. To focus on the behaviour and the performance of the proposed distributions, a basic single-event Bayesian procedure is considered in this study for clarity. Estimation results are shown with different distributions, with and without prior distribution of seismicity, with wrong prior distribution, with and without error compensation, with and without error description, with insufficient arrival times and in the presence of significant outliers. A particular focus is on visual results and comparisons to give a better understanding of the Bayesian advantage and to show the effects of heavy-tailed distributions and informative prior information on real data.

  20. Unimodular lattice triangulations as small-world and scale-free random graphs

    NASA Astrophysics Data System (ADS)

    Krüger, B.; Schmidt, E. M.; Mecke, K.

    2015-02-01

    Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.

  1. Critical percolation clusters in seven dimensions and on a complete graph

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Hou, Pengcheng; Wang, Junfeng; Ziff, Robert M.; Deng, Youjin

    2018-02-01

    We study critical bond percolation on a seven-dimensional hypercubic lattice with periodic boundary conditions (7D) and on the complete graph (CG) of finite volume (number of vertices) $V$. We numerically confirm that for both cases, the critical number density $n(s,V)$ of clusters of size $s$ obeys a scaling form $n(s,V) \sim s^{-\tau}\tilde{n}(s/V^{d_f^*})$ with identical volume fractal dimension $d_f^* = 2/3$ and exponent $\tau = 1 + 1/d_f^* = 5/2$. We then classify occupied bonds into bridge bonds, which include branch and junction bonds, and nonbridge bonds; a bridge bond is a branch bond if and only if its deletion produces at least one tree. Deleting branch bonds from percolation configurations produces leaf-free configurations, whereas deleting all bridge bonds leads to bridge-free configurations composed of blobs. It is shown that the fraction of nonbridge (biconnected) bonds vanishes, $\rho_{n,\mathrm{CG}} \to 0$, for large CGs, but converges to a finite value, $\rho_{n,\mathrm{7D}} = 0.0061931(7)$, for the 7D hypercube. Further, we observe that while the bridge-free dimension $d_{\mathrm{bf}}^* = 1/3$ holds for both the CG and 7D cases, the volume fractal dimensions of the leaf-free clusters are different: $d_{\mathrm{lf,7D}}^* = 0.669(9) \approx 2/3$ and $d_{\mathrm{lf,CG}}^* = 0.3337(17) \approx 1/3$. On the CG and in 7D, the whole, leaf-free, and bridge-free clusters all have the shortest-path volume fractal dimension $d_{\min}^* \approx 1/3$, characterizing their graph diameters. We also study the behavior of the number and the size distribution of leaf-free and bridge-free clusters. For the number of clusters, we numerically find that the number of leaf-free and bridge-free clusters on the CG scales as $\sim \ln V$, while for 7D they scale as $\sim V$. For the size distribution, we find that the behavior on the CG is governed by a modified Fisher exponent $\tau' = 1$, while for leaf-free clusters in 7D it is governed by the Fisher exponent $\tau = 5/2$. The size distribution of bridge-free clusters in 7D displays two-scaling behavior with exponents $\tau = 4$ and $\tau' = 1$. The probability distribution $P(C_1,V)\,dC_1$ of the largest cluster of size $C_1$ for whole percolation configurations is observed to follow a single-variable function $\bar{P}(x)\,dx$, with $x \equiv C_1/V^{d_f^*}$, for both the CG and 7D. Up to a rescaling factor for the variable $x$, the probability functions for the CG and 7D collapse on top of each other within the entire range of $x$. The analytical expressions in the $x \to 0$ and $x \to \infty$ limits are further confirmed. Our work demonstrates that the geometric structure of high-dimensional percolation clusters cannot be fully accounted for by their complete-graph counterparts.
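    The complete-graph half of the scaling claim can be checked numerically in a few lines, since critical bond percolation on the CG is an Erdős-Rényi graph $G(V, p = 1/V)$. The sketch below (with illustrative sizes and fit range) tallies cluster sizes and fits the exponent $\tau$, which should come out near $5/2$.

```python
# Hedged sketch: cluster-size exponent for critical percolation on the CG.
import collections
import numpy as np
import networkx as nx

V, runs = 10_000, 20
counts = collections.Counter()
for seed in range(runs):
    G = nx.fast_gnp_random_graph(V, 1.0 / V, seed=seed)   # critical point
    counts.update(len(c) for c in nx.connected_components(G))

sizes = np.array(sorted(s for s in counts if 4 <= s <= 60))
density = np.array([counts[s] / (runs * V) for s in sizes])   # n(s, V)
slope = np.polyfit(np.log(sizes), np.log(density), 1)[0]
print(f"fitted exponent tau ~ {-slope:.2f}  (theory: 5/2)")
```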

  2. Perron-Frobenius theorem on the superfluid transition of an ultracold Fermi gas

    NASA Astrophysics Data System (ADS)

    Sakumichi, Naoyuki; Kawakami, Norio; Ueda, Masahito

    2014-05-01

    The Perron-Frobenius theorem is applied to identify the superfluid transition of the BCS-BEC crossover based on a cluster expansion method of Lee and Yang. Here, the cluster expansion is a systematic expansion of the equation of state (EOS) in terms of the fugacity $z = \exp(\beta\mu)$ as $\beta p \lambda^3 = 2(z + b_2 z^2 + b_3 z^3 + \cdots)$, with inverse temperature $\beta = (k_B T)^{-1}$, chemical potential $\mu$, pressure $p$, and thermal de Broglie length $\lambda = (2\pi\hbar^2\beta/m)^{1/2}$. According to the method of Lee and Yang, the EOS is expressed in terms of Lee-Yang graphs. A singularity of an infinite series of ladder-type Lee-Yang graphs is analyzed. We point out that the singularity is governed by the Perron-Frobenius eigenvalue of a certain primitive matrix, which is defined in terms of the two-body cluster functions and the Fermi distribution functions. As a consequence, it is found that there exists a unique fugacity at the phase transition point, which implies that there is no fragmentation of Bose-Einstein condensates of dimers and Cooper pairs at the ladder-approximation level of Lee-Yang graphs. An application to a BEC of strongly bound dimers is also made.

  3. Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.

    2017-05-01

    Segmentation is the fundamental step for recognizing and extracting objects from the point cloud of a 3D scene. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free, completely automatic, but parametric solution for segmenting a 3D point cloud. More precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure can increase the efficiency and robustness of the segmentation process, suppressing the negative effect of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly utilize merely pairwise information. By the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied to more general applications. Experiments using different datasets have demonstrated that our proposed methods can achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.

  4. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  5. A Bayesian cluster analysis method for single-molecule localization microscopy data.

    PubMed

    Griffié, Juliette; Shannon, Michael; Bromley, Claire L; Boelen, Lies; Burn, Garth L; Williamson, David J; Heard, Nicholas A; Cope, Andrew P; Owen, Dylan M; Rubin-Delanchy, Patrick

    2016-12-01

    Cell function is regulated by the spatiotemporal organization of the signaling machinery, and a key facet of this is molecular clustering. Here, we present a protocol for the analysis of clustering in data generated by 2D single-molecule localization microscopy (SMLM)-for example, photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM). Three features of such data can cause standard cluster analysis approaches to be ineffective: (i) the data take the form of a list of points rather than a pixel array; (ii) there is a non-negligible unclustered background density of points that must be accounted for; and (iii) each localization has an associated uncertainty in regard to its position. These issues are overcome using a Bayesian, model-based approach. Many possible cluster configurations are proposed and scored against a generative model, which assumes Gaussian clusters overlaid on a completely spatially random (CSR) background, before every point is scrambled by its localization precision. We present the process of generating simulated and experimental data that are suitable to our algorithm, the analysis itself, and the extraction and interpretation of key cluster descriptors such as the number of clusters, cluster radii and the number of localizations per cluster. Variations in these descriptors can be interpreted as arising from changes in the organization of the cellular nanoarchitecture. The protocol requires no specific programming ability, and the processing time for one data set, typically containing 30 regions of interest, is ∼18 h; user input takes ∼1 h.

  6. Review on Graph Clustering and Subgraph Similarity Based Analysis of Neurological Disorders

    PubMed Central

    Thomas, Jaya; Seo, Dongmin; Sael, Lee

    2016-01-01

    How can complex relationships among molecular or clinico-pathological entities of neurological disorders be represented and analyzed? Graphs seem to be the current answer to the question no matter the type of information: molecular data, brain images or neural signals. We review a wide spectrum of graph representation and graph analysis methods and their application in the study of both the genomic and the phenotypic levels of neurological disorders. We find numerous research works that create, process and analyze graphs formed from one or a few data types to gain an understanding of specific aspects of neurological disorders. Furthermore, with the increasing amount of data of various types becoming available for neurological disorders, we find that integrative analysis approaches that combine several types of data are being recognized as a way to gain a global understanding of the diseases. Although there are still not many integrative analyses of graphs due to the complexity of the analysis, multi-layer graph analysis is a promising framework that can incorporate various data types. We describe and discuss the benefits of the multi-layer graph framework for studies of neurological disease. PMID:27258269

  8. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    PubMed Central

    Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
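    Two of the studied algorithms and metrics ship with networkx, so the comparison is easy to reproduce in miniature. The sketch below uses a synthetic planted-partition graph with illustrative parameters and networkx's Louvain and label propagation implementations (not the paper's exact code), scoring each partition by modularity and by its worst per-cluster conductance.

```python
# Hedged sketch: clustering a planted-partition graph and scoring partitions.
import networkx as nx
from networkx.algorithms import community

G = nx.planted_partition_graph(l=8, k=50, p_in=0.2, p_out=0.01, seed=0)
partitions = {
    "louvain": community.louvain_communities(G, seed=0),
    "label propagation": list(community.label_propagation_communities(G)),
}
for name, parts in partitions.items():
    Q = community.modularity(G, parts)
    worst = max(nx.conductance(G, S) for S in parts if 0 < len(S) < len(G))
    print(f"{name}: {len(parts)} clusters, modularity={Q:.3f}, "
          f"worst conductance={worst:.3f}")
```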

  9. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
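    The sensitivity the abstract describes follows from a standard identity: under a Dirichlet process prior, the expected number of clusters among n observations is E[K] = sum over i = 1..n of α/(α + i − 1), so the prior placed on α directly controls how much clustering the model expects. A quick illustration:

```python
# Hedged sketch: prior expected cluster count under DP(alpha, G0).
import numpy as np

def expected_clusters(alpha, n):
    i = np.arange(1, n + 1)
    return float(np.sum(alpha / (alpha + i - 1)))

n = 200
for alpha in (0.1, 1.0, 10.0):
    print(f"alpha = {alpha:5.1f} -> E[clusters among {n} obs] = "
          f"{expected_clusters(alpha, n):6.2f}")
```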

  10. Bayesian network prior: network analysis of biological data using external knowledge

    PubMed Central

    Isci, Senol; Dogan, Haluk; Ozturk, Cengizhan; Otu, Hasan H.

    2014-01-01

    Motivation: Reverse engineering GI networks from experimental data is a challenging task due to the complex nature of the networks and the noise inherent in the data. One way to overcome these hurdles would be incorporating the vast amounts of external biological knowledge when building interaction networks. We propose a framework where GI networks are learned from experimental data using Bayesian networks (BNs) and the incorporation of external knowledge is also done via a BN that we call Bayesian Network Prior (BNP). BNP depicts the relation between various evidence types that contribute to the event ‘gene interaction’ and is used to calculate the probability of a candidate graph (G) in the structure learning process. Results: Our simulation results on synthetic, simulated and real biological data show that the proposed approach can identify the underlying interaction network with high accuracy even when the prior information is distorted and outperforms existing methods. Availability: Accompanying BNP software package is freely available for academic use at http://bioe.bilgi.edu.tr/BNP. Contact: hasan.otu@bilgi.edu.tr Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24215027

  11. Identifying Key Drivers of Return Reversal with Dynamical Bayesian Factor Graph

    PubMed Central

    Zhao, Shuai; Tong, Yunhai; Wang, Zitian; Tan, Shaohua

    2016-01-01

    In the stock market, return reversal occurs when investors sell overbought stocks and buy oversold stocks, reversing the stocks’ price trends. In this paper, we develop a new method to identify key drivers of return reversal by incorporating a comprehensive set of factors derived from different economic theories into one unified dynamical Bayesian factor graph. We then use the model to depict factor relationships and their dynamics, from which we make some interesting discoveries about the mechanism behind return reversals. Through extensive experiments on the US stock market, we conclude that among the various factors, the liquidity factors consistently emerge as key drivers of return reversal, which is in support of the theory of liquidity effect. Specifically, we find that stocks with high turnover rates or high Amihud illiquidity measures have a greater probability of experiencing return reversals. Apart from the consistent drivers, we find other drivers of return reversal that generally change from year to year, and they serve as important characteristics for evaluating the trends of stock returns. Besides, we also identify some seldom discussed yet enlightening inter-factor relationships, one of which shows that stocks in Finance and Insurance industry are more likely to have high Amihud illiquidity measures in comparison with those in other industries. These conclusions are robust for return reversals under different thresholds. PMID:27893780

  12. Data-driven confounder selection via Markov and Bayesian networks.

    PubMed

    Häggström, Jenny

    2018-06-01

    To unbiasedly estimate a causal effect on an outcome unconfoundedness is often assumed. If there is sufficient knowledge on the underlying causal structure then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of large samples and large number of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the subset estimating the target subset containing all causes of outcome yields smallest MSE in the average causal effect estimation. © 2017, The International Biometric Society.

  13. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    PubMed

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to an increase in the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that during learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Lastly, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
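    The paper's improved cost function is not reproduced here, but the widely used graph-regularized NMF updates it builds on are short enough to sketch: with X ≈ WH and a penalty Tr(H L Hᵀ), L = D − A, the multiplicative updates keep all factors nonnegative while pulling the coefficients of graph-adjacent samples together. Parameters and toy data below are illustrative assumptions.

```python
# Hedged sketch of graph-regularized NMF (GNMF-style multiplicative updates).
import numpy as np

def gnmf(X, A, k, lam=0.5, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    D = np.diag(A.sum(axis=1))
    eps = 1e-9
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)      # standard NMF update for W
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

rng = np.random.default_rng(1)
X = 0.05 * rng.random((20, 10))     # two blocks of similar sample columns
X[:10, :5] += 1.0
X[10:, 5:] += 1.0
A = np.zeros((10, 10))              # chain adjacency inside each block only
for i in list(range(4)) + list(range(5, 9)):
    A[i, i + 1] = A[i + 1, i] = 1.0
W, H = gnmf(X, A, k=2)
print("cluster label per sample:", H.argmax(axis=0))
```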

  14. Evaluating Spatial Variability in Sediment and Phosphorus Concentration-Discharge Relationships Using Bayesian Inference and Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.

    2017-12-01

    Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP) using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge—which has been used previously as the default breakpoint in segmented regression models to discern differences between pre and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics, as well as Bayesian regression intercepts and slopes. A SOM defined two clusters of high-flux basins, one where PP flux was predominantly episodic and hydrologically driven; and another in which the sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at prethreshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response, but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
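    The segmented regression at the core of this workflow can be illustrated without the full Bayesian machinery: fit log C = a + b log Q separately below and above a candidate breakpoint and pick the breakpoint that minimizes total squared error (the study instead samples the threshold position within a Bayesian model, which also yields its uncertainty). The data below are synthetic.

```python
# Hedged sketch: two-segment concentration-discharge regression via grid search.
import numpy as np

def fit_segmented(logQ, logC, candidates, min_pts=5):
    best = None
    for thr in candidates:
        lo, hi = logQ < thr, logQ >= thr
        if lo.sum() < min_pts or hi.sum() < min_pts:
            continue
        sse = 0.0
        for mask in (lo, hi):
            coef = np.polyfit(logQ[mask], logC[mask], 1)
            sse += np.sum((logC[mask] - np.polyval(coef, logQ[mask])) ** 2)
        if best is None or sse < best[0]:
            best = (sse, thr)
    return best

rng = np.random.default_rng(0)
logQ = np.sort(rng.normal(0.0, 1.0, 300))
true_thr = 0.4                 # a threshold above the median discharge
logC = np.where(logQ < true_thr, 0.2 * logQ, 1.5 * logQ - 1.3 * true_thr)
logC += rng.normal(0.0, 0.1, 300)
sse, thr = fit_segmented(logQ, logC, np.linspace(-1.5, 1.5, 61))
print(f"estimated breakpoint at logQ = {thr:.2f} (true value {true_thr})")
```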

  15. Network reconstruction via graph blending

    NASA Astrophysics Data System (ADS)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks, where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.

  16. An AI-based communication system for motor and speech disabled persons: design methodology and prototype testing.

    PubMed

    Sy, B K; Deller, J R

    1989-05-01

    An intelligent communication device is developed to assist the nonverbal, motor disabled in the generation of written and spoken messages. The device is centered on a knowledge base of the grammatical rules and message elements. A "belief" reasoning scheme based on both the information from external sources and the embedded knowledge is used to optimize the process of message search. The search for the message elements is conceptualized as a path search in the language graph, and a special frame architecture is used to construct and to partition the graph. Bayesian "belief" reasoning from the Dempster-Shafer theory of evidence is augmented to cope with time-varying evidence. An "information fusion" strategy is also introduced to integrate various forms of external information. Experimental testing of the prototype system is discussed.

  17. A Clustering Graph Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winlaw, Manda; De Sterck, Hans; Sanders, Geoffrey

    In very simple terms a network can be defined as a collection of points joined together by lines. Thus, networks can be used to represent connections between entities in a wide variety of fields including engineering, science, medicine, and sociology. Many large real-world networks share a surprising number of properties, leading to a strong interest in model development research, and techniques for building synthetic networks have been developed that capture these similarities and replicate real-world graphs. Modeling these real-world networks serves two purposes. First, building models that mimic the patterns and properties of real networks helps to understand the implications of these patterns and helps determine which patterns are important. If we develop a generative process to synthesize real networks we can also examine which growth processes are plausible and which are not. Secondly, high-quality, large-scale network data is often not available, because of economic, legal, technological, or other obstacles [7]. Thus, there are many instances where the systems of interest cannot be represented by a single exemplar network. As one example, consider the field of cybersecurity, where systems require testing across diverse threat scenarios and validation across diverse network structures. In these cases, where there is no single exemplar network, the systems must instead be modeled as a collection of networks in which the variation among them may be just as important as their common features. By developing processes to build synthetic models, so-called graph generators, we can build synthetic networks that capture both the essential features of a system and realistic variability. Then we can use such synthetic graphs to perform tasks such as simulations, analysis, and decision making. We can also use synthetic graphs to performance test graph analysis algorithms, including clustering algorithms and anomaly detection algorithms.

  18. Model validation of simple-graph representations of metabolism

    PubMed Central

    Holme, Petter

    2009-01-01

    The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
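    The two best-performing representations named above are straightforward to build from a reaction list; the sketch below does so for a toy three-reaction system (edges from each substrate to each product for the substrate-product network, and between all co-occurring substances for the substance network).

```python
# Hedged sketch: two simple-graph representations of a reaction system.
import itertools
import networkx as nx

reactions = [
    {"substrates": ["glucose", "ATP"], "products": ["G6P", "ADP"]},
    {"substrates": ["G6P"], "products": ["F6P"]},
    {"substrates": ["F6P", "ATP"], "products": ["F16BP", "ADP"]},
]

substrate_product = nx.DiGraph()
substance = nx.Graph()
for r in reactions:
    substrate_product.add_edges_from(
        itertools.product(r["substrates"], r["products"]))
    substance.add_edges_from(
        itertools.combinations(r["substrates"] + r["products"], 2))

print("substrate-product edges:", sorted(substrate_product.edges()))
print("substance-network edge count:", substance.number_of_edges())
```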

  19. Bayesian performance metrics and small system integration in recent homeland security and defense applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas

    2010-04-01

    In this paper, Bayesian inference is applied to the definition of performance metrics for an important class of recent Homeland Security and defense systems called binary sensors, covering both (internal) system performance and (external) CONOPS. The medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metric of binary sensors. Also, Small System Integration (SSI) is discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach within a broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
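    The PPV of a binary sensor is a one-line Bayes computation, shown below with illustrative numbers; the familiar point is that even a highly sensitive and specific sensor has a low PPV when true threats are rare.

```python
# Hedged sketch: Positive Predictive Value of a binary sensor via Bayes' rule.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# 99% sensitivity and specificity, 1-in-1000 threat prior -> PPV ~ 0.09
print(f"PPV = {ppv(0.99, 0.99, 0.001):.3f}")
```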

  20. GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith

    2014-08-25

    Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that allows for scalable execution on distributed systems naturally. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to a poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.

  1. Evolutionary Games of Multiplayer Cooperation on Graphs

    PubMed Central

    Arranz, Jordi; Traulsen, Arne

    2016-01-01

    There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has usually been tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946

  2. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    PubMed

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in quest of improving human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of these multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based algorithms, which depict relationships among subjects as nodes connected by edges, and kernel-based algorithms, which generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated with hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms. Graph-based algorithms have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.
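    The contrast between the two families can be demonstrated in miniature with scikit-learn (on a single synthetic data source rather than the paper's seven algorithms and multi-omics data): graph-based semi-supervised label spreading versus a kernel-based SVM.

```python
# Hedged sketch: graph-based vs. kernel-based classification on toy data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import LabelSpreading
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)                    # kernel-based
graph = LabelSpreading(kernel="rbf").fit(                  # graph-based
    np.vstack([X_tr, X_te]),
    np.concatenate([y_tr, -np.ones(len(y_te), dtype=int)]))  # -1 = unlabeled

print("kernel SVM accuracy:      ", svm.score(X_te, y_te))
print("graph label-spreading acc:", (graph.transduction_[len(X_tr):] == y_te).mean())
```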

  3. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    PubMed

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.

  4. Bayesian Analysis and Characterization of Multiple Populations in Galactic Globular Clusters

    NASA Astrophysics Data System (ADS)

    Wagner-Kaiser, Rachel A.; Stenning, David; Sarajedini, Ata; von Hippel, Ted; van Dyk, David A.; Robinson, Elliot; Stein, Nathan; Jefferys, William H.; BASE-9, HST UVIS Globular Cluster Treasury Program

    2017-01-01

    Globular clusters have long been important tools for unlocking the early history of galaxies. It is therefore crucial that we understand the formation and characteristics of globular clusters (GCs) themselves. Historically, GCs were thought to be simple, largely homogeneous populations formed via the collapse of a single molecular cloud. However, this classical view has been overwhelmingly invalidated by recent work. It is now clear that the vast majority of globular clusters in our Galaxy host two or more chemically distinct populations of stars, with variations in helium and light elements at discrete abundance levels. No coherent story has yet arisen that fully explains the formation of multiple populations in globular clusters or the mechanisms that drive stochastic variations from cluster to cluster. We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival ACS Treasury observations of 30 Galactic globular clusters to characterize two distinct stellar populations. A sophisticated Bayesian technique is employed to simultaneously sample the joint posterior distribution of age, distance, and extinction for each cluster, as well as unique helium values for the two populations within each cluster and the relative proportion of those populations. We find that the helium differences between the two populations in the clusters fall in the range of 0.04 to 0.11. Because adequate models varying in CNO are not presently available, we view these spreads as upper limits and present them with statistical rather than observational uncertainties. The evidence supports previous studies suggesting an increase in helium content with increasing cluster mass. We also find that the proportion of first-population stars increases with mass. Our results are examined in the context of proposed globular cluster formation scenarios.

  5. The complex network of the Brazilian Popular Music

    NASA Astrophysics Data System (ADS)

    de Lima e Silva, D.; Medeiros Soares, M.; Henriques, M. V. C.; Schivani Alves, M. T.; de Aguiar, S. G.; de Carvalho, T. P.; Corso, G.; Lucena, L. S.

    2004-02-01

    We study Brazilian popular music from a network perspective. We define the Brazilian Popular Music Network (BPMN) as the graph whose vertices are songwriters, with a link between two songwriters whenever they share at least one common singer. The degree distribution of this graph shows power-law and exponential regions. The exponent of the power law is compatible with the values produced by the evolving-network algorithms in the literature. The average path length of the BPMN is similar to that of the corresponding random graph; its clustering coefficient, however, is significantly larger. These results indicate that the BPMN forms a small-world network.
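
    The small-world signature claimed here, random-graph-like path lengths together with a much larger clustering coefficient, can be checked with networkx as sketched below. A Watts-Strogatz graph stands in for the BPMN, whose data are not reproduced in this record.

        import networkx as nx

        # A connected Watts-Strogatz graph stands in for the BPMN.
        G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)

        # Random graph with the same number of nodes and edges for comparison.
        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
        R = R.subgraph(max(nx.connected_components(R), key=len))  # giant component

        print("avg path length:", nx.average_shortest_path_length(G),
              "vs random:", nx.average_shortest_path_length(R))
        print("clustering:", nx.average_clustering(G),
              "vs random:", nx.average_clustering(R))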

  6. Graph-based biomedical text summarization: An itemset mining and sentence clustering approach.

    PubMed

    Nasr Azadani, Mozhgan; Ghadiri, Nasser; Davoodijam, Ensieh

    2018-06-12

    Automatic text summarization offers an efficient solution for accessing the ever-growing amounts of both scientific and clinical literature in the biomedical domain, summarizing source documents while preserving their most informative content. In this paper, we propose a novel graph-based summarization method that takes advantage of domain-specific knowledge and a well-established data mining technique called frequent itemset mining. Our summarizer exploits the Unified Medical Language System (UMLS) to construct a concept-based model of the source document by mapping the document to UMLS concepts. It then discovers frequent itemsets to take the correlations among multiple concepts into account, and uses these correlations to define a similarity function from which a sentence graph is constructed. The summarizer then employs a minimum spanning tree based clustering algorithm to discover the various subthemes of the document. Finally, it generates the summary by selecting the most informative and relevant sentences from all subthemes within the text. We perform an automatic evaluation over a large number of summaries using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. The results demonstrate that the proposed summarization system outperforms various baseline and benchmark approaches. This research suggests that incorporating domain-specific knowledge and frequent itemset mining better equips the summarization system to measure the informativeness of sentences, and that clustering the graph nodes (sentences) enables the summarizer to target the different main subthemes of a source document efficiently. The evaluation results show that the proposed approach can significantly improve the performance of summarization systems in the biomedical domain. Copyright © 2018. Published by Elsevier Inc.
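
    A minimal sketch of minimum-spanning-tree-based clustering of the kind the summarizer employs, under the assumption that sentences have already been embedded as vectors: cutting the k-1 heaviest MST edges yields k clusters. All vectors are invented.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        # Stand-in sentence vectors (e.g., UMLS-concept frequency vectors).
        X = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(4, 1, (5, 8))])

        # Complete graph weighted by pairwise distance, then its MST.
        G = nx.Graph()
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                G.add_edge(i, j, weight=float(np.linalg.norm(X[i] - X[j])))
        T = nx.minimum_spanning_tree(G)

        # Cutting the k-1 heaviest MST edges yields k clusters (k = 2 here).
        edges = sorted(T.edges(data=True), key=lambda e: e[2]["weight"])
        T.remove_edges_from([(u, v) for u, v, _ in edges[-1:]])
        print(list(nx.connected_components(T)))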

  7. Overlapping communities from dense disjoint and high total degree clusters

    NASA Astrophysics Data System (ADS)

    Zhang, Hongli; Gao, Yang; Zhang, Yue

    2018-04-01

    Communities play an important role in sociology, biology, and especially computer science, where systems are often represented as networks; community detection is therefore of great importance in these domains. A community is a dense subgraph of the whole graph, with more links between its members than from its members to outside nodes, and nodes in the same community are likely to share common properties or play similar roles in the graph. Communities overlap when nodes belong to multiple communities. A vast variety of overlapping community detection methods have been proposed in the literature, and local expansion is among the most successful techniques for large networks. This paper presents a density-based seeding method, in which dense disjoint local clusters are found and selected as seeds. The proposed method selects a seed by the total degree and density of local clusters, using only local structures of the network. Furthermore, this paper proposes a novel community refining phase that minimizes the conductance of each community, which largely improves the quality of the identified communities in linear time. Experimental results on synthetic networks show that the proposed seeding method outperforms other state-of-the-art seeding methods and that the proposed refining method greatly enhances the quality of the identified communities. Experimental results on real graphs with ground-truth communities show that the proposed approach outperforms other state-of-the-art overlapping community detection algorithms; in particular, it is more than two orders of magnitude faster than existing global algorithms while achieving higher quality, and it recovers much more accurate community structure than current local algorithms without any a priori information.
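
    A toy sketch of the conductance-minimizing refinement idea: conductance is computed as the cut size over the smaller side's volume, and boundary nodes are toggled greedily while the score improves. This illustrates the principle rather than reproducing the paper's algorithm.

        import networkx as nx

        def conductance(G, S):
            """Cut size over the smaller side's volume."""
            S = set(S)
            cut = sum(1 for u, v in G.edges(S) if v not in S)
            vol_s = sum(d for _, d in G.degree(S))
            vol_rest = 2 * G.number_of_edges() - vol_s
            return cut / max(1, min(vol_s, vol_rest))

        def refine(G, S):
            """Greedily toggle boundary nodes while conductance drops."""
            S, improved = set(S), True
            while improved:
                improved = False
                for v in {w for u in S for w in G[u]} | S:
                    T = S ^ {v}  # add v if outside S, remove it if inside
                    if 0 < len(T) < len(G) and conductance(G, T) < conductance(G, S):
                        S, improved = T, True
            return S

        G = nx.karate_club_graph()
        seed = {0, 1, 2, 3}
        print("before:", conductance(G, seed), "after:", conductance(G, refine(G, seed)))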

  8. Distributed Sensing and Processing Adaptive Collaboration Environment (D-SPACE)

    DTIC Science & Technology

    2014-07-01

    to the query graph, or subgraph permutations with the same mismatch cost (often the case for homogeneous and/or symmetrical data/query). To avoid ... decisions are generated in a bottom-up manner using the metric of entropy at the cluster level (Figure 9c). Using the definition of belief messages ... for a cluster and a set of data nodes in this cluster, we compute the entropy for forward and backward messages as H = -Σ p log p

  9. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    PubMed

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2018-01-30

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clusters of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.
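
    For intuition, the sketch below draws a partition from the ordinary (non-phylogenetic) Chinese restaurant process that the paper's phylogenetic variant extends; the concentration parameter alpha is invented.

        import random

        def sample_crp(n, alpha, seed=0):
            """Draw a partition of n items from a Chinese restaurant process."""
            random.seed(seed)
            tables = []  # each entry is the list of items at one table
            for item in range(n):
                # Existing table k is chosen w.p. |table k| / (item + alpha),
                # a new table w.p. alpha / (item + alpha).
                weights = [len(t) for t in tables] + [alpha]
                k = random.choices(range(len(tables) + 1), weights=weights)[0]
                if k == len(tables):
                    tables.append([item])
                else:
                    tables[k].append(item)
            return tables

        print(sample_crp(20, alpha=1.5))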

  10. Bayesian network ensemble as a multivariate strategy to predict radiation pneumonitis risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkyu, E-mail: sangkyu.lee@mail.mcgill.ca; Ybarra, Norma; Jeyaseelan, Krishinima

    2015-05-15

    Purpose: Prediction of radiation pneumonitis (RP) has been shown to be challenging because of the involvement of a variety of factors, including dose-volume metrics and radiosensitivity biomarkers. Some of these factors are highly correlated and might affect prediction results when combined. A Bayesian network (BN) provides a probabilistic framework to represent variable dependencies in a directed acyclic graph. The aim of this study is to integrate the BN framework and a systems biology approach to detect possible interactions among RP risk factors and to exploit these relationships to enhance both the understanding and the prediction of RP. Methods: The authors studied 54 non-small-cell lung cancer patients who received curative 3D-conformal radiotherapy. Nineteen RP events were observed (common toxicity criteria for adverse events grade 2 or higher). Serum concentrations of the following four candidate biomarkers were measured at baseline and midtreatment: alpha-2-macroglobulin, angiotensin converting enzyme (ACE), transforming growth factor, and interleukin-6. Dose-volumetric and clinical parameters were also included as covariates. Feature selection was performed using a Markov blanket approach based on the Koller-Sahami filter. The Markov chain Monte Carlo technique estimated the posterior distribution of BN graphs built from the observed data of the selected variables and causality constraints. RP probability was estimated using a limited number of high-posterior graphs (an ensemble) and was averaged for the final RP estimate using Bayes' rule. A resampling method based on bootstrapping was applied to model training and validation in order to control underfitting and overfitting. Results: The RP prediction power of the BN ensemble approach reached its optimum at an ensemble size of 200. The optimized BN model recorded an area under the receiver operating characteristic curve (AUC) of 0.83, significantly higher than multivariate logistic regression (0.77), mean heart dose (0.69), and a pre-to-midtreatment change in ACE (0.66). When RP prediction was made only with pretreatment information, the AUC ranged from 0.76 to 0.81 depending on the ensemble size. Bootstrap validation of graph features in the ensemble quantified the confidence of association between variables in the graphs, with ten interactions found to be statistically significant. Conclusions: The presented BN methodology provides the flexibility to model hierarchical interactions between RP covariates, which is applied to probabilistic inference on RP. The authors' preliminary results demonstrate that such a framework combined with an ensemble method can possibly improve prediction of RP under real-life clinical circumstances such as missing data or treatment plan adaptation.
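
    The final averaging step can be illustrated as below: per-graph RP probabilities are combined with weights proportional to each graph's posterior, a standard Bayesian model averaging computation. All numbers are invented.

        import numpy as np

        # Hypothetical per-graph RP probabilities for 4 patients from 3 graphs
        # kept from the MCMC run, plus each graph's unnormalized log-posterior.
        p_rp = np.array([[0.8, 0.2, 0.6, 0.1],
                         [0.7, 0.3, 0.5, 0.2],
                         [0.9, 0.1, 0.7, 0.1]])
        log_post = np.array([-10.2, -11.0, -12.5])

        # Bayesian model averaging: posterior-weighted mean of the predictions.
        w = np.exp(log_post - log_post.max())
        w /= w.sum()
        print("ensemble RP probability per patient:", w @ p_rp)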

  11. WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumonitis Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Ybarra, N; Jeyaseelan, K

    Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum 6-month follow-up. Sixteen RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented with L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under literature-based prior knowledge, weighting more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved a maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors, as shown by larger AUC values (0.78∼0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes of 5 to 50. Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between RP covariates and its potential to improve RP prediction accuracy. Our Bayesian approach to incorporating prior knowledge can enhance efficiency in searching for such associations in the data. The authors acknowledge partial support by: 1) the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290) and 2) The Terry Fox Foundation Strategic Training Initiative for Excellence in Radiation Research for the 21st Century (EIRR21)

  12. Menopause and big data: Word Adjacency Graph modeling of menopause-related ChaCha data.

    PubMed

    Carpenter, Janet S; Groves, Doyle; Chen, Chen X; Otte, Julie L; Miller, Wendy R

    2017-07-01

    To detect and visualize salient queries about menopause using Big Data from ChaCha, we used Word Adjacency Graph (WAG) modeling to detect clusters and to visualize the range of menopause-related topics and their mutual proximity. The subset of relevant queries was fully modeled. We split each query into token words (i.e., meaningful words and phrases) and removed stopwords (i.e., non-meaningful functional words). The remaining words were considered in sequence to build summary tables of words and of two- and three-word phrases. Phrases occurring at least 10 times were used to build a network graph model that was iteratively refined by observing and removing clusters of unrelated content. We identified two menopause-related subsets of queries by searching for questions containing menopause and menopause-related terms (e.g., climacteric, hot flashes, night sweats, hormone replacement). The first contained 263,363 queries from individuals aged 13 and older and the second contained 5,892 queries from women aged 40 to 62 years. In the first set, we identified 12 topic clusters: 6 relevant to menopause and 6 less relevant. In the second set, we identified 15 topic clusters: 11 relevant to menopause and 4 less relevant. Queries about hormones were pervasive within both WAG models. Many of the queries reflected low literacy levels and/or feelings of embarrassment. We modeled menopause-related queries posed by ChaCha users between 2009 and 2012. ChaCha data may be used on their own or in combination with other Big Data sources to identify patient-driven educational needs and create patient-centered interventions.
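
    A toy sketch of WAG construction along the lines described above: queries are tokenized, stopwords removed, adjacent word pairs counted, and pairs above a frequency threshold become weighted edges. The queries, stopword list, and threshold here are invented stand-ins.

        import re
        from collections import Counter
        from itertools import pairwise  # Python 3.10+
        import networkx as nx

        queries = ["what are hot flashes",
                   "are hot flashes a sign of menopause",
                   "what causes night sweats"]      # invented stand-in queries
        stopwords = {"what", "are", "a", "of", "the", "causes"}

        pairs = Counter()
        for q in queries:
            tokens = [t for t in re.findall(r"[a-z]+", q.lower())
                      if t not in stopwords]
            pairs.update(pairwise(tokens))

        # Keep adjacent-word pairs above a frequency threshold (10 in the
        # paper; 2 for this toy corpus) as weighted edges.
        G = nx.Graph()
        G.add_edges_from((u, v, {"weight": c})
                         for (u, v), c in pairs.items() if c >= 2)
        print(list(G.edges(data=True)))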

  13. Data Mining Meets HCI: Making Sense of Large Graphs

    DTIC Science & Technology

    2012-07-01

    graph algorithms, won the Open Source Software World Challenge, Silver Award. We have released Pegasus as free, open-source software, downloaded by ... METIS [77], spectral clustering [108], and the parameter-free “Cross-associations” (CA) [26]. Belief Propagation can also be used for clustering, as ... number of tools have been developed to support “landscape” views of information. These include WebBook and WebForager [23], which use a book metaphor

  14. Why so GLUMM? Detecting depression clusters through graphing lifestyle-environs using machine-learning methods (GLUMM).

    PubMed

    Dipnall, J F; Pasco, J A; Berk, M; Williams, L J; Dodd, S; Jacka, F N; Meyer, D

    2017-01-01

    Key lifestyle-environ risk factors are operative for depression, but it is unclear how these risk factors cluster. Machine-learning (ML) algorithms exist that learn, extract, identify and map underlying patterns to identify groupings of depressed individuals without constraints. The aim of this research was to use a large epidemiological study to identify and characterise depression clusters through "Graphing lifestyle-environs using machine-learning methods" (GLUMM). Two ML algorithms were implemented: an unsupervised self-organised mapping (SOM) to create the GLUMM clusters and a supervised boosted regression algorithm to describe them. Ninety-six "lifestyle-environ" variables were used from the National Health and Nutrition Examination Survey (2009-2010). Multivariate logistic regression validated the clusters and controlled for possible sociodemographic confounders. The SOM identified two GLUMM cluster solutions, each containing one dominant depressed cluster (GLUMM5-1, GLUMM7-1). Equal proportions of members in each cluster rated as highly depressed (17%). Alcohol consumption and demographics validated the clusters. Boosted regression identified GLUMM5-1 as more informative than GLUMM7-1. Its members were more likely to have problems sleeping, to eat unhealthily, to have spent ≤2 years in their home, to live in an old home, to perceive themselves as underweight, to be exposed to work fumes, to have experienced sex at ≤14 years, and not to perform moderate recreational activities. A positive relationship between GLUMM5-1 (OR: 7.50, P<0.001) and GLUMM7-1 (OR: 7.88, P<0.001) with depression was found, with significant interactions with those married/living with a partner (P=0.001). Using ML-based GLUMM to form ordered depressive clusters from multitudinous lifestyle-environ variables enabled a deeper exploration of the heterogeneous data and uncovered a better understanding of the relationships between complex mental health factors. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
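
    A minimal sketch of the SOM step, using the third-party minisom package as a stand-in for the authors' implementation; the data, map size, and training parameters are invented.

        import numpy as np
        from minisom import MiniSom  # pip install minisom

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 96))  # stand-in for 96 lifestyle-environ variables

        # 5x1 map as a stand-in for the five-cluster GLUMM solution.
        som = MiniSom(5, 1, input_len=X.shape[1], sigma=0.8, learning_rate=0.5,
                      random_seed=0)
        som.train_random(X, num_iteration=5000)

        # Each respondent is assigned to its best-matching unit (its cluster).
        clusters = [som.winner(x)[0] for x in X]
        print("cluster sizes:", np.bincount(clusters))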

  15. Brain Graph Topology Changes Associated with Anti-Epileptic Drug Use

    PubMed Central

    Levin, Harvey S.; Chiang, Sharon

    2015-01-01

    Neuroimaging studies of functional connectivity using graph theory have furthered our understanding of network structure in temporal lobe epilepsy (TLE). Brain network effects of anti-epileptic drugs could influence such studies, but have not been systematically investigated. Resting-state functional MRI from 25 patients with TLE was analyzed using graph theory. Patients were divided into two groups based on anti-epileptic medication use: those taking carbamazepine/oxcarbazepine (CBZ/OXC) (n=9) and those not taking CBZ/OXC (n=16) as part of their medication regimen. The following graph topology metrics were analyzed: global efficiency, betweenness centrality (BC), clustering coefficient, and small-world index. Multiple linear regression was used to examine the association of CBZ/OXC with graph topology. The two groups did not differ from each other in epilepsy characteristics. Use of CBZ/OXC was associated with a lower BC, as was longer epilepsy duration. These findings can inform graph theory-based studies in patients with TLE. The observed changes are discussed in relation to the anti-epileptic mechanism of action and adverse effects of CBZ/OXC. PMID:25492633
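
    The four topology metrics can be computed with networkx as sketched below, on a random graph standing in for a subject's functional network; nx.sigma is one common small-world index and may differ from the paper's exact definition.

        import networkx as nx

        # Random graph standing in for one subject's functional network.
        G = nx.erdos_renyi_graph(90, 0.1, seed=0)
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

        n = G.number_of_nodes()
        print("global efficiency:", nx.global_efficiency(G))
        print("mean betweenness centrality:",
              sum(nx.betweenness_centrality(G).values()) / n)
        print("clustering coefficient:", nx.average_clustering(G))
        # sigma > 1 suggests small-world structure; this call is slow.
        print("small-world index:", nx.sigma(G, niter=5, nrand=2, seed=0))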

  16. A new measure based on degree distribution that links information theory and network graph analysis

    PubMed Central

    2012-01-01

    Background Detailed connection maps of human and nonhuman brains are being generated with new technologies, and graph metrics have been instrumental in understanding the general organizational features of these structures. Neural networks appear to have small-world properties: they contain clustered regions while maintaining integrative features such as short average path lengths. Results We captured the structural characteristics of clustered networks with short average path lengths through our own variable, System Difference (SD), which is computationally simple and calculable for larger graph systems. SD is a Jaccardian measure generated by averaging all of the differences in the connection patterns between any two nodes of a system. We calculated SD over large random samples of matrices and found that high-SD matrices have a low average path length and a larger number of clustered structures. SD is a measure of degree distribution, with high-SD matrices maximizing entropic properties. Phi (Φ), an information theory metric that assesses a system’s capacity to integrate information, correlated well with SD, with SD explaining over 90% of the variance in systems above 11 nodes (tested for 4 to 13 nodes). However, newer versions of Φ do not correlate well with the SD metric. Conclusions The new network measure, SD, provides a link between high entropic structures and degree distributions as related to small-world properties. PMID:22726594
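
    One plausible reading of SD, as the average Jaccard distance between the connection patterns (adjacency rows) of all node pairs, is sketched below; the paper's exact normalization may differ.

        import numpy as np

        def system_difference(A):
            """Average Jaccard distance between the connection patterns
            (adjacency rows) of all node pairs; one reading of SD."""
            n = len(A)
            diffs = []
            for i in range(n):
                for j in range(i + 1, n):
                    union = np.logical_or(A[i], A[j]).sum()
                    inter = np.logical_and(A[i], A[j]).sum()
                    diffs.append(1 - inter / union if union else 0.0)
            return np.mean(diffs)

        rng = np.random.default_rng(0)
        A = rng.integers(0, 2, size=(10, 10))
        A = np.triu(A, 1); A = A + A.T          # random symmetric adjacency
        print("SD:", system_difference(A))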

  17. An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems

    PubMed Central

    Dawson, Kevin J.; Belkhir, Khalid

    2009-01-01

    Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. It takes the posterior co-assignment probabilities as input and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
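
    The visualisation pipeline can be approximated as below: posterior co-assignment probabilities are turned into distances 1 - p and fed to a hierarchical linkage, with single linkage as a simple stand-in for the exact linkage algorithm; the probability matrix is invented.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, dendrogram
        from scipy.spatial.distance import squareform

        # Hypothetical posterior co-assignment probabilities, 5 individuals.
        P = np.array([[1.0, 0.9, 0.8, 0.1, 0.2],
                      [0.9, 1.0, 0.7, 0.2, 0.1],
                      [0.8, 0.7, 1.0, 0.1, 0.1],
                      [0.1, 0.2, 0.1, 1.0, 0.8],
                      [0.2, 0.1, 0.1, 0.8, 1.0]])

        # Treat 1 - P as a distance and build a tree; single linkage is a
        # simple stand-in for the exact linkage algorithm described above.
        Z = linkage(squareform(1.0 - P, checks=False), method="single")
        dendrogram(Z, no_plot=True)  # node heights approximate 1 - co-assignment
        print(Z)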

  18. Bipartite Graphs as Models of Population Structures in Evolutionary Multiplayer Games

    PubMed Central

    Peña, Jorge; Rochat, Yannick

    2012-01-01

    By combining evolutionary game theory and graph theory, “games on graphs” study the evolutionary dynamics of frequency-dependent selection in population structures modeled as geographical or social networks. Networks are usually represented by means of unipartite graphs, and social interactions by two-person games such as the famous prisoner’s dilemma. Unipartite graphs have also been used for modeling interactions going beyond pairwise interactions. In this paper, we argue that bipartite graphs are a better alternative to unipartite graphs for describing population structures in evolutionary multiplayer games. To illustrate this point, we make use of bipartite graphs to investigate, by means of computer simulations, the evolution of cooperation under the conventional and the distributed N-person prisoner’s dilemma. We show that several implicit assumptions arising from the standard approach based on unipartite graphs (such as the definition of replacement neighborhoods, the intertwining of individual and group diversity, and the large overlap of interaction neighborhoods) can have a large impact on the resulting evolutionary dynamics. Our work provides a clear example of the importance of construction procedures in games on graphs, of the suitability of bigraphs and hypergraphs for computational modeling, and of the importance of concepts from social network analysis such as centrality, centralization and bipartite clustering for the understanding of dynamical processes occurring on networked population structures. PMID:22970237

  19. Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.

    PubMed

    Peng, Yong; Lu, Bao-Liang; Wang, Suhang

    2015-05-01

    Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among existing graph-based learning models, low-rank representation (LRR) is a very competitive one, and it has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, with edge weights calculated from the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of the data, which has been shown to be beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representations. MLRR explicitly takes the local manifold structure of the data into consideration; this structure is identified through the geometric sparsity idea, in which the local tangent space of each data point is sought by solving a sparse representation objective. The graph depicting the relationships between data points can then be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines the global information emphasized by the low-rank property with the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR compares favorably with several state-of-the-art graph construction approaches. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric; Castano, Rebecca; Gray, Alexander

    1999-01-01

    We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.

  1. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diamant, A; Ybarra, N; Seuntjens, J

    2016-06-15

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, owing to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and to generate a robust model capable of predicting that patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. Tumor failure was observed in 7 patients (an event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance than competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible to accurately model the prognosis of an individual lung SBRT patient’s treatment.

  2. Semi-supervised clustering for parcellating brain regions based on resting state fMRI data

    NASA Astrophysics Data System (ADS)

    Cheng, Hewei; Fan, Yong

    2014-03-01

    Many unsupervised clustering techniques have been adopted for parcellating brain regions of interest into functionally homogeneous subregions based on resting state fMRI data. However, unsupervised clustering techniques are not able to take advantage of existing knowledge of functional neuroanatomy readily available from studies of cytoarchitectonic parcellation or meta-analyses of the literature. In this study, we propose a semi-supervised clustering method for parcellating the amygdala into functionally homogeneous subregions based on resting state fMRI data. In particular, the semi-supervised clustering is implemented in a graph partitioning framework and adopts prior information and spatial consistency constraints to obtain a spatially contiguous parcellation. The graph partitioning problem is solved using an efficient algorithm similar to the well-known weighted kernel k-means algorithm. Our method has been validated by parcellating the amygdala into 3 subregions based on resting state fMRI data of 28 subjects. The experimental results demonstrate that the proposed method is more robust than unsupervised clustering and is able to parcellate the amygdala into centromedial, laterobasal, and superficial parts with improved functional homogeneity compared with the cytoarchitectonic parcellation. The validity of the parcellation results is also supported by the distinctive functional and structural connectivity patterns of the subregions and by the high consistency between coactivation patterns derived from a meta-analysis and the functional connectivity patterns of the corresponding subregions.

  3. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. It is also a key aspect of the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders; they also suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method, conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%, and oversegmentation is reduced by nearly 22% by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.

  4. Offdiagonal complexity: A computationally quick complexity measure for graphs and networks

    NASA Astrophysics Data System (ADS)

    Claussen, Jens Christian

    2007-02-01

    A vast variety of biological, social, and economic networks show topologies drastically differing from random graphs, yet their quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated by the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond the link distribution, clustering coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While OdC is zero both for regular lattices and for fully connected networks, it takes a moderately low value for a random graph and high values for apparently complex structures such as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
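
    A sketch of one common formulation of OdC, the entropy of the normalized sums along the off-diagonals of the degree-degree link matrix; the paper's exact normalization may differ in detail.

        import networkx as nx
        import numpy as np

        def offdiagonal_complexity(G):
            """Entropy of the normalized off-diagonal sums of the node-node
            (degree-degree) link matrix; one formulation of OdC."""
            deg = dict(G.degree())
            kmax = max(deg.values())
            c = np.zeros((kmax + 1, kmax + 1))
            for u, v in G.edges():
                k, l = sorted((deg[u], deg[v]))
                c[k, l] += 1
            # Sum each off-diagonal |l - k| = d of the upper-triangular matrix.
            a = np.array([np.trace(c, offset=d) for d in range(kmax + 1)])
            p = a / a.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        print("OdC (random):",
              offdiagonal_complexity(nx.gnp_random_graph(200, 0.05, seed=0)))
        print("OdC (scale-free):",
              offdiagonal_complexity(nx.barabasi_albert_graph(200, 3, seed=0)))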

  5. Learning locality preserving graph from data.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Hou, Xinwen; Liu, Cheng-Lin

    2014-11-01

    Machine learning based on graph representation, or manifold learning, has attracted great interest in recent years. As the discrete approximation of the data manifold, the graph plays a crucial role in these kinds of learning approaches. In this paper, we propose a novel learning method for graph construction that is distinct from previous methods in that it solves an optimization problem whose aim is to directly preserve the local information of the original data set. We show that the proposed objective has close connections with the popular Laplacian eigenmap problem and is hence well justified. The optimization turns out to be a quadratic programming problem with n(n-1)/2 variables (n is the number of data points). Exploiting the sparsity of the graph, we further propose a more efficient cutting plane algorithm to solve the problem, making the method more scalable in practice. In the context of clustering and semi-supervised learning, we demonstrate the advantages of the proposed method through experiments.

  6. A machine learning approach to graph-theoretical cluster expansions of the energy of adsorbate layers

    NASA Astrophysics Data System (ADS)

    Vignola, Emanuele; Steinmann, Stephan N.; Vandegehuchte, Bart D.; Curulla, Daniel; Stamatakis, Michail; Sautet, Philippe

    2017-08-01

    The accurate description of the energy of adsorbate layers is crucial for the understanding of chemistry at interfaces. For heterogeneous catalysis, not only the interaction of the adsorbate with the surface but also the adsorbate-adsorbate lateral interactions significantly affect the activation energies of reactions. Modeling the interactions of the adsorbates with the catalyst surface and with each other can be efficiently achieved in the cluster expansion Hamiltonian formalism, which has recently been implemented in a graph-theoretical kinetic Monte Carlo (kMC) scheme to describe multi-dentate species. Automating the development of the cluster expansion Hamiltonians for catalytic systems is challenging and requires the mapping of adsorbate configurations for extended adsorbates onto a graphical lattice. The current work adopts machine learning methods to reach this goal. Clusters are automatically detected based on formalized, but intuitive chemical concepts. The corresponding energy coefficients for the cluster expansion are calculated by an inversion scheme. The potential of this method is demonstrated for the example of ethylene adsorption on Pd(111), for which we propose several expansions, depending on the graphical lattice. It turns out that for this system, the best description is obtained as a combination of single molecule patterns and a few coupling terms accounting for lateral interactions.
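
    The inversion scheme can be illustrated, under the simplifying assumption of an ordinary least-squares fit, as solving E ≈ N·J for the effective cluster interactions J, where N counts how often each cluster pattern occurs in each configuration; all numbers below are invented.

        import numpy as np

        # Hypothetical training set: rows are adsorbate configurations, columns
        # count how often each cluster pattern (1-body, pair, trio) occurs.
        counts = np.array([[4, 2, 0],
                           [6, 5, 1],
                           [2, 0, 0],
                           [8, 9, 3]], dtype=float)
        # DFT-style total energies of the same configurations (invented numbers).
        energies = np.array([-3.9, -6.2, -1.8, -8.9])

        # Invert E ~ counts @ J for the effective cluster interactions J.
        J, *_ = np.linalg.lstsq(counts, energies, rcond=None)
        print("cluster-expansion coefficients:", J)
        print("fitted energies:", counts @ J)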

  7. BiCluE - Exact and heuristic algorithms for weighted bi-cluster editing of biomedical data

    PubMed Central

    2013-01-01

    Background The explosion of biological data has dramatically reshaped today's biology research. The biggest challenge for biologists and bioinformaticians is the integration and analysis of large quantities of data to provide meaningful insights. One major problem is the combined analysis of data of different types. Bi-cluster editing, a special case of clustering that partitions two different types of data simultaneously, can be used in several biomedical scenarios. However, the underlying algorithmic problem is NP-hard. Results Here we contribute BiCluE, a software package designed to solve the weighted bi-cluster editing problem. It implements (1) an exact algorithm based on fixed-parameter tractability and (2) a polynomial-time greedy heuristic based on solving the hardest part, edge deletions, first. We evaluated its performance on artificial graphs, and then applied our implementation, as an example, to real-world biomedical data, in this case GWAS data. BiCluE works on any kind of data that can be modeled as a (weighted or unweighted) bipartite graph. Conclusions To our knowledge, this is the first software package solving the weighted bi-cluster editing problem. BiCluE as well as the supplementary results are available online at http://biclue.mpi-inf.mpg.de. PMID:24565035

  8. Normalized Cut Algorithm for Automated Assignment of Protein Domains

    NASA Technical Reports Server (NTRS)

    Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a novel computational method for the automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful in image-partitioning applications. This graph-theory based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly connected components. A computer implementation of our method, tested on the standard comparison set of proteins from the literature, shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as its reliance on few adjustable parameters, linear run-time with respect to the size of the protein, and reduced complexity compared to other graph-theory based algorithms, make it an attractive tool for structural biologists.
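
    A standard spectral realization of the normalized cut, splitting on the sign of the Fiedler vector of the normalized Laplacian, is sketched below on a synthetic two-block graph standing in for a residue contact graph; this approximates rather than reproduces the paper's implementation.

        import numpy as np
        import networkx as nx

        # Synthetic stand-in: two dense 20-node blocks, weakly interconnected.
        G = nx.random_partition_graph([20, 20], 0.5, 0.02, seed=0)

        # Fiedler vector of the normalized Laplacian approximates the best
        # normalized cut; split nodes on its sign.
        L = nx.normalized_laplacian_matrix(G).toarray()
        _, eigvecs = np.linalg.eigh(L)
        part = eigvecs[:, 1] >= 0  # eigenvector of 2nd-smallest eigenvalue

        print("domain A:", sorted(v for v in G if part[v]))
        print("domain B:", sorted(v for v in G if not part[v]))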

  9. Unsupervised spatiotemporal analysis of fMRI data using graph-based visualizations of self-organizing maps.

    PubMed

    Katwal, Santosh B; Gore, John C; Marois, Rene; Rogers, Baxter P

    2013-09-01

    We present novel graph-based visualizations of self-organizing maps for unsupervised functional magnetic resonance imaging (fMRI) analysis. A self-organizing map is an artificial neural network model that transforms high-dimensional data into a low-dimensional (often 2-D) map using unsupervised learning. However, a postprocessing scheme is necessary to correctly interpret the similarity between neighboring node prototypes (feature vectors) on the output map and to delineate clusters and features of interest in the data. In this paper, we used graph-based visualizations to capture fMRI data features based upon 1) the distribution of data across the receptive fields of the prototypes (density-based connectivity); and 2) temporal similarities (correlations) between the prototypes (correlation-based connectivity). We applied this approach to identify task-related brain areas in an fMRI reaction time experiment involving a visuo-manual response task, and we correlated the time-to-peak of the fMRI responses in these areas with reaction time. Visualization of self-organizing maps outperformed independent component analysis and voxelwise univariate linear regression analysis in identifying and classifying relevant brain regions. We conclude that graph-based visualizations of self-organizing maps aid the visualization of cluster boundaries in fMRI data, enabling the separation of regions with small differences in the timings of their brain responses.

  10. On the blind use of statistical tools in the analysis of globular cluster stars

    NASA Astrophysics Data System (ADS)

    D'Antona, Francesca; Caloi, Vittoria; Tailo, Marco

    2018-04-01

    As with most data analysis methods, the Bayesian method must be handled with care. We show that its application to determine stellar evolution parameters within globular clusters can lead to paradoxical results if used without the necessary precautions. This is a cautionary tale on the use of statistical tools for big data analysis.

  11. Uncertainty Quantification of Hypothesis Testing for the Integrated Knowledge Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuellar, Leticia

    2012-05-31

    The Integrated Knowledge Engine (IKE) is a tool for Bayesian analysis, based on Bayesian belief networks, or Bayesian networks for short. A Bayesian network is a graphical model (a directed acyclic graph) that represents the probabilistic structure of many variables under a localized type of dependency called the Markov property. The Markov property in this instance renders any node (random variable) independent of its non-descendants given information about its parents. A direct consequence of this property is that it is relatively easy to incorporate new evidence and derive the appropriate consequences, which in general is not an easy or feasible task. Typically we use Bayesian networks as predictive models for a small subset of the variables, either the leaf nodes or the root nodes. In IKE, since most applications deal with diagnostics, we are interested in predicting the likelihood of the root nodes given new observations on any of the child nodes. The root nodes represent the various possible outcomes of the analysis, and an important problem is to determine when we have gathered enough evidence to lean toward one of these particular outcomes. This document presents criteria for deciding when the evidence gathered is sufficient to draw a particular conclusion, or to decide in favor of a particular outcome, by quantifying the uncertainty in the conclusions drawn from the data. The material is organized as follows: Section 2 briefly presents a forensics Bayesian network, and we explore evaluating the information provided by new evidence by looking first at the posterior distribution of the nodes of interest and then at the corresponding posterior odds ratios. Section 3 presents a third alternative: Bayes factors. In section 4 we show the relation between posterior odds ratios and Bayes factors with examples of these cases, and in section 5 we conclude by providing clear guidelines on how to use these quantities for the type of Bayesian networks used in IKE.
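
    The relation between posterior odds ratios and Bayes factors discussed above can be illustrated numerically: the posterior odds equal the Bayes factor times the prior odds. The likelihoods and priors below are invented.

        # Posterior odds = Bayes factor x prior odds (numbers are invented).
        prior_h1, prior_h2 = 0.5, 0.5         # two competing root-node outcomes
        like_h1, like_h2 = 0.020, 0.005       # likelihood of the new evidence

        bayes_factor = like_h1 / like_h2
        posterior_odds = bayes_factor * (prior_h1 / prior_h2)
        posterior_h1 = posterior_odds / (1 + posterior_odds)
        print(f"Bayes factor: {bayes_factor:.1f}, P(H1 | evidence): {posterior_h1:.2f}")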

  12. Reproducibility of graph metrics of human brain structural networks.

    PubMed

    Duda, Jeffrey T; Cook, Philip A; Gee, James C

    2014-01-01

    Recent interest in human brain connectivity has led to the application of graph theoretical analysis to human brain structural networks, in particular white matter connectivity inferred from diffusion imaging and fiber tractography. While these methods have been used to study a variety of patient populations, there has been less examination of their reproducibility. A number of tractography algorithms exist, and many of these are known to be sensitive to user-selected parameters. The methods used to derive a connectivity matrix from fiber tractography output may also influence the resulting graph metrics. Here we examine how these algorithm and parameter choices influence the reproducibility of proposed graph metrics on a publicly available test-retest dataset consisting of 21 healthy adults. The Dice coefficient is used to examine the topological similarity of constant-density subgraphs both within and between subjects. Seven graph metrics are examined: mean clustering coefficient, characteristic path length, largest connected component size, assortativity, global efficiency, local efficiency, and rich club coefficient. The reproducibility of these network summary measures is examined using the intraclass correlation coefficient (ICC). Graph curves are created by treating the graph metrics as functions of a parameter such as graph density, and functional data analysis techniques are used to examine differences in graph measures that result from the choice of fiber tracking algorithm. The graph metrics consistently showed good levels of reproducibility as measured with the ICC, with the exception of some instability at low graph density levels. The global and local efficiency measures were the most robust to the choice of fiber tracking algorithm.

  13. Limitations of cytochrome oxidase I for the barcoding of Neritidae (Mollusca: Gastropoda) as revealed by Bayesian analysis.

    PubMed

    Chee, S Y

    2015-05-25

    The mitochondrial DNA (mtDNA) cytochrome oxidase I (COI) gene has been universally and successfully utilized as a barcoding gene, mainly because it can be amplified easily, applied across a wide range of taxa, and results can be obtained cheaply and quickly. However, in rare cases, the gene can fail to distinguish between species, particularly when exposed to highly sensitive methods of data analysis, such as the Bayesian method, or when taxa have undergone introgressive hybridization, over-splitting, or incomplete lineage sorting. Such cases require the use of alternative markers, and nuclear DNA markers are commonly used. In this study, a dendrogram produced by Bayesian analysis of an mtDNA COI dataset was compared with that of a nuclear DNA ATPS-α dataset, in order to evaluate the efficiency of COI in barcoding Malaysian nerites (Neritidae). In the COI dendrogram, most of the species were in individual clusters, except for two species: Nerita chamaeleon and N. histrio. These two species were placed in the same subcluster, whereas in the ATPS-α dendrogram they were in their own subclusters. Analysis of the ATPS-α gene also placed the two genera of nerites (Nerita and Neritina) in separate clusters, whereas COI gene analysis placed both genera in the same cluster. Therefore, in the case of the Neritidae, the ATPS-α gene is a better barcoding gene than the COI gene.

  14. Bayesian multivariate hierarchical transformation models for ROC analysis.

    PubMed

    O'Malley, A James; Zou, Kelly H

    2006-02-15

    A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.

  15. Bayesian multivariate hierarchical transformation models for ROC analysis

    PubMed Central

    O'Malley, A. James; Zou, Kelly H.

    2006-01-01

    A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box–Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial. PMID:16217836

  16. Clustering Qualitative Data Based on Binary Equivalence Relations: Neighborhood Search Heuristics for the Clique Partitioning Problem

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich

    2009-01-01

    The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal…

  17. An application of cluster detection to scene analysis

    NASA Technical Reports Server (NTRS)

    Rosenfeld, A. H.; Lee, Y. H.

    1971-01-01

    Certain arrangements of local features in a scene tend to group together and to be seen as units. It is suggested that in some instances, this phenomenon might be interpretable as a process of cluster detection in a graph-structured space derived from the scene. This idea is illustrated using a class of scenes that contain only horizontal and vertical line segments.

  18. Multi-angle backscatter classification and sub-bottom profiling for improved seafloor characterization

    NASA Astrophysics Data System (ADS)

    Alevizos, Evangelos; Snellen, Mirjam; Simons, Dick; Siemes, Kerstin; Greinert, Jens

    2018-06-01

    This study applies three classification methods exploiting the angular dependence of acoustic seafloor backscatter, along with high-resolution sub-bottom profiling, for seafloor sediment characterization in the Eckernförde Bay, Baltic Sea, Germany. This area is well suited for acoustic backscatter studies due to its shallowness, its smooth bathymetry and the presence of a wide range of sediment types. Backscatter data were acquired using a Seabeam1180 (180 kHz) multibeam echosounder, and sub-bottom profiler data were recorded using a SES-2000 parametric sonar transmitting at 6 and 12 kHz. The high density of seafloor soundings allowed backscatter layers to be extracted for five beam angles over a large part of the surveyed area. A Bayesian probability method was employed for sediment classification based on the backscatter variability at a single incidence angle, whereas Maximum Likelihood Classification (MLC) and Principal Components Analysis (PCA) were applied to the multi-angle layers. The Bayesian approach was used to identify the optimum number of acoustic classes, because cluster validation is carried out prior to class assignment and the class outputs are ordinal categorical values. The method is based on the principle that backscatter values from a single incidence angle follow a normal distribution for a particular sediment type. The resulting Bayesian classes were well correlated with median grain sizes and the percentage of coarse material. The MLC method uses angular response information from five layers of training areas extracted from the Bayesian classification map. The subsequent PCA is based on the transformation of these five layers into two principal components that comprise most of the data variability; these principal components were clustered into five classes after an external cluster validation test. In general, both methods, MLC and PCA, separated the various sediment types effectively, showing good agreement (kappa > 0.7) with the Bayesian approach, which in turn correlates well with ground-truth data (r2 > 0.7). In addition, sub-bottom data were used in conjunction with the Bayesian classification results to characterize the acoustic classes with respect to their geological and stratigraphic interpretation. The joint interpretation of seafloor and sub-seafloor data sets proved to be an efficient approach for better understanding seafloor backscatter patchiness and for discriminating acoustically similar classes in different geological/bathymetric settings.
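
    The single-angle Bayesian principle, one normal distribution of backscatter per sediment type classified by maximum likelihood under equal priors, can be sketched as below; all backscatter values are invented.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        # Invented single-angle backscatter (dB) for two known sediment types.
        mud = rng.normal(-32.0, 1.5, size=500)
        sand = rng.normal(-24.0, 2.0, size=500)

        # Fit one normal per class, following the principle that backscatter at
        # a single incidence angle is normally distributed per sediment type.
        params = {"mud": (mud.mean(), mud.std()), "sand": (sand.mean(), sand.std())}

        def classify(db):
            # Equal priors: pick the class with the highest likelihood.
            return max(params, key=lambda c: norm.pdf(db, *params[c]))

        print([classify(x) for x in (-33.0, -28.0, -23.0)])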

  19. A Bayesian state-space approach for damage detection and classification

    NASA Astrophysics Data System (ADS)

    Dzunic, Zoran; Chen, Justin G.; Mobahi, Hossein; Büyüköztürk, Oral; Fisher, John W.

    2017-11-01

    The problem of automatic damage detection in civil structures is complex and requires a system that can interpret collected sensor data into meaningful information. We apply our recently developed switching Bayesian model for dependency analysis to the problems of damage detection and classification. The model relies on a state-space approach that accounts for noisy measurement processes and missing data, which also infers the statistical temporal dependency between measurement locations signifying the potential flow of information within the structure. A Gibbs sampling algorithm is used to simultaneously infer the latent states, parameters of the state dynamics, the dependence graph, and any changes in behavior. By employing a fully Bayesian approach, we are able to characterize uncertainty in these variables via their posterior distribution and provide probabilistic estimates of the occurrence of damage or a specific damage scenario. We also implement a single class classification method which is more realistic for most real world situations where training data for a damaged structure is not available. We demonstrate the methodology with experimental test data from a laboratory model structure and accelerometer data from a real world structure during different environmental and excitation conditions.

  20. Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.

    PubMed

    Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib

    2017-03-01

    A video is understood by users in terms of the entities present in it. Entity Discovery is the task of building an appearance model for each entity (e.g., a person) and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at the tracklet level. We extend the Chinese Restaurant Process (CRP) to TC-CRP, and further to the Temporally Coherent Chinese Restaurant Franchise (TC-CRF), to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos, unlike existing approaches, and can automatically reject false tracklets. Finally, we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities to create a semantically meaningful summary.
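
    The temporal-coherence prior can be illustrated with a deliberately simplified seating scheme (an assumption for illustration, not the authors' exact TC-CRP): tracklets are assigned sequentially by a Chinese Restaurant Process in which the cluster of the immediately preceding tracklet receives an extra weight tc, so temporally adjacent tracklets tend to share an entity.

      # Illustrative CRP with a temporal-coherence boost (hypothetical `tc`).
      import numpy as np

      def tc_crp_assign(n_tracklets, alpha=1.0, tc=5.0, seed=0):
          rng = np.random.default_rng(seed)
          assignments, counts = [], []        # cluster per tracklet, cluster sizes
          for t in range(n_tracklets):
              weights = counts + [alpha]      # CRP: cluster sizes, plus alpha for new
              if assignments:                 # boost the previous tracklet's cluster
                  weights[assignments[-1]] += tc
              probs = np.array(weights, dtype=float)
              k = rng.choice(len(probs), p=probs / probs.sum())
              if k == len(counts):            # open a new cluster (new entity)
                  counts.append(0)
              counts[k] += 1
              assignments.append(k)
          return assignments

      print(tc_crp_assign(20))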

  1. Spread of information and infection on finite random networks

    NASA Astrophysics Data System (ADS)

    Isham, Valerie; Kaczmarska, Joanna; Nekovee, Maziar

    2011-04-01

    The modeling of epidemic-like processes on random networks has received considerable attention in recent years. While these processes are inherently stochastic, most previous work has been focused on deterministic models that ignore important fluctuations that may persist even in the infinite network size limit. In a previous paper, for a class of epidemic and rumor processes, we derived approximate models for the full probability distribution of the final size of the epidemic, as opposed to only mean values. In this paper we examine via direct simulations the adequacy of the approximate model to describe stochastic epidemics and rumors on several random network topologies: homogeneous networks, Erdös-Rényi (ER) random graphs, Barabasi-Albert scale-free networks, and random geometric graphs. We find that the approximate model is reasonably accurate in predicting the probability of spread. However, the position of the threshold and the conditional mean of the final size for processes near the threshold are not well described by the approximate model even in the case of homogeneous networks. We attribute this failure to the presence of other structural properties beyond degree-degree correlations, and in particular clustering, which are present in any finite network but are not incorporated in the approximate model. In order to test this “hypothesis” we perform additional simulations on a set of ER random graphs where degree-degree correlations and clustering are separately and independently introduced using recently proposed algorithms from the literature. Our results show that even strong degree-degree correlations have only weak effects on the position of the threshold and the conditional mean of the final size. On the other hand, the introduction of clustering greatly affects both the position of the threshold and the conditional mean. Similar analysis for the Barabasi-Albert scale-free network confirms the significance of clustering on the dynamics of rumor spread. For this network, though, with its highly skewed degree distribution, the addition of positive correlation had a much stronger effect on the final size distribution than was found for the simple random graph.

  2. A Probabilistic Framework for Constructing Temporal Relations in Replica Exchange Molecular Trajectories.

    PubMed

    Chattopadhyay, Aditya; Zheng, Min; Waller, Mark Paul; Priyakumar, U Deva

    2018-05-23

    Knowledge of the structure and dynamics of biomolecules is essential for elucidating the underlying mechanisms of biological processes. Given the stochastic nature of many biological processes, like protein unfolding, it is almost impossible for two independent simulations to generate exactly the same sequence of events, which makes direct analysis of simulations difficult. Statistical models like Markov chains, transition networks, etc., help shed some light on the mechanistic nature of such processes by predicting the long-time dynamics of these systems from short simulations. However, such methods fall short in analyzing trajectories with partial or no temporal information, for example, replica exchange molecular dynamics or Monte Carlo simulations. In this work we propose a probabilistic algorithm, borrowing concepts from graph theory and machine learning, to extract reactive pathways from molecular trajectories in the absence of temporal data. A suitable vector representation was chosen to represent each frame in the macromolecular trajectory (as a series of interaction and conformational energies) and dimensionality reduction was performed using principal component analysis (PCA). The trajectory was then clustered using a density-based clustering algorithm, where each cluster represents a metastable state on the potential energy surface (PES) of the biomolecule under study. A graph was created with these clusters as nodes, with the edges learnt using an iterative expectation maximization algorithm. The most reactive path is conceived as the widest path along this graph. We have tested our method on an RNA hairpin unfolding trajectory in aqueous urea solution, where it makes the mechanism of unfolding of the RNA hairpin molecule more tractable. As this method does not rely on temporal data, it can be used to analyze trajectories from Monte Carlo sampling techniques and replica exchange molecular dynamics (REMD).
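
    Under stated assumptions, the pipeline can be sketched end to end: frames to PCA, density-based clustering into metastable states, a weighted state graph, and a widest (maximum-bottleneck) path. The frame features are random stand-ins, and the edge weights below use inverse centroid distance purely for illustration, where the paper instead learns the edges with an expectation maximization procedure.

      import heapq
      import numpy as np
      import networkx as nx
      from sklearn.decomposition import PCA
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(0)        # hypothetical per-frame energy features
      frames = np.vstack([rng.normal(m, 0.5, size=(150, 20)) for m in (0.0, 2.0, 4.0)])
      coords = PCA(n_components=2).fit_transform(frames)
      labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(coords)
      states = sorted(set(labels) - {-1})   # -1 = DBSCAN noise

      cent = {s: coords[labels == s].mean(axis=0) for s in states}
      G = nx.Graph()
      for i, a in enumerate(states):        # stand-in edge weights
          for b in states[i + 1:]:
              G.add_edge(a, b, weight=1.0 / (1e-9 + np.linalg.norm(cent[a] - cent[b])))

      def widest_path(G, src, dst):
          """Maximum-bottleneck ('widest') path via a modified Dijkstra."""
          best, prev = {src: float("inf")}, {}
          heap = [(-float("inf"), src)]
          while heap:
              negw, u = heapq.heappop(heap)
              for v, data in G[u].items():
                  width = min(-negw, data["weight"])
                  if width > best.get(v, 0.0):
                      best[v], prev[v] = width, u
                      heapq.heappush(heap, (-width, v))
          path = [dst]
          while path[-1] != src:
              path.append(prev[path[-1]])
          return path[::-1]

      if len(states) >= 2:
          print(widest_path(G, states[0], states[-1]))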

  3. Cross over of recurrence networks to random graphs and random geometric graphs

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2017-02-01

    Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems, where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique, with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures, and show how recurrence networks differ from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise to the time series, and to classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing small changes in the network characteristics.
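
    Construction of a recurrence network takes only a few lines: embed the series, connect points whose distance falls below the recurrence threshold, and read off the two measures emphasized above. The sketch uses the logistic map as a stand-in chaotic system and an assumed threshold.

      import numpy as np
      import networkx as nx

      x = np.empty(600); x[0] = 0.4           # logistic map, chaotic regime
      for i in range(599):
          x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

      pts = np.column_stack([x[:-1], x[1:]])  # delay embedding (dim 2, lag 1)
      eps = 0.1                               # recurrence threshold (assumed)
      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      A = (d < eps) & ~np.eye(len(pts), dtype=bool)   # recur within eps, no self-loops

      G = nx.from_numpy_array(A.astype(int))
      G = G.subgraph(max(nx.connected_components(G), key=len))  # largest component
      print("clustering coefficient:", nx.average_clustering(G))
      print("characteristic path length:", nx.average_shortest_path_length(G))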

  4. Listing All Maximal Cliques in Sparse Graphs in Near-optimal Time

    DTIC Science & Technology

    2011-01-01

    [Excerpt garbled in extraction. Recoverable content includes a table of graph statistics for biological networks (Arabidopsis thaliana, Drosophila melanogaster, Homo sapiens, Schizosaccharomyces pombe); a remark that maximal cliques correspond to clusters of actors and may be used as features in exponential random graph models for statistical analysis of social networks; and a reference to R. Horaud and T. Skordas, "Stereo correspondence through feature grouping and maximal cliques," IEEE Trans. Patt. An. Mach. Int. 11(11):1168-1180.]

  5. A Simulation To Model Exponential Growth.

    ERIC Educational Resources Information Center

    Appelbaum, Elizabeth Berman

    2000-01-01

    Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)

  6. Clustering by reordering of similarity and Laplacian matrices: Application to galaxy clusters

    NASA Astrophysics Data System (ADS)

    Mahmoud, E.; Shoukry, A.; Takey, A.

    2018-04-01

    Similarity metrics, kernels and similarity-based algorithms have gained much attention due to their increasing applications in information retrieval, data mining, pattern recognition and machine learning. Similarity graphs are often adopted as the underlying representation of similarity matrices and are at the origin of known clustering algorithms such as spectral clustering. Similarity matrices offer the advantage of working in object-object (two-dimensional) space, where visualization of cluster similarities is available, instead of object-features (multi-dimensional) space. In this paper, sparse ɛ-similarity graphs are constructed and decomposed into strong components using appropriate methods such as the Dulmage-Mendelsohn permutation (DMperm) and/or the Reverse Cuthill-McKee (RCM) algorithm. The obtained strong components correspond to groups (clusters) in the input (feature) space. The parameter ɛi is estimated locally, at each data point i, from a corresponding narrow range of the number of nearest neighbors. Although more advanced clustering techniques are available, our method has the advantages of simplicity, better complexity and direct visualization of cluster similarities in a two-dimensional space. Also, no prior information about the number of clusters is needed. We conducted our experiments on two- and three-dimensional synthetic datasets of low and high size, as well as on a real astronomical dataset. The results are verified graphically and analyzed using gap statistics over a range of neighbors to verify the robustness of the algorithm and the stability of the results. Combining the proposed algorithm with gap statistics provides a promising tool for solving clustering problems. An astronomical application confirms the existence of 45 galaxy clusters around the X-ray positions of galaxy clusters in the redshift range [0.1..0.8]. We re-estimate the photometric redshifts of the identified galaxy clusters and obtain acceptable values compared to published spectroscopic redshifts, with a 0.029 standard deviation of their differences.
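
    The core step admits a short sketch: build a sparse ɛ-similarity graph and read clusters off as its connected components. Here a single global eps and scipy's connected_components stand in for the paper's locally estimated ɛi and the DMperm/RCM decomposition; the data are synthetic.

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import connected_components, reverse_cuthill_mckee
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])

      eps = 0.8                                     # global stand-in for eps_i
      A = csr_matrix((cdist(X, X) < eps).astype(int))
      n_clusters, labels = connected_components(A, directed=False)
      print("clusters found:", n_clusters)

      # RCM reordering exposes the block (cluster) structure when the
      # permuted similarity matrix is visualized.
      perm = reverse_cuthill_mckee(A)
      A_blocked = A[perm][:, perm]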

  7. Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids.

    PubMed

    José, Marco V; Morgado, Eberto R; Guimarães, Romeu Cardoso; Zamudio, Gabriel S; de Farías, Sávio Torres; Bobadilla, Juan R; Sosa, Daniela

    2014-08-11

    Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state.

  8. A graph lattice approach to maintaining and learning dense collections of subgraphs as image features.

    PubMed

    Saund, Eric

    2013-10-01

    Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice—a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching, instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.

  9. Triadic split-merge sampler

    NASA Astrophysics Data System (ADS)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision, typical heuristic methods to extract parameterized objects out of raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and the type of noise at hand. One category of solvers for Bayesian models are Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. Toward this end, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its application extends to the very general domain of statistical inference.

  10. Dynamical Graph Theory Networks Methods for the Analysis of Sparse Functional Connectivity Networks and for Determining Pinning Observability in Brain Networks

    PubMed Central

    Meyer-Bäse, Anke; Roberts, Rodney G.; Illan, Ignacio A.; Meyer-Bäse, Uwe; Lobbes, Marc; Stadlbauer, Andreas; Pinker-Domenig, Katja

    2017-01-01

    Neuroimaging in combination with graph theory has been successful in analyzing the functional connectome. However, almost all analyses are performed based on static graph theory. The derived quantitative graph measures can only describe a snapshot of the disease over time. Neurodegenerative disease evolution is poorly understood and treatment strategies are consequently only of limited efficiency. Fusing modern dynamic graph network theory techniques and modeling strategies at different time scales with pinning observability of complex brain networks will lay the foundation for a transformational paradigm in neurodegenerative disease research regarding disease evolution at the patient level, treatment response evaluation and revealing some central mechanism in a network that drives alterations in these diseases. We model and analyze brain networks as two-time-scale sparse dynamic graph networks, with hubs (clusters) representing the fast sub-system and the interconnections between hubs the slow sub-system. Alterations in brain function, as seen in dementia, can be dynamically modeled by determining the clusters in which disturbance inputs have entered and the impact they have on the large-scale dementia dynamic system. Observing a small fraction of specific nodes in dementia networks, such that the others can be recovered, is accomplished by the novel concept of pinning observability. In addition, how to control this complex network seems to be crucial in understanding the progressive abnormal neural circuits in many neurodegenerative diseases. Detecting the controlling regions in the networks, which serve as key nodes to control the aberrant dynamics of the networks to a desired state and thus influence the progressive abnormal behavior, will have a huge impact in understanding and developing therapeutic solutions, and will also provide useful information about the trajectory of the disease. In this paper, we present the theoretical framework and derive the necessary conditions for (1) area aggregation and time-scale modeling in brain networks and for (2) pinning observability of nodes in dynamic graph networks. Simulation examples are given to illustrate the theoretical concepts. PMID:29051730

  11. Dynamical Graph Theory Networks Methods for the Analysis of Sparse Functional Connectivity Networks and for Determining Pinning Observability in Brain Networks.

    PubMed

    Meyer-Bäse, Anke; Roberts, Rodney G; Illan, Ignacio A; Meyer-Bäse, Uwe; Lobbes, Marc; Stadlbauer, Andreas; Pinker-Domenig, Katja

    2017-01-01

    Neuroimaging in combination with graph theory has been successful in analyzing the functional connectome. However, almost all analyses are performed based on static graph theory. The derived quantitative graph measures can only describe a snapshot of the disease over time. Neurodegenerative disease evolution is poorly understood and treatment strategies are consequently only of limited efficiency. Fusing modern dynamic graph network theory techniques and modeling strategies at different time scales with pinning observability of complex brain networks will lay the foundation for a transformational paradigm in neurodegenerative disease research regarding disease evolution at the patient level, treatment response evaluation and revealing some central mechanism in a network that drives alterations in these diseases. We model and analyze brain networks as two-time-scale sparse dynamic graph networks, with hubs (clusters) representing the fast sub-system and the interconnections between hubs the slow sub-system. Alterations in brain function, as seen in dementia, can be dynamically modeled by determining the clusters in which disturbance inputs have entered and the impact they have on the large-scale dementia dynamic system. Observing a small fraction of specific nodes in dementia networks, such that the others can be recovered, is accomplished by the novel concept of pinning observability. In addition, how to control this complex network seems to be crucial in understanding the progressive abnormal neural circuits in many neurodegenerative diseases. Detecting the controlling regions in the networks, which serve as key nodes to control the aberrant dynamics of the networks to a desired state and thus influence the progressive abnormal behavior, will have a huge impact in understanding and developing therapeutic solutions, and will also provide useful information about the trajectory of the disease. In this paper, we present the theoretical framework and derive the necessary conditions for (1) area aggregation and time-scale modeling in brain networks and for (2) pinning observability of nodes in dynamic graph networks. Simulation examples are given to illustrate the theoretical concepts.

  12. Species-richness of the Anopheles annulipes Complex (Diptera: Culicidae) Revealed by Tree and Model-Based Allozyme Clustering Analyses

    DTIC Science & Technology

    2007-01-01

    [Snippet fragments, garbled in extraction:] "…including tree-based methods such as the unweighted pair group method of analysis (UPGMA) and Neighbour-joining (NJ) (Saitou & Nei, 1987). By…based Bayesian approach and the tree-based UPGMA and NJ clustering methods. The results obtained suggest that far more species occur in the An…unlikely that groups that differ by more than these levels are conspecific. Genetic distances were clustered using the UPGMA and NJ algorithms in MEGA"

  13. Evaluation of BLAST-based edge-weighting metrics used for homology inference with the Markov Clustering algorithm.

    PubMed

    Gibbons, Theodore R; Mount, Stephen M; Cooper, Endymion D; Delwiche, Charles F

    2015-07-10

    Clustering protein sequences according to inferred homology is a fundamental step in the analysis of many large data sets. Since the publication of the Markov Clustering (MCL) algorithm in 2002, it has been the centerpiece of several popular applications. Each of these approaches generates an undirected graph that represents sequences as nodes connected to each other by edges weighted with a BLAST-based metric. MCL is then used to infer clusters of homologous proteins by analyzing these graphs. The various approaches differ only by how they weight the edges, yet there has been very little direct examination of the relative performance of alternative edge-weighting metrics. This study compares the performance of four BLAST-based edge-weighting metrics: the bit score, bit score ratio (BSR), bit score over anchored length (BAL), and negative common log of the expectation value (NLE). Performance is tested using the Extended CEGMA KOGs (ECK) database, which we introduce here. All metrics performed similarly when analyzing full-length sequences, but dramatic differences emerged as progressively larger fractions of the test sequences were split into fragments. The BSR and BAL successfully rescued subsets of clusters by strengthening certain types of alignments between fragmented sequences, but also shifted the largest correct scores down near the range of scores generated from spurious alignments. This penalty outweighed the benefits in most test cases, and was greatly exacerbated by increasing the MCL inflation parameter, making these metrics less robust than the bit score or the more popular NLE. Notably, the bit score performed as well or better than the other three metrics in all scenarios. The results provide a strong case for use of the bit score, which appears to offer equivalent or superior performance to the more popular NLE. The insight that MCL-based clustering methods can be improved using a more tractable edge-weighting metric will greatly simplify future implementations. We demonstrate this with our own minimalist Python implementation: Porthos, which uses only standard libraries and can process a graph with 25M+ edges connecting the 60k+ KOG sequences in half a minute using less than half a gigabyte of memory.
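
    The MCL iteration itself is compact enough to sketch (a generic toy version, not the Porthos code): alternate expansion (matrix squaring) and inflation (elementwise power with column renormalization) on a column-stochastic matrix until it converges to a clustering.

      import numpy as np

      def mcl(adj, inflation=2.0, max_iter=100, tol=1e-6):
          M = adj + np.eye(len(adj))             # self-loops stabilize the iteration
          M = M / M.sum(axis=0)                  # column-stochastic
          for _ in range(max_iter):
              expanded = M @ M                   # expansion
              inflated = expanded ** inflation   # inflation
              inflated /= inflated.sum(axis=0)
              done = np.abs(inflated - M).max() < tol
              M = inflated
              if done:
                  break
          # Rows keeping mass are attractors; their nonzero columns are clusters.
          rows = np.nonzero(M.sum(axis=1) > tol)[0]
          return {tuple(np.nonzero(M[r] > tol)[0]) for r in rows}

      # Two triangles joined by one weak edge -> two clusters expected.
      A = np.array([[0,1,1,0,0,0], [1,0,1,0,0,0], [1,1,0,1,0,0],
                    [0,0,1,0,1,1], [0,0,0,1,0,1], [0,0,0,1,1,0]], float)
      print(mcl(A))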

  14. Reticulate classification of mosaic microbial genomes using NeAT website.

    PubMed

    Lima-Mendez, Gipsi

    2012-01-01

    The tree of life is the classical representation of the evolutionary relationships between existent species. A tree is appropriate to display the divergence of species through mutation, i.e., by vertical descent. However, lateral gene transfer (LGT) is excluded from such representations. When LGT contribution to genome evolution cannot be neglected (e.g., for prokaryotes and mobile genetic elements), the tree becomes misleading. Networks appear as an intuitive way to represent both vertical and horizontal relationships, while overlapping groups within such graphs are more suitable for their classification. Here, we describe a method to represent both vertical and horizontal relationships. We start with a set of genomes whose coded proteins have been grouped into families based on sequence similarity. Next, all pairs of genomes are compared, counting the number of proteins classified into the same family. From this comparison, we derive a weighted graph where genomes with a significant number of similar proteins are linked. Finally, we apply a two-step clustering of this graph to produce a classification where nodes can be assigned to multiple clusters. The procedure can be performed using the Network Analysis Tools (NeAT) website.

  15. High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan

    2010-10-04

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing, utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors.

  16. Functional brain networks in Alzheimer's disease: EEG analysis based on limited penetrable visibility graph and phase space method

    NASA Astrophysics Data System (ADS)

    Wang, Jiang; Yang, Chen; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing

    2016-10-01

    In this paper, EEG series are used to construct functional connections based on the correlation between different regions, in order to investigate the nonlinear characteristics and the cognitive function of the brain with Alzheimer's disease (AD). First, the limited penetrable visibility graph (LPVG) and a phase space method map single EEG series into networks, to investigate the underlying chaotic dynamics of the AD brain. Topological properties of the networks are extracted, such as average path length and clustering coefficient. It is found that the network topology of AD differs from that of the control group in several local brain regions, although no statistically significant difference exists over the whole brain. Furthermore, in order to detect the abnormality of the AD brain as a whole, functional connections among different brain regions are reconstructed based on the similarity of the clustering coefficient sequence (CCSS) of EEG series in the four frequency bands (delta, theta, alpha, and beta), which exhibit obvious small-world properties. Graph analysis demonstrates that, for both methodologies, the functional connections between regions of the AD brain decrease, particularly in the alpha frequency band. AD decreases the graph index complexity of the functional network, weakens the small-world properties, and increases the vulnerability. The obtained results show that the brain functional network constructed by the LPVG and phase space method might be more effective for distinguishing AD from normal controls than the analysis of single series, which is helpful for revealing the underlying pathological mechanism of the disease.
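
    The LPVG construction can be sketched directly from its definition as commonly stated in the visibility-graph literature (the penetrable distance L and the input series here are assumptions): two samples are linked if the straight line between them passes above all intermediate samples, except that up to L intermediate samples are allowed to block the line.

      import itertools
      import numpy as np
      import networkx as nx

      def lpvg(series, L=1):
          G = nx.Graph()
          G.add_nodes_from(range(len(series)))
          for a, b in itertools.combinations(range(len(series)), 2):
              blocked = 0
              for c in range(a + 1, b):
                  # height of the a-b sight line at time c
                  line = series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                  if series[c] >= line:
                      blocked += 1
              if blocked <= L:
                  G.add_edge(a, b)
          return G

      eeg = np.random.default_rng(0).normal(size=200)  # stand-in for an EEG epoch
      G = lpvg(eeg, L=1)
      print("average path length:", nx.average_shortest_path_length(G))
      print("clustering coefficient:", nx.average_clustering(G))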

  17. Higher-order clustering in networks

    NASA Astrophysics Data System (ADS)

    Yin, Hao; Benson, Austin R.; Leskovec, Jure

    2018-05-01

    A fundamental property of complex networks is the tendency for edges to cluster. The extent of the clustering is typically quantified by the clustering coefficient, which is the probability that a length-2 path is closed, i.e., induces a triangle in the network. However, higher-order cliques beyond triangles are crucial to understanding complex networks, and the clustering behavior with respect to such higher-order network structures is not well understood. Here we introduce higher-order clustering coefficients that measure the closure probability of higher-order network cliques and provide a more comprehensive view of how the edges of complex networks cluster. Our higher-order clustering coefficients are a natural generalization of the traditional clustering coefficient. We derive several properties about higher-order clustering coefficients and analyze them under common random graph models. Finally, we use higher-order clustering coefficients to gain new insights into the structure of real-world networks from several domains.
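
    The base case of this definition is easy to verify numerically: the traditional global clustering coefficient is the fraction of length-2 paths (wedges) that close into triangles, and the higher-order coefficients replace the edge at the center of the wedge with an l-clique. A quick networkx check on a random graph (graph and parameters hypothetical):

      import networkx as nx

      G = nx.erdos_renyi_graph(200, 0.05, seed=1)

      triangles = sum(nx.triangles(G).values()) / 3        # each triangle counted at 3 nodes
      wedges = sum(d * (d - 1) / 2 for _, d in G.degree()) # length-2 paths per center node
      print("C2 =", 3 * triangles / wedges)                # closure probability
      print("check:", nx.transitivity(G))                  # same quantity in networkx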

  18. Bayesian Estimation and Inference Using Stochastic Electronics

    PubMed Central

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream. PMID:27047326

  19. Bayesian Estimation and Inference Using Stochastic Electronics.

    PubMed

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
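
    In software, the BEAST tracking problem reduces to the discrete Bayesian recursive filter: predict the belief forward with the transition model, then reweight by the observation likelihood. The grid size, noise level and models below are hypothetical stand-ins for the hardware implementation described above.

      import numpy as np

      n = 20                                  # 1-D position grid
      rng = np.random.default_rng(0)

      # Transition model P(x' | x): stay or move one step left/right.
      T = np.zeros((n, n))
      for i in range(n):
          for j in (i - 1, i, i + 1):
              if 0 <= j < n:
                  T[j, i] = 1.0
      T /= T.sum(axis=0)

      # Observation model P(z | x): noisy sensor around the true position.
      O = np.array([[np.exp(-0.5 * ((z - x) / 1.5) ** 2) for x in range(n)]
                    for z in range(n)])
      O /= O.sum(axis=0)

      belief, true_pos = np.full(n, 1.0 / n), 10
      for step in range(30):
          true_pos = int(np.clip(true_pos + rng.choice([-1, 0, 1]), 0, n - 1))
          z = int(np.clip(round(true_pos + rng.normal(0, 1.5)), 0, n - 1))
          belief = T @ belief                 # predict
          belief *= O[z]                      # update with likelihood
          belief /= belief.sum()              # normalize
      print("true:", true_pos, "MAP estimate:", int(belief.argmax()))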

  20. PyClone: statistical inference of clonal population structure in cancer.

    PubMed

    Roth, Andrew; Khattra, Jaswinder; Yap, Damian; Wan, Adrian; Laks, Emma; Biele, Justina; Ha, Gavin; Aparicio, Samuel; Bouchard-Côté, Alexandre; Shah, Sohrab P

    2014-04-01

    We introduce PyClone, a statistical model for inference of clonal population structures in cancers. PyClone is a Bayesian clustering method for grouping sets of deeply sequenced somatic mutations into putative clonal clusters while estimating their cellular prevalences and accounting for allelic imbalances introduced by segmental copy-number changes and normal-cell contamination. Single-cell sequencing validation demonstrates PyClone's accuracy.

  1. A practical Bayesian stepped wedge design for community-based cluster-randomized clinical trials: The British Columbia Telehealth Trial.

    PubMed

    Cunanan, Kristen M; Carlin, Bradley P; Peterson, Kevin A

    2016-12-01

    Many clinical trial designs are impractical for community-based clinical intervention trials. Stepped wedge trial designs provide practical advantages, but few descriptions exist of their clinical implementational features, statistical design efficiencies, and limitations. The objective was to enhance the efficiency of stepped wedge trial designs by evaluating the impact of design characteristics on statistical power for the British Columbia Telehealth Trial. The British Columbia Telehealth Trial is a community-based, cluster-randomized, controlled clinical trial in rural and urban British Columbia. To determine the effect of an Internet-based telehealth intervention on healthcare utilization, 1000 subjects with an existing diagnosis of congestive heart failure or type 2 diabetes will be enrolled from 50 clinical practices. Hospital utilization is measured using a composite of disease-specific hospital admissions and emergency visits. The intervention comprises online telehealth data collection and counseling provided to support a disease-specific action plan developed by the primary care provider. The planned intervention is sequentially introduced across all participating practices. We adopt a fully Bayesian, Markov chain Monte Carlo-driven statistical approach, wherein we use simulation to determine the effect of cluster size, sample size, and crossover interval choice on type I error and power to evaluate differences in hospital utilization. For our Bayesian stepped wedge trial design, simulations suggest moderate decreases in power when crossover intervals from control to intervention are reduced from every 3 to every 2 weeks, and dramatic decreases in power as the number of clusters decreases. Power and type I error performance were not notably affected by the addition of nonzero cluster effects or a temporal trend in hospitalization intensity. Stepped wedge trial designs that intervene in small clusters across longer periods can provide enhanced power to evaluate comparative effectiveness, while offering practical implementation advantages in geographic stratification, temporal change, use of existing data, and resource distribution. Current population estimates were used; however, models may not reflect actual event rates during the trial. In addition, temporal or spatial heterogeneity can bias treatment effect estimates.

  2. Protein domain organisation: adding order.

    PubMed

    Kummerfeld, Sarah K; Teichmann, Sarah A

    2009-01-29

    Domains are the building blocks of proteins. During evolution, they have been duplicated, fused and recombined, to produce proteins with novel structures and functions. Structural and genome-scale studies have shown that pairs or groups of domains observed together in a protein are almost always found in only one N to C terminal order and are the result of a single recombination event that has been propagated by duplication of the multi-domain unit. Previous studies of domain organisation have used graph theory to represent the co-occurrence of domains within proteins. We build on this approach by adding directionality to the graphs and connecting nodes based on their relative order in the protein. Most of the time, the linear order of domains is conserved. However, using the directed graph representation we have identified non-linear features of domain organization that are over-represented in genomes. Recognising these patterns and unravelling how they have arisen may allow us to understand the functional relationships between domains and understand how the protein repertoire has evolved. We identify groups of domains that are not linearly conserved, but instead have been shuffled during evolution so that they occur in multiple different orders. We consider 192 genomes across all three kingdoms of life and use domain and protein annotation to understand their functional significance. To identify these features and assess their statistical significance, we represent the linear order of domains in proteins as a directed graph and apply graph theoretical methods. We describe two higher-order patterns of domain organisation: clusters and bi-directionally associated domain pairs and explore their functional importance and phylogenetic conservation. Taking into account the order of domains, we have derived a novel picture of global protein organization. We found that all genomes have a higher than expected degree of clustering and more domain pairs in forward and reverse orientation in different proteins relative to random graphs with identical degree distributions. While these features were statistically over-represented, they are still fairly rare. Looking in detail at the proteins involved, we found strong functional relationships within each cluster. In addition, the domains tended to be involved in protein-protein interaction and are able to function as independent structural units. A particularly striking example was the human Jak-STAT signalling pathway which makes use of a set of domains in a range of orders and orientations to provide nuanced signaling functionality. This illustrated the importance of functional and structural constraints (or lack thereof) on domain organisation.
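
    The directed-graph representation described above is straightforward to reproduce: nodes are domains, a directed edge a -> b records that a occurs immediately N-terminal of b in some protein, and bi-directionally associated pairs are edges whose reverse also exists. The protein architectures below are hypothetical.

      import networkx as nx

      proteins = [                       # hypothetical domain architectures (N to C)
          ["SH3", "SH2", "Kinase"],
          ["SH2", "SH3", "Kinase"],      # shuffled SH3/SH2 pair
          ["PH", "Kinase"],
          ["Kinase", "SH2"],
      ]

      G = nx.DiGraph()
      for arch in proteins:
          for a, b in zip(arch, arch[1:]):
              G.add_edge(a, b)

      bidirectional = [(a, b) for a, b in G.edges() if G.has_edge(b, a) and a < b]
      print("bi-directionally associated pairs:", bidirectional)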

  3. Latent structure modeling underlying theophylline tablet formulations using a Bayesian network based on a self-organizing map clustering.

    PubMed

    Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo

    2015-01-01

    The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and design space. In this article, we integrate thin-plate spline (TPS) interpolation, Kohonen's self-organizing map (SOM) and a Bayesian network (BN) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared using a standard formulation. We measured the tensile strength and disintegration time as response variables and the compressibility, cohesion and dispersibility of the pretableting blend as latent variables. We predicted these variables quantitatively using nonlinear TPS, generated a large amount of data on pretableting blends and tablets and clustered these data into several clusters using a SOM. Our results show that we are able to predict the experimental values of the latent and response variables with a high degree of accuracy and are able to classify the tablet data into several distinct clusters. In addition, to visualize the latent structure between the causal and latent factors and the response variables, we applied a BN method to the SOM clustering results. We found that despite having inserted latent variables between the causal factors and response variables, their relation is equivalent to the results for the SOM clustering, and thus we are able to explain the underlying latent structure. Consequently, this technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulation.

  4. Astrostatistical Analysis in Solar and Stellar Physics

    NASA Astrophysics Data System (ADS)

    Stenning, David Craig

    This dissertation focuses on developing statistical models and methods to address data-analytic challenges in astrostatistics, a growing interdisciplinary field fostering collaborations between statisticians and astrophysicists. The astrostatistics projects we tackle can be divided into two main categories: modeling solar activity and Bayesian analysis of stellar evolution. These categories form Part I and Part II of this dissertation, respectively. The first line of research we pursue involves classification and modeling of evolving solar features. Advances in space-based observatories are increasing both the quality and quantity of solar data, primarily in the form of high-resolution images. To analyze massive streams of solar image data, we develop a science-driven dimension reduction methodology to extract scientifically meaningful features from images. This methodology utilizes mathematical morphology to produce a concise numerical summary of the magnetic flux distribution in solar "active regions" that (i) is far easier to work with than the source images, (ii) encapsulates scientifically relevant information in a more informative manner than existing schemes (i.e., manual classification schemes), and (iii) is amenable to sophisticated statistical analyses. In a related line of research, we perform a Bayesian analysis of the solar cycle using multiple proxy variables, such as sunspot numbers. We take advantage of patterns and correlations among the proxy variables to model solar activity using data from proxies that have become available more recently, while also taking advantage of the long history of observations of sunspot numbers. This model is an extension of the Yu et al. (2012) Bayesian hierarchical model for the solar cycle that used the sunspot numbers alone. Since proxies have different temporal coverage, we devise a multiple imputation scheme to account for missing data. We find that incorporating multiple proxies reveals important features of the solar cycle that are missed when the model is fit using only the sunspot numbers. In Part II of this dissertation we focus on two related lines of research involving Bayesian analysis of stellar evolution. We first focus on modeling multiple stellar populations in star clusters. It has long been assumed that all star clusters are comprised of single stellar populations: stars that formed at roughly the same time from a common molecular cloud. However, recent studies have produced evidence that some clusters host multiple populations, which has far-reaching scientific implications. We develop a Bayesian hierarchical model for multiple-population star clusters, extending earlier statistical models of stellar evolution (e.g., van Dyk et al. 2009, Stein et al. 2013). We also devise an adaptive Markov chain Monte Carlo algorithm to explore the complex posterior distribution. We use numerical studies to demonstrate that our method can recover parameters of multiple-population clusters, and also show how model misspecification can be diagnosed. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We also explore statistical properties of the estimators and determine that the influence of the prior distribution does not diminish with larger sample sizes, leading to non-standard asymptotics. In a final line of research, we present the first-ever attempt to estimate the carbon fraction of white dwarfs.
This quantity has important implications for both astrophysics and fundamental nuclear physics, but is currently unknown. We use a numerical study to demonstrate that assuming an incorrect value for the carbon fraction leads to incorrect white-dwarf ages of star clusters. Finally, we present our attempt to estimate the carbon fraction of the white dwarfs in the well-studied star cluster 47 Tucanae.

  5. Graph theoretical model of a sensorimotor connectome in zebrafish.

    PubMed

    Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.

  6. A density-based clustering model for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhao, Xiang; Li, Yantao; Qu, Zehui

    2018-04-01

    Network clustering (or graph partitioning) is an important technique for uncovering the underlying community structures in complex networks, and has been widely applied in various fields including astronomy, bioinformatics, sociology, and bibliometrics. In this paper, we propose a density-based clustering model for community detection in complex networks (DCCN). The key idea is to find group centers that have a higher density than their neighbors and a relatively large integrated distance from nodes with higher density. The experimental results indicate that our approach is efficient and effective for community detection in complex networks.
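
    The key idea stated above (centers have higher density than their neighbors and a relatively large distance to any point of higher density) can be sketched in the density-peaks style it echoes; whether DCCN matches this exactly is an assumption, and the data and thresholds below are hypothetical.

      import numpy as np
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(c, 0.4, size=(60, 2)) for c in (0.0, 4.0)])
      D = cdist(X, X)

      rho = (D < 0.8).sum(axis=1)                 # local density (cutoff kernel)
      delta = np.empty(len(X))                    # distance to higher density
      for i in range(len(X)):
          higher = np.where(rho > rho[i])[0]
          delta[i] = D[i, higher].min() if len(higher) else D[i].max()

      centers = np.argsort(rho * delta)[-2:]      # high rho and high delta
      labels = np.argmin(D[:, centers], axis=1)   # simplified assignment step
      print("centers:", centers, "cluster sizes:", np.bincount(labels))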

  7. New measurements of radial velocities in clusters of galaxies. II

    NASA Astrophysics Data System (ADS)

    Proust, D.; Mazure, A.; Sodre, L.; Capelato, H.; Lund, G.

    1988-03-01

    Heliocentric radial velocities are determined for 100 galaxies in five clusters, on the basis of 380-518-nm observations obtained using a CCD detector coupled by optical fibers to the OCTOPUS multiobject spectrograph at the Cassegrain focus of the 3.6-m telescope at ESO La Silla. The data-reduction procedures and error estimates are discussed, and the results are presented in tables and graphs and briefly characterized.

  8. CLUSTAG: hierarchical clustering and graph methods for selecting tag SNPs.

    PubMed

    Ao, S I; Yip, Kevin; Ng, Michael; Cheung, David; Fong, Pui-Yee; Melhado, Ian; Sham, Pak C

    2005-04-15

    Cluster and set-cover algorithms are developed to obtain a set of tag single nucleotide polymorphisms (SNPs) that can represent all the known SNPs in a chromosomal region, subject to the constraint that all SNPs must have a squared correlation R² > C with at least one tag SNP, where C is specified by the user. Availability: http://hkumath.hku.hk/web/link/CLUSTAG/CLUSTAG.html. Contact: mng@maths.hku.hk.
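
    The set-cover half of the task has a classic greedy sketch: repeatedly pick the SNP that covers the most still-uncovered SNPs at r² > C until all are covered. The r² matrix below is random and purely illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      n, C = 40, 0.8
      r2 = rng.uniform(0, 1, size=(n, n))
      r2 = (r2 + r2.T) / 2                  # symmetric pairwise r^2
      np.fill_diagonal(r2, 1.0)             # every SNP tags itself

      covers = [set(np.where(r2[i] > C)[0]) for i in range(n)]
      uncovered, tags = set(range(n)), []
      while uncovered:
          # Greedily take the SNP covering the most still-uncovered SNPs.
          best = max(range(n), key=lambda i: len(covers[i] & uncovered))
          tags.append(best)
          uncovered -= covers[best]
      print(len(tags), "tag SNPs cover all", n, "SNPs at r2 >", C)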

  9. Discriminative clustering on manifold for adaptive transductive classification.

    PubMed

    Zhang, Zhao; Jia, Lei; Zhang, Min; Li, Bing; Zhang, Li; Li, Fanzhang

    2017-10-01

    In this paper, we propose a novel adaptive transductive label propagation approach, by joint discriminative clustering on manifolds, for representing and classifying high-dimensional data. Our framework seamlessly combines unsupervised manifold learning, discriminative clustering and adaptive classification into a unified model. Our method also incorporates adaptive graph weight construction with label propagation. Specifically, our method is capable of propagating label information using adaptive weights over low-dimensional manifold features, which is different from most existing studies that usually predict the labels and construct the weights in the original Euclidean space. For transductive classification by our formulation, we first perform joint discriminative K-means clustering and manifold learning to capture the low-dimensional nonlinear manifolds. Then, we construct the adaptive weights over the learnt manifold features, where the adaptive weights are calculated by jointly minimizing the reconstruction errors over features and soft labels, so that the graph weights can be joint-optimal for data representation and classification. Using the adaptive weights, we can easily estimate the unknown labels of samples, after which our method returns the updated weights for further updating of the manifold features. Extensive simulations on image classification and segmentation show that our proposed algorithm can deliver state-of-the-art performance on several public datasets.

  10. Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids

    PubMed Central

    José, Marco V.; Morgado, Eberto R.; Guimarães, Romeu Cardoso; Zamudio, Gabriel S.; de Farías, Sávio Torres; Bobadilla, Juan R.; Sosa, Daniela

    2014-01-01

    Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state. PMID:25370377

  11. Quantify spatial relations to discover handwritten graphical symbols

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Mouchère, Harold; Viard-Gaudin, Christian

    2012-01-01

    To model a handwritten graphical language, spatial relations describe how the strokes are positioned in two-dimensional space. Most existing handwriting recognition systems make use of predefined spatial relations. However, for a complex graphical language, it is hard to express all the spatial relations manually. Another possibility is to use a clustering technique to discover the spatial relations. In this paper, we discuss how to create a relational graph between strokes (nodes) labeled with graphemes in a graphical language. Then we vectorize the spatial relations (edges) for clustering and quantization. As the targeted application, we extract the repetitive sub-graphs (graphical symbols) composed of graphemes and learned spatial relations. On two handwriting databases, a simple mathematical expression database and a complex flowchart database, the unsupervised spatial relations outperform the predefined spatial relations. In addition, we visualize the frequent patterns on two text lines containing Chinese characters.

  12. Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem

    NASA Astrophysics Data System (ADS)

    Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa

    A branch-and-bound algorithm (BB for short) is the most general technique for dealing with a wide range of combinatorial optimization problems. Even so, computation time is likely to grow exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB depends heavily upon node-variable selection strategies. For a parallel BB, it is also necessary to prevent an increase in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose some sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.

  13. Recent Transmission Clustering of HIV-1 C and CRF17_BF Strains Characterized by NNRTI-Related Mutations among Newly Diagnosed Men in Central Italy

    PubMed Central

    Orchi, Nicoletta; Gori, Caterina; Bertoli, Ada; Forbici, Federica; Montella, Francesco; Pennica, Alfredo; De Carli, Gabriella; Giuliani, Massimo; Continenza, Fabio; Pinnetti, Carmela; Nicastri, Emanuele; Ceccherini-Silberstein, Francesca; Mastroianni, Claudio Maria; Girardi, Enrico; Andreoni, Massimo; Antinori, Andrea; Santoro, Maria Mercedes; Perno, Carlo Federico

    2015-01-01

    Background. Increased evidence of relevant HIV-1 epidemic transmission in European countries is being reported, with increased circulation of non-B subtypes. Here, we present two recent HIV-1 non-B transmission clusters characterized by NNRTI-related amino-acidic mutations among newly diagnosed HIV-1-infected men living in Rome (central Italy). Methods. Pol and V3 sequences were available at the time of diagnosis for all individuals. Maximum-likelihood and Bayesian phylogenetic trees with bootstrap and Bayesian-probability supports defined transmission clusters. HIV-1 drug resistance and V3 tropism were also evaluated. Results. Among 534 new HIV-1 non-B cases diagnosed from 2011 to 2014 in central Italy, 35 carried virus gathering in two distinct clusters, including 27 HIV-1 C and 8 CRF17_BF subtypes, respectively. Both clusters were centralized in Rome, and their origin was estimated to have been after 2007. All individuals within both clusters were males, and 37.1% of them had been recently infected. While the C cluster was entirely composed of Italian men who have sex with men, with a median age of 34 years (IQR: 30-39), individuals in the CRF17_BF cluster were older, with a median age of 51 years (IQR: 48-59), and almost all reported sexual contacts with men and women. All carried R5-tropic viruses, with evidence of atypical or resistance amino-acidic mutations related to NNRTI drugs (K103Q in the C cluster, and K101E+E138K in the CRF17_BF cluster). Conclusions. These two epidemiological clusters provide evidence of a strong and recent circulation of C and CRF17_BF strains in central Italy, characterized by NNRTI-related mutations, among men engaging in high-risk behaviours. These findings underline the role of molecular epidemiology in identifying groups at increased risk of HIV-1 transmission and in enhancing additional prevention efforts. PMID:26270824

  14. Developing a new Bayesian Risk Index for risk evaluation of soil contamination.

    PubMed

    Albuquerque, M T D; Gerassis, S; Sierra, C; Taboada, J; Martín, J E; Antunes, I M H R; Gallego, J R

    2017-12-15

    Industrial and agricultural activities put heavy pressure on soil quality. Potentially Toxic Elements (PTEs) are a threat to public health and the environment alike. In this regard, the identification of areas that require remediation is crucial. In this research, a geochemical dataset (230 samples) comprising 14 elements (Cu, Pb, Zn, Ag, Ni, Mn, Fe, As, Cd, V, Cr, Ti, Al and S) was gathered throughout eight different zones distinguished by their main activity, namely recreational, agriculture/livestock and heavy industry, in the Avilés Estuary (north of Spain). A stratified systematic sampling method was then used at short, medium, and long distances from each zone to obtain a representative picture of the total variability of the selected attributes. The information was then combined into four risk classes (Low, Moderate, High, Remediation) following reference values from several sediment quality guidelines (SQGs). A Bayesian analysis, inferred for each zone, allowed the characterization of correlations among PTEs, with an unsupervised learning network proving to be the best fit. Based on the Bayesian network structure obtained, Pb, As and Mn were selected as key contamination parameters. For these three elements, the conditional probability obtained was allocated to each observed point, and a simple, direct index (the Bayesian Risk Index, BRI) was constructed as a linear rating of the pre-defined risk classes weighted by the previously obtained probability. Finally, the BRI underwent geostatistical modeling: one hundred Sequential Gaussian Simulations (SGS) were computed, and the mean image and standard deviation maps were obtained, allowing the definition of high/low risk clusters (local G clustering) and the computation of spatial uncertainty. High-risk clusters are mainly distributed within the area with the highest altitude (agriculture/livestock) and show low associated spatial uncertainty, clearly indicating the need for remediation. Atmospheric emissions, mainly derived from the metallurgical industry, contribute to soil contamination by PTEs. Copyright © 2017 Elsevier B.V. All rights reserved.
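
    The index construction lends itself to a compact sketch. The class ratings and probabilities below are invented for illustration; in the paper, the classes come from SQG reference values and the probabilities from the fitted Bayesian network.

      # Hedged sketch of the Bayesian Risk Index: each sample's pre-defined risk
      # class is rated on a linear scale and weighted by the conditional
      # probability the Bayesian network assigns to that class. Ratings and
      # probabilities here are toy values, not the paper's exact coding.
      import numpy as np

      RATING = {"Low": 1, "Moderate": 2, "High": 3, "Remediation": 4}

      def bayesian_risk_index(risk_class, prob):
          """Linear class rating weighted by the BN conditional probability."""
          return RATING[risk_class] * prob

      samples = [("Low", 0.91), ("High", 0.62), ("Remediation", 0.85)]
      bri = np.array([bayesian_risk_index(c, p) for c, p in samples])
      print(bri)   # [0.91 1.86 3.4] -> input to the SGS geostatistical modeling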

  15. Algorithms and software used in selecting structure of machine-training cluster based on neurocomputers

    NASA Astrophysics Data System (ADS)

    Romanchuk, V. A.; Lukashenko, V. V.

    2018-05-01

    A technique for controlling a computing cluster based on neurocomputers is proposed. Particular attention is paid to the method of choosing the structure of the computing cluster, because existing methods are not effective for the specialized hardware base: neurocomputers are highly parallel computing devices with an architecture different from the von Neumann architecture. A developed algorithm for choosing the computational structure of a cloud cluster is described, starting from the direction of data transfer in the program's flow control graph and its adjacency matrix.

  16. A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, expressed as a Gibbs distribution or, equivalently, as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best state-of-the-art segmentation methods recently proposed in the literature.
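
    The pairwise-constraint idea behind the Rand-based fusion energy can be illustrated directly. The sketch below scores a candidate label field against an ensemble of input segmentations by pixel-pair agreement; it is a didactic O(n^2) toy on tiny label fields, not the paper's estimator.

      # For every pixel pair, each input segmentation votes "same label" or
      # "different label"; a candidate fusion is scored by how well it honors
      # those votes. Higher agreement corresponds to lower Gibbs energy.
      import numpy as np
      from itertools import combinations

      def rand_agreement(candidate, ensemble):
          """Fraction of pixel-pair constraints the candidate satisfies,
          averaged over the ensemble of input segmentations."""
          n = candidate.size
          pairs = list(combinations(range(n), 2))
          score = 0.0
          for seg in ensemble:
              ok = sum((candidate[i] == candidate[j]) == (seg[i] == seg[j])
                       for i, j in pairs)
              score += ok / len(pairs)
          return score / len(ensemble)

      seg1 = np.array([0, 0, 1, 1])
      seg2 = np.array([0, 0, 0, 1])
      fused = np.array([0, 0, 1, 1])
      print(rand_agreement(fused, [seg1, seg2]))   # 0.875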

  17. Bayesian and Phylogenic Approaches for Studying Relationships among Table Olive Cultivars.

    PubMed

    Ben Ayed, Rayda; Ennouri, Karim; Ben Amar, Fathi; Moreau, Fabienne; Triki, Mohamed Ali; Rebai, Ahmed

    2017-08-01

    To enhance table olive tree authentication, relationship analysis, and productivity, we consider the analysis of 18 worldwide table olive cultivars (Olea europaea L.) based on morphological, biological, and physicochemical markers analyzed by bioinformatic and biostatistic tools. Accordingly, we assess the relationships between the studied varieties, on the one hand, and the potential productivity-quantitative parameter links on the other hand. The bioinformatic analysis based on the graphical representation of the matrix of Euclidean distances, the principal components analysis, unweighted pair group method with arithmetic mean, and principal coordinate analysis (PCoA) revealed three major clusters which were not correlated with the geographic origin. The statistical analysis based on Kendall's and Spearman correlation coefficients suggests two highly significant associations of the productivity character, with fruit color and with pollination. These results are confirmed by the multiple linear regression prediction models. In fact, based on the coefficient of determination (R^2) value, the best model demonstrated the influence of pollination on tree productivity (R^2 = 0.846). Moreover, the derived directed acyclic graph showed that only two direct influences are detected: an effect of tolerance on fruit and stone symmetry on one side, and an effect of tolerance on stone form and oil content on the other side. This work provides better understanding of the diversity available in worldwide table olive cultivars and supplies an important contribution for olive breeding and authenticity.

  18. Learning topic models by belief propagation.

    PubMed

    Zeng, Jia; Cheung, William K; Liu, Jiming

    2013-05-01

    Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision and computational biology. This paper represents the collapsed LDA as a factor graph, which enables the classic loopy belief propagation (BP) algorithm for approximate inference and parameter estimation. Although the two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document datasets. Furthermore, the BP algorithm has the potential to become a generic scheme for learning variants of LDA-based topic models in the collapsed space. To this end, we show how to learn two typical variants of LDA-based topic models, the author-topic model (ATM) and the relational topic model (RTM), using BP based on their factor graph representations.
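
    A zeroth-order flavor of such collapsed message passing can be sketched as follows. Each (document, token) site keeps a distribution over topics, refreshed from topic counts that exclude the site itself. The toy corpus and hyperparameters are arbitrary, and the update is a simplification of the paper's BP, not its exact message schedule.

      # Hedged sketch of loopy-BP-style learning for collapsed LDA.
      import numpy as np

      docs = [[0, 1, 1, 2], [2, 3, 3, 0]]        # token word ids per document
      V, K, alpha, beta = 4, 2, 0.1, 0.01
      rng = np.random.default_rng(0)

      # mu[d][i] is the message (topic distribution) for token i of document d
      mu = [rng.dirichlet(np.ones(K), size=len(d)) for d in docs]

      for _ in range(50):
          n_dk = np.array([m.sum(axis=0) for m in mu])   # doc-topic counts
          n_wk = np.zeros((V, K))                        # word-topic counts
          for d, words in enumerate(docs):
              for i, w in enumerate(words):
                  n_wk[w] += mu[d][i]
          n_k = n_wk.sum(axis=0)                         # topic totals
          for d, words in enumerate(docs):
              for i, w in enumerate(words):
                  m = mu[d][i]
                  # exclude this token's own contribution, then renormalize
                  new = (n_dk[d] - m + alpha) * (n_wk[w] - m + beta) \
                        / (n_k - m + V * beta)
                  mu[d][i] = new / new.sum()

      print(np.round(mu[0], 2))   # per-token topic responsibilities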

  19. Network inference using informative priors

    PubMed Central

    Mukherjee, Sach; Speed, Terence P.

    2008-01-01

    Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of “network inference” is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling. PMID:18799736

  20. Network inference using informative priors.

    PubMed

    Mukherjee, Sach; Speed, Terence P

    2008-09-23

    Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of "network inference" is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling.
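
    The mechanics of combining an informative edge prior with structure MCMC can be sketched as follows. The edge-weight prior and the Metropolis-Hastings flip moves are standard; the marginal-likelihood score below is a stand-in (a correlation reward minus an edge penalty), not a proper Bayesian-network score, and acyclicity checking is omitted for brevity.

      # Hedged sketch: log P(G) adds a weight for each edge present, so positive
      # weights encode prior belief in an edge, negative weights prior doubt.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 4
      data = rng.normal(size=(200, n))
      data[:, 1] += data[:, 0]                  # induce a real dependence 0 -> 1
      corr = np.abs(np.corrcoef(data, rowvar=False))

      prior_w = np.zeros((n, n))
      prior_w[0, 1] = 2.0                       # prior knowledge: edge 0->1 likely

      def log_prior(G):
          return float((prior_w * G).sum())

      def log_marginal_likelihood(G):           # illustrative stand-in score
          return float((corr * G).sum()) * 5.0 - 2.0 * G.sum()

      def log_post(G):
          return log_prior(G) + log_marginal_likelihood(G)

      G = np.zeros((n, n), dtype=int)
      for step in range(2000):                  # Metropolis-Hastings edge flips
          i, j = rng.integers(n), rng.integers(n)
          if i == j:
              continue
          Gp = G.copy()
          Gp[i, j] = 1 - Gp[i, j]
          if np.log(rng.random()) < log_post(Gp) - log_post(G):
              G = Gp                            # (acyclicity check omitted)
      print(G)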

  1. Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2018-03-01

    A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d≥ 2 . Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
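
    The adsorption process itself is easy to simulate, which makes the intractability of its analysis all the more striking. A toy Monte Carlo on the unit square, ignoring boundary effects:

      # Disks arrive at uniform locations and are kept only if they do not
      # overlap any previously accepted disk; the printed value is the
      # accepted fraction the paper approximates with a corrected mean field.
      import numpy as np

      rng = np.random.default_rng(0)
      radius, attempts = 0.05, 5000
      accepted = []
      for _ in range(attempts):
          p = rng.random(2)
          if all(np.hypot(*(p - q)) >= 2 * radius for q in accepted):
              accepted.append(p)
      print(len(accepted) / attempts)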

  2. High performance semantic factoring of giga-scale semantic graph databases.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    al-Saffar, Sinan; Adolf, Bob; Haglin, David

    2010-10-01

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.

  3. A graph-theoretical analysis algorithm for quantifying the transition from sensory input to motor output by an emotional stimulus.

    PubMed

    Karmonik, Christof; Fung, Steve H; Dulay, M; Verma, A; Grossman, Robert G

    2013-01-01

    Graph-theoretical analysis algorithms have been used for identifying subnetworks in the human brain during the Default Mode State. Here, these methods are expanded to determine the interaction of the sensory and the motor subnetworks during the performance of an approach-avoidance paradigm, utilizing the correlation strength between the signal intensity time courses as a measure of synchrony. From functional magnetic resonance imaging (fMRI) data of 9 healthy volunteers, two signal time courses, one from the primary visual cortex (sensory input) and one from the motor cortex (motor output), were identified and a correlation difference map was calculated. Graph networks were created from this map and visualized with spring-embedded layouts and 3D layouts in the original anatomical space. Functional clusters in these networks were identified with the MCODE clustering algorithm. Interactions between the sensory sub-network and the motor sub-network were quantified through the interaction strengths of these clusters. The percentage of interactions involving the visual cortex ranged from 85% to 18%, and for the motor cortex from 40% to 9%. Other regions with high interactions were: frontal cortex (19 ± 18%), insula (17 ± 22%), cuneus (16 ± 15%), supplementary motor area (SMA, 11 ± 18%) and subcortical regions (11 ± 10%). Interactions between motor cortex, SMA and visual cortex accounted for 12%, between visual cortex and cuneus for 8%, and between motor cortex, SMA and cuneus for 6% of all interactions. These quantitative findings are supported by the visual impressions from the 2D and 3D network layouts.

  4. Efficient Matrix Models for Relational Learning

    DTIC Science & Technology

    2009-10-01

    Only fragments of this report survive extraction: table-of-contents entries ("Comparison to pLSI-pHITS", "Hierarchical Bayesian Collective..."), a figure caption on the behaviour of Newton vs. stochastic Newton optimization on a three-factor model, and a caveat that Collective Matrix Factorization makes no guarantees, with one setting where it leads to better results and another where pLSI-pHITS, a relational co-clustering technique, has the advantage.

  5. Hierarchical imputation of systematically and sporadically missing data: An approximate Bayesian approach using chained equations.

    PubMed

    Jolani, Shahab

    2018-03-01

    In health and medical sciences, multiple imputation (MI) is now becoming popular for obtaining valid inferences in the presence of missing data. However, MI of clustered data, such as multicenter studies and individual participant data meta-analysis, requires advanced imputation routines that preserve the hierarchical structure of the data. In clustered data, a specific challenge is the presence of systematically missing data, when a variable is completely missing in some clusters, and sporadically missing data, when it is partly missing in some clusters. Unfortunately, little is known about how to perform MI when both types of missing data occur simultaneously. We develop a new class of hierarchical imputation approaches based on chained equations methodology that simultaneously imputes systematically and sporadically missing data while allowing for arbitrary patterns of missingness among them. Here, we use a random effect imputation model and adopt a simplification over fully Bayesian techniques, such as the Gibbs sampler, to directly obtain draws of parameters within each step of the chained equations. We justify through theoretical arguments and extensive simulation studies that the proposed imputation methodology has good statistical properties in terms of bias and coverage rates of parameter estimates. An illustration is given in a case study with eight individual participant datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
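
    The chained-equations loop can be sketched compactly. The toy below imputes sporadically missing entries by regressing each variable on the others and drawing from the fitted model; the paper's method additionally uses random-effects imputation models to respect the cluster structure and to handle systematically missing variables, which this sketch omits.

      # Hedged sketch of the chained-equations idea on fully pooled data.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 3))
      X[:, 1] += 0.8 * X[:, 0]
      mask = rng.random(X.shape) < 0.15          # 15% sporadically missing
      X_obs = np.where(mask, np.nan, X)

      imp = np.where(np.isnan(X_obs), np.nanmean(X_obs, axis=0), X_obs)
      for _ in range(10):                        # chained-equations sweeps
          for j in range(imp.shape[1]):
              miss = np.isnan(X_obs[:, j])
              if not miss.any():
                  continue
              others = np.delete(imp, j, axis=1)
              A = np.column_stack([np.ones(len(imp)), others])
              coef, *_ = np.linalg.lstsq(A[~miss], imp[~miss, j], rcond=None)
              resid_sd = np.std(imp[~miss, j] - A[~miss] @ coef)
              # draw imputations rather than plug in the mean prediction
              imp[miss, j] = A[miss] @ coef + rng.normal(scale=resid_sd,
                                                         size=miss.sum())
      print(np.round(imp[:3], 2))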

  6. Topological structure of dictionary graphs

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk; Krzemiński, Mark

    2009-09-01

    We investigate the topological structure of the subgraphs of dictionary graphs constructed from WordNet and Moby thesaurus data. In the process of learning a foreign language, the learner knows only a subset of all words of the language, corresponding to a subgraph of a dictionary graph. When this subgraph grows with time, its topological properties change. We introduce the notion of the pseudocore and argue that the growth of the vocabulary roughly follows decreasing pseudocore numbers—that is, one first learns words with a high pseudocore number followed by smaller pseudocores. We also propose an alternative strategy for vocabulary growth, involving decreasing core numbers as opposed to pseudocore numbers. We find that as the core or pseudocore grows in size, the clustering coefficient first decreases, then reaches a minimum and starts increasing again. The minimum occurs when the vocabulary reaches a size between 10^3 and 10^4. A simple model exhibiting similar behavior is proposed. The model is based on a generalized geometric random graph. Possible implications for language learning are discussed.

  7. Multiscale Embedded Gene Co-expression Network Analysis

    PubMed Central

    Song, Won-Min; Zhang, Bin

    2015-01-01

    Gene co-expression network analysis has been shown effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques to construct co-expression networks require some critical prior information, such as a predefined number of clusters or numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution or small-worldness. Previously, a graph filtering technique called Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets such as financial stock prices and gene expression to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as its high computational complexity O(|V|^3), the presence of false positives due to the maximal planarity constraint, and the inadequacy of the clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and to the gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma. PMID:26618778

  8. Multiscale Embedded Gene Co-expression Network Analysis.

    PubMed

    Song, Won-Min; Zhang, Bin

    2015-11-01

    Gene co-expression network analysis has been shown effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques to construct co-expression networks require some critical prior information, such as a predefined number of clusters or numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution or small-worldness. Previously, a graph filtering technique called Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets such as financial stock prices and gene expression to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as its high computational complexity O(|V|^3), the presence of false positives due to the maximal planarity constraint, and the inadequacy of the clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and to the gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma.

  9. Folksonomies and clustering in the collaborative system CiteULike

    NASA Astrophysics Data System (ADS)

    Capocci, Andrea; Caldarelli, Guido

    2008-06-01

    We analyze CiteULike, an online collaborative tagging system where users bookmark and annotate scientific papers. Such a system can be naturally represented as a tri-partite graph whose nodes represent papers, users and tags connected by individual tag assignments. The semantics of tags is studied here in order to uncover the hidden relationships between tags. We find that the clustering coefficient can be used to analyze the semantic patterns among tags.

  10. Bayesian Nonparametric Inference – Why and How

    PubMed Central

    Müller, Peter; Mitra, Riten

    2013-01-01

    We review inference under models with nonparametric Bayesian (BNP) priors. The discussion follows a set of examples for some common inference problems. The examples are chosen to highlight problems that are challenging for standard parametric inference. We discuss inference for density estimation, clustering, regression and for mixed effects models with random effects distributions. While we focus on arguing for the need for the flexibility of BNP models, we also review some of the more commonly used BNP models, thus hopefully answering a bit of both questions, why and how to use BNP. PMID:24368932

  11. Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman M.; Zaremba, Marek B.

    2003-03-01

    Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a quite rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of the orthogonal regression fitting. It consists of changing the positions of existing vertices according to the minimum of the mean orthogonal distances and, if necessary, adding new vertices in between if a given accuracy is not yet satisfied. Vertices of initial piecewise-linear skeletons are extracted by using a multi-scale image relevance function. The relevance function is an image local operator that has local maximums at the centers of the objects of interest.
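
    The orthogonal regression step is classical total least squares: the best-fit direction is the top right-singular vector of the centered points, and the smallest singular value gives the orthogonal residual. A minimal sketch:

      # Orthogonal (total least squares) line fitting via SVD; this is the kind
      # of fitting used to adjust skeleton vertices between graph nodes.
      import numpy as np

      def fit_segment(points):
          """Fit a line to 2D points by orthogonal regression.

          Returns (centroid, unit direction, RMS orthogonal distance)."""
          c = points.mean(axis=0)
          u, s, vt = np.linalg.svd(points - c)
          d = vt[0]                              # principal direction
          resid = s[-1] / np.sqrt(len(points))   # RMS orthogonal distance
          return c, d, resid

      pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 3.0]])
      c, d, rms = fit_segment(pts)
      print(c, d, rms)   # split the segment (add a vertex) if rms exceeds a tolerance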

  12. Graph theoretical analysis of EEG functional connectivity during music perception.

    PubMed

    Wu, Junjie; Zhang, Junsong; Liu, Chu; Liu, Dongwei; Ding, Xiaojun; Zhou, Changle

    2012-11-05

    The present study evaluated the effect of music on the large-scale structure of functional brain networks using graph theoretical concepts. While most studies on music perception have used Western music as the acoustic stimulus, Guqin music, representative of Eastern music, was selected for this experiment to broaden our knowledge of music perception. Electroencephalography (EEG) was recorded from non-musician volunteers in three conditions: Guqin music, noise and silence backgrounds. Phase coherence was calculated in the alpha band between all pairs of EEG channels to construct correlation matrices. Each resulting matrix was converted into a weighted graph using a threshold, and two network measures, the clustering coefficient and the characteristic path length, were calculated. Music perception was found to display a higher mean level of phase coherence. Over the whole range of thresholds, the clustering coefficient was larger while listening to music, whereas the path length was smaller. Networks in the music background still had a shorter characteristic path length even after correction for differences in mean synchronization level among background conditions. This topological change indicates a more optimal structure under music perception. Thus, prominent small-world properties are confirmed in functional brain networks. Furthermore, music perception shows an increase of functional connectivity and an enhancement of small-world network organization. Copyright © 2012 Elsevier B.V. All rights reserved.
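
    The graph-construction and measurement pipeline can be sketched with networkx. The coherence matrix below is simulated; in the study it would come from alpha-band phase coherence between EEG channel pairs.

      # Threshold a (here simulated) coherence matrix into a weighted graph and
      # read off the paper's two small-world measures.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      n = 16                                       # EEG channels
      C = rng.random((n, n))
      C = (C + C.T) / 2                            # symmetric fake coherence
      np.fill_diagonal(C, 0.0)

      threshold = 0.6
      G = nx.Graph()
      G.add_nodes_from(range(n))
      for i in range(n):
          for j in range(i + 1, n):
              if C[i, j] >= threshold:
                  G.add_edge(i, j, weight=C[i, j])

      print(nx.average_clustering(G, weight="weight"))   # clustering coefficient
      if nx.is_connected(G):
          print(nx.average_shortest_path_length(G))      # characteristic path length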

  13. The geometry of chaotic dynamics — a complex network perspective

    NASA Astrophysics Data System (ADS)

    Donner, R. V.; Heitzig, J.; Donges, J. F.; Zou, Y.; Marwan, N.; Kurths, J.

    2011-12-01

    Recently, several complex network approaches to time series analysis have been developed and applied to study a wide range of model systems as well as real-world data, e.g., geophysical or financial time series. Among these techniques, recurrence-based concepts, most prominently ɛ-recurrence networks, most faithfully represent the geometrical fine structure of the attractors underlying chaotic (and, less interestingly, non-chaotic) time series. In this paper we demonstrate that the well known graph theoretical properties local clustering coefficient and global (network) transitivity can meaningfully be exploited to define two new local and two new global measures of dimension in phase space: local upper and lower clustering dimension as well as global upper and lower transitivity dimension. Rigorous analytical as well as numerical results for self-similar sets and simple chaotic model systems suggest that these measures are well-behaved in most non-pathological situations and that they can be estimated reasonably well using ɛ-recurrence networks constructed from relatively short time series. Moreover, we study the relationship between clustering and transitivity dimensions on the one hand, and traditional measures like pointwise dimension or local Lyapunov dimension on the other hand. We also provide further evidence that the local clustering coefficients, or equivalently the local clustering dimensions, are useful for identifying unstable periodic orbits and other dynamically invariant objects from time series. Our results demonstrate that ɛ-recurrence networks exhibit an important link between dynamical systems and graph theory.

  14. Assessment of phylogenetic sensitivity for reconstructing HIV-1 epidemiological relationships.

    PubMed

    Beloukas, Apostolos; Magiorkinis, Emmanouil; Magiorkinis, Gkikas; Zavitsanou, Asimina; Karamitros, Timokratis; Hatzakis, Angelos; Paraskevis, Dimitrios

    2012-06-01

    Phylogenetic analysis has been extensively used as a tool for the reconstruction of epidemiological relations for research or for forensic purposes. It was our objective to assess the sensitivity of different phylogenetic methods and various phylogenetic programs for reconstructing epidemiological links among HIV-1 infected patients, that is, the probability of revealing a true transmission relationship. Multiple datasets (90) were prepared consisting of HIV-1 sequences in protease (PR) and partial reverse transcriptase (RT) sampled from patients with a documented epidemiological relationship (target population), and from unrelated individuals (control population) belonging to the same HIV-1 subtype as the target population. Each dataset varied regarding the number, the geographic origin and the transmission risk groups of the sequences among the control population. Phylogenetic trees were inferred by neighbor-joining (NJ), maximum likelihood heuristics (hML) and Bayesian methods. All clusters of sequences belonging to the target population were correctly reconstructed by NJ and Bayesian methods, receiving high bootstrap and posterior probability (PP) support, respectively. On the other hand, TreePuzzle failed to reconstruct or provide significant support for several clusters; high puzzling step support was associated with the inclusion of control sequences from the same geographic area as the target population. In contrast, all clusters were correctly reconstructed by hML as implemented in PhyML 3.0, receiving high bootstrap support. We report that, under the conditions of our study, hML using PhyML, NJ and Bayesian methods were the most sensitive for the reconstruction of epidemiological links, mostly from sexually infected individuals. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Combining Computational Methods for Hit to Lead Optimization in Mycobacterium tuberculosis Drug Discovery

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Hobrath, Judith V.; White, E. Lucile; Reynolds, Robert C

    2013-01-01

    Purpose: Tuberculosis treatments need to be shorter and to overcome drug resistance. Our previous large-scale phenotypic high-throughput screening against Mycobacterium tuberculosis (Mtb) identified 737 active compounds and thousands that are inactive. We have used these data for building computational models as an approach to minimize the number of compounds tested. Methods: A cheminformatics clustering approach followed by Bayesian machine learning models (based on publicly available Mtb screening data) was used to illustrate that applying these models to screening set selection can enrich the hit rate. Results: In order to explore chemical diversity around the active cluster scaffolds of the dose-response hits obtained from our previous Mtb screens, a set of 1924 commercially available molecules was selected and evaluated for antitubercular activity and cytotoxicity using Vero, THP-1 and HepG2 cell lines, with 4.3%, 4.2% and 2.7% hit rates, respectively. We demonstrate that models incorporating antitubercular and cytotoxicity data in Vero cells can significantly enrich the selection of non-toxic actives compared to random selection. Across all cell lines, the Molecular Libraries Small Molecule Repository (MLSMR) and cytotoxicity model identified ~10% of the hits in the top 1% screened (>10-fold enrichment). We also showed that seven out of nine Mtb active compounds from different published academic studies and eight out of eleven Mtb active compounds from a pharmaceutical screen (GSK) would have been identified by these Bayesian models. Conclusion: Combining clustering and Bayesian models represents a useful strategy for compound prioritization and hit-to-lead optimization of antitubercular agents. PMID:24132686

  16. A Super Voxel-Based Riemannian Graph for Multi-Scale Segmentation of LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Li, Minglei

    2018-04-01

    Automatically segmenting LiDAR points into independent partitions has become a topic of great importance in photogrammetry, remote sensing and computer vision. In this paper, we cast the problem of point cloud segmentation as a graph optimization problem by constructing a Riemannian graph. The scale space of the observed scene is explored by an octree-based over-segmentation with different depths. The over-segmentation produces many super voxels, which capture the structure of the scene and are used as nodes of the graph. The Kruskal coordinates are used to compute edge weights that are proportional to the geodesic distance between nodes. We then compute the edge-weight matrix, in which the elements reflect the sectional curvatures associated with the geodesic paths between super voxel nodes on the scene surface. The final segmentation results are generated by clustering similar super voxels and cutting off the weak edges in the graph. The performance of this method was evaluated on LiDAR point clouds for both indoor and outdoor scenes. Additionally, extensive comparisons to state-of-the-art techniques show that our algorithm outperforms them on many metrics.

  17. TreePlus: interactive exploration of networks with enhanced tree layouts.

    PubMed

    Lee, Bongshin; Parr, Cynthia S; Plaisant, Catherine; Bederson, Benjamin B; Veksler, Vladislav D; Gray, Wayne D; Kotfila, Christopher

    2006-01-01

    Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food webs, ontologies, and social networks difficult to understand and interact with. We propose a new interactive Visual Analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of "Plant a seed and watch it grow." It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but often limited to) revealing clusters. We describe our design goals, describe the interface, and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus and most of them preferred TreePlus.

  18. Role Discovery in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    RolX takes the features from Re-FeX or any other feature matrix as input and outputs role assignments (clusters). The output of RolX is a csv file containing the node-role memberships and a csv file containing the role-feature definitions.

  19. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter α_k, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.

  20. Social deprivation and population density are not associated with small area risk of amyotrophic lateral sclerosis.

    PubMed

    Rooney, James P K; Tobin, Katy; Crampsie, Arlene; Vajda, Alice; Heverin, Mark; McLaughlin, Russell; Staines, Anthony; Hardiman, Orla

    2015-10-01

    Evidence of an association between areal ALS risk and population density has been previously reported. We aim to examine ALS spatial incidence in Ireland using small areas, to compare this analysis with our previous analysis of larger areas and to examine the associations between population density, social deprivation and ALS incidence. Residential area social deprivation has not been previously investigated as a risk factor for ALS. Using the Irish ALS register, we included all cases of ALS diagnosed in Ireland from 1995-2013. 2006 census data was used to calculate age and sex standardised expected cases per small area. Social deprivation was assessed using the pobalHP deprivation index. Bayesian smoothing was used to calculate small area relative risk for ALS, whilst cluster analysis was performed using SaTScan. The effects of population density and social deprivation were tested in two ways: (1) as covariates in the Bayesian spatial model; (2) via post-Bayesian regression. 1701 cases were included. Bayesian smoothed maps of relative risk at small area resolution matched closely to our previous analysis at a larger area resolution. Cluster analysis identified two areas of significant low risk. These areas did not correlate with population density or social deprivation indices. Two areas showing low frequency of ALS have been identified in the Republic of Ireland. These areas do not correlate with population density or residential area social deprivation, indicating that other reasons, such as genetic admixture may account for the observed findings. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Multiscale limited penetrable horizontal visibility graph for analyzing nonlinear time series

    NASA Astrophysics Data System (ADS)

    Gao, Zhong-Ke; Cai, Qing; Yang, Yu-Xuan; Dang, Wei-Dong; Zhang, Shan-Shan

    2016-10-01

    The visibility graph has established itself as a powerful tool for analyzing time series. In this paper we develop a novel multiscale limited penetrable horizontal visibility graph (MLPHVG). We use nonlinear time series from two typical complex systems, i.e., EEG signals and two-phase flow signals, to demonstrate the effectiveness of our method. Combining MLPHVG and support vector machines, we detect epileptic seizures from EEG signals recorded from healthy subjects and epilepsy patients, and the classification accuracy is 100%. In addition, we derive MLPHVGs from oil-water two-phase flow signals and find that the average clustering coefficient at different scales allows faithfully identifying and characterizing three typical oil-water flow patterns. These findings render our MLPHVG method particularly useful for analyzing nonlinear time series from the perspective of multiscale network analysis.
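
    The limited penetrable horizontal visibility criterion is short enough to state in code: two samples are linked when at most rho intermediate samples reach the smaller of the two values (rho = 0 recovers the ordinary HVG). The multiscale version applies this after coarse-graining the series at several scales, a step omitted here.

      # Minimal sketch of a limited penetrable horizontal visibility graph.
      import numpy as np

      def lphvg_edges(x, rho=1):
          edges = []
          for i in range(len(x)):
              for j in range(i + 1, len(x)):
                  # count intermediate samples blocking the horizontal line of sight
                  blocked = sum(x[k] >= min(x[i], x[j]) for k in range(i + 1, j))
                  if blocked <= rho:
                      edges.append((i, j))
          return edges

      x = np.array([1.0, 3.0, 0.5, 2.5, 1.5, 4.0])
      print(lphvg_edges(x, rho=0))   # ordinary horizontal visibility graph
      print(lphvg_edges(x, rho=1))   # one penetration allowed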

  2. Correlation based networks of equity returns sampled at different time horizons

    NASA Astrophysics Data System (ADS)

    Tumminello, M.; di Matteo, T.; Aste, T.; Mantegna, R. N.

    2007-01-01

    We investigate the planar maximally filtered graphs of the portfolio of the 300 most capitalized stocks traded on the New York Stock Exchange during the time period 2001-2003. Topological properties such as the average length of shortest paths, the betweenness and the degree are computed on different planar maximally filtered graphs generated by sampling the returns at time horizons ranging from 5 min up to one trading day. This analysis confirms that the selected stocks compose a hierarchical system that progressively structures as the sampling time horizon increases. Finally, a cluster formation, associated with economic sectors, is quantitatively investigated.

  3. On Wiener polarity index of bicyclic networks.

    PubMed

    Ma, Jing; Shi, Yongtang; Wang, Zhen; Yue, Jun

    2016-01-11

    Complex networks are ubiquitous in the biological, physical and social sciences. Network robustness research aims at finding a measure to quantify network robustness. A number of Wiener-type indices have recently been incorporated as distance-based descriptors of complex networks. Wiener-type indices are known to depend both on the network's number of nodes and on its topology. The Wiener polarity index is also related to the clustering coefficient of networks. In this paper, based on some graph transformations, we determine the sharp upper bound of the Wiener polarity index among all bicyclic networks. This bound helps to understand the underlying quantitative graph measures in depth.
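
    For reference, the Wiener polarity index counts unordered vertex pairs at shortest-path distance exactly 3; a direct computation on a small bicyclic graph:

      # Wiener polarity index with networkx: pairs of nodes at distance 3.
      import networkx as nx
      from itertools import combinations

      def wiener_polarity(G):
          dist = dict(nx.all_pairs_shortest_path_length(G))
          return sum(1 for u, v in combinations(G.nodes, 2) if dist[u].get(v) == 3)

      G = nx.cycle_graph(5)                 # add a chord path to make it bicyclic
      G.add_edges_from([(0, 5), (5, 2)])
      print(wiener_polarity(G))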

  4. Existence of the Harmonic Measure for Random Walks on Graphs and in Random Environments

    NASA Astrophysics Data System (ADS)

    Boivin, Daniel; Rau, Clément

    2013-01-01

    We give a sufficient condition for the existence of the harmonic measure from infinity of transient random walks on weighted graphs. In particular, this condition is verified by the random conductance model on ℤ^d, d ≥ 3, when the conductances are i.i.d. and the bonds with positive conductance percolate. The harmonic measure from infinity also exists for random walks on supercritical clusters of ℤ^2. This is proved using results of Barlow (Ann. Probab. 32:3024-3084, 2004) and Barlow and Hambly (Electron. J. Probab. 14(1):1-27, 2009).

  5. An automated method for finding molecular complexes in large protein interaction networks

    PubMed Central

    Bader, Gary D; Hogue, Christopher WV

    2003-01-01

    Background Recent advances in proteomics technologies such as two-hybrid, phage display and mass spectrometry have enabled us to create a detailed map of biomolecular interaction networks. Initial mapping efforts have already produced a wealth of data. As the size of the interaction set increases, databases and computational methods will be required to store, visualize and analyze the information in order to effectively aid in knowledge discovery. Results This paper describes a novel graph theoretic clustering algorithm, "Molecular Complex Detection" (MCODE), that detects densely connected regions in large protein-protein interaction networks that may represent molecular complexes. The method is based on vertex weighting by local neighborhood density and outward traversal from a locally dense seed protein to isolate the dense regions according to given parameters. The algorithm has the advantage over other graph clustering methods of having a directed mode that allows fine-tuning of clusters of interest without considering the rest of the network and allows examination of cluster interconnectivity, which is relevant for protein networks. Protein interaction and complex information from the yeast Saccharomyces cerevisiae was used for evaluation. Conclusion Dense regions of protein interaction networks can be found, based solely on connectivity data, many of which correspond to known protein complexes. The algorithm is not affected by a known high rate of false positives in data from high-throughput interaction techniques. The program is available from . PMID:12525261
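
    The flavor of the algorithm can be conveyed with a simplified sketch: weight vertices by local neighborhood density and grow a complex outward from the heaviest seed. The weighting below (local clustering coefficient times degree) is a stand-in for MCODE's core-clustering coefficient, and the growth rule is reduced to a single weight cutoff; the real algorithm is more elaborate.

      # Hedged, MCODE-style sketch of seed-and-grow complex detection.
      import networkx as nx

      def find_complex(G, cutoff=0.5):
          # vertex weight: a stand-in for the core-clustering coefficient
          w = {v: nx.clustering(G, v) * G.degree(v) for v in G}
          seed = max(w, key=w.get)
          complex_, frontier = {seed}, set(G[seed])
          while frontier:
              v = frontier.pop()
              if w[v] >= cutoff * w[seed]:      # keep neighbors of similar density
                  complex_.add(v)
                  frontier |= set(G[v]) - complex_
          return complex_

      G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)])  # triangle + tail
      print(find_complex(G))   # {0, 1, 2}: the densely connected region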

  6. A network view on psychiatric disorders: network clusters of symptoms as elementary syndromes of psychopathology.

    PubMed

    Goekoop, Rutger; Goekoop, Jaap G

    2014-01-01

    The vast number of psychopathological syndromes that can be observed in clinical practice can be described in terms of a limited number of elementary syndromes that are differentially expressed. Previous attempts to identify elementary syndromes have shown limitations that have slowed progress in the taxonomy of psychiatric disorders. Our objective was to examine the ability of network community detection (NCD) to identify elementary syndromes of psychopathology and move beyond the limitations of current classification methods in psychiatry. 192 patients with unselected mental disorders were tested on the Comprehensive Psychopathological Rating Scale (CPRS). Principal component analysis (PCA) was performed on the bootstrapped correlation matrix of symptom scores to extract the principal component structure (PCS). An undirected and weighted network graph was constructed from the same matrix. Network community structure (NCS) was optimized using a previously published technique. In the optimal network structure, network clusters showed an 89% match with principal components of psychopathology. Six network clusters were found, including "Depression", "Mania", "Anxiety", "Psychosis", "Retardation", and "Behavioral Disorganization". Network metrics were used to quantify the continuities between the elementary syndromes. We present the first comprehensive network graph of psychopathology that is free from the biases of previous classifications: a 'Psychopathology Web'. Clusters within this network represent elementary syndromes that are connected via a limited number of bridge symptoms. Many problems of previous classifications can be overcome by using a network approach to psychopathology.

  7. GDPC: Gravitation-based Density Peaks Clustering algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Jianhua; Hao, Dehao; Chen, Yujun; Parmar, Milan; Li, Keqin

    2018-07-01

    The Density Peaks Clustering algorithm, which we refer to as DPC, is a novel and efficient density-based clustering approach that was published in Science in 2014. DPC has the advantage of discovering clusters with varying sizes and varying densities, but has some limitations in detecting the number of clusters and identifying anomalies. We develop an enhanced algorithm with an alternative decision graph, based on gravitation theory and nearby distance, to identify centroids and anomalies accurately. We apply our method to several UCI and synthetic data sets and report comparative clustering performance, using the F-measure, clustering accuracy, and two-dimensional visualization, against other clustering algorithms such as K-Means, Affinity Propagation (AP) and DPC. We show that GDPC has superior performance in its capability of: (1) reliably detecting the number of clusters; (2) efficiently aggregating clusters of varying sizes and densities; (3) accurately identifying anomalies.
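
    The decision graph that GDPC modifies is built from two quantities per point: a local density rho and the distance delta to the nearest point of higher density. A compact sketch with a Gaussian kernel density, where the cutoff distance dc and the two-cluster toy data are arbitrary choices:

      # Decision-graph quantities of Density Peaks Clustering: points with large
      # rho and large delta are centroid candidates; tiny rho flags anomalies.
      import numpy as np

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
      D = np.linalg.norm(X[:, None] - X[None], axis=-1)

      dc = 0.5                                           # cutoff distance
      rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0     # Gaussian kernel density
      delta = np.empty(len(X))
      for i in range(len(X)):
          higher = np.where(rho > rho[i])[0]
          delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()

      centers = np.argsort(rho * delta)[-2:]             # two largest gamma values
      print(centers, np.round(rho[centers], 1), np.round(delta[centers], 1))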

  8. Analysis of Genetic Diversity and Structure Pattern of Indigofera Pseudotinctoria in Karst Habitats of the Wushan Mountains Using AFLP Markers.

    PubMed

    Fan, Yan; Zhang, Chenglin; Wu, Wendan; He, Wei; Zhang, Li; Ma, Xiao

    2017-10-16

    Indigofera pseudotinctoria Mats is an agronomically and economically important perennial legume shrub with a high forage yield, high protein content and strong adaptability, which is subject to natural habitat fragmentation and serious human disturbance. Until now, our knowledge of the genetic relationships and intraspecific genetic diversity of its wild collections has been poor, especially at small spatial scales. Here, amplified fragment length polymorphism (AFLP) technology was employed for analysis of the genetic diversity, differentiation, and structure of 364 genotypes of I. pseudotinctoria from 15 natural locations in the Wushan Mountains, a highly structured mountain range with typical karst landforms in Southwest China. We also tested whether eco-climate factors have affected genetic structure by correlating genetic diversity with habitat features. A total of 515 distinctly scoreable bands were generated, and 324 of them were polymorphic. The polymorphic information content (PIC) ranged from 0.694 to 0.890 with an average of 0.789 per primer pair. At the species level, Nei's gene diversity (H_j), the Bayesian genetic diversity index (H_B) and the Shannon information index (I) were 0.2465, 0.2363 and 0.3772, respectively. High differentiation among all sampling sites was detected (F_ST = 0.2217, G_ST = 0.1746, G'_ST = 0.2060, θ_B = 0.1844), and, correspondingly, gene flow among accessions (N_m = 1.1819) was restricted. The population genetic structure resolved by the UPGMA tree, principal coordinate analysis, and Bayesian-based cluster analyses irrefutably grouped all accessions into two distinct clusters, i.e., lowland and highland groups. This structure pattern may indicate joint effects of neutral evolution and natural selection. Restricted N_m was observed across all accessions, and genetic barriers were detected between adjacent accessions due to the specific geographical landforms.

  9. Graph and Network for Model Elicitation (GNOME Phase 2)

    DTIC Science & Technology

    2013-02-01

    Only table-of-contents and figure-caption fragments of this report survive extraction, among them "GNOME UI Components for NOEM Web Client" and "Figure 17: Sampling in Web-client". The recoverable text notes that the server-side service can run and generate data asynchronously, allowing a cluster of servers to run the sampling.

  10. Visualization and recommendation of large image collections toward effective sensemaking

    NASA Astrophysics Data System (ADS)

    Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis

    2016-03-01

    In our daily lives, images are among the most common data we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.

  11. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    PubMed

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.

  12. Analysis of nasopharyngeal carcinoma risk factors with Bayesian networks.

    PubMed

    Aussem, Alex; de Morais, Sérgio Rodrigues; Corbex, Marilys

    2012-01-01

    We propose a new graphical framework for extracting the relevant dietary, social and environmental risk factors that are associated with an increased risk of nasopharyngeal carcinoma (NPC) in a case-control epidemiologic study that consists of 1289 subjects and 150 risk factors. This framework builds on the use of Bayesian networks (BNs) for representing statistical dependencies between the random variables. We discuss a novel constraint-based procedure, called Hybrid Parents and Children (HPC), that recursively builds a local graph including all the relevant features statistically associated with the NPC, without having to find the whole BN first. The local graph is afterwards directed by the domain expert according to his knowledge. It provides a statistical profile of the recruited population, while helping to identify the risk factors associated with NPC. Extensive experiments on synthetic data sampled from known BNs show that HPC outperforms state-of-the-art algorithms that have appeared in the recent literature. From a biological perspective, the present study confirms that chemical products, pesticides and domestic fume intake from incomplete combustion of coal and wood are significantly associated with NPC risk. These results suggest that industrial workers are often exposed to noxious chemicals and poisonous substances that are used in the course of manufacturing. This study also supports previous findings that the consumption of a number of preserved food items, like house-made proteins and sheep fat, is a major risk factor for NPC. BNs are valuable data mining tools for the analysis of epidemiologic data. They can explicitly combine both expert knowledge from the field and information inferred from the data. These techniques therefore merit consideration as valuable alternatives to traditional multivariate regression techniques in epidemiologic studies. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. The Gaia-ESO Survey: dynamical models of flattened, rotating globular clusters

    NASA Astrophysics Data System (ADS)

    Jeffreson, S. M. R.; Sanders, J. L.; Evans, N. W.; Williams, A. A.; Gilmore, G. F.; Bayo, A.; Bragaglia, A.; Casey, A. R.; Flaccomio, E.; Franciosini, E.; Hourihane, A.; Jackson, R. J.; Jeffries, R. D.; Jofré, P.; Koposov, S.; Lardo, C.; Lewis, J.; Magrini, L.; Morbidelli, L.; Pancino, E.; Randich, S.; Sacco, G. G.; Worley, C. C.; Zaggia, S.

    2017-08-01

    We present a family of self-consistent axisymmetric rotating globular cluster models which are fitted to spectroscopic data for NGC 362, NGC 1851, NGC 2808, NGC 4372, NGC 5927 and NGC 6752 to provide constraints on their physical and kinematic properties, including their rotation signals. They are constructed by flattening Modified Plummer profiles, which have the same asymptotic behaviour as classical Plummer models, but can provide better fits to young clusters due to a slower turnover in the density profile. The models are in dynamical equilibrium as they depend solely on the action variables. We employ a fully Bayesian scheme to investigate the uncertainty in our model parameters (including mass-to-light ratios and inclination angles) and evaluate the Bayesian evidence ratio for rotating to non-rotating models. We find convincing levels of rotation only in NGC 2808. In the other clusters, there is just a hint of rotation (in particular, NGC 4372 and NGC 5927), as the data quality does not allow us to draw strong conclusions. Where rotation is present, we find that it is confined to the central regions, within radii of R ≤ 2 r_h. As part of this work, we have developed a novel q-Gaussian basis expansion of the line-of-sight velocity distributions, from which general models can be constructed via interpolation on the basis coefficients.

  14. Defining and Measuring Transnational Social Structures

    ERIC Educational Resources Information Center

    Molina, José Luis; Petermann, Sören; Herz, Andreas

    2015-01-01

    Transnational social fields and transnational social spaces are often used interchangeably to describe and analyze emergent structures of cross-border formations. In this article, we suggest measuring two key aspects of these social structures: embeddedness and span of migrants' personal networks. While clustered graphs allow assessing…

  15. Heuristics for connectivity-based brain parcellation of SMA/pre-SMA through force-directed graph layout.

    PubMed

    Crippa, Alessandro; Cerliani, Leonardo; Nanetti, Luca; Roerdink, Jos B T M

    2011-02-01

    We propose the use of force-directed graph layout as an explorative tool for connectivity-based brain parcellation studies. The method can be used as a heuristic to find the number of clusters intrinsically present in the data (if any) and to investigate their organisation. It provides an intuitive representation of the structure of the data and facilitates interactive exploration of properties of single seed voxels as well as relations among (groups of) voxels. We validate the method on synthetic data sets and we investigate the changes in connectivity in the supplementary motor cortex, a brain region whose parcellation has been previously investigated via connectivity studies. This region is thought to present two easily distinguishable connectivity patterns, putatively denoted by SMA (supplementary motor area) and pre-SMA. Our method provides insights into the connectivity patterns of the premotor cortex, which vary substantially among subjects; their subdivision into two well-separated clusters is not always straightforward. Copyright © 2010 Elsevier Inc. All rights reserved.

  16. Classification of ligand molecules in PDB with fast heuristic graph match algorithm COMPLIG.

    PubMed

    Saito, Mihoko; Takemura, Naomi; Shirai, Tsuyoshi

    2012-12-14

    A fast heuristic graph-matching algorithm, COMPLIG, was devised to classify the small-molecule ligands in the Protein Data Bank (PDB), which currently lack a proper structure-based classification. By concurrently classifying proteins and ligands, we determined the most appropriate parameter for categorizing ligands to be more than 60% identity of atoms and bonds between molecules, and we classified 11,585 types of ligands into 1946 clusters. Although the large clusters were composed of nucleotides or amino acids, a significant presence of drug compounds was also observed. Application of the system to classify the natural ligand status of human proteins in the current database suggested that, at most, 37% of the experimental structures of human proteins were in complex with natural ligands. However, protein homology- and/or ligand similarity-based modeling appears capable of providing models of natural interactions for an additional 28% of the total, which might be used to increase the knowledge of intrinsic protein-metabolite interactions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Optimization model for UDWDM-PON deployment based on physical restrictions and asymmetric user's clustering

    NASA Astrophysics Data System (ADS)

    Arévalo, Germán. V.; Hincapié, Roberto C.; Sierra, Javier E.

    2015-09-01

    UDWDM PON is a leading technology for providing ultra-high bandwidth to final users while exploiting the capacity of the physical channels. One of the main drawbacks of the UDWDM technique is that nonlinear effects, like FWM, become stronger due to the close spectral proximity among channels. This work proposes a model for the optimal deployment of this type of network, taking into account the fiber length limitations imposed by physical restrictions related to the fiber's data transmission, as well as the users' asymmetric distribution in a given region. The proposed model employs the data-transmission-related effects in UDWDM PON as restrictions in the optimization problem, and also considers the users' asymmetric clustering and the subdivision of the user region through a Voronoi geometric partition technique. The dual graph of the Voronoi diagram, that is, the Delaunay triangulation, is used as the planar graph for solving the minimum-weight problem for the fiber links.
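
    A minimal sketch of the graph construction step, assuming scipy is available and using made-up cluster centroids: the Delaunay triangulation (the dual of the Voronoi partition) supplies the planar candidate-link graph, with Euclidean edge lengths standing in for fiber cost.

        # Sketch: Delaunay triangulation of user-cluster centroids as the
        # planar graph whose edge weights are candidate fiber-link lengths.
        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)
        centroids = rng.uniform(0, 10, size=(30, 2))   # user clusters, km

        tri = Delaunay(centroids)
        edges = set()
        for simplex in tri.simplices:                  # each triangle -> 3 edges
            for i in range(3):
                a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
                edges.add((a, b))

        # Edge weight = Euclidean length, a proxy for fiber cost subject to
        # the physical-layer length restrictions discussed above.
        weights = {(a, b): float(np.linalg.norm(centroids[a] - centroids[b]))
                   for a, b in edges}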

  18. Quasirandom geometric networks from low-discrepancy sequences

    NASA Astrophysics Data System (ADS)

    Estrada, Ernesto

    2017-08-01

    We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that, in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs, the diffusion produces clusters of concentration that slow the process down. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
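
    A minimal sketch of the construction for d = 2, using a pure-Python Halton generator (bases 2 and 3) and an illustrative connection radius:

        # Sketch: a 2-D quasirandom geometric network from the Halton sequence.
        import numpy as np

        def halton(n, base):
            """First n terms of the van der Corput sequence in the given base."""
            seq = []
            for i in range(1, n + 1):
                f, x = 1.0, 0.0
                while i > 0:
                    f /= base
                    x += f * (i % base)
                    i //= base
                seq.append(x)
            return np.array(seq)

        n, r = 500, 0.06                 # illustrative size and radius
        pts = np.column_stack([halton(n, 2), halton(n, 3)])

        # Adjacency by thresholding pairwise distances.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        adj = (d < r) & ~np.eye(n, dtype=bool)
        print("edges:", int(adj.sum()) // 2)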

  19. Asking better questions: How presentation formats influence information search.

    PubMed

    Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D

    2017-08-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
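
    For context, the Bayesian benchmark against which such heuristics are judged can be written compactly: the expected accuracy of a query is the outcome-weighted probability of the most likely class after each outcome. The sketch below uses invented numbers and is not the TTD heuristic itself.

        # Sketch: expected classification accuracy of a binary query,
        # accuracy = sum_o P(o) * max_c P(c | o). Numbers are illustrative.
        def expected_accuracy(p_outcome, p_class_given_outcome):
            """After each outcome the searcher guesses the most probable class."""
            return sum(p * max(post) for p, post in
                       zip(p_outcome, p_class_given_outcome))

        # Query A: outcomes with class posteriors (0.9, 0.1) and (0.4, 0.6).
        a = expected_accuracy([0.5, 0.5], [(0.9, 0.1), (0.4, 0.6)])
        # Query B: a less informative alternative.
        b = expected_accuracy([0.5, 0.5], [(0.7, 0.3), (0.5, 0.5)])
        print(a, b)   # 0.75 vs 0.60 -> query A maximizes expected accuracy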

  20. Graph theoretic analysis of structural connectivity across the spectrum of Alzheimer's disease: The importance of graph creation methods

    PubMed Central

    Phillips, David J.; McGlaughlin, Alec; Ruth, David; Jager, Leah R.; Soldan, Anja

    2015-01-01

    Graph theory is increasingly being used to study brain connectivity across the spectrum of Alzheimer's disease (AD), but prior findings have been inconsistent, likely reflecting methodological differences. We systematically investigated how methods of graph creation (i.e., type of correlation matrix and edge weighting) affect structural network properties and group differences. We estimated the structural connectivity of brain networks based on correlation maps of cortical thickness obtained from MRI. Four groups were compared: 126 cognitively normal older adults, 103 individuals with Mild Cognitive Impairment (MCI) who retained MCI status for at least 3 years (stable MCI), 108 individuals with MCI who progressed to AD-dementia within 3 years (progressive MCI), and 105 individuals with AD-dementia. Small-world measures of connectivity (characteristic path length and clustering coefficient) differed across groups, consistent with prior studies. Groups were best discriminated by the Randić index, which measures the degree to which highly connected nodes connect to other highly connected nodes. The Randić index differentiated the stable and progressive MCI groups, suggesting that it might be useful for tracking and predicting the progression of AD. Notably, however, the magnitude and direction of group differences in all three measures were dependent on the method of graph creation, indicating that it is crucial to take into account how graphs are constructed when interpreting differences across diagnostic groups and studies. The algebraic connectivity measures showed few group differences, independent of the method of graph construction, suggesting that global connectivity as it relates to node degree is not altered in early AD. PMID:25984446
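
    For reference, a generalized Randić-type connectivity index is easy to compute from a graph's degree sequence; the classical index uses exponent -0.5, while a positive exponent emphasizes hub-to-hub edges, closer to the interpretation given above. Which exact variant the study used is not specified here, so treat this as a generic sketch.

        # Sketch: generalized Randic connectivity index,
        # R_alpha = sum over edges (deg(u) * deg(v)) ** alpha.
        import networkx as nx

        def randic_index(G, alpha=-0.5):
            deg = dict(G.degree())
            return sum((deg[u] * deg[v]) ** alpha for u, v in G.edges())

        G = nx.karate_club_graph()                       # stand-in graph
        print(randic_index(G, alpha=-0.5), randic_index(G, alpha=1.0))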

  1. Analysis of Resting-State fMRI Topological Graph Theory Properties in Methamphetamine Drug Users Applying Box-Counting Fractal Dimension.

    PubMed

    Siyah Mansoory, Meysam; Oghabian, Mohammad Ali; Jafari, Amir Homayoun; Shahbabaie, Alireza

    2017-01-01

    Graph theoretical analysis of functional Magnetic Resonance Imaging (fMRI) data has provided new measures for mapping the human brain in vivo. Of all methods to measure the functional connectivity between regions, Linear Correlation (LC) between the activity time series of brain regions is the most ubiquitous linear measure. However, LC consistently underestimates the strength of the dependence required for graph construction and analysis, because only the marginal distributions, and not all the bivariate distributions, are Gaussian. In a number of studies, Mutual Information (MI), a purely nonlinear measure, has been employed as a similarity measure between each pair of brain-region time series. Owing to the complex fractal organization of the brain, which indicates self-similarity, more information on the brain can be revealed by fMRI Fractal Dimension (FD) analysis. In the present paper, Box-Counting Fractal Dimension (BCFD) is introduced for graph theoretical analysis of fMRI data in 17 methamphetamine drug users and 18 normal controls, and its performance is evaluated against the LC and MI methods. Moreover, the global topological graph properties of the brain networks, including global efficiency, clustering coefficient and characteristic path length, were investigated in addicted subjects. Statistical tests (P<0.05) showed that topological graph properties were significantly disrupted in addicted subjects relative to normal controls during resting-state fMRI. Based on the results, analyzing graph topological properties (representing the brain networks) based on BCFD is a more reliable method than LC and MI.
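
    As an illustration of the box-counting estimator itself (on a toy 2-D binary map, not on fMRI data), N(s) occupied boxes are counted at several box sizes s and the BCFD is the slope of log N against log(1/s):

        # Sketch: box-counting fractal dimension of a 2-D binary map.
        import numpy as np

        def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                h, w = img.shape[0] // s * s, img.shape[1] // s * s
                blocks = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                  np.log(counts), 1)
            return slope

        rng = np.random.default_rng(1)
        img = rng.random((128, 128)) > 0.7      # toy binary pattern
        print(box_counting_dimension(img))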

  2. Quantification of three-dimensional cell-mediated collagen remodeling using graph theory.

    PubMed

    Bilgin, Cemal Cagatay; Lund, Amanda W; Can, Ali; Plopper, George E; Yener, Bülent

    2010-09-30

    Cell cooperation is a critical event during tissue development. We present the first precise metrics to quantify the interaction between mesenchymal stem cells (MSCs) and the extracellular matrix (ECM). In particular, we describe the cooperative collagen alignment process with respect to the spatio-temporal organization and function of mesenchymal stem cells in three dimensions. We defined two precise metrics, the Collagen Alignment Index and the Cell Dissatisfaction Level, for quantitatively tracking type I collagen remodeling and fibrillogenesis by mesenchymal stem cells over time. Computation of these metrics was based on graph theory and vector calculus. The cells and their three-dimensional type I collagen microenvironment were modeled by three-dimensional cell-graphs, and collagen fiber organization was calculated from gradient vectors. As mesenchymal stem cell differentiation was enhanced, accelerated progression through the different phases of remodeling was quantitatively demonstrated. The phases clustered in a statistically significant manner based on collagen organization, with late phases of remodeling by untreated cells clustering strongly with early phases of remodeling by differentiating cells. The experiments were repeated three times to conclude that the metrics could successfully identify critical phases of collagen remodeling that were dependent upon cooperativity within the cell population. Defining early metrics that can predict long-term functionality by linking engineered tissue structure to function is an important step toward optimizing biomaterials for the purposes of regenerative medicine.

  3. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).

  4. Measuring geographic segregation: a graph-based approach

    NASA Astrophysics Data System (ADS)

    Hong, Seong-Yun; Sadahiro, Yukio

    2014-04-01

    Residential segregation is a multidimensional phenomenon that encompasses several conceptually distinct aspects of geographical separation between populations. While various indices have been developed as a response to different definitions of segregation, the reliance on such single-figure indices could oversimplify the complex, multidimensional phenomena. In this regard, this paper suggests an alternative graph-based approach that provides more detailed information than simple indices: The concentration profile graphically conveys information about how evenly a population group is distributed over the study region, and the spatial proximity profile depicts the degree of clustering across different threshold levels. These graphs can also be summarized into single numbers for comparative purposes, but the interpretation can be more accurate by inspecting the additional information. To demonstrate the use of these methods, the residential patterns of three major ethnic groups in Auckland, namely Māori, Pacific peoples, and Asians, are examined using the 2006 census data.

  5. Descent graphs in pedigree analysis: applications to haplotyping, location scores, and marker-sharing statistics.

    PubMed Central

    Sobel, E.; Lange, K.

    1996-01-01

    The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310

  6. Quantifying randomness in real networks

    NASA Astrophysics Data System (ADS)

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-10-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.

  7. Relaxation dynamics of maximally clustered networks

    NASA Astrophysics Data System (ADS)

    Klaise, Janis; Johnson, Samuel

    2018-01-01

    We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.
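
    The degree-preserving dynamic is easy to reproduce with networkx's built-in double-edge swap; the sketch below starts from a deliberately clustered caveman graph (an illustrative stand-in for the maximally clustered networks studied) and watches the clustering coefficient relax:

        # Sketch: degree-preserving relaxation via double-edge swaps,
        # tracking the decay of the clustering coefficient.
        import networkx as nx

        G = nx.connected_caveman_graph(20, 5)   # highly clustered start
        print("initial clustering:", nx.transitivity(G))

        for step in range(5):
            nx.double_edge_swap(G, nswap=200, max_tries=10000, seed=step)
            print(f"after {200 * (step + 1):4d} swaps:", nx.transitivity(G))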

  8. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  9. Relation between the Dynamics of Glassy Clusters and Characteristic Features of their Energy Landscape

    NASA Astrophysics Data System (ADS)

    De, Sandip; Schaefer, Bastian; Sadeghi, Ali; Sicher, Michael; Kanhere, D. G.; Goedecker, Stefan

    2014-02-01

    Based on a recently introduced metric for measuring distances between configurations, we introduce distance-energy (DE) plots to characterize the potential energy surface of clusters. Producing such plots is computationally feasible on the density functional level since it requires only a few hundred stable low energy configurations including the global minimum. By using standard criteria based on disconnectivity graphs and the dynamics of Lennard-Jones clusters, we show that the DE plots convey the necessary information about the character of the potential energy surface and allow us to distinguish between glassy and nonglassy systems. We then apply this analysis to real clusters at the density functional theory level and show that both glassy and nonglassy clusters can be found in simulations. It turns out that among our investigated clusters only those can be synthesized experimentally which exhibit a nonglassy landscape.

  10. Spatial analysis of county-based gonorrhoea incidence in mainland China, from 2004 to 2009.

    PubMed

    Yin, Fei; Feng, Zijian; Li, Xiaosong

    2012-07-01

    Gonorrhoea is one of the most common sexually transmissible infections in mainland China. Effective spatial monitoring of gonorrhoea incidence is important for successful implementation of control and prevention programs. County-level gonorrhoea incidence rates for all of mainland China were monitored by examining their spatial patterns. County-level data on gonorrhoea cases between 2004 and 2009 were obtained from the China Information System for Disease Control and Prevention. Bayesian smoothing and exploratory spatial data analysis (ESDA) methods were used to characterise the spatial distribution pattern of gonorrhoea cases. During the 6-year study period, the average annual gonorrhoea incidence was 12.41 cases per 100,000 people. Using empirical Bayes smoothed rates, the local Moran test identified one significant single-centre cluster and two significant multi-centre clusters of high gonorrhoea risk (all P-values <0.01). Bayesian smoothing and ESDA methods can assist public health officials in using gonorrhoea surveillance data to identify high-risk areas. Allocating more resources to such areas could effectively reduce gonorrhoea incidence.
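
    For readers unfamiliar with the smoothing step, a global empirical Bayes (method-of-moments) smoother shrinks each county's raw rate toward the overall mean in proportion to its population size; the counts below are invented.

        # Sketch: global empirical Bayes smoothing of county incidence rates.
        import numpy as np

        cases = np.array([3, 0, 45, 12, 7, 160])
        pop = np.array([8e3, 2e3, 9e5, 6e4, 3e4, 1.2e6])

        rates = cases / pop
        m = cases.sum() / pop.sum()                       # global mean rate
        s2 = np.average((rates - m) ** 2, weights=pop)    # total variance
        prior_var = max(s2 - m / np.mean(pop), 0.0)       # between-area variance

        # Shrinkage weight grows with population size n_i.
        w = prior_var / (prior_var + m / pop)
        smoothed = w * rates + (1 - w) * m
        print(np.round(smoothed * 1e5, 2))                # per 100,000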

  11. Optical characterization limits of nanoparticle aggregates at different wavelengths using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Eriçok, Ozan Burak; Ertürk, Hakan

    2018-07-01

    Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a lower size limit for reliable characterization that depends on the wavelength of the light source used. In this study, these characterization limits are determined for light source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: a collection of well-separated aggregates made up of identically sized particles, and one with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that, over the 266-1064 nm wavelength range, the successful characterization limit varies between 21 and 62 nm effective radius for monodisperse and polydisperse soot aggregates.
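
    The simplest member of the ABC family, rejection ABC, conveys the likelihood-free idea (the study itself uses the more efficient Adaptive Population Monte Carlo variant); the forward model, prior, and tolerance below are placeholders.

        # Sketch: rejection ABC for one parameter (an effective radius, nm).
        import numpy as np

        rng = np.random.default_rng(0)

        def forward_model(radius):
            """Placeholder for the light-scattering simulation (e.g. DDA);
            returns a noisy summary statistic of the scattering signal."""
            return radius + rng.normal(0.0, 2.0)

        observed = 41.3                       # pretend-measured summary

        accepted = [theta
                    for theta in rng.uniform(10.0, 80.0, size=20000)  # prior
                    if abs(forward_model(theta) - observed) < 2.0]    # tolerance
        print(len(accepted), float(np.mean(accepted)))  # approximate posterior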

  12. Decision-theoretic control of EUVE telescope scheduling

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1993-01-01

    This paper describes a decision-theoretic scheduler (DTS) designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems and using probabilistic inference to aggregate this information in light of the features of a given problem. The Bayesian Problem-Solver (BPS) introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvement over traditional techniques. Initial efforts suggest that similar improvements will be realizable when applied to typical constraint-satisfaction scheduling problems.

  13. Experiments with a decision-theoretic scheduler

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Holt, Gerhard; Mayer, Andrew

    1992-01-01

    This paper describes DTS, a decision-theoretic scheduler designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems, and using probabilistic inference to aggregate this information in light of features of a given problem. BPS, the Bayesian Problem-Solver, introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvement over traditional techniques. Initial efforts suggest that similar improvements will be realizable when applied to typical constraint-satisfaction scheduling problems.

  14. Two-character motion analysis and synthesis.

    PubMed

    Kwon, Taesoo; Cho, Young-Sang; Park, Sang Il; Shin, Sung Yong

    2008-01-01

    In this paper, we deal with the problem of synthesizing novel motions of standing-up martial arts such as Kickboxing, Karate, and Taekwondo performed by a pair of human-like characters while reflecting their interactions. Adopting an example-based paradigm, we address three non-trivial issues embedded in this problem: motion modeling, interaction modeling, and motion synthesis. For the first issue, we present a semi-automatic motion labeling scheme based on force-based motion segmentation and learning-based action classification. We also construct a pair of motion transition graphs each of which represents an individual motion stream. For the second issue, we propose a scheme for capturing the interactions between two players. A dynamic Bayesian network is adopted to build a motion transition model on top of the coupled motion transition graph that is constructed from an example motion stream. For the last issue, we provide a scheme for synthesizing a novel sequence of coupled motions, guided by the motion transition model. Although the focus of the present work is on martial arts, we believe that the framework of the proposed approach can be conveyed to other two-player motions as well.

  15. Multi-target Detection, Tracking, and Data Association on Road Networks Using Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Barkley, Brett E.

    A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that carry the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements, either to update the likelihood ratio tracker (for undetected targets) or to update a position probability (for previously detected targets). Three strategies for UAV motion planning are proposed to balance searching for new targets against tracking known targets in a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks of varying complexity and using UAVs at various altitudes.
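
    The recursive likelihood-ratio update at a single road-network node can be sketched in a few lines; the sensor detection and false-alarm probabilities and the declaration threshold below are assumed values, not those of the thesis.

        # Sketch: recursive Bayesian log-likelihood-ratio detection at one node.
        import math

        P_D, P_FA = 0.8, 0.1          # assumed detection / false-alarm rates
        THRESHOLD = math.log(100.0)   # declare a target at 100:1 odds

        def update(log_lr, z):
            """z = True if the sensor reported a detection on this look."""
            if z:
                return log_lr + math.log(P_D / P_FA)
            return log_lr + math.log((1 - P_D) / (1 - P_FA))

        log_lr = 0.0                  # even prior odds
        for z in [True, False, True, True]:      # a sequence of UAV looks
            log_lr = update(log_lr, z)
            print(log_lr, "DETECTED" if log_lr > THRESHOLD else "searching")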

  16. Phylogenetic relationships of the dwarf boas and a comparison of Bayesian and bootstrap measures of phylogenetic support.

    PubMed

    Wilcox, Thomas P; Zwickl, Derrick J; Heath, Tracy A; Hillis, David M

    2002-11-01

    Four New World genera of dwarf boas (Exiliboa, Trachyboa, Tropidophis, and Ungaliophis) have been placed by many systematists in a single group (traditionally called Tropidophiidae). However, the monophyly of this group has been questioned in several studies. Moreover, the overall relationships among basal snake lineages, including the placement of the dwarf boas, are poorly understood. We obtained mtDNA sequence data for 12S, 16S, and intervening tRNA-val genes from 23 species of snakes representing most major snake lineages, including all four genera of New World dwarf boas. We then examined the phylogenetic position of these species by estimating the phylogeny of the basal snakes. Our phylogenetic analysis suggests that New World dwarf boas are not monophyletic. Instead, we find Exiliboa and Ungaliophis to be most closely related to sand boas (Erycinae), boas (Boinae), and advanced snakes (Caenophidea), whereas Tropidophis and Trachyboa form an independent clade that separated relatively early in snake radiation. Our estimate of snake phylogeny differs significantly in other ways from some previous estimates of snake phylogeny. For instance, pythons do not cluster with boas and sand boas, but instead show a strong relationship with Loxocemus and Xenopeltis. Additionally, uropeltids cluster strongly with Cylindrophis, and together are embedded in what has previously been considered the macrostomatan radiation. These relationships are supported by both bootstrapping (parametric and nonparametric approaches) and Bayesian analysis, although Bayesian support values are consistently higher than those obtained from nonparametric bootstrapping. Simulations show that Bayesian support values represent much better estimates of phylogenetic accuracy than do nonparametric bootstrap support values, at least under the conditions of our study. Copyright 2002 Elsevier Science (USA)

  17. A Graphical Modelling Approach to the Dissection of Highly Correlated Transcription Factor Binding Site Profiles

    PubMed Central

    Stojnic, Robert; Fu, Audrey Qiuyan; Adryan, Boris

    2012-01-01

    Inferring the combinatorial regulatory code of transcription factors (TFs) from genome-wide TF binding profiles is challenging. A major reason is that TF binding profiles significantly overlap and are therefore highly correlated. Clustered occurrence of multiple TFs at genomic sites may arise from chromatin accessibility and local cooperation between TFs, or binding sites may simply appear clustered if the profiles are generated from diverse cell populations. Overlaps in TF binding profiles may also result from measurements taken at closely related time intervals. It is thus of great interest to distinguish TFs that directly regulate gene expression from those that are indirectly associated with gene expression. Graphical models, in particular Bayesian networks, provide a powerful mathematical framework to infer different types of dependencies. However, existing methods do not perform well when the features (here: TF binding profiles) are highly correlated, when their association with the biological outcome is weak, and when the sample size is small. Here, we develop a novel computational method, the Neighbourhood Consistent PC (NCPC) algorithms, which deal with these scenarios much more effectively than existing methods do. We further present a novel graphical representation, the Direct Dependence Graph (DDGraph), to better display the complex interactions among variables. NCPC and DDGraph can also be applied to other problems involving highly correlated biological features. Both methods are implemented in the R package ddgraph, available as part of Bioconductor (http://bioconductor.org/packages/2.11/bioc/html/ddgraph.html). Applied to real data, our method identified TFs that specify different classes of cis-regulatory modules (CRMs) in Drosophila mesoderm differentiation. Our analysis also found depletion of the early transcription factor Twist binding at the CRMs regulating expression in visceral and somatic muscle cells at later stages, which suggests a CRM-specific repression mechanism that so far has not been characterised for this class of mesodermal CRMs. PMID:23144600

  18. SBEToolbox: A Matlab Toolbox for Biological Network Analysis

    PubMed Central

    Konganti, Kranti; Wang, Gang; Yang, Ence; Cai, James J.

    2013-01-01

    We present SBEToolbox (Systems Biology and Evolution Toolbox), an open-source Matlab toolbox for biological network analysis. It takes a network file as input, calculates a variety of centralities and topological metrics, clusters nodes into modules, and displays the network using different graph layout algorithms. Straightforward implementation and the inclusion of high-level functions allow the functionality to be easily extended or tailored through developing custom plugins. SBEGUI, a menu-driven graphical user interface (GUI) of SBEToolbox, enables easy access to various network and graph algorithms for programmers and non-programmers alike. All source code and sample data are freely available at https://github.com/biocoder/SBEToolbox/releases. PMID:24027418

  19. SBEToolbox: A Matlab Toolbox for Biological Network Analysis.

    PubMed

    Konganti, Kranti; Wang, Gang; Yang, Ence; Cai, James J

    2013-01-01

    We present SBEToolbox (Systems Biology and Evolution Toolbox), an open-source Matlab toolbox for biological network analysis. It takes a network file as input, calculates a variety of centralities and topological metrics, clusters nodes into modules, and displays the network using different graph layout algorithms. Straightforward implementation and the inclusion of high-level functions allow the functionality to be easily extended or tailored through developing custom plugins. SBEGUI, a menu-driven graphical user interface (GUI) of SBEToolbox, enables easy access to various network and graph algorithms for programmers and non-programmers alike. All source code and sample data are freely available at https://github.com/biocoder/SBEToolbox/releases.

  20. New Methods of Spectral-Density Based Graph Construction and Their Application to Hyperspectral Image Analysis

    NASA Astrophysics Data System (ADS)

    Stevens, Jeffrey

    The past decade has seen the emergence of many hyperspectral image (HSI) analysis algorithms based on graph theory and derived manifold-coordinates. Yet, despite the growing number of algorithms, there has been limited study of the graphs constructed from spectral data themselves. Which graphs are appropriate for various HSI analyses--and why? This research aims to begin addressing these questions as the performance of graph-based techniques is inextricably tied to the graphical model constructed from the spectral data. We begin with a literature review providing a survey of spectral graph construction techniques currently used by the hyperspectral community, starting with simple constructs demonstrating basic concepts and then incrementally adding components to derive more complex approaches. Throughout this development, we discuss algorithm advantages and disadvantages for different types of hyperspectral analysis. A focus is provided on techniques influenced by spectral density through which the concept of community structure arises. Through the use of simulated and real HSI data, we demonstrate density-based edge allocation produces more uniform nearest neighbor lists than non-density based techniques through increasing the number of intracluster edges, facilitating higher k-nearest neighbor (k-NN) classification performance. Imposing the common mutuality constraint to symmetrify adjacency matrices is demonstrated to be beneficial in most circumstances, especially in rural (less cluttered) scenes. Many complex adaptive edge-reweighting techniques are shown to slightly degrade nearest-neighbor list characteristics. Analysis suggests this condition is possibly attributable to the validity of characterizing spectral density by a single variable representing data scale for each pixel. Additionally, it is shown that imposing mutuality hurts the performance of adaptive edge-allocation techniques or any technique that aims to assign a low number of edges (<10) to any pixel. A simple k bias addresses this problem. Many of the adaptive edge-reweighting techniques are based on the concept of codensity, so we explore codensity properties as they relate to density-based edge reweighting. We find that codensity may not be the best estimator of local scale due to variations in cluster density, so we introduce and compare two inherently density-weighted graph construction techniques from the data mining literature: shared nearest neighbors (SNN) and mutual proximity (MP). MP and SNN are not reliant upon a codensity measure, hence are not susceptible to its shortcomings. Neither has been used for hyperspectral analyses, so this presents the first study of these techniques on HSI data. We demonstrate MP and SNN can offer better performance, but in general none of the reweighting techniques improve the quality of these spectral graphs in our neighborhood structure tests. As such, these complex adaptive edge-reweighting techniques may need to be modified to increase their effectiveness. During this investigation, we probe deeper into properties of high-dimensional data and introduce the concept of concentration of measure (CoM)--the degradation in the efficacy of many common distance measures with increasing dimensionality--as it relates to spectral graph construction. 
CoM exists in pairwise distances between HSI pixels, but not to the degree experienced in random data of the same extrinsic dimension, a characteristic that we demonstrate is due to the rich correlation and cluster structure present in HSI data. CoM can lead to hubness--a condition wherein some nodes have short distances (high similarities) to an exceptionally large number of nodes. We study hub presence in 49 HSI datasets of varying resolutions, altitudes, and spectral bands to demonstrate that hubness effects are negligible in a k-NN classification example (generalized counting scenarios), but we note that its impact on methods that use edge weights to derive manifold coordinates, or that split clusters based on spectral graph theory, requires more investigation. Many of these new graph-related quantities can be exploited in new techniques for HSI classification and anomaly detection. We present an initial exploration into this relatively new and exciting field based on an enhanced Schroedinger Eigenmap classification example and compare results to the current state-of-the-art approach. We produce equivalent results, but demonstrate different types of misclassifications, opening the door to combining the best of both approaches to achieve truly superior performance. A separate, less mature hubness-assisted anomaly detector (HAAD) is also presented.
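
    Of the two density-weighted constructions imported from the data mining literature, shared nearest neighbors is the simpler to sketch: similarity is the overlap of two pixels' k-NN lists, and edges can additionally be restricted to mutual neighbors. The data, k, and overlap threshold below are illustrative.

        # Sketch: shared-nearest-neighbour (SNN) graph construction.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))        # 200 "pixels", 50 "bands"

        k = 10
        _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
        neigh = [set(map(int, row[1:])) for row in idx]   # drop self-neighbour

        n = len(X)
        snn = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                snn[i, j] = snn[j, i] = len(neigh[i] & neigh[j])

        # Edge rule: mutual k-NN membership plus a minimum shared count.
        adj = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in neigh[i]:
                if i in neigh[j] and snn[i, j] >= 4:
                    adj[i, j] = adj[j, i] = True
        print("edges:", int(adj.sum()) // 2)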

  1. On the structure of Bayesian network for Indonesian text document paraphrase identification

    NASA Astrophysics Data System (ADS)

    Prayogo, Ario Harry; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Paraphrase identification is an important process within natural language processing. The idea is to automatically recognize phrases that have different forms but carry the same meaning. For example, if we input the query "causing fire hazard", the computer has to recognize that this query has the same meaning as "the cause of fire hazard". Paraphrasing is an activity that reveals the meaning of an expression, writing, or speech using different words or forms, especially to achieve greater clarity. In this research we focus on classifying whether two Indonesian sentences are paraphrases of each other or not. There are four steps in this research: preprocessing, feature extraction, classifier building, and performance evaluation. Preprocessing consists of tokenization, non-alphanumerical removal, and stemming. After preprocessing, we conduct feature extraction in order to build new features from the given dataset. There are two kinds of features in this research, syntactic features and semantic features. Syntactic features consist of a normalized Levenshtein distance feature, a term-frequency based cosine similarity feature, and an LCS (Longest Common Subsequence) feature. Semantic features consist of a Wu and Palmer feature and a Shortest Path feature. We use Bayesian networks as the method for training the classifier, with MAP (Maximum A Posteriori) parameter estimation. For learning the structure of the Bayesian network DAG (Directed Acyclic Graph), we use the BDeu (Bayesian Dirichlet equivalent uniform) scoring function, and to find the DAG with the best BDeu score we use the K2 algorithm. In the evaluation step we perform cross-validation. The average results from testing the classifier are as follows: Precision 75.2%, Recall 76.5%, F1-Measure 75.8% and Accuracy 75.6%.
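
    Two of the syntactic features are simple enough to sketch directly; the token examples are illustrative Indonesian phrases, and the exact normalization used in the paper may differ.

        # Sketch: normalized Levenshtein distance and term-frequency cosine
        # similarity between two tokenized sentences.
        from collections import Counter
        import math

        def levenshtein(a, b):
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def normalized_levenshtein(a, b):
            return levenshtein(a, b) / max(len(a), len(b), 1)

        def tf_cosine(tokens_a, tokens_b):
            ta, tb = Counter(tokens_a), Counter(tokens_b)
            dot = sum(ta[w] * tb[w] for w in ta)
            na = math.sqrt(sum(v * v for v in ta.values()))
            nb = math.sqrt(sum(v * v for v in tb.values()))
            return dot / (na * nb) if na and nb else 0.0

        s1 = "penyebab bahaya kebakaran".split()   # illustrative phrases
        s2 = "menyebabkan bahaya kebakaran".split()
        print(normalized_levenshtein(" ".join(s1), " ".join(s2)),
              tf_cosine(s1, s2))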

  2. Graph-theoretic quantum system modelling for neuronal microtubules as hierarchical clustered quantum Hopfield networks

    NASA Astrophysics Data System (ADS)

    Srivastava, D. P.; Sahni, V.; Satsangi, P. S.

    2014-08-01

    Graph-theoretic quantum system modelling (GTQSM) is facilitated by considering the fundamental unit of quantum computation and information, viz. a quantum bit or qubit, as a basic building block. Unit directional vectors "ket 0" and "ket 1" constitute two distinct fundamental quantum across-variable orthonormal basis vectors for the Hilbert space, specifying the direction of propagation of information, or computation data, while complementary fundamental quantum through, or flow rate, variables specify probability parameters, or amplitudes, as surrogates for a scalar quantum information measure (von Neumann entropy). This paper applies GTQSM to a continuum of protein heterodimer tubulin molecules of self-assembling polymers, viz. microtubules in the brain, as a holistic system of interacting components representing a hierarchical clustered quantum Hopfield network, hQHN, of networks. The quantum input/output ports of the constituent elemental interaction components, or processes, of tunnelling interactions and Coulombic bidirectional interactions are in cascade and parallel interconnections with each other, while the classical output ports of all elemental components are interconnected in parallel to accumulate micro-energy functions generated in the system as a Hamiltonian, or Lyapunov, energy function. The paper presents an insight, otherwise difficult to gain, into the complex system of systems represented by the clustered quantum Hopfield network, hQHN, through the application of the GTQSM construct.

  3. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability, with competitive assembly quality, compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
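
    The serial core of any de Bruijn graph assembler fits in a few lines (GiGA itself distributes this structure over Hadoop/Giraph, which is not shown here): each k-mer contributes an edge from its (k-1)-mer prefix to its (k-1)-mer suffix.

        # Sketch: de Bruijn graph construction from short reads.
        from collections import defaultdict

        def de_bruijn(reads, k):
            graph = defaultdict(list)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    graph[kmer[:-1]].append(kmer[1:])   # prefix -> suffix
            return graph

        reads = ["ACGTAC", "GTACGT"]                    # toy reads
        for node, succs in de_bruijn(reads, 4).items():
            print(node, "->", succs)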

  4. Clinical correlates of graph theory findings in temporal lobe epilepsy.

    PubMed

    Haneef, Zulfi; Chiang, Sharon

    2014-11-01

    Temporal lobe epilepsy (TLE) is considered a brain network disorder, additionally representing the most common form of pharmaco-resistant epilepsy in adults. There is increasing evidence that seizures in TLE arise from abnormal epileptogenic networks, which extend beyond the clinico-radiologically determined epileptogenic zone and may contribute to the failure rate of 30-50% following epilepsy surgery. Graph theory allows for a network-based representation of TLE brain networks using several neuroimaging and electrophysiologic modalities, and has potential to provide clinicians with clinically useful biomarkers for diagnostic and prognostic purposes. We performed a review of the current state of graph theory findings in TLE as they pertain to localization of the epileptogenic zone, prediction of pre- and post-surgical seizure frequency and cognitive performance, and monitoring cognitive decline in TLE. Although different neuroimaging and electrophysiologic modalities have yielded occasionally conflicting results, several potential biomarkers have been characterized for identifying the epileptogenic zone, pre-/post-surgical seizure prediction, and assessing cognitive performance. For localization, graph theory measures of centrality have shown the most potential, including betweenness centrality, outdegree, and graph index complexity, whereas for prediction of seizure frequency, measures of synchronizability have shown the most potential. The utility of clustering coefficient and characteristic path length for assessing cognitive performance in TLE is also discussed. Future studies integrating data from multiple modalities and testing predictive models are needed to clarify findings and develop graph theory for its clinical utility. Copyright © 2014 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  5. Clinical correlates of graph theory findings in temporal lobe epilepsy

    PubMed Central

    Haneef, Zulfi; Chiang, Sharon

    2014-01-01

    Purpose: Temporal lobe epilepsy (TLE) is considered a brain network disorder, additionally representing the most common form of pharmaco-resistant epilepsy in adults. There is increasing evidence that seizures in TLE arise from abnormal epileptogenic networks, which extend beyond the clinico-radiologically determined epileptogenic zone and may contribute to the failure rate of 30–50% following epilepsy surgery. Graph theory allows for a network-based representation of TLE brain networks using several neuroimaging and electrophysiologic modalities, and has potential to provide clinicians with clinically useful biomarkers for diagnostic and prognostic purposes. Methods: We performed a review of the current state of graph theory findings in TLE as they pertain to localization of the epileptogenic zone, prediction of pre- and post-surgical seizure frequency and cognitive performance, and monitoring cognitive decline in TLE. Results: Although different neuroimaging and electrophysiologic modalities have yielded occasionally conflicting results, several potential biomarkers have been characterized for identifying the epileptogenic zone, pre-/post-surgical seizure prediction, and assessing cognitive performance. For localization, graph theory measures of centrality have shown the most potential, including betweenness centrality, outdegree, and graph index complexity, whereas for prediction of seizure frequency, measures of synchronizability have shown the most potential. The utility of clustering coefficient and characteristic path length for assessing cognitive performance in TLE is also discussed. Conclusions: Future studies integrating data from multiple modalities and testing predictive models are needed to clarify findings and develop graph theory for its clinical utility. PMID:25127370

  6. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    PubMed Central

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background: Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization and biological node and graph attributes, and/or are not applicable to large-scale networks, e.g. those with more than 10000 elements. Results: To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm for calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n^2) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion: Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673
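
    The betweenness quantities that drive the layout are readily computed with standard tooling; the sketch below derives a BFL-style placement order (high-betweenness nodes first) on a stock graph, not the authors' parallel implementation.

        # Sketch: node and edge betweenness as inputs to a layout ordering.
        import networkx as nx

        G = nx.les_miserables_graph()             # stand-in network
        node_bc = nx.betweenness_centrality(G)
        edge_bc = nx.edge_betweenness_centrality(G)

        # High-betweenness nodes get optimal positions first; the rest are
        # inserted sequentially afterwards.
        placement_order = sorted(G.nodes, key=node_bc.get, reverse=True)
        print(placement_order[:5])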

  7. Quantile regression and Bayesian cluster detection to identify radon prone areas.

    PubMed

    Sarra, Annalina; Fontanella, Lara; Valentini, Pasquale; Palermi, Sergio

    2016-11-01

    Although the dominant source of radon in indoor environments is the geology of the territory, many studies have demonstrated that indoor radon concentrations also depend on dwelling-specific characteristics. Following a stepwise analysis, in this study we propose a combined approach to delineate radon prone areas. We first investigate the impact of various building covariates on indoor radon concentrations. To achieve a more complete picture of this association, we exploit the flexible formulation of a Bayesian spatial quantile regression, which is also equipped with parameters that control the spatial dependence across the data. The quantitative knowledge of the influence of each significant building-specific factor on the measured radon levels is employed to predict the radon concentrations that would have been found if the sampled buildings had possessed standard characteristics. Those normalised radon measures should reflect the geogenic radon potential of the underlying ground, which is a quantity directly related to the geological environment. The second stage of the analysis is aimed at identifying radon prone areas; to this end, we adopt a Bayesian model for spatial cluster detection, using as the reference unit the building with standard characteristics. The case study is based on a data set of more than 2000 indoor radon measures, available for the Abruzzo region (Central Italy) and collected by the Agency of Environmental Protection of Abruzzo during several indoor radon monitoring surveys. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Bayesian Nonparametric Ordination for the Analysis of Microbial Communities.

    PubMed

    Ren, Boyu; Bacallado, Sergio; Favaro, Stefano; Holmes, Susan; Trippa, Lorenzo

    2017-01-01

    Human microbiome studies use sequencing technologies to measure the abundance of bacterial species or Operational Taxonomic Units (OTUs) in samples of biological material. Typically the data are organized in contingency tables with OTU counts across heterogeneous biological samples. In the microbial ecology community, ordination methods are frequently used to investigate latent factors or clusters that capture and describe variations of OTU counts across biological samples. It remains important to evaluate how uncertainty in estimates of each biological sample's microbial distribution propagates to ordination analyses, including visualization of clusters and projections of biological samples on low dimensional spaces. We propose a Bayesian analysis for dependent distributions to endow frequently used ordinations with estimates of uncertainty. A Bayesian nonparametric prior for dependent normalized random measures is constructed, which is marginally equivalent to the normalized generalized Gamma process, a well-known prior for nonparametric analyses. In our prior, the dependence and similarity between microbial distributions is represented by latent factors that concentrate in a low dimensional space. We use a shrinkage prior to tune the dimensionality of the latent factors. The resulting posterior samples of model parameters can be used to evaluate uncertainty in analyses routinely applied in microbiome studies. Specifically, by combining them with multivariate data analysis techniques we can visualize credible regions in ecological ordination plots. The characteristics of the proposed model are illustrated through a simulation study and applications in two microbiome datasets.

  9. Event Networks and the Identification of Crime Pattern Motifs

    PubMed Central

    2015-01-01

    In this paper we demonstrate the use of network analysis to characterise patterns of clustering in spatio-temporal events. Such clustering is of both theoretical and practical importance in the study of crime, and forms the basis for a number of preventative strategies. However, existing analytical methods show only that clustering is present in data, while offering little insight into the nature of the patterns present. Here, we show how the classification of pairs of events as close in space and time can be used to define a network, thereby generalising previous approaches. The application of graph-theoretic techniques to these networks can then offer significantly deeper insight into the structure of the data than previously possible. In particular, we focus on the identification of network motifs, which have clear interpretation in terms of spatio-temporal behaviour. Statistical analysis is complicated by the nature of the underlying data, and we provide a method by which appropriate randomised graphs can be generated. Two datasets are used as case studies: maritime piracy at the global scale, and residential burglary in an urban area. In both cases, the same significant 3-vertex motif is found; this result suggests that incidents tend to occur not just in pairs, but in fact in larger groups within a restricted spatio-temporal domain. In the 4-vertex case, different motifs are found to be significant in each case, suggesting that this technique is capable of discriminating between clustering patterns at a finer granularity than previously possible. PMID:26605544
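
    The graph definition itself is compact: events become vertices, and a pair is linked when it falls within both a spatial and a temporal threshold, after which standard motif counts apply. The coordinates and thresholds below are invented.

        # Sketch: an event network from space-time proximity, plus a count of
        # the 3-vertex motif (triangle) discussed above.
        import itertools
        import networkx as nx

        # (x km, y km, day)
        events = [(0.0, 0.0, 0), (0.3, 0.1, 2), (0.2, 0.4, 5),
                  (5.0, 5.0, 1), (5.2, 4.9, 3)]
        D_MAX, T_MAX = 1.0, 7     # "close" means within 1 km and 7 days

        G = nx.Graph()
        G.add_nodes_from(range(len(events)))
        for i, j in itertools.combinations(range(len(events)), 2):
            (x1, y1, t1), (x2, y2, t2) = events[i], events[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= D_MAX \
                    and abs(t1 - t2) <= T_MAX:
                G.add_edge(i, j)

        print("triangles:", sum(nx.triangles(G).values()) // 3)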

  10. A Network View on Psychiatric Disorders: Network Clusters of Symptoms as Elementary Syndromes of Psychopathology

    PubMed Central

    Goekoop, Rutger; Goekoop, Jaap G.

    2014-01-01

    Introduction: The vast number of psychopathological syndromes that can be observed in clinical practice can be described in terms of a limited number of elementary syndromes that are differentially expressed. Previous attempts to identify elementary syndromes have shown limitations that have slowed progress in the taxonomy of psychiatric disorders. Aim: To examine the ability of network community detection (NCD) to identify elementary syndromes of psychopathology and move beyond the limitations of current classification methods in psychiatry. Methods: 192 patients with unselected mental disorders were tested on the Comprehensive Psychopathological Rating Scale (CPRS). Principal component analysis (PCA) was performed on the bootstrapped correlation matrix of symptom scores to extract the principal component structure (PCS). An undirected and weighted network graph was constructed from the same matrix. Network community structure (NCS) was optimized using a previously published technique. Results: In the optimal network structure, network clusters showed an 89% match with principal components of psychopathology. Six network clusters were found, including "DEPRESSION", "MANIA", "ANXIETY", "PSYCHOSIS", "RETARDATION", and "BEHAVIORAL DISORGANIZATION". Network metrics were used to quantify the continuities between the elementary syndromes. Conclusion: We present the first comprehensive network graph of psychopathology that is free from the biases of previous classifications: a 'Psychopathology Web'. Clusters within this network represent elementary syndromes that are connected via a limited number of bridge symptoms. Many problems of previous classifications can be overcome by using a network approach to psychopathology. PMID:25427156
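
    The pipeline described, a weighted network built from a symptom correlation matrix followed by community detection, can be sketched compactly. The sketch below uses modularity-based community detection from networkx as a stand-in for the published optimization technique the authors cite, and random data in place of CPRS scores; both are assumptions.

      import numpy as np
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      rng = np.random.default_rng(0)
      scores = rng.normal(size=(192, 10))       # toy stand-in for CPRS symptom scores
      corr = np.corrcoef(scores, rowvar=False)  # symptom-by-symptom correlations

      G = nx.Graph()
      for i in range(corr.shape[0]):
          for j in range(i + 1, corr.shape[0]):
              if corr[i, j] > 0:                # keep positive associations as weights
                  G.add_edge(i, j, weight=corr[i, j])

      communities = greedy_modularity_communities(G, weight="weight")
      print([sorted(c) for c in communities])   # candidate "elementary syndromes"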

  11. Stepwise and stagewise approaches for spatial cluster detection

    PubMed Central

    Xu, Jiale

    2016-01-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis-testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performance of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and detection power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. PMID:27246273

  12. Stepwise and stagewise approaches for spatial cluster detection.

    PubMed

    Xu, Jiale; Gangnon, Ronald E

    2016-05-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
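
    The forward stepwise idea shared by these two records lends itself to a compact sketch: at each step, fit a model for every candidate cluster indicator, keep the one that most reduces the deviance, and continue with it included. In the Python sketch below, the circular candidates on a toy grid, the Poisson model, and the fixed two-step loop are illustrative assumptions; the papers' likelihoods and stopping rules are richer.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      coords = np.array([(i, j) for i in range(10) for j in range(10)], float)
      rate = np.where(np.linalg.norm(coords - [2.0, 2.0], axis=1) < 2.0, 3.0, 1.0)
      y = rng.poisson(rate)                 # toy counts with one planted cluster

      # Candidate clusters: circles of radius 2 around every grid cell.
      candidates = [np.linalg.norm(coords - c, axis=1) < 2.0 for c in coords]

      X = np.ones((len(y), 1))              # intercept-only null model
      for _ in range(2):                    # two forward steps (fixed for the sketch)
          def deviance(ind):
              Xc = np.column_stack([X, ind.astype(float)])
              return sm.GLM(y, Xc, family=sm.families.Poisson()).fit().deviance
          X = np.column_stack([X, min(candidates, key=deviance).astype(float)])

      print(X.shape)                        # intercept + the two selected indicators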

  13. The Graphic Representation of Structure in Similarity/Dissimilarity Matrices: Alternative Methods.

    ERIC Educational Resources Information Center

    Rudnitsky, Alan N.

    Three approaches to the graphic representation of similarity and dissimilarity matrices are compared and contrasted. Specifically, Kruskal's multidimensional scaling, Johnson's hierarchical clustering, and Waern's graphing techniques are employed to depict, in two dimensions, data representing the structure of a set of botanical concepts. Each of…

  14. Graph-based Models for Data and Decision Making

    DTIC Science & Technology

    2016-02-16

    ...distributions. This is required for the UCIP baseline model estimates. The second necessary condition is one in which all the sources of informat... relative capacity learning pattern, resulting from a strong speed-accuracy trade-off. The second AND cluster in the middle of the dendrogram contains...

  15. Clinical Outcome Prediction in Aneurysmal Subarachnoid Hemorrhage Using Bayesian Neural Networks with Fuzzy Logic Inferences

    PubMed Central

    Lo, Benjamin W. Y.; Macdonald, R. Loch; Baker, Andrew; Levine, Mitchell A. H.

    2013-01-01

    Objective. A novel clinical prediction approach using Bayesian neural networks with fuzzy logic inferences is created and applied to derive prognostic decision rules in cerebral aneurysmal subarachnoid hemorrhage (aSAH). Methods. The approach of Bayesian neural networks with fuzzy logic inferences was applied to data from five trials of Tirilazad for aneurysmal subarachnoid hemorrhage (3551 patients). Results. Bayesian meta-analyses of observational studies on aSAH prognostic factors gave generalizable posterior distributions of population mean log odds ratios (ORs). Similar trends were noted in Bayesian and linear regression ORs. Significant outcome predictors include normal motor response, cerebral infarction, history of myocardial infarction, cerebral edema, history of diabetes mellitus, fever on day 8, prior subarachnoid hemorrhage, admission angiographic vasospasm, neurological grade, intraventricular hemorrhage, ruptured aneurysm size, history of hypertension, vasospasm day, age, and mean arterial pressure. Heteroscedasticity was present in the nontransformed dataset. Artificial neural networks found nonlinear relationships with 11 hidden variables in 1 layer, using the multilayer perceptron model. Fuzzy logic decision rules (centroid defuzzification technique) denoted cut-off points for poor prognosis at greater than 2.5 clusters. Discussion. This aSAH prognostic system makes use of existing knowledge, recognizes unknown areas, incorporates one's clinical reasoning, and compensates for uncertainty in prognostication. PMID:23690884
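
    Of the components named above, centroid defuzzification is the easiest to make concrete: the crisp output is the centroid of the aggregated fuzzy membership function, compared here against the reported 2.5 cut-off. The membership function below is invented for illustration and is not derived from the trial data.

      import numpy as np

      z = np.linspace(0.0, 5.0, 501)            # candidate output scores
      mu = np.exp(-((z - 3.2) ** 2) / 0.5)      # aggregated membership (toy)
      crisp = (mu * z).sum() / mu.sum()         # centroid defuzzification
      print(round(crisp, 2), "poor prognosis" if crisp > 2.5 else "good prognosis")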

  16. As-built design specification for proportion estimate software subsystem

    NASA Technical Reports Server (NTRS)

    Obrien, S. (Principal Investigator)

    1980-01-01

    The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.

  17. A note on the stability and discriminability of graph-based features for classification problems in digital pathology

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Xu, Jun; Madabhushi, Anant

    2015-01-01

    Nuclear architecture, or the spatial arrangement of individual cancer nuclei on histopathology images, has been shown to be associated with different grades and differential risk for a number of solid tumors such as breast, prostate, and oropharyngeal. Graph-based representations of individual nuclei (nuclei representing the graph nodes) allow for mining of quantitative metrics to describe tumor morphology. These graph features can be broadly categorized into global and local depending on the type of graph construction method. While a number of local graph (e.g. Cell Cluster Graphs) and global graph (e.g. Voronoi, Delaunay Triangulation, Minimum Spanning Tree) features have been shown to be associated with cancer grade, risk, and outcome for different cancer types, the sensitivity of the preceding segmentation algorithms in identifying individual nuclei can have a significant bearing on the discriminability of the resultant features. This raises the question of which features, while being discriminative of cancer grade and aggressiveness, are also the most resilient to segmentation errors. These properties are particularly desirable in the context of digital pathology images, where the method of slide preparation, staining, and type of nuclear segmentation algorithm employed can all dramatically affect the quality of the nuclear graphs and corresponding features. In this paper we evaluated the trade-off between discriminability and stability of both global and local graph-based features in conjunction with a few different segmentation algorithms and in the context of two different histopathology image datasets of breast cancer from whole-slide images (WSI) and tissue microarrays (TMA). Specifically, we investigate several performance measures, including stability, discriminability, and the stability vs. discriminability trade-off, all of which are based on p-values from the Kruskal-Wallis one-way analysis of variance for local and global graph features. Apart from identifying the set of local and global features that satisfied the trade-off between stability and discriminability, our most interesting finding was that a simple segmentation method was sufficient to identify the most discriminant features for invasive tumour detection in TMAs, whereas for tumour grading in WSI, the graph-based features were more sensitive to the accuracy of the segmentation algorithm employed.
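
    The global graph features named above (Voronoi, Delaunay triangulation, minimum spanning tree) are computed from nuclear centroids, so their sensitivity to segmentation is sensitivity to centroid placement. A sketch of Delaunay-edge and MST-branch statistics on toy centroids follows; it illustrates the kind of feature involved, not the authors' exact feature set.

      import numpy as np
      from scipy.spatial import Delaunay
      from scipy.spatial.distance import pdist, squareform
      from scipy.sparse.csgraph import minimum_spanning_tree

      rng = np.random.default_rng(1)
      pts = rng.uniform(size=(200, 2))            # toy nuclear centroids

      tri = Delaunay(pts)
      edges = set()
      for simplex in tri.simplices:               # collect unique Delaunay edges
          for a, b in ((0, 1), (1, 2), (0, 2)):
              edges.add(tuple(sorted((simplex[a], simplex[b]))))
      edge_len = [np.linalg.norm(pts[i] - pts[j]) for i, j in edges]

      mst = minimum_spanning_tree(squareform(pdist(pts)))
      mst_len = mst.data                          # MST branch lengths

      print(np.mean(edge_len), np.std(edge_len))  # Delaunay edge statistics
      print(np.mean(mst_len), np.std(mst_len))    # MST branch statistics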

  18. Three sympatric clusters of the malaria vector Anopheles culicifacies E (Diptera: Culicidae) detected in Sri Lanka.

    PubMed

    Harischandra, Iresha Nilmini; Dassanayake, Ranil Samantha; De Silva, Bambaranda Gammacharige Don Nissanka Kolitha

    2016-01-04

    The disease re-emergence threat from the major malaria vector in Sri Lanka, Anopheles culicifacies, is currently increasing. To predict malaria vector dynamics, knowledge of population genetics and gene flow is required, but this information is unavailable for Sri Lanka. This study was carried out to determine the population structure of An. culicifacies E in Sri Lanka. Eight microsatellite markers were used to examine An. culicifacies E collected from six sites in Sri Lanka during 2010-2012. Standard population genetic tests and analyses, genetic differentiation, Hardy-Weinberg equilibrium, linkage disequilibrium, Bayesian cluster analysis, AMOVA, SAMOVA and isolation-by-distance were conducted using five polymorphic loci. Five microsatellite loci were highly polymorphic with high allelic richness. Hardy-Weinberg Equilibrium (HWE) was significantly rejected for four loci with positive F(IS) values in the pooled population (p < 0.0100). Three loci showed high deviations in all sites except Kataragama, which was in agreement with HWE for all loci except one locus (p < 0.0016). Observed heterozygosity was less than the expected values for all sites except Kataragama, where reported negative F(IS) values indicated a heterozygosity excess. Genetic differentiation was observed for all sampling site pairs and was not supported by the isolation by distance model. Bayesian clustering analysis identified the presence of three sympatric clusters (gene pools) in the studied population. Significant genetic differentiation was detected in cluster pairs with low gene flow and isolation by distance was not detected between clusters. Furthermore, the results suggested the presence of a barrier to gene flow that divided the populations into two parts with the central hill region of Sri Lanka as the dividing line. Three sympatric clusters were detected among An. culicifacies E specimens isolated in Sri Lanka. There was no effect of geographic distance on genetic differentiation and the central mountain ranges in Sri Lanka appeared to be a barrier to gene flow.

  19. Bayesian investigation of isochrone consistency using the old open cluster NGC 188

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Shane; Courteau, Stéphane; Von Hippel, Ted

    2015-03-01

    This paper provides a detailed comparison of the differences in parameters derived for a star cluster from its color–magnitude diagrams (CMDs) depending on the filters and models used. We examine the consistency and reliability of fitting three widely used stellar evolution models to 15 combinations of optical and near-IR photometry for the old open cluster NGC 188. The optical filter response curves match those of theoretical systems and are thus not the source of fit inconsistencies. NGC 188 is ideally suited to this study thanks to a wide variety of high-quality photometry and available proper motions and radial velocities that enable us to remove non-cluster members and many binaries. Our Bayesian fitting technique yields inferred values of age, metallicity, distance modulus, and absorption as a function of the photometric band combinations and stellar models. We show that the historically favored three-band combinations of UBV and VRI can be meaningfully inconsistent with each other and with longer baseline data sets such as UBVRIJHK_S. Differences among model sets can also be substantial. For instance, fitting Yi et al. (2001) and Dotter et al. (2008) models to UBVRIJHK_S photometry for NGC 188 yields the following cluster parameters: age = (5.78 ± 0.03, 6.45 ± 0.04) Gyr, [Fe/H] = (+0.125 ± 0.003, −0.077 ± 0.003) dex, (m−M)_V = (11.441 ± 0.007, 11.525 ± 0.005) mag, and A_V = (0.162 ± 0.003, 0.236 ± 0.003) mag, respectively. Within the formal fitting errors, these two fits are substantially and statistically different. Such differences among fits using different filters and models are a cautionary tale regarding our current ability to fit star cluster CMDs. Additional modeling of this kind, with more models and star clusters, and future Gaia parallaxes are critical for isolating and quantifying the most relevant uncertainties in stellar evolutionary models.

  20. Phylodynamic Analysis Reveals CRF01_AE Dissemination between Japan and Neighboring Asian Countries and the Role of Intravenous Drug Use in Transmission

    PubMed Central

    Shiino, Teiichiro; Hattori, Junko; Yokomaku, Yoshiyuki; Iwatani, Yasumasa; Sugiura, Wataru

    2014-01-01

    Background: One major circulating HIV-1 subtype in Southeast Asian countries is CRF01_AE, but little is known about its epidemiology in Japan. We conducted a molecular phylodynamic study of patients newly diagnosed with CRF01_AE from 2003 to 2010. Methods: Plasma samples from patients registered in the Japanese Drug Resistance HIV-1 Surveillance Network were analyzed for protease-reverse transcriptase sequences; all sequences underwent subtyping and phylogenetic analysis using distance-matrix-based, maximum likelihood and Bayesian coalescent Markov Chain Monte Carlo (MCMC) phylogenetic inferences. Transmission clusters were identified using the interior branch test and depth-first searches for sub-tree partitions. Times of most recent common ancestor (tMRCAs) of significant clusters were estimated using Bayesian MCMC analysis. Results: Among 3618 patients registered in our network, 243 were infected with CRF01_AE. The majority of individuals with CRF01_AE were Japanese, predominantly male, and reported heterosexual contact as their risk factor. We found 5 large clusters with ≥5 members and 25 small clusters consisting of pairs of individuals with highly related CRF01_AE strains. The earliest cluster showed a tMRCA of 1996 and consisted of individuals whose known risk was heterosexual contact. The other four large clusters showed later tMRCAs between 2000 and 2002, with members including intravenous drug users (IVDU) and non-Japanese, but not men who have sex with men (MSM). In contrast, small clusters included a high frequency of individuals reporting MSM risk factors. Phylogenetic analysis also showed that some individuals were infected with HIV strains circulating in East and Southeast Asian countries. Conclusions: Introduction of CRF01_AE viruses into Japan is estimated to have occurred in the 1990s. CRF01_AE spread via heterosexual behavior, then among persons connected with non-Japanese, IVDU, and MSM. Phylogenetic analysis demonstrated that some viral variants are largely restricted to Japan, while others have a broad geographic distribution. PMID:25025900

  1. Clustering Categorical Data Using Community Detection Techniques

    PubMed Central

    2017-01-01

    With the advent of the k-modes algorithm, the toolbox for clustering categorical data gained an efficient tool that scales linearly in the number of data items. However, random initialization of cluster centers in k-modes makes it hard to reach a good clustering without resorting to many trials. Recently proposed methods for better initialization are deterministic and reduce the clustering cost considerably. These initialization methods differ in the heuristic used to choose the set of initial centers. In this paper, we address the clustering problem for categorical data from the perspective of community detection. Instead of initializing k modes and running several iterations, our scheme, CD-Clustering, builds an unweighted graph and detects highly cohesive groups of nodes using a fast community detection technique. The top-k detected communities by size define the k modes. Evaluation on ten real categorical datasets shows that our method outperforms the existing initialization methods for k-modes in terms of accuracy, precision, and recall in most of the cases. PMID:29430249
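
    One plausible reading of the CD-Clustering recipe is sketched below: connect items that agree on enough categorical attributes, detect communities, and let the k largest communities define the initial modes. The agreement threshold m and the modularity-based community detector are assumptions standing in for the paper's graph construction and its fast detection technique.

      from collections import Counter
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      data = [("a", "x", "p"), ("a", "x", "q"), ("b", "y", "q"),
              ("b", "y", "p"), ("a", "x", "p"), ("b", "y", "q")]
      m, k = 2, 2  # min matching attributes (assumed); number of clusters

      G = nx.Graph()
      G.add_nodes_from(range(len(data)))
      for i in range(len(data)):
          for j in range(i + 1, len(data)):
              if sum(u == v for u, v in zip(data[i], data[j])) >= m:
                  G.add_edge(i, j)

      comms = sorted(greedy_modularity_communities(G), key=len, reverse=True)[:k]
      modes = [tuple(Counter(data[i][f] for i in c).most_common(1)[0][0]
                     for f in range(len(data[0]))) for c in comms]
      print(modes)  # initial modes handed to k-modes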

  2. Genetic Population Structure Analysis in New Hampshire Reveals Eastern European Ancestry

    PubMed Central

    Sloan, Chantel D.; Andrew, Angeline D.; Duell, Eric J.; Williams, Scott M.; Karagas, Margaret R.; Moore, Jason H.

    2009-01-01

    Genetic structure due to ancestry has been well documented among many divergent human populations. However, the ability to associate ancestry with genetic substructure without using supervised clustering has not been explored in more presumably homogeneous and admixed US populations. The goal of this study was to determine if genetic structure could be detected in a United States population from a single state where the individuals have mixed European ancestry. Using Bayesian clustering with a set of 960 single nucleotide polymorphisms (SNPs) we found evidence of population stratification in 864 individuals from New Hampshire that can be used to differentiate the population into six distinct genetic subgroups. We then correlated self-reported ancestry of the individuals with the Bayesian clustering results. Finnish and Russian/Polish/Lithuanian ancestries were most notably found to be associated with genetic substructure. The ancestral results were further explained and substantiated using New Hampshire census data from 1870 to 1930 when the largest waves of European immigrants came to the area. We also discerned distinct patterns of linkage disequilibrium (LD) between the genetic groups in the growth hormone receptor gene (GHR). To our knowledge, this is the first time such an investigation has uncovered a strong link between genetic structure and ancestry in what would otherwise be considered a homogeneous US population. PMID:19738909

  3. Genetic population structure analysis in New Hampshire reveals Eastern European ancestry.

    PubMed

    Sloan, Chantel D; Andrew, Angeline D; Duell, Eric J; Williams, Scott M; Karagas, Margaret R; Moore, Jason H

    2009-09-07

    Genetic structure due to ancestry has been well documented among many divergent human populations. However, the ability to associate ancestry with genetic substructure without using supervised clustering has not been explored in more presumably homogeneous and admixed US populations. The goal of this study was to determine if genetic structure could be detected in a United States population from a single state where the individuals have mixed European ancestry. Using Bayesian clustering with a set of 960 single nucleotide polymorphisms (SNPs) we found evidence of population stratification in 864 individuals from New Hampshire that can be used to differentiate the population into six distinct genetic subgroups. We then correlated self-reported ancestry of the individuals with the Bayesian clustering results. Finnish and Russian/Polish/Lithuanian ancestries were most notably found to be associated with genetic substructure. The ancestral results were further explained and substantiated using New Hampshire census data from 1870 to 1930 when the largest waves of European immigrants came to the area. We also discerned distinct patterns of linkage disequilibrium (LD) between the genetic groups in the growth hormone receptor gene (GHR). To our knowledge, this is the first time such an investigation has uncovered a strong link between genetic structure and ancestry in what would otherwise be considered a homogeneous US population.

  4. Postglacial recolonization history of the European crabapple (Malus sylvestris Mill.), a wild contributor to the domesticated apple.

    PubMed

    Cornille, A; Giraud, T; Bellard, C; Tellier, A; Le Cam, B; Smulders, M J M; Kleinschmit, J; Roldan-Ruiz, I; Gladieux, P

    2013-04-01

    Understanding the way in which the climatic oscillations of the Quaternary Period have shaped the distribution and genetic structure of extant tree species provides insight into the processes driving species diversification, distribution and survival. Deciphering the genetic consequences of past climatic change is also critical for the conservation and sustainable management of forest and tree genetic resources, a timely endeavour as the Earth heads into a period of fast climate change. We used a combination of genetic data and ecological niche models to investigate the historical patterns of biogeographic range expansion of a wild fruit tree, the European crabapple (Malus sylvestris), a wild contributor to the domesticated apple. Both climatic predictions for the last glacial maximum and analyses of microsatellite variation indicated that M. sylvestris experienced range contraction and fragmentation. Bayesian clustering analyses revealed a clear pattern of genetic structure, with one genetic cluster spanning a large area in Western Europe and two other genetic clusters with a more limited distribution range in Eastern Europe, one around the Carpathian Mountains and the other restricted to the Balkan Peninsula. Approximate Bayesian computation appeared to be a powerful technique for inferring the history of these clusters, supporting a scenario of simultaneous differentiation of three separate glacial refugia. Admixture between these three populations was found in their suture zones. A weak isolation by distance pattern was detected within each population, indicating a high extent of historical gene flow for the European crabapple. © 2013 Blackwell Publishing Ltd.

  5. Can Computational Sediment Transport Models Reproduce the Observed Variability of Channel Networks in Modern Deltas?

    NASA Astrophysics Data System (ADS)

    Nesvold, E.; Mukerji, T.

    2017-12-01

    River deltas display complex channel networks that can be characterized through the framework of graph theory, as shown by Tejedor et al. (2015). Deltaic patterns may also be useful in a Bayesian approach to uncertainty quantification of the subsurface, but this requires a prior distribution of the networks of ancient deltas. By considering subaerial deltas, one can at least obtain a snapshot in time of the channel network spectrum across deltas. In this study, the directed graph structure is semi-automatically extracted from satellite imagery using techniques from statistical processing and machine learning. Once the network is labeled with vertices and edges, spatial trends and width and sinuosity distributions can also be found easily. Since imagery is inherently 2D, computational sediment transport models can serve as a link between 2D network structure and 3D depositional elements; the numerous empirical rules and parameters built into such models make it necessary to validate the output with field data. For this purpose we have used a set of 110 modern deltas, with average water discharge ranging from 10 to 200,000 m3/s, as a benchmark for natural variability. Both graph-theoretic and more general distributions are established. A key question is whether it is possible to reproduce this deltaic network spectrum with computational models. Delft3D was used to solve the shallow water equations coupled with sediment transport. The experimental setup was relatively simple: incoming channelized flow onto a tilted plane, with varying wave and tidal energy, sediment types and grain size distributions, river discharge and a few other input parameters. Each realization was run until a delta had fully developed: between 50 and 500 years (with a morphology acceleration factor). It is shown that input parameters should not be sampled independently from the natural ranges, since this may result in deltaic output that falls well outside the natural spectrum. Since we are interested in studying the patterns occurring in nature, ideas are proposed for how to sample computer realizations that match this distribution. By establishing a link between surface-based patterns from the field and the associated subsurface structure from physics-based models, this is a step towards a fully Bayesian workflow in subsurface simulation.

  6. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    PubMed Central

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for the analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, screening out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions, in each cluster block, which widens the range of real data to which the method applies. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  7. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
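
    The object at the heart of the method, the random walk graph Laplacian of a patch graph, is easy to write down. The numpy sketch below builds a 4-connected pixel graph with photometric edge weights and forms L_rw = I - D^{-1}W; it illustrates the smoothness-prior ingredient only, not the paper's LERaG construction or the full soft-decoding pipeline, and the weight kernel is an assumption.

      import numpy as np

      patch = np.array([[10.0, 10.0, 200.0],
                        [10.0, 12.0, 198.0],
                        [11.0, 11.0, 199.0]])   # toy piecewise smooth patch
      x = patch.ravel()
      h, w = patch.shape
      n = x.size

      # 4-connected pixel graph with photometric weights exp(-(xi - xj)^2 / s).
      W = np.zeros((n, n))
      for r in range(h):
          for c in range(w):
              for dr, dc in ((0, 1), (1, 0)):
                  rr, cc = r + dr, c + dc
                  if rr < h and cc < w:
                      i, j = r * w + c, rr * w + cc
                      W[i, j] = W[j, i] = np.exp(-((x[i] - x[j]) ** 2) / 100.0)

      D = np.diag(W.sum(axis=1))
      L_rw = np.eye(n) - np.linalg.inv(D) @ W   # random walk graph Laplacian

      # Quadratic form with the combinatorial Laplacian D - W: a standard
      # graph-smoothness measure, small for piecewise smooth patches.
      print(x @ (D - W) @ x)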

  8. Graph Theory and Ion and Molecular Aggregation in Aqueous Solutions.

    PubMed

    Choi, Jun-Ho; Lee, Hochan; Choi, Hyung Ran; Cho, Minhaeng

    2018-04-20

    In molecular and cellular biology, dissolved ions and molecules have decisive effects on chemical and biological reactions, conformational stabilities, and functions of small to large biomolecules. Despite major efforts, the current state of understanding of the effects of specific ions, osmolytes, and bioprotecting sugars on the structure and dynamics of water H-bonding networks and proteins is not yet satisfactory. Recently, to gain deeper insight into this subject, we studied various aggregation processes of ions and molecules in high-concentration salt, osmolyte, and sugar solutions with time-resolved vibrational spectroscopy and molecular dynamics simulation methods. It turns out that ions (or solute molecules) have a strong propensity to self-assemble into large and polydisperse aggregates that affect both local and long-range water H-bonding structures. In particular, we have shown that graph-theoretical approaches can be used to elucidate morphological characteristics of large aggregates in various aqueous salt, osmolyte, and sugar solutions. When ion and molecular aggregates in such aqueous solutions are treated as graphs, a variety of graph-theoretical properties, such as graph spectrum, degree distribution, clustering coefficient, minimum path length, and graph entropy, can be directly calculated by considering an ensemble of configurations taken from molecular dynamics trajectories. Here we show the percolating behavior exhibited by ion and molecular aggregates as the solute concentration increases in highly concentrated solutions and discuss compelling evidence of the isomorphic relation between percolation transitions of ion and molecular aggregates and water H-bonding networks. We anticipate that the combination of graph theory and molecular dynamics simulation methods will be of exceptional use in achieving a deeper understanding of the fundamental physical chemistry of dissolution and in describing the interplay between the self-aggregation of solute molecules and the structure and dynamics of water.
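
    The graph-theoretical bookkeeping described is straightforward once a contact cutoff is chosen: ions within the cutoff are linked, aggregates are connected components, and properties such as the clustering coefficient follow directly. A sketch on random stand-in coordinates is below; the cutoff and coordinates are assumptions in place of MD trajectory frames.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(8)
      pos = rng.uniform(0.0, 3.0, size=(120, 3))   # toy ion coordinates (nm)
      cutoff = 0.35                                # contact distance (assumed)

      G = nx.Graph()
      G.add_nodes_from(range(len(pos)))
      d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
      G.add_edges_from(zip(*np.where(np.triu(d < cutoff, k=1))))

      # Aggregate (connected-component) sizes and a clustering coefficient,
      # computed per configuration and averaged over frames in practice.
      sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
      print(sizes[:5], nx.average_clustering(G))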

  9. Graph Theory and Ion and Molecular Aggregation in Aqueous Solutions

    NASA Astrophysics Data System (ADS)

    Choi, Jun-Ho; Lee, Hochan; Choi, Hyung Ran; Cho, Minhaeng

    2018-04-01

    In molecular and cellular biology, dissolved ions and molecules have decisive effects on chemical and biological reactions, conformational stabilities, and functions of small to large biomolecules. Despite major efforts, the current state of understanding of the effects of specific ions, osmolytes, and bioprotecting sugars on the structure and dynamics of water H-bonding networks and proteins is not yet satisfactory. Recently, to gain deeper insight into this subject, we studied various aggregation processes of ions and molecules in high-concentration salt, osmolyte, and sugar solutions with time-resolved vibrational spectroscopy and molecular dynamics simulation methods. It turns out that ions (or solute molecules) have a strong propensity to self-assemble into large and polydisperse aggregates that affect both local and long-range water H-bonding structures. In particular, we have shown that graph-theoretical approaches can be used to elucidate morphological characteristics of large aggregates in various aqueous salt, osmolyte, and sugar solutions. When ion and molecular aggregates in such aqueous solutions are treated as graphs, a variety of graph-theoretical properties, such as graph spectrum, degree distribution, clustering coefficient, minimum path length, and graph entropy, can be directly calculated by considering an ensemble of configurations taken from molecular dynamics trajectories. Here we show the percolating behavior exhibited by ion and molecular aggregates as the solute concentration increases in highly concentrated solutions and discuss compelling evidence of the isomorphic relation between percolation transitions of ion and molecular aggregates and water H-bonding networks. We anticipate that the combination of graph theory and molecular dynamics simulation methods will be of exceptional use in achieving a deeper understanding of the fundamental physical chemistry of dissolution and in describing the interplay between the self-aggregation of solute molecules and the structure and dynamics of water.

  10. Bipartite flocking for multi-agent systems

    NASA Astrophysics Data System (ADS)

    Fan, Ming-Can; Zhang, Hai-Tao; Wang, Miaomiao

    2014-09-01

    This paper addresses the bipartite flock control problem, in which a multi-agent system splits into two clusters upon internal or external excitations. Using structurally balanced signed graph theory, LaSalle's invariance principle and Barbalat's Lemma, we prove that the proposed algorithm guarantees a bipartite flocking behavior. In each of the two disjoint clusters, all individuals move in the same direction. Meanwhile, every pair of agents in different clusters moves in opposite directions. Moreover, all agents in the two separated clusters approach a common velocity magnitude, and collision avoidance among all agents is ensured as well. Finally, the proposed bipartite flock control method is examined by numerical simulations. The bipartite flocking motion addressed in this paper has counterparts in both natural collective motions and human group behaviors, such as predator-prey and panic-escape scenarios.

  11. Unsupervised classification of multivariate geostatistical data: Two algorithms

    NASA Astrophysics Data System (ADS)

    Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques

    2015-12-01

    With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, involve a growing number of variables, and cover wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables in hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on e.g. Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
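
    The first algorithm's key idea, allowing two clusters to merge only if they are adjacent in a proximity graph built on the coordinates, has a close analogue in scikit-learn's connectivity-constrained agglomerative clustering, used below as a stand-in for the authors' method. The k-nearest-neighbor connectivity and the toy one-variable data are assumptions.

      import numpy as np
      from sklearn.cluster import AgglomerativeClustering
      from sklearn.neighbors import kneighbors_graph

      rng = np.random.default_rng(2)
      coords = rng.uniform(size=(300, 2))                  # sample locations
      values = np.where(coords[:, 0] < 0.5, 0.0, 5.0) + rng.normal(size=300)

      # Merges are only allowed between clusters adjacent in the k-NN graph
      # built on the spatial coordinates, enforcing spatial coherence.
      connectivity = kneighbors_graph(coords, n_neighbors=8, include_self=False)
      labels = AgglomerativeClustering(
          n_clusters=2, connectivity=connectivity, linkage="ward"
      ).fit_predict(values.reshape(-1, 1))
      print(np.bincount(labels))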

  12. Community detection in complex networks using proximate support vector clustering

    NASA Astrophysics Data System (ADS)

    Wang, Feifan; Zhang, Baihai; Chai, Senchun; Xia, Yuanqing

    2018-03-01

    Community structure, one of the most widely studied properties of complex networks, has been a cornerstone of advances in various scientific branches. A number of tools have been developed in recent studies of community detection algorithms. In this paper, we propose a support vector clustering method based on a proximity graph, which allows the introduced algorithm to surpass the traditional support vector approach in both accuracy and complexity. Results of extensive experiments undertaken on computer-generated networks and real-world data sets show competitive performance in comparison with other counterparts.

  13. Conjunctive Conceptual Clustering: A Methodology and Experimentation.

    DTIC Science & Technology

    1987-09-01

    ...observing a typical restaurant table on which there are such objects as food on a plate, a salad, utensils, salt and pepper, napkins, a vase with flowers... A colored graph has nodes and links that match only if they have corresponding link-color and node-color labels... [The remainder of this excerpt is unrecoverable OCR residue from a node-link table and a listing captioned "Input file for attribute-based clustering".]

  14. Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection

    PubMed Central

    Liu, Wenfen

    2017-01-01

    Constrained spectral clustering (CSC) can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and it has therefore received wide academic attention. In this paper, we propose a fast CSC algorithm that encodes landmark-based graph construction into a new CSC model and applies random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm achieves similar results as its model size increases asymptotically; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed by combining our fast CSC algorithm with random-projection dimensionality reduction in the process of spectral ensemble clustering. We demonstrate through theoretical analysis and empirical results that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proven in the consensus clustering stage, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering in which each point has its own weight. PMID:29312447
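
    The random-projection step can be illustrated independently of the CSC model: project high-dimensional points to a low dimension that approximately preserves cluster structure, then cluster there. A sketch follows, using scikit-learn's Gaussian random projection and plain k-means as stand-ins for the paper's pipeline.

      import numpy as np
      from sklearn.random_projection import GaussianRandomProjection
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0.0, 1.0, (100, 500)),
                     rng.normal(4.0, 1.0, (100, 500))])   # two clusters in 500-D

      # Project to 20 dimensions, then cluster in the reduced space.
      Xp = GaussianRandomProjection(n_components=20, random_state=0).fit_transform(X)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xp)
      print(np.bincount(labels))   # cluster sizes recovered after projection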

  15. On star formation in stellar systems. I - Photoionization effects in protoglobular clusters

    NASA Technical Reports Server (NTRS)

    Tenorio-Tagle, G.; Bodenheimer, P.; Lin, D. N. C.; Noriega-Crespo, A.

    1986-01-01

    The progressive ionization and subsequent dynamical evolution of nonhomogeneously distributed low-metal-abundance diffuse gas after star formation in globular clusters are investigated analytically, taking the gravitational acceleration due to the stars into account. The basic equations are derived; the underlying assumptions, input parameters, and solution methods are explained; and numerical results for three standard cases (ionization during star formation, ionization during expansion, and evolution resulting in a stable H II region at its equilibrium Stromgren radius) are presented in graphs and characterized in detail. The time scale of residual-gas loss in typical clusters is found to be about the same as the lifetime of a massive star on the main sequence.

  16. Adaptation of pancreatic islet cyto-architecture during development

    NASA Astrophysics Data System (ADS)

    Striegel, Deborah A.; Hara, Manami; Periwal, Vipul

    2016-04-01

    Plasma glucose in mammals is regulated by hormones secreted by the islets of Langerhans embedded in the exocrine pancreas. Islets consist of endocrine cells, primarily α, β, and δ cells, which secrete glucagon, insulin, and somatostatin, respectively. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Varying demands and available nutrients during development produce changes in the local connectivity of β cells in an islet. We showed in earlier work that graph theory provides a framework for the quantification of the seemingly stochastic cyto-architecture of β cells in an islet. Quantifying the dynamics of endocrine connectivity during development requires a framework for characterizing changes in the probability distribution on the space of possible graphs, essentially a Fokker-Planck formalism on graphs. With large-scale imaging data for hundreds of thousands of islets containing millions of cells from human specimens, we show that this dynamics can be determined quantitatively. Requiring that rearrangement and cell addition processes match the observed dynamic developmental changes in quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that there is a transient shift in preferred connectivity for β cells between 1-35 weeks and 12-24 months.

  17. OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms

    DOE PAGES

    Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...

    2016-09-21

    In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nystrom extension to calculate eigenvalues/eigenvectors of the graph Laplacian, and this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access patterns of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel's Knights Corner and Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.
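
    The Nystrom extension mentioned above can be sketched generically: compute an eigendecomposition on a random landmark subset and extend the eigenvectors to all points. The numpy sketch below does this for a Gaussian affinity matrix rather than the graph Laplacian itself, a simplification; the kernel, bandwidth, and landmark count are all assumptions.

      import numpy as np

      rng = np.random.default_rng(9)
      X = rng.normal(size=(1000, 5))                 # toy data points
      m = 100                                        # landmark sample size
      idx = rng.choice(len(X), m, replace=False)

      def kernel(A, B, s=2.0):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * s * s))

      W = kernel(X[idx], X[idx])                     # landmark-landmark block
      C = kernel(X, X[idx])                          # all-landmark block
      vals, vecs = np.linalg.eigh(W)
      top = vals.argsort()[::-1][:10]                # leading eigenpairs

      # Nystrom extension of the landmark eigenvectors to all points.
      U = C @ vecs[:, top] / vals[top]
      print(U.shape)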

  18. What can graph theory tell us about word learning and lexical retrieval?

    PubMed

    Vitevitch, Michael S

    2008-04-01

    Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing.
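
    The small-world diagnostics reported (short average path length alongside high clustering, plus assortative mixing by degree) are one-liners in networkx. The sketch below computes them on a toy Watts-Strogatz graph standing in for the phonological network.

      import networkx as nx

      G = nx.watts_strogatz_graph(n=1000, k=10, p=0.05, seed=0)  # toy network

      giant = G.subgraph(max(nx.connected_components(G), key=len))
      print(nx.average_shortest_path_length(giant))  # short average paths
      print(nx.average_clustering(G))                # high local clustering
      print(nx.degree_assortativity_coefficient(G))  # mixing by degree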

  19. Spatio-Temporal History of HIV-1 CRF35_AD in Afghanistan and Iran.

    PubMed

    Eybpoosh, Sana; Bahrampour, Abbas; Karamouzian, Mohammad; Azadmanesh, Kayhan; Jahanbakhsh, Fatemeh; Mostafavi, Ehsan; Zolala, Farzaneh; Haghdoost, Ali Akbar

    2016-01-01

    HIV-1 Circulating Recombinant Form 35_AD (CRF35_AD) has an important position in the epidemiological profile of Afghanistan and Iran. Despite the presence of this clade in Afghanistan and Iran for over a decade, our understanding of its origin and dissemination patterns is limited. In this study, we performed a Bayesian phylogeographic analysis to reconstruct the spatio-temporal dispersion pattern of this clade using eligible CRF35_AD gag and pol sequences available in the Los Alamos HIV database (432 sequences available from Iran, 16 sequences available from Afghanistan, and a single CRF35_AD-like pol sequence available from the USA). A Bayesian Markov Chain Monte Carlo algorithm was implemented in BEAST v1.8.1. Between-country dispersion rates were tested with the Bayesian stochastic search variable selection method and were considered significant where Bayes factor values were greater than three. The findings suggested that CRF35_AD sequences were genetically similar to parental sequences from Kenya and Uganda, and to a set of subtype A1 sequences available from Afghan refugees living in Pakistan. Our results also showed that across all phylogenies, Afghan and Iranian CRF35_AD sequences formed a monophyletic cluster (posterior clade credibility > 0.7). The divergence date of this cluster was estimated to be between 1990 and 1992. Within this cluster, a bidirectional dispersion of the virus was observed across Afghanistan and Iran. We could not clearly identify whether Afghanistan or Iran first established or received this epidemic, as the root location of this cluster could not be robustly estimated. Three CRF35_AD sequences from Afghan refugees living in Pakistan nested among Afghan and Iranian CRF35_AD branches. However, the CRF35_AD-like sequence available from the USA diverged independently from Kenyan subtype A1 sequences, suggesting that it is not a true CRF35_AD lineage. Potential factors contributing to viral exchange between Afghanistan and Iran could be injection drug networks and the mass migration of Afghan refugees and labourers to Iran, which calls for extensive preventive efforts.

  20. Spatio-Temporal History of HIV-1 CRF35_AD in Afghanistan and Iran

    PubMed Central

    Eybpoosh, Sana; Bahrampour, Abbas; Karamouzian, Mohammad; Azadmanesh, Kayhan; Jahanbakhsh, Fatemeh; Mostafavi, Ehsan; Zolala, Farzaneh; Haghdoost, Ali Akbar

    2016-01-01

    HIV-1 Circulating Recombinant Form 35_AD (CRF35_AD) has an important position in the epidemiological profile of Afghanistan and Iran. Despite the presence of this clade in Afghanistan and Iran for over a decade, our understanding of its origin and dissemination patterns is limited. In this study, we performed a Bayesian phylogeographic analysis to reconstruct the spatio-temporal dispersion pattern of this clade using eligible CRF35_AD gag and pol sequences available in the Los Alamos HIV database (432 sequences available from Iran, 16 sequences available from Afghanistan, and a single CRF35_AD-like pol sequence available from the USA). A Bayesian Markov Chain Monte Carlo algorithm was implemented in BEAST v1.8.1. Between-country dispersion rates were tested with the Bayesian stochastic search variable selection method and were considered significant where Bayes factor values were greater than three. The findings suggested that CRF35_AD sequences were genetically similar to parental sequences from Kenya and Uganda, and to a set of subtype A1 sequences available from Afghan refugees living in Pakistan. Our results also showed that across all phylogenies, Afghan and Iranian CRF35_AD sequences formed a monophyletic cluster (posterior clade credibility > 0.7). The divergence date of this cluster was estimated to be between 1990 and 1992. Within this cluster, a bidirectional dispersion of the virus was observed across Afghanistan and Iran. We could not clearly identify whether Afghanistan or Iran first established or received this epidemic, as the root location of this cluster could not be robustly estimated. Three CRF35_AD sequences from Afghan refugees living in Pakistan nested among Afghan and Iranian CRF35_AD branches. However, the CRF35_AD-like sequence available from the USA diverged independently from Kenyan subtype A1 sequences, suggesting that it is not a true CRF35_AD lineage. Potential factors contributing to viral exchange between Afghanistan and Iran could be injection drug networks and the mass migration of Afghan refugees and labourers to Iran, which calls for extensive preventive efforts. PMID:27280293

  1. Eigenanalysis and Graph Theory Combined to Determine the Seasonal and Solar-Cycle Variations of Polar Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Shore, R. M.; Freeman, M. P.; Gjerloev, J. W.

    2017-12-01

    We apply the meteorological analysis method of Empirical Orthogonal Functions (EOF) to ground magnetometer measurements, and subsequently use graph theory to classify the results. The EOF method is used to characterise and separate contributions to the variability of the Earth's external magnetic field (EMF) in the northern polar region. EOFs decompose the noisy EMF data into a small number of independent spatio-temporal basis functions, which collectively describe the majority of the magnetic field variance. We use these basis functions (computed monthly) to infill where data are missing, providing a self-consistent description of the EMF at 5-minute resolution spanning 1997-2009 (solar cycle 23). The EOF basis functions are calculated independently for each of the 144 months (i.e. 1997-2009) analysed. Since (by definition) the basis vectors are ranked by their contribution to the total variance, their rank will change from month to month. We use graph theory to find clusters of quantifiably-similar spatial basis functions, and thereby track similar patterns throughout the span of 144 months. We find that the discovered clusters can be associated with well-known individual Disturbance Polar (DP)-type equivalent current systems (e.g. DP2, DP1, DPY, NBZ), or with the motion of these systems. Via this method, we thus describe the varying behaviour of these current systems over solar cycle 23. We present their seasonal and solar cycle variations and examine the response of each current system to solar wind driving.
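
    The EOF decomposition itself is a singular value decomposition of the space-time data matrix: right singular vectors give the spatial basis functions, left singular vectors (scaled) give their time series, and squared singular values rank the variance contributions. A minimal numpy sketch with toy data standing in for the magnetometer measurements:

      import numpy as np

      rng = np.random.default_rng(4)
      n_time, n_station = 2000, 50
      X = rng.normal(size=(n_time, n_station))   # toy field perturbations
      X -= X.mean(axis=0)                        # remove the temporal mean

      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      eofs = Vt                                  # spatial basis functions
      pcs = U * s                                # their time series
      explained = s**2 / np.sum(s**2)            # ranked variance fractions
      print(explained[:5])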

  2. Invasion Percolation and Global Optimization

    NASA Astrophysics Data System (ADS)

    Barabási, Albert-László

    1996-05-01

    Invasion bond percolation (IBP) is mapped exactly into Prim's algorithm for finding the shortest spanning tree of a weighted random graph. Exploring this mapping, which is valid for arbitrary dimensions and lattices, we introduce a new IBP model that belongs to the same universality class as IBP and generates the minimal energy tree spanning the IBP cluster.
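
    The computational content of the mapping can be shown directly: Prim's algorithm grown on a lattice with independent random bond weights yields the minimum-weight spanning tree, and its growth sequence is exactly the order in which invasion bond percolation invades bonds. A short networkx sketch on a square lattice follows; the lattice size is arbitrary.

      import random
      import networkx as nx

      random.seed(0)
      G = nx.grid_2d_graph(20, 20)                     # square lattice
      for u, v in G.edges:
          G.edges[u, v]["weight"] = random.random()    # random bond strengths

      mst = nx.minimum_spanning_tree(G, algorithm="prim", weight="weight")
      print(mst.number_of_edges())                     # n_nodes - 1 = 399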

  3. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we address the issue of the over-segmented regions produced by watershed segmentation by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by the region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.

  4. GeoSciGraph: An Ontological Framework for EarthCube Semantic Infrastructure

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Schachne, A.; Condit, C.; Valentine, D.; Richard, S.; Zaslavsky, I.

    2015-12-01

    The CINERGI (Community Inventory of EarthCube Resources for Geosciences Interoperability) project compiles an inventory of a wide variety of earth science resources including documents, catalogs, vocabularies, data models, data services, process models, information repositories, domain-specific ontologies, etc., developed by research groups and data practitioners. We have developed a multidisciplinary semantic framework called GeoSciGraph for semantic integration of earth science resources. An integrated ontology is constructed with Basic Formal Ontology (BFO) as its upper ontology; it currently ingests multiple component ontologies including the SWEET ontology, GeoSciML's lithology ontology, the Tematres controlled vocabulary server, GeoNames, GCMD vocabularies on equipment, platforms and institutions, a software ontology, the CUAHSI hydrology vocabulary, the environmental ontology (ENVO) and several more. These ontologies are connected through bridging axioms; GeoSciGraph identifies lexically close terms and creates equivalence-class or subclass relationships between them after human verification. GeoSciGraph allows a community to create community-specific customizations of the integrated ontology. GeoSciGraph uses Neo4j, a graph database that can hold several billion concepts and relationships. GeoSciGraph provides a number of REST services that can be called by other software modules like the CINERGI information augmentation pipeline. 1) Vocabulary services are used to find exact and approximate terms, term categories (community-provided clusters of terms, e.g., measurement-related terms or environmental-material-related terms), synonyms, term definitions and annotations. 2) Lexical services are used for text parsing to find entities, which can then be included in the ontology by a domain expert. 3) Graph services provide the ability to perform traversal-centric operations, e.g., finding paths and neighborhoods, which can be used to perform ontological operations like computing transitive closure (e.g., finding all subclasses of rocks). 4) Annotation services are used to adorn an arbitrary block of text (e.g., from a NOAA catalog record) with ontology terms. The system has been used to ontologically integrate diverse sources like Science-base, NOAA records, and PETDB.

  5. Decentralized cooperative TOA/AOA target tracking for hierarchical wireless sensor networks.

    PubMed

    Chen, Ying-Chih; Wen, Chih-Yu

    2012-11-08

    This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is adopted to achieve object positioning in a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural networking model, and further performs estimation fusion in the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.
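
    The leader-node fusion step rests on covariance intersection, which fuses two estimates with unknown cross-correlation by taking a convex combination of their information matrices. A minimal sketch, with made-up member estimates and a simple grid search over the mixing weight:

    ```python
    import numpy as np

    def covariance_intersection(xa, Pa, xb, Pb, n_grid=101):
        """CI fusion: P^-1 = w*Pa^-1 + (1-w)*Pb^-1, w chosen to minimize trace(P)."""
        best = None
        for w in np.linspace(0.0, 1.0, n_grid):
            info = w * np.linalg.inv(Pa) + (1 - w) * np.linalg.inv(Pb)
            P = np.linalg.inv(info)
            if best is None or np.trace(P) < np.trace(best[1]):
                x = P @ (w * np.linalg.inv(Pa) @ xa
                         + (1 - w) * np.linalg.inv(Pb) @ xb)
                best = (x, P)
        return best

    # Two sub-cluster members' local estimates of a 2-D target position (toy values).
    xa, Pa = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
    xb, Pb = np.array([1.5, 1.8]), np.diag([3.0, 1.0])
    x_fused, P_fused = covariance_intersection(xa, Pa, xb, Pb)
    print(x_fused, np.trace(P_fused))
    ```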

  6. A phase cell cluster expansion for Euclidean field theories

    NASA Astrophysics Data System (ADS)

    Battle, Guy A., III; Federbush, Paul

    1982-08-01

    We adapt the cluster expansion first used to treat infrared problems for lattice models (a mass zero cluster expansion) to the usual field theory situation. The field is expanded in terms of special block spin functions and the cluster expansion given in terms of the expansion coefficients (phase cell variables); the cluster expansion expresses correlation functions in terms of contributions from finite coupled subsets of these variables. Most of the present work is carried through in d space-time dimensions (for φ₂⁴ the details of the cluster expansion are pursued and convergence is proven). Thus most of the results in the present work will apply to a treatment of φ₃⁴, to which we hope to return in a succeeding paper. Of particular interest in this paper is a substitute for the stability of the vacuum bound appropriate to this cluster expansion (for d = 2 and d = 3), and a new method for performing estimates with tree graphs. The phase cell cluster expansions have the renormalization group incorporated intimately into their structure. We hope they will be useful ultimately in treating four dimensional field theories.

  7. A SPECTRAL GRAPH APPROACH TO DISCOVERING GENETIC ANCESTRY

    PubMed Central

    Lee, Ann B.; Luca, Diana; Roeder, Kathryn

    2010-01-01

    Mapping human genetic variation is fundamentally interesting in fields such as anthropology and forensic inference. At the same time, patterns of genetic diversity confound efforts to determine the genetic basis of complex disease. Due to technological advances, it is now possible to measure hundreds of thousands of genetic variants per individual across the genome. Principal component analysis (PCA) is routinely used to summarize the genetic similarity between subjects. The eigenvectors are interpreted as dimensions of ancestry. We build on this idea using a spectral graph approach. In the process we draw on connections between multidimensional scaling and spectral kernel methods. Our approach, based on a spectral embedding derived from the normalized Laplacian of a graph, can produce a more meaningful delineation of ancestry than PCA. The method is stable to outliers and can more easily incorporate different similarity measures of genetic data than PCA. We illustrate a new algorithm for genetic clustering and association analysis on a large, genetically heterogeneous sample. PMID:20689656
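
    The embedding itself is standard: build a similarity (weight) matrix W, form the normalized Laplacian L = I - D^{-1/2} W D^{-1/2}, and use the eigenvectors of the smallest nontrivial eigenvalues as coordinates. A minimal sketch on synthetic data; the Gaussian kernel and toy points stand in for the paper's genetics-specific similarity measures.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])

    # Similarity matrix W and normalized Laplacian L = I - D^-1/2 W D^-1/2.
    W = np.exp(-np.square(X[:, None] - X[None, :]).sum(-1) / 20.0)
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L = np.eye(len(W)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]

    # Eigenvectors of the smallest nontrivial eigenvalues give the embedding.
    vals, vecs = eigh(L)
    embedding = vecs[:, 1:3]                # skip the trivial constant vector
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(embedding)
    print(np.bincount(clusters))
    ```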

  8. The Mouse Cortical Connectome, Characterized by an Ultra-Dense Cortical Graph, Maintains Specificity by Distinct Connectivity Profiles.

    PubMed

    Gămănuţ, Răzvan; Kennedy, Henry; Toroczkai, Zoltán; Ercsey-Ravasz, Mária; Van Essen, David C; Knoblauch, Kenneth; Burkhalter, Andreas

    2018-02-07

    The inter-areal wiring pattern of the mouse cerebral cortex was analyzed in relation to a refined parcellation of cortical areas. Twenty-seven retrograde tracer injections were made in 19 areas of a 47-area parcellation of the mouse neocortex. Flat mounts of the cortex and multiple histological markers enabled detailed counts of labeled neurons in individual areas. The observed log-normal distribution of connection weights to each cortical area spans 5 orders of magnitude and reveals a distinct connectivity profile for each area, analogous to that observed in macaques. The cortical network has a density of 97%, considerably higher than the 66% density reported in macaques. A weighted graph analysis reveals a similar global efficiency but weaker spatial clustering compared with that reported in macaques. The consistency, precision of the connectivity profile, density, and weighted graph analysis of the present data differ significantly from those obtained in earlier studies in the mouse. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Continuum Limit of Total Variation on Point Clouds

    NASA Astrophysics Data System (ADS)

    García Trillos, Nicolás; Slepčev, Dejan

    2016-04-01

    We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
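
    The discrete object being compared to the continuum perimeter is the graph total variation, TV(u) = (1/2) Σ_ij w_ij |u_i - u_j|. A minimal sketch on a random point cloud, with an ε-neighborhood indicator kernel; the kernel, ε, and the 2-D normalization constant are illustrative choices, not the paper's sharp scaling conditions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 400
    pts = rng.random((n, 2))            # sample of the uniform measure on [0,1]^2
    eps = 0.1                           # connectivity radius

    diff = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    W = (diff < eps).astype(float)      # simple indicator kernel
    np.fill_diagonal(W, 0.0)

    u = (pts[:, 0] < 0.5).astype(float)  # indicator of the left half-domain
    gtv = 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))

    # With the usual 2-D scaling 1/(n^2 * eps^3) (up to a kernel constant),
    # this approximates the perimeter of the cut {x < 1/2} as n grows.
    print(gtv / (n**2 * eps**3))
    ```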

  10. Analysis of spontaneous EEG activity in Alzheimer's disease using cross-sample entropy and graph theory.

    PubMed

    Gomez, Carlos; Poza, Jesus; Gomez-Pilar, Javier; Bachiller, Alejandro; Juan-Cruz, Celia; Tola-Arribas, Miguel A; Carreres, Alicia; Cano, Monica; Hornero, Roberto

    2016-08-01

    The aim of this pilot study was to analyze spontaneous electroencephalography (EEG) activity in Alzheimer's disease (AD) by means of Cross-Sample Entropy (Cross-SampEn) and two local measures derived from graph theory: clustering coefficient (CC) and characteristic path length (PL). Five minutes of EEG activity were recorded from 37 patients with dementia due to AD and 29 elderly controls. Our results showed that Cross-SampEn values were lower in the AD group than in the control group for all the interactions among EEG channels. This finding indicates that EEG activity in AD is characterized by a lower statistical dissimilarity among channels. Significant differences were found mainly for fronto-central interactions (p < 0.01, permutation test). Additionally, the application of graph theory measures revealed diverse neural network changes, i.e. lower CC and higher PL values in the AD group, leading to a less efficient brain organization. This study suggests the usefulness of our approach to provide further insights into the underlying brain dynamics associated with AD.
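
    For the graph-theoretic part, both measures are routine to compute once the channel couplings are thresholded into a graph. A minimal sketch with networkx; the random matrix and the 0.6 threshold are stand-ins for a real EEG connectivity matrix and a study-specific threshold.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)
    n_channels = 19
    A = rng.random((n_channels, n_channels))
    A = (A + A.T) / 2                       # symmetric "connectivity" matrix
    np.fill_diagonal(A, 0.0)

    G = nx.from_numpy_array((A > 0.6).astype(int))   # keep only strong couplings
    CC = nx.average_clustering(G)
    # Characteristic path length on the largest connected component.
    H = G.subgraph(max(nx.connected_components(G), key=len))
    PL = nx.average_shortest_path_length(H)
    print(f"CC={CC:.3f}  PL={PL:.3f}")
    ```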

  11. Applications of Bayesian Procrustes shape analysis to ensemble radar reflectivity nowcast verification

    NASA Astrophysics Data System (ADS)

    Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang

    2016-07-01

    This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.

  12. On characterizing population commonalities and subject variations in brain networks.

    PubMed

    Ghanbari, Yasser; Bloy, Luke; Tunc, Birkan; Shankar, Varsha; Roberts, Timothy P L; Edgar, J Christopher; Schultz, Robert T; Verma, Ragini

    2017-05-01

    Brain networks based on resting state connectivity as well as inter-regional anatomical pathways obtained using diffusion imaging have provided insight into pathology and development. Such work has underscored the need for methods that can extract sub-networks that can accurately capture the connectivity patterns of the underlying population while simultaneously describing the variation of sub-networks at the subject level. We have designed a multi-layer graph clustering method that extracts clusters of nodes, called 'network hubs', which display higher levels of connectivity within the cluster than to the rest of the brain. The method determines an atlas of network hubs that describes the population, as well as weights that characterize subject-wise variation in terms of within- and between-hub connectivity. This lowers the dimensionality of brain networks, thereby providing a representation amenable to statistical analyses. The applicability of the proposed technique is demonstrated by extracting an atlas of network hubs for a population of typically developing controls (TDCs) as well as children with autism spectrum disorder (ASD), and using the structural and functional networks of a population to determine the subject-level variation of these hubs and their inter-connectivity. These hubs are then used to compare ASD and TDCs. Our method is generalizable to any population whose connectivity (structural or functional) can be captured via non-negative network graphs. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.

  14. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M. M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
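
    The core ABC idea behind cosmoabc's more elaborate Population Monte Carlo sampler can be shown in a few lines of rejection sampling: draw from the prior, forward-simulate a mock catalog, and keep draws whose catalog lies within a tolerance of the observation under a user-chosen distance. Everything in this sketch (the Poisson counts, the distance, the tolerance) is a toy stand-in:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    obs = rng.poisson(lam=4.2, size=500)    # "observed" cluster number counts

    def distance(sim, obs):
        return abs(sim.mean() - obs.mean())  # user-supplied distance function

    accepted = []
    while len(accepted) < 200:
        lam = rng.uniform(0.1, 10.0)                 # draw from the prior
        sim = rng.poisson(lam=lam, size=obs.size)    # forward-simulate a mock catalog
        if distance(sim, obs) < 0.1:                 # tolerance threshold
            accepted.append(lam)

    post = np.array(accepted)
    print(f"approximate posterior mean ~ {post.mean():.2f} (truth 4.2)")
    ```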

  15. Inferring the Growth of Massive Galaxies Using Bayesian Spectral Synthesis Modeling

    NASA Astrophysics Data System (ADS)

    Stillman, Coley Michael; Poremba, Megan R.; Moustakas, John

    2018-01-01

    The most massive galaxies in the universe are typically found at the centers of massive galaxy clusters. Studying these galaxies can provide valuable insight into the hierarchical growth of massive dark matter halos. One of the key challenges of measuring the stellar mass growth of massive galaxies is converting the measured light profiles into stellar mass. We use Prospector, a state-of-the-art Bayesian spectral synthesis modeling code, to infer the total stellar masses of a pilot sample of massive central galaxies selected from the Sloan Digital Sky Survey. We compare our stellar mass estimates to previous measurements, and present some of the quantitative diagnostics provided by Prospector.

  16. Using Zipf-Mandelbrot law and graph theory to evaluate animal welfare

    NASA Astrophysics Data System (ADS)

    de Oliveira, Caprice G. L.; Miranda, José G. V.; Japyassú, Hilton F.; El-Hani, Charbel N.

    2018-02-01

    This work deals with the construction and testing of metrics of welfare based on behavioral complexity, using assumptions derived from Zipf-Mandelbrot law and graph theory. To test these metrics we compared yellow-breasted capuchins (Sapajus xanthosternos) (Wied-Neuwied, 1826) (PRIMATES CEBIDAE) found in two institutions, subjected to different captive conditions: a Zoobotanical Garden (hereafter, ZOO; n = 14), in good welfare condition, and a Wildlife Rescue Center (hereafter, WRC; n = 8), in poor welfare condition. In the Zipf-Mandelbrot-based analysis, the power law exponent was calculated using behavior frequency values versus behavior rank value. These values allow us to evaluate variations in individual behavioral complexity. For each individual we also constructed a graph using the sequence of behavioral units displayed in each recording (average recording time per individual: 4 h 26 min in the ZOO, 4 h 30 min in the WRC). Then, we calculated the values of the main graph attributes, which allowed us to analyze the complexity of the connectivity of the behaviors displayed in the individuals' behavioral sequences. We found significant differences between the two groups for the slope values in the Zipf-Mandelbrot analysis. The slope values for the ZOO individuals approached -1, with graphs representing a power law, while the values for the WRC individuals diverged from -1, differing from a power law pattern. Likewise, we found significant differences for the graph attributes average degree, weighted average degree, and clustering coefficient when comparing the ZOO and WRC individual graphs. However, no significant difference was found for the attributes modularity and average path length. Both analyses were effective in detecting differences between the patterns of behavioral complexity in the two groups. The slope values for the ZOO individuals indicated a higher behavioral complexity when compared to the WRC individuals. Similarly, graph construction and the calculation of its attributes values allowed us to show that the complexity of the connectivity among the behaviors was higher in the ZOO than in the WRC individual graphs. These results show that the two measuring approaches introduced and tested in this paper were capable of capturing the differences in welfare levels between the two conditions, as shown by differences in behavioral complexity.
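
    The Zipf-Mandelbrot part of the analysis reduces to estimating the slope of log(frequency) versus log(rank) for the observed behavioral units. A minimal sketch with a synthetic behavioral sequence; the eight behaviors and their probabilities are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    sequence = rng.choice(list("abcdefgh"), size=2000,
                          p=np.array([35, 20, 14, 10, 8, 6, 4, 3]) / 100)

    _, counts = np.unique(sequence, return_counts=True)
    freq = np.sort(counts)[::-1]            # frequency of each behavior, ranked
    rank = np.arange(1, len(freq) + 1)

    # Slope of log(frequency) vs log(rank); values near -1 indicate a Zipf-like
    # power law, which the authors associate with higher behavioral complexity.
    slope, _ = np.polyfit(np.log(rank), np.log(freq), 1)
    print(f"estimated slope: {slope:.2f}")
    ```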

  17. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery for the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive analysis of available INSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The implementation of the graph-cut based image registration technique helps us detect the devastation across the coastline of Tohoku through changes in pixel intensity, which drives a regional segmentation for the change in the coastal boundary after the tsunami. The study applies transformation parameters to remotely sensed images by manually segmenting the image and recovering the translation parameter from two images that differ by rotation. Based on the satellite image analysis through image segmentation, it is found that an area of 0.997 sq km in the Honshu region was the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. The analysis, implemented in MATLAB, suggests that the proposed graph-cut algorithm is robust and more accurate than other image registration methods. The analysis shows that the method can give a realistic estimate of recovered deformation fields, in pixels, corresponding to coastline change, which may help formulate assessment strategies for post-disaster need assessment scenarios for coastal belts affected by strong shaking and tsunamis under disaster risk mitigation programs.

  18. Discriminating micropathogen lineages and their reticulate evolution through graph theory-based network analysis: the case of Trypanosoma cruzi, the agent of Chagas disease.

    PubMed

    Arnaud-Haond, Sophie; Moalic, Yann; Barnabé, Christian; Ayala, Francisco José; Tibayrenc, Michel

    2014-01-01

    Micropathogens (viruses, bacteria, fungi, parasitic protozoa) share a common trait, which is partial clonality, with wide variance in the respective influence of clonality and sexual recombination on the dynamics and evolution of taxa. The discrimination of distinct lineages and the reconstruction of their phylogenetic history are key information for inferring their biomedical properties. However, the phylogenetic picture is often clouded by occasional events of recombination across divergent lineages, limiting the relevance of classical phylogenetic analysis and dichotomic trees. We have applied a network analysis based on graph theory to illustrate the relationships among genotypes of Trypanosoma cruzi, the parasitic protozoan responsible for Chagas disease, to identify major lineages and to unravel their past history of divergence and possible recombination events. At the scale of T. cruzi subspecific diversity, graph theory-based networks applied to 22 isoenzyme loci (262 distinct Multi-Locus Enzyme Electrophoresis profiles, MLEE) and 19 microsatellite loci (66 Multi-Locus Genotypes, MLG) fully confirm the high clustering of genotypes into major lineages or "near-clades". The release of the dichotomic constraint associated with phylogenetic reconstruction usually applied to multilocus data allows identifying putative hybrids and their parental lineages. Reticulate topology suggests a slightly different history for some of the main "near-clades", and a possibly more complex origin for the putative hybrids than hitherto proposed. Finally, the sub-network of the near-clade T. cruzi I (28 MLG) shows a clustering subdivision into three differentiated lesser near-clades ("Russian doll pattern"), which confirms the hypothesis recently proposed by other investigators. The present study broadens and clarifies the hypotheses previously obtained from classical markers on the same sets of data, which demonstrates the added value of this approach. This underlines the potential of graph theory-based network analysis for describing the nature and relationships of major pathogens, thereby opening stimulating prospects to unravel the organization, dynamics and history of major micropathogen lineages.

  19. Determining the Optimal Number of Clusters with the Clustergram

    NASA Technical Reports Server (NTRS)

    Fluegemann, Joseph K.; Davies, Misty D.; Aguirre, Nathan D.

    2011-01-01

    Cluster analysis aids research in many different fields, from business to biology to aerospace. It consists of using statistical techniques to group objects in large sets of data into meaningful classes. However, this process of ordering data points presents much uncertainty because it involves several steps, many of which are subject to researcher judgment as well as inconsistencies depending on the specific data type and research goals. These steps include the method used to cluster the data, the variables on which the cluster analysis will be operating, the number of resulting clusters, and parts of the interpretation process. In most cases, the number of clusters must be guessed or estimated before employing the clustering method. Many remedies have been proposed, but none is unassailable, and certainly not for all data types. Thus, the aim of current research into better techniques for determining the number of clusters is generally confined to demonstrating that the new technique outperforms other methods for several disparate data types. Our research makes use of a new cluster-number-determination technique based on the clustergram: a graph that shows how the number of objects in the cluster and the cluster mean (the ordinate) change with the number of clusters (the abscissa). We use the features of the clustergram to make the best determination of the cluster number.
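
    A simplified clustergram can be produced by sweeping the number of clusters and plotting a summary of each cluster against k, with marker size proportional to cluster membership; the full clustergram also draws the parent-child connections between clusters at successive k, which this sketch omits. The data are synthetic.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(m, 0.5, (60, 2)) for m in (0, 2, 5)])

    for k in range(1, 7):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        for c in range(k):
            y = X[labels == c].mean()       # cluster mean as the ordinate
            plt.scatter(k, y, s=10 * np.sum(labels == c), c="steelblue")
    plt.xlabel("number of clusters (abscissa)")
    plt.ylabel("cluster mean (ordinate)")
    plt.title("clustergram sketch")
    plt.show()
    ```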

  20. Guidelines for a graph-theoretic implementation of structural equation modeling

    USGS Publications Warehouse

    Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William

    2012-01-01

    Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.

  1. Bayesian network learning for natural hazard assessments

    NASA Astrophysics Data System (ADS)

    Vogel, Kristin

    2016-04-01

    Even though quite different in occurrence and consequences, from a modelling perspective many natural hazards share similar properties and challenges. Their complex nature as well as lacking knowledge about their driving forces and potential effects make their analysis demanding. On top of the uncertainty about the modelling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Thus, for reliable natural hazard assessments it is crucial not only to capture and quantify the involved uncertainties, but also to express and communicate them in an intuitive way. Decision-makers, who often find it difficult to deal with uncertainties, might otherwise return to familiar (mostly deterministic) proceedings. In the scope of the DFG research training group "NatRiskChange" we apply the probabilistic framework of Bayesian networks for diverse natural hazard and vulnerability studies. The great potential of Bayesian networks was already shown in previous natural hazard assessments. Treating each model component as a random variable, Bayesian networks aim at capturing the joint distribution of all considered variables. Hence, each conditional distribution of interest (e.g. the effect of precautionary measures on damage reduction) can be inferred. The (in-)dependencies between the considered variables can be learned purely data-driven or be given by experts; even a combination of both is possible. By translating the (in-)dependences into a graph structure, Bayesian networks provide direct insights into the workings of the system and allow us to learn about the underlying processes. Despite numerous studies on the topic, learning Bayesian networks from real-world data remains challenging. In previous studies, e.g. on earthquake-induced ground motion and flood damage assessments, we tackled the problems arising with continuous variables and incomplete observations. Further studies raise the challenge of relying on very small data sets. Since parameter estimates for complex models based on few observations are unreliable, it is necessary to focus on simplified, yet still meaningful models. A so-called Markov blanket approach is developed to identify the most relevant model components and to construct a simple Bayesian network based on those findings. Since the procedure is completely data-driven, it can easily be transferred to various applications in natural hazard domains. This study is funded by the Deutsche Forschungsgemeinschaft (DFG) within the research training programme GRK 2043/1 "NatRiskChange - Natural hazards and risks in a changing world" at Potsdam University.
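
    As a rough, data-driven stand-in for the Markov blanket screening idea, one can rank candidate components by their mutual information with the target and keep only the most relevant ones before building a simple network. This is a crude filter on synthetic data, not the study's actual Markov blanket algorithm; the variable names and cutoff are made up.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(6)
    n = 200                                  # deliberately small sample
    X = rng.normal(size=(n, 6))              # candidate hazard variables
    # Synthetic target: damage depends only on components 0 and 2.
    damage = 2 * X[:, 0] - X[:, 2] + 0.5 * rng.normal(size=n)

    mi = mutual_info_regression(X, damage, random_state=0)
    keep = np.argsort(mi)[::-1][:2]          # retain the most informative components
    print("selected components:", keep, "MI:", np.round(mi[keep], 2))
    ```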

  2. Assessing Genetic Structure in Common but Ecologically Distinct Carnivores: The Stone Marten and Red Fox.

    PubMed

    Basto, Mafalda P; Santos-Reis, Margarida; Simões, Luciana; Grilo, Clara; Cardoso, Luís; Cortes, Helder; Bruford, Michael W; Fernandes, Carlos

    2016-01-01

    The identification of populations and spatial genetic patterns is important for ecological and conservation research, and spatially explicit individual-based methods have been recognised as powerful tools in this context. Mammalian carnivores are intrinsically vulnerable to habitat fragmentation but not much is known about the genetic consequences of fragmentation in common species. Stone martens (Martes foina) and red foxes (Vulpes vulpes) share a widespread Palearctic distribution and are considered habitat generalists, but in the Iberian Peninsula stone martens tend to occur in higher quality habitats. We compared their genetic structure in Portugal to see if they are consistent with their differences in ecological plasticity, and also to illustrate an approach to explicitly delineate the spatial boundaries of consistently identified genetic units. We analysed microsatellite data using spatial Bayesian clustering methods (implemented in the software BAPS, GENELAND and TESS), a progressive partitioning approach and a multivariate technique (Spatial Principal Components Analysis-sPCA). Three consensus Bayesian clusters were identified for the stone marten. No consensus was achieved for the red fox, but one cluster was the most probable clustering solution. Progressive partitioning and sPCA suggested additional clusters in the stone marten but they were not consistent among methods and were geographically incoherent. The contrasting results between the two species are consistent with the literature reporting stricter ecological requirements of the stone marten in the Iberian Peninsula. The observed genetic structure in the stone marten may have been influenced by landscape features, particularly rivers, and fragmentation. We suggest that an approach based on a consensus clustering solution of multiple different algorithms may provide an objective and effective means to delineate potential boundaries of inferred subpopulations. sPCA and progressive partitioning offer further verification of possible population structure and may be useful for revealing cryptic spatial genetic patterns worth further investigation.

  3. Genome-wide SNP discovery and population structure analysis in pepper (Capsicum annuum) using genotyping by sequencing.

    PubMed

    Taranto, F; D'Agostino, N; Greco, B; Cardi, T; Tripodi, P

    2016-11-21

    Knowledge of population structure and genetic diversity in vegetable crops is essential for association mapping studies and genomic selection. Genotyping by sequencing (GBS) represents an innovative method for large-scale SNP detection and genotyping of genetic resources. Herein we used the GBS approach for the genome-wide identification of SNPs in a collection of Capsicum spp. accessions and for the assessment of the level of genetic diversity in a subset of 222 cultivated pepper (Capsicum annuum) genotypes. GBS analysis generated a total of 7,568,894 master tags, of which 43.4% uniquely aligned to the reference genome CM334. A total of 108,591 SNP markers were identified, of which 105,184 were in C. annuum accessions. In order to explore the genetic diversity of C. annuum and to select a minimal core set representing most of the total genetic variation with minimum redundancy, a subset of 222 C. annuum accessions was analysed using 32,950 high-quality SNPs. Based on Bayesian and hierarchical clustering, it was possible to divide the collection into three clusters. Cluster I contained the majority of varieties and landraces, mainly from Southern and Northern Italy and from Eastern Europe, whereas clusters II and III comprised accessions of different geographical origins. Considering the genome-wide genetic variation among the accessions included in cluster I, a second round of Bayesian (K = 3) and hierarchical (K = 2) clustering was performed. These analyses showed that genotypes were grouped not only by geographical origin, but also by fruit-related features. GBS data have proven useful to assess the genetic diversity in a collection of C. annuum accessions. The high number of SNP markers, uniformly distributed on the 12 chromosomes, allowed the accessions to be distinguished according to geographical origin and fruit-related features. SNP markers and information on population structure developed in this study will undoubtedly support genome-wide association mapping studies and marker-assisted selection programs.

  4. Assessing Genetic Structure in Common but Ecologically Distinct Carnivores: The Stone Marten and Red Fox

    PubMed Central

    Basto, Mafalda P.; Santos-Reis, Margarida; Simões, Luciana; Grilo, Clara; Cardoso, Luís; Cortes, Helder; Bruford, Michael W.; Fernandes, Carlos

    2016-01-01

    The identification of populations and spatial genetic patterns is important for ecological and conservation research, and spatially explicit individual-based methods have been recognised as powerful tools in this context. Mammalian carnivores are intrinsically vulnerable to habitat fragmentation but not much is known about the genetic consequences of fragmentation in common species. Stone martens (Martes foina) and red foxes (Vulpes vulpes) share a widespread Palearctic distribution and are considered habitat generalists, but in the Iberian Peninsula stone martens tend to occur in higher quality habitats. We compared their genetic structure in Portugal to see if they are consistent with their differences in ecological plasticity, and also to illustrate an approach to explicitly delineate the spatial boundaries of consistently identified genetic units. We analysed microsatellite data using spatial Bayesian clustering methods (implemented in the software BAPS, GENELAND and TESS), a progressive partitioning approach and a multivariate technique (Spatial Principal Components Analysis-sPCA). Three consensus Bayesian clusters were identified for the stone marten. No consensus was achieved for the red fox, but one cluster was the most probable clustering solution. Progressive partitioning and sPCA suggested additional clusters in the stone marten but they were not consistent among methods and were geographically incoherent. The contrasting results between the two species are consistent with the literature reporting stricter ecological requirements of the stone marten in the Iberian Peninsula. The observed genetic structure in the stone marten may have been influenced by landscape features, particularly rivers, and fragmentation. We suggest that an approach based on a consensus clustering solution of multiple different algorithms may provide an objective and effective means to delineate potential boundaries of inferred subpopulations. sPCA and progressive partitioning offer further verification of possible population structure and may be useful for revealing cryptic spatial genetic patterns worth further investigation. PMID:26727497

  5. Bayesian Mass Estimates of the Milky Way: Including Measurement Uncertainties with Hierarchical Bayes

    NASA Astrophysics Data System (ADS)

    Eadie, Gwendolyn M.; Springford, Aaron; Harris, William E.

    2017-02-01

    We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie et al. and Eadie and Harris and builds upon the preliminary reports by Eadie et al. The method uses a distribution function f(E, L) to model the Galaxy and kinematic data from satellite objects, such as globular clusters (GCs), to trace the Galaxy’s gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie and Harris, and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and incorporate all possible GC data, finding a cumulative mass profile with Bayesian credible regions. This profile implies a mass within 125 kpc of 4.8 × 10¹¹ M⊙ with a 95% Bayesian credible region of (4.0–5.8) × 10¹¹ M⊙. Our results also provide estimates of the true specific energies of all the GCs. By comparing these estimated energies to the measured energies of GCs with complete velocity measurements, we observe that (the few) remote tracers with complete measurements may play a large role in determining a total mass estimate of the Galaxy. Thus, our study stresses the need for more remote tracers with complete velocity measurements.

  6. SLUG - stochastically lighting up galaxies - III. A suite of tools for simulated photometry, spectroscopy, and Bayesian inference with stochastic stellar populations

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Fumagalli, Michele; da Silva, Robert L.; Rendahl, Theodore; Parra, Jonathan

    2015-09-01

    Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also require consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include the following: a greatly enhanced version of the SLUG code introduced by da Silva et al., which computes spectra and photometry for stochastically or deterministically sampled stellar populations with nearly arbitrary star formation histories, clustering properties, and initial mass functions; CLOUDY_SLUG, a tool that automatically couples SLUG-computed spectra with the CLOUDY radiative transfer code in order to predict stochastic nebular emission; BAYESPHOT, a general-purpose tool for performing Bayesian inference on the physical properties of stellar systems based on unresolved photometry; and CLUSTER_SLUG and SFR_SLUG, a pair of tools that use BAYESPHOT on a library of SLUG models to compute the mass, age, and extinction of mono-age star clusters, and the star formation rate of galaxies, respectively. The latter two tools make use of an extensive library of pre-computed stellar population models, which are included in the software. The complete package is available at http://www.slugsps.com.

  7. A method for topological analysis of high nuclearity coordination clusters and its application to Mn coordination compounds.

    PubMed

    Kostakis, George E; Blatov, Vladislav A; Proserpio, Davide M

    2012-04-21

    A novel method for the topological description of high nuclearity coordination clusters (CCs) was improved and applied to all compounds containing only manganese as a metal center, the data on which are collected in the CCDC (CCDC 5.33 Nov. 2011). Using the TOPOS program package that supports this method, we identified 539 CCs with five or more Mn centers adopting 159 topologically different graphs. In the present database all the Mn CCs are collected and illustrated in such a way that can be searched by cluster topological symbol and nuclearity, compound name and Refcode. The main principles for such an analysis are described herein as well as useful applications of this method.

  8. The second molecular epidemiological study of HIV infection in Mongolia between 2010 and 2016.

    PubMed

    Jagdagsuren, Davaalkham; Hayashida, Tsunefusa; Takano, Misao; Gombo, Erdenetuya; Zayasaikhan, Setsen; Kanayama, Naomi; Tsuchiya, Kiyoto; Oka, Shinichi

    2017-01-01

    Our previous 2005-2009 molecular epidemiological study in Mongolia identified a hot spot of HIV-1 transmission in men who have sex with men (MSM). To control the infection, we have collaborated with NGOs to promote safer sex and HIV testing since mid-2010. In this study, we carried out a second molecular epidemiological survey between 2010 and 2016 to determine the status of HIV-1 infection in Mongolia. The study included 143 new cases of HIV-1 infection. Viral RNA was extracted from stocked plasma samples and sequenced for the pol and env regions using the Sanger method. Near-full-length sequencing using MiSeq was performed in 3 patients who were suspected to be infected with recombinant HIV-1. Phylogenetic analysis was performed using the neighbor-joining method and the Bayesian Markov chain Monte Carlo method. MSM was the main transmission route in both the previous and current studies. However, the heterosexual route showed a significant increase in recent years. Phylogenetic analysis documented three taxa: Mongolian B, Korean B, and CRF51_01B; the former two were also observed in the previous study. CRF51_01B, which originated from Singapore and Malaysia, was confirmed by near-full-length sequencing. Although these strains were mainly detected in MSM, they were also found in increasing numbers of heterosexual males and females. Bayesian phylogenetic analysis estimated that CRF51_01B was transmitted into Mongolia around the early 2000s. An extended Bayesian skyline plot showed a rapid increase in the effective population size of the Mongolian B cluster around 2004 and of the CRF51_01B cluster around 2011. HIV-1 infection might expand to the general population in Mongolia. Our study documented a new cluster of HIV-1 transmission, enhancing our understanding of the epidemiological status of HIV-1 in Mongolia.

  9. Hierarchical Bayesian modeling of heterogeneous variances in average daily weight gain of commercial feedlot cattle.

    PubMed

    Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M

    2013-06-01

    Variability in ADG of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U. S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on deviance information criteria was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ∼12-fold change in dispersion between most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival BW, as well as feedlot arrival during summer months. These results support that heterogeneity of variances in ADG is prevalent in feedlot performance and indicate potential sources of heteroskedasticity. Further investigation of factors associated with heteroskedasticity in feedlot performance is warranted to increase consistency and uniformity in commercial beef cattle production and subsequent profitability.

  10. Detecting the influence of ornamental Berberis thunbergii var. atropurpurea in invasive populations of Berberis thunbergii (Berberidaceae) using AFLP.

    PubMed

    Lubell, Jessica D; Brand, Mark H; Lehrer, Jonathan M; Holsinger, Kent E

    2008-06-01

    Japanese barberry (Berberis thunbergii DC.) is a widespread invasive plant that remains an important landscape shrub represented by ornamental, purple-leaved forms of the botanical variety atropurpurea. These forms differ greatly in appearance from feral plants, bringing into question whether they contribute to invasive populations or whether the invasions represent self-sustaining populations derived from the initial introduction of the species in the late 19th century. In this study we used amplified fragment length polymorphism (AFLP) markers to determine whether genetic contributions from B. t. var. atropurpurea are found within naturalized Japanese barberry populations in southern New England. Bayesian clustering of AFLP genotypes and principal coordinate analysis distinguished B. t. var. atropurpurea genotypes from 85 plants representing five invasive populations. While a single feral plant resembled B. t. var. atropurpurea phenotypically and fell within the same genetic cluster, all other naturalized plants sampled were genetically distinct from the purple-leaved genotypes. Seven plants from two different sites possessed morphology consistent with Berberis vulgaris (common barberry) or B. ×ottawensis (B. thunbergii × B. vulgaris). Genetic analysis placed these plants in two clusters separate from B. thunbergii. Although the Bayesian analysis indicated some introgression of B. t. var. atropurpurea and B. vulgaris, these genotypes have had limited influence on extant feral populations of B. thunbergii.

  11. Signatures of selection in five Italian cattle breeds detected by a 54K SNP panel.

    PubMed

    Mancini, Giordano; Gargani, Maria; Chillemi, Giovanni; Nicolazzi, Ezequiel Luis; Marsan, Paolo Ajmone; Valentini, Alessio; Pariset, Lorraine

    2014-02-01

    In this study we used a medium-density panel of SNP markers to perform population genetic analysis in five Italian cattle breeds. The BovineSNP50 BeadChip was used to genotype a total of 2,935 bulls of the Piedmontese, Marchigiana, Italian Holstein, Italian Brown and Italian Pezzata Rossa breeds. To determine a genome-wide pattern of positive selection we mapped the FST values against genome location. The highest FST peaks were obtained on BTA6 and BTA13, where some candidate genes are located. We identified selection signatures peculiar to each breed which suggest selection for genes involved in milk or meat traits. The genetic structure was investigated by using multidimensional scaling of the genetic distance matrix and a Bayesian approach implemented in the STRUCTURE software. The genotyping data showed a clear partitioning of the cattle genetic diversity into distinct breeds when the number of clusters was set equal to the number of populations. Assuming a lower number of clusters, the beef breeds group together. Both methods showed all five breeds separated into well-defined clusters, and the Bayesian approach assigned individuals to their breed of origin. The work is of interest not only because it enriches knowledge of the process of evolution but also because the results generated could have implications for selective breeding programs.

  12. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  13. WebMOTIFS: automated discovery, filtering and scoring of DNA sequence motifs using multiple programs and Bayesian approaches

    PubMed Central

    Romer, Katherine A.; Kayombya, Guy-Richard; Fraenkel, Ernest

    2007-01-01

    WebMOTIFS provides a web interface that facilitates the discovery and analysis of DNA-sequence motifs. Several studies have shown that the accuracy of motif discovery can be significantly improved by using multiple de novo motif discovery programs and using randomized control calculations to identify the most significant motifs or by using Bayesian approaches. WebMOTIFS makes it easy to apply these strategies. Using a single submission form, users can run several motif discovery programs and score, cluster and visualize the results. In addition, the Bayesian motif discovery program THEME can be used to determine the class of transcription factors that is most likely to regulate a set of sequences. Input can be provided as a list of gene or probe identifiers. Used with the default settings, WebMOTIFS accurately identifies biologically relevant motifs from diverse data in several species. WebMOTIFS is freely available at http://fraenkel.mit.edu/webmotifs. PMID:17584794

  14. Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra.

    PubMed

    Rieder, Vera; Schork, Karin U; Kerschke, Laura; Blank-Landeshammer, Bernhard; Sickmann, Albert; Rahnenführer, Jörg

    2017-11-03

    In proteomics, liquid chromatography-tandem mass spectrometry (LC-MS/MS) is established for identifying peptides and proteins. Duplicated spectra, that is, multiple spectra of the same peptide, occur both in single MS/MS runs and in large spectral libraries. Clustering tandem mass spectra is used to find consensus spectra, with manifold applications. First, it speeds up database searches, as performed for instance by Mascot. Second, it helps to identify novel peptides across species. Third, it is used for quality control to detect wrongly annotated spectra. We compare different clustering algorithms based on the cosine distance between spectra. CAST, MS-Cluster, and PRIDE Cluster are popular algorithms to cluster tandem mass spectra. We add well-known algorithms for large data sets, hierarchical clustering, DBSCAN, and connected components of a graph, as well as the new method N-Cluster. All algorithms are evaluated on real data with varied parameter settings. Cluster results are compared with each other and with peptide annotations based on validation measures such as purity. Quality control, regarding the detection of wrongly (un)annotated spectra, is discussed for exemplary resulting clusters. N-Cluster proves to be highly competitive. All clustering results benefit from the so-called DISMS2 filter that integrates additional information, for example, on precursor mass.
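
    The shared starting point of these algorithms, the cosine distance between (binned) spectra, plugs directly into off-the-shelf clustering. A minimal sketch with hierarchical clustering on synthetic spectra; real tools such as those compared above operate on peak lists at far larger scale, and the binning, noise level, and cut threshold here are toy choices.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(8)
    base = rng.random((5, 100))              # 5 "true" peptide spectra (binned)
    # 20 noisy duplicates of each spectrum.
    spectra = np.repeat(base, 20, axis=0) + 0.05 * rng.random((100, 100))

    D = pdist(spectra, metric="cosine")      # cosine distance between spectra
    Z = linkage(D, method="average")
    labels = fcluster(Z, t=0.05, criterion="distance")
    print("clusters found:", len(np.unique(labels)))
    ```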

  15. A Bayesian network analysis of posttraumatic stress disorder symptoms in adults reporting childhood sexual abuse

    PubMed Central

    McNally, Richard J.; Heeren, Alexandre; Robinaugh, Donald J.

    2017-01-01

    ABSTRACT Background: The network approach to mental disorders offers a novel framework for conceptualizing posttraumatic stress disorder (PTSD) as a causal system of interacting symptoms. Objective: In this study, we extended this work by estimating the structure of relations among PTSD symptoms in adults reporting personal histories of childhood sexual abuse (CSA; N = 179). Method: We employed two complementary methods. First, using the graphical LASSO, we computed a sparse, regularized partial correlation network revealing associations (edges) between pairs of PTSD symptoms (nodes). Next, using a Bayesian approach, we computed a directed acyclic graph (DAG) to estimate a directed, potentially causal model of the relations among symptoms. Results: For the first network, we found that physiological reactivity to reminders of trauma, dreams about the trauma, and loss of interest in previously enjoyed activities were highly central nodes. However, stability analyses suggest that these findings were unstable across subsets of our sample. The DAG suggests that becoming physiologically reactive and upset in response to reminders of the trauma may be key drivers of other symptoms in adult survivors of CSA. Conclusions: Our study illustrates the strengths and limitations of these network analytic approaches to PTSD. PMID:29038690
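
    The first step, a regularized partial correlation network, can be sketched with scikit-learn's graphical lasso: estimate a sparse precision matrix and convert it to partial correlations, whose nonzero off-diagonal entries are the network's edges. The synthetic scores below are a stand-in for real symptom data.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(9)
    n, p = 179, 8                            # mimic N = 179 respondents
    latent = rng.normal(size=(n, 1))
    symptoms = latent + 0.8 * rng.normal(size=(n, p))   # correlated "symptoms"

    model = GraphicalLassoCV().fit(symptoms)
    prec = model.precision_
    # Convert the precision matrix to partial correlations; nonzero
    # off-diagonal entries are the edges of the network.
    d = np.sqrt(np.diag(prec))
    partial_corr = -prec / np.outer(d, d)
    np.fill_diagonal(partial_corr, 1.0)
    print("edges:", int((np.abs(np.triu(partial_corr, 1)) > 1e-6).sum()))
    ```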

  16. Value-based customer grouping from large retail data sets

    NASA Astrophysics Data System (ADS)

    Strehl, Alexander; Ghosh, Joydeep

    2000-04-01

    In this paper, we propose OPOSSUM, a novel similarity-based clustering algorithm using constrained, weighted graph-partitioning. Instead of binary presence or absence of products in a market-basket, we use an extended 'revenue per product' measure to better account for management objectives. Typically the number of clusters desired in a database marketing application is only in the teens or less. OPOSSUM proceeds top-down, which is more efficient and takes a small number of steps to attain the desired number of clusters as compared to bottom-up agglomerative clustering approaches. OPOSSUM delivers clusters that are balanced in terms of either customers (samples) or revenue (value). To facilitate data exploration and validation of results we introduce CLUSION, a visualization toolkit for high-dimensional clustering problems. To enable closed-loop deployment of the algorithm, OPOSSUM has no user-specified parameters. Thresholding heuristics are avoided and the optimal number of clusters is automatically determined by a search for maximum performance. Results are presented on a real retail industry data set of several thousand customers and products, to demonstrate the power of the proposed technique.

  17. Epidemic Threshold in Structured Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    EguíLuz, VíCtor M.; Klemm, Konstantin

    2002-08-01

    We analyze the spreading of viruses in scale-free networks with high clustering and degree correlations, as found in the Internet graph. For the susceptible-infected-susceptible model of epidemics the prevalence undergoes a phase transition at a finite threshold of the transmission probability. Comparing with the absence of a finite threshold in networks with purely random wiring, our result suggests that high clustering (modularity) and degree correlations protect scale-free networks against the spreading of viruses. We introduce and verify a quantitative description of the epidemic threshold based on the connectivity of the neighborhoods of the hubs.
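
    A useful baseline for this comparison is the mean-field SIS threshold for uncorrelated networks, λ_c = ⟨k⟩/⟨k²⟩, which vanishes as the degree distribution's second moment diverges; the structured networks studied above instead retain a finite threshold. A minimal sketch estimating the uncorrelated baseline from a truncated power-law degree sequence (exponent and cutoff are toy choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    # Degrees drawn from ~ k^-3 (a scale-free-like tail), truncated at k_max.
    k = np.arange(2, 1000)
    pk = k ** -3.0
    pk = pk / pk.sum()
    degrees = rng.choice(k, size=100_000, p=pk)

    lambda_c = degrees.mean() / (degrees ** 2).mean()
    print(f"uncorrelated mean-field threshold: {lambda_c:.4f}")
    ```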

  18. Cluster analysis in systems of magnetic spheres and cubes

    NASA Astrophysics Data System (ADS)

    Pyanzina, E. S.; Gudkova, A. V.; Donaldson, J. G.; Kantorovich, S. S.

    2017-06-01

    In the present work we use molecular dynamics simulations and graph-theory based cluster analysis to compare self-assembly in systems of magnetic spheres, and cubes where the dipole moment is oriented along the side of the cube in the [001] crystallographic direction. We show that under the same conditions cubes aggregate far less than their spherical counterparts. This difference can be explained in terms of the volume of phase space in which the formation of the bond is thermodynamically advantageous. It follows that this volume is much larger for a dipolar sphere than for a dipolar cube.

  19. An In-Depth Analysis of the Chung-Lu Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winlaw, M.; DeSterck, H.; Sanders, G.

    2015-10-28

    In the classic Erdős–Rényi random graph model [5] each edge is chosen with uniform probability and the degree distribution is binomial, limiting the number of graphs that can be modeled using the Erdős–Rényi framework [10]. The Chung-Lu model [1, 2, 3] is an extension of the Erdős–Rényi model that allows for more general degree distributions. The probability of each edge is no longer uniform and is a function of a user-supplied degree sequence, which by design is the expected degree sequence of the model. This property makes it an easy model to work with theoretically, and since the Chung-Lu model is a special case of a random graph model with a given degree sequence, many of its properties are well known and have been studied extensively [2, 3, 13, 8, 9]. It is also an attractive null model for many real-world networks, particularly those with power-law degree distributions, and it is sometimes used as a benchmark for comparison with other graph generators despite some of its limitations [12, 11]. We know, for example, that the average clustering coefficient is too low relative to most real-world networks. As well, measures of affinity are also too low relative to most real-world networks of interest. However, despite these limitations, or perhaps because of them, the Chung-Lu model provides a basis for comparing new graph models.
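
    A minimal sketch of the Chung-Lu construction: each edge (i, j) is included independently with probability w_i * w_j / sum(w), capped at 1, so the expected degree sequence matches the user-supplied weights w. The weight sequence below is an arbitrary example.

      import itertools
      import random

      def chung_lu(weights, seed=0):
          """Return the edge list of one Chung-Lu random graph sample."""
          rnd = random.Random(seed)
          total = sum(weights)
          edges = []
          for i, j in itertools.combinations(range(len(weights)), 2):
              if rnd.random() < min(1.0, weights[i] * weights[j] / total):
                  edges.append((i, j))
          return edges

      # A rough power-law-ish expected degree sequence.
      w = [10.0 / (k + 1) ** 0.5 for k in range(500)]
      print(len(chung_lu(w)), "edges generated")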

  20. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    PubMed

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.
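
    To make the task-graph idea concrete, here is an illustrative Python sketch (not the HTGS API, which is a C++ framework): tasks run on a thread pool wave by wave, each wave containing only tasks whose dependencies have finished. The task names and dependency map are hypothetical.

      from concurrent.futures import ThreadPoolExecutor

      def run_task_graph(tasks, deps):
          """Run tasks (name -> callable) after their deps (name -> names)."""
          finished, results = set(), {}
          with ThreadPoolExecutor() as pool:
              while len(finished) < len(tasks):
                  ready = [n for n in tasks if n not in finished
                           and all(d in finished for d in deps.get(n, ()))]
                  if not ready:
                      raise ValueError("dependency cycle")
                  futures = {n: pool.submit(tasks[n]) for n in ready}
                  for n, f in futures.items():    # wait for this wave
                      results[n] = f.result()
                      finished.add(n)
          return results

      tasks = {"load": lambda: "img", "filter": lambda: "f",
               "fft": lambda: "F", "merge": lambda: "out"}
      deps = {"filter": ["load"], "fft": ["load"], "merge": ["filter", "fft"]}
      print(run_task_graph(tasks, deps))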

  1. Estimating relative risks in multicenter studies with a small number of centers - which methods to use? A simulation study.

    PubMed

    Pedroza, Claudia; Truong, Van Thi Thanh

    2017-11-02

    Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
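
    For illustration, a sketch of one of the recommended analyses: a Poisson GEE with an exchangeable working correlation and robust standard errors, used to estimate a relative risk for a binary outcome while adjusting for center clustering. It uses statsmodels on simulated placeholder data; note the paper additionally recommends small-sample corrections to the robust SEs, which this sketch omits.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_centers, n_per = 20, 40
      center = np.repeat(np.arange(n_centers), n_per)
      treat = rng.integers(0, 2, size=center.size)
      y = rng.binomial(1, np.where(treat == 1, 0.15, 0.10))  # binary outcome

      X = sm.add_constant(treat)
      model = sm.GEE(y, X, groups=center, family=sm.families.Poisson(),
                     cov_struct=sm.cov_struct.Exchangeable())
      res = model.fit()
      print("estimated relative risk:", np.exp(res.params[1]))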

  2. Irreversibility of financial time series: A graph-theoretical approach

    NASA Astrophysics Data System (ADS)

    Flanagan, Ryan; Lacasa, Lucas

    2016-04-01

    The relation between time series irreversibility and entropy production has been recently investigated in thermodynamic systems operating away from equilibrium. In this work we explore this concept in the context of financial time series. We make use of visibility algorithms to quantify, in graph-theoretical terms, time irreversibility of 35 financial indices evolving over the period 1998-2012. We show that this metric is complementary to standard measures based on volatility and exploit it both to classify periods of financial stress and to rank companies accordingly. We then validate this approach by finding that a projection in principal components space of financial years, based on time irreversibility features, clusters together periods of financial stress, separating them from stable periods. Relations between irreversibility, efficiency and predictability are briefly discussed.
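
    A minimal sketch of the natural visibility algorithm that underlies this kind of analysis: two points of a series see each other if no intermediate point rises above the straight line joining them; irreversibility statistics are then computed on the resulting directed degree sequences. The random-walk series is a synthetic stand-in.

      import numpy as np

      def visibility_edges(y):
          """Edges (i, j) of the natural visibility graph of series y."""
          n, edges = len(y), []
          for i in range(n - 1):
              for j in range(i + 1, n):
                  k = np.arange(i + 1, j)
                  line = y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                  if np.all(y[k] < line):     # no intermediate point blocks
                      edges.append((i, j))
          return edges

      series = np.random.default_rng(0).standard_normal(100).cumsum()
      print(len(visibility_edges(series)), "visibility edges")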

  3. CytoCluster: A Cytoscape Plugin for Cluster Analysis and Visualization of Biological Networks.

    PubMed

    Li, Min; Li, Dongyan; Tang, Yu; Wu, Fangxiang; Wang, Jianxin

    2017-08-31

    Nowadays, cluster analysis of biological networks has become one of the most important approaches to identifying functional modules as well as predicting protein complexes and network biomarkers. Furthermore, the visualization of clustering results is crucial to display the structure of biological networks. Here we present CytoCluster, a Cytoscape plugin integrating six clustering algorithms, HC-PIN (Hierarchical Clustering algorithm in Protein Interaction Networks), OH-PIN (identifying Overlapping and Hierarchical modules in Protein Interaction Networks), IPCA (Identifying Protein Complex Algorithm), ClusterONE (Clustering with Overlapping Neighborhood Expansion), DCU (Detecting Complexes based on Uncertain graph model), IPC-MCE (Identifying Protein Complexes based on Maximal Complex Extension), and the BinGO (Biological networks Gene Ontology) function. Users can select different clustering algorithms according to their requirements. The main function of these six clustering algorithms is to detect protein complexes or functional modules. In addition, BinGO is used to determine which Gene Ontology (GO) categories are statistically overrepresented in a set of genes or a subgraph of a biological network. CytoCluster can be easily expanded, so that more clustering algorithms and functions can be added to this plugin. Since it was created in July 2013, CytoCluster has been downloaded more than 9700 times in the Cytoscape App store and has already been applied to the analysis of different biological networks. CytoCluster is available from http://apps.cytoscape.org/apps/cytocluster.

  4. CytoCluster: A Cytoscape Plugin for Cluster Analysis and Visualization of Biological Networks

    PubMed Central

    Li, Min; Li, Dongyan; Tang, Yu; Wang, Jianxin

    2017-01-01

    Nowadays, cluster analysis of biological networks has become one of the most important approaches to identifying functional modules as well as predicting protein complexes and network biomarkers. Furthermore, the visualization of clustering results is crucial to display the structure of biological networks. Here we present CytoCluster, a Cytoscape plugin integrating six clustering algorithms, HC-PIN (Hierarchical Clustering algorithm in Protein Interaction Networks), OH-PIN (identifying Overlapping and Hierarchical modules in Protein Interaction Networks), IPCA (Identifying Protein Complex Algorithm), ClusterONE (Clustering with Overlapping Neighborhood Expansion), DCU (Detecting Complexes based on Uncertain graph model), IPC-MCE (Identifying Protein Complexes based on Maximal Complex Extension), and the BinGO (Biological networks Gene Ontology) function. Users can select different clustering algorithms according to their requirements. The main function of these six clustering algorithms is to detect protein complexes or functional modules. In addition, BinGO is used to determine which Gene Ontology (GO) categories are statistically overrepresented in a set of genes or a subgraph of a biological network. CytoCluster can be easily expanded, so that more clustering algorithms and functions can be added to this plugin. Since it was created in July 2013, CytoCluster has been downloaded more than 9700 times in the Cytoscape App store and has already been applied to the analysis of different biological networks. CytoCluster is available from http://apps.cytoscape.org/apps/cytocluster. PMID:28858211

  5. Altered brain network measures in patients with primary writing tremor.

    PubMed

    Lenka, Abhishek; Jhunjhunwala, Ketan Ramakant; Panda, Rajanikant; Saini, Jitender; Bharath, Rose Dawn; Yadav, Ravi; Pal, Pramod Kumar

    2017-10-01

    Primary writing tremor (PWT) is a rare task-specific tremor, which occurs only while writing or while adopting the hand in the writing position. The basic pathophysiology of PWT has not been fully understood. The objective of this study is to explore the alterations in the resting state functional brain connectivity, if any, in patients with PWT using graph theory-based analysis. This prospective case-control study included 10 patients with PWT and 10 age and gender matched healthy controls. All subjects underwent MRI in a 3-Tesla scanner. Several parameters of small-world functional connectivity were compared between patients and healthy controls by using graph theory-based analysis. There were no significant differences in age, handedness (all right handed), gender distribution (all were males), and MMSE scores between the patients and controls. The mean age at presentation of tremor in the patient group was 51.7 ± 8.6 years, and the mean duration of tremor was 3.5 ± 1.9 years. Graph theory-based analysis revealed that patients with PWT had significantly lower clustering coefficient and higher path length compared to healthy controls suggesting alterations in small-world architecture of the brain. The clustering coefficients were lower in PWT patients in left and right medial cerebellum, right dorsolateral prefrontal cortex (DLPFC), and left posterior parietal cortex (PPC). Patients with PWT have significantly altered small-world brain connectivity in bilateral medial cerebellum, right DLPFC, and left PPC. Further studies with larger sample size are required to confirm our results.
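
    For illustration, the two graph metrics compared between groups, average clustering coefficient and characteristic path length, computed with networkx on a toy thresholded connectivity matrix. The matrix, threshold, and size are arbitrary stand-ins for the study's functional connectivity data.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      corr = rng.uniform(size=(30, 30))            # stand-in FC matrix
      corr = (corr + corr.T) / 2.0
      np.fill_diagonal(corr, 0.0)

      G = nx.from_numpy_array((corr > 0.7).astype(int))
      print("clustering coefficient:", nx.average_clustering(G))
      if nx.is_connected(G):
          print("characteristic path length:",
                nx.average_shortest_path_length(G))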

  6. Resting-state theta band connectivity and graph analysis in generalized social anxiety disorder.

    PubMed

    Xing, Mengqi; Tadayonnejad, Reza; MacNamara, Annmarie; Ajilore, Olusola; DiGangi, Julia; Phan, K Luan; Leow, Alex; Klumpp, Heide

    2017-01-01

    Functional magnetic resonance imaging (fMRI) resting-state studies show generalized social anxiety disorder (gSAD) is associated with disturbances in networks involved in emotion regulation, emotion processing, and perceptual functions, suggesting a network framework is integral to elucidating the pathophysiology of gSAD. However, fMRI does not measure the fast dynamic interconnections of functional networks. Therefore, we examined whole-brain functional connectomics with electroencephalogram (EEG) during resting-state. Resting-state EEG data was recorded for 32 patients with gSAD and 32 demographically-matched healthy controls (HC). Sensor-level connectivity analysis was applied on EEG data by using Weighted Phase Lag Index (WPLI) and graph analysis based on WPLI was used to determine clustering coefficient and characteristic path length to estimate local integration and global segregation of networks. WPLI results showed increased oscillatory midline coherence in the theta frequency band indicating higher connectivity in the gSAD relative to HC group during rest. Additionally, WPLI values positively correlated with state anxiety levels within the gSAD group but not the HC group. Our graph theory based connectomics analysis demonstrated increased clustering coefficient and decreased characteristic path length in theta-based whole brain functional organization in subjects with gSAD compared to HC. Theta-dependent interconnectivity was associated with state anxiety in gSAD and an increase in information processing efficiency in gSAD (compared to controls). Results may represent enhanced baseline self-focused attention, which is consistent with cognitive models of gSAD and fMRI studies implicating emotion dysregulation and disturbances in task negative networks (e.g., default mode network) in gSAD.
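
    A minimal NumPy sketch of the weighted phase lag index (WPLI) for two sensors, under stated assumptions: signals are arranged as epochs x samples, and WPLI is the magnitude of the mean imaginary cross-spectrum across epochs, normalized by the mean absolute imaginary cross-spectrum. All data below are synthetic; this is the standard WPLI definition, not the authors' pipeline.

      import numpy as np

      def wpli(x, y):
          """x, y: (n_epochs, n_samples) arrays; returns WPLI per frequency."""
          X, Y = np.fft.rfft(x, axis=1), np.fft.rfft(y, axis=1)
          im = np.imag(X * np.conj(Y))          # imaginary cross-spectrum
          num = np.abs(im.mean(axis=0))
          den = np.abs(im).mean(axis=0) + 1e-12
          return num / den

      rng = np.random.default_rng(0)
      x = rng.standard_normal((50, 256))
      y = np.roll(x, 3, axis=1) + 0.5 * rng.standard_normal((50, 256))
      print("peak WPLI:", wpli(x, y).max())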

  7. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
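
    To make the adaptive link-tracing idea concrete, here is a hedged Python sketch: starting from an initial node sample, edges are followed onward only from nodes exhibiting the trait of interest (e.g., self-reported drug use). The graph, trait assignment, and parameters are all simulated illustrations.

      import random
      import networkx as nx

      def adaptive_link_trace(G, trait, n_seeds=5, seed=0):
          """Snowball-style sample; trace links only from trait-positive nodes."""
          rnd = random.Random(seed)
          sample = set(rnd.sample(list(G), n_seeds))
          frontier = [v for v in sample if trait(v)]
          while frontier:
              v = frontier.pop()
              for u in G[v]:
                  if u not in sample:
                      sample.add(u)
                      if trait(u):        # adaptive rule
                          frontier.append(u)
          return sample

      G = nx.erdos_renyi_graph(300, 0.02, seed=1)
      positives = set(random.Random(2).sample(range(300), 30))
      print(len(adaptive_link_trace(G, lambda v: v in positives)), "nodes sampled")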

  8. What Can Graph Theory Tell Us About Word Learning and Lexical Retrieval?

    PubMed Central

    Vitevitch, Michael S.

    2008-01-01

    Purpose: Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Method: Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. Results: The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. Conclusions: The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing. PMID:18367686

  9. A graph-theory framework for evaluating landscape connectivity and conservation planning.

    PubMed

    Minor, Emily S; Urban, Dean L

    2008-04-01

    Connectivity of habitat patches is thought to be important for movement of genes, individuals, populations, and species over multiple temporal and spatial scales. We used graph theory to characterize multiple aspects of landscape connectivity in a habitat network in the North Carolina Piedmont (U.S.A). We compared this landscape with simulated networks with known topology, resistance to disturbance, and rate of movement. We introduced graph measures such as compartmentalization and clustering, which can be used to identify locations on the landscape that may be especially resilient to human development or areas that may be most suitable for conservation. Our analyses indicated that for songbirds the Piedmont habitat network was well connected. Furthermore, the habitat network had commonalities with planar networks, which exhibit slow movement, and scale-free networks, which are resistant to random disturbances. These results suggest that connectivity in the habitat network was high enough to prevent the negative consequences of isolation but not so high as to allow rapid spread of disease. Our graph-theory framework provided insight into regional and emergent global network properties in an intuitive and visual way and allowed us to make inferences about rates and paths of species movements and vulnerability to disturbance. This approach can be applied easily to assessing habitat connectivity in any fragmented or patchy landscape.

  10. Data visualization, bar naked: A free tool for creating interactive graphics.

    PubMed

    Weissgerber, Tracey L; Savic, Marko; Winham, Stacey J; Stanisavljevic, Dejana; Garovic, Vesna D; Milic, Natasa M

    2017-12-15

    Although bar graphs are designed for categorical data, they are routinely used to present continuous data in studies that have small sample sizes. This presentation is problematic, as many data distributions can lead to the same bar graph, and the actual data may suggest different conclusions from the summary statistics. To address this problem, many journals have implemented new policies that require authors to show the data distribution. This paper introduces a free, web-based tool for creating an interactive alternative to the bar graph (http://statistika.mfub.bg.ac.rs/interactive-dotplot/). This tool allows authors with no programming expertise to create customized interactive graphics, including univariate scatterplots, box plots, and violin plots, for comparing values of a continuous variable across different study groups. Individual data points may be overlaid on the graphs. Additional features facilitate visualization of subgroups or clusters of non-independent data. A second tool enables authors to create interactive graphics from data obtained with repeated independent experiments (http://statistika.mfub.bg.ac.rs/interactive-repeated-experiments-dotplot/). These tools are designed to encourage exploration and critical evaluation of the data behind the summary statistics and may be valuable for promoting transparency, reproducibility, and open science in basic biomedical research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. Scale free effects in world currency exchange network

    NASA Astrophysics Data System (ADS)

    Górski, A. Z.; Drożdż, S.; Kwapień, J.

    2008-11-01

    A large collection of daily time series for 60 world currencies' exchange rates is considered. The correlation matrices are calculated and the corresponding Minimal Spanning Tree (MST) graphs are constructed for each of those currencies used as reference for the remaining ones. It is shown that the multiplicity of the MST graphs' nodes to a good approximation develops a power-law-like, scale-free distribution, with a scaling exponent similar to those found for several other complex systems studied so far. Furthermore, quantitative arguments in favor of the hierarchical organization of the world currency exchange network are provided by relating the structure of the above MST graphs and their scaling exponents to those that are derived from an exactly solvable hierarchical network model. The special status of the USD during the period considered can be attributed to departures of the MST features, when this currency (or some other tied to it) is used as reference, from characteristics typical of such hierarchical clustering of nodes towards those that correspond to random graphs. Even though in general the basic structure of the MST is robust with respect to changing the reference currency, some trace of a systematic transition from a somewhat dispersed topology - as in the USD case - towards a more compact MST topology can be observed when correlations increase.
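
    For illustration, the standard MST construction used in this line of work: correlations between returns are mapped to distances d = sqrt(2 * (1 - rho)) and a minimal spanning tree is extracted. The random returns below are a stand-in for the 60 currency series.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      returns = rng.standard_normal((500, 10))     # stand-in for 60 currencies
      rho = np.corrcoef(returns, rowvar=False)
      dist = np.sqrt(2.0 * (1.0 - rho))            # correlation -> distance

      G = nx.from_numpy_array(dist)
      mst = nx.minimum_spanning_tree(G)
      degrees = sorted(d for _, d in mst.degree())
      print("MST node degrees:", degrees)          # node multiplicity distribution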

  12. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning

    PubMed Central

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M.; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global image features from brain images. A directed acyclic graph is employed to endow the support vector machine with the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for this three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer vision-based methods are now being used to detect it automatically. Materials: We have in total 49 subjects, scanned by 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany). The subjects comprise 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: Ten repetitions of 10-fold cross-validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than feedforward neural network, decision tree, and naive Bayesian classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study will attract more computer vision methods for detecting hearing loss. PMID:27807415

  13. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning.

    PubMed

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global image features from brain images. A directed acyclic graph is employed to endow the support vector machine with the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for this three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer vision-based methods are now being used to detect it automatically. Materials: We have in total 49 subjects, scanned by 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany). The subjects comprise 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: Ten repetitions of 10-fold cross-validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than feedforward neural network, decision tree, and naive Bayesian classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study will attract more computer vision methods for detecting hearing loss.
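
    A minimal sketch of the wavelet-entropy feature: the Shannon entropy of the relative energies of the sub-bands of a multi-level discrete wavelet transform (PyWavelets). The study applies this to 2-D MR slices; a 1-D synthetic signal and the db4 wavelet are used here only as stand-ins.

      import numpy as np
      import pywt

      def wavelet_entropy(signal, wavelet="db4", level=3):
          """Shannon entropy of relative sub-band energies of a DWT."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          p = energies / energies.sum()
          return -np.sum(p * np.log(p + 1e-12))

      sig = np.random.default_rng(0).standard_normal(1024)
      print("wavelet entropy:", wavelet_entropy(sig))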

  14. Optimization of a sensor cluster for determination of trajectories and velocities of supersonic objects

    NASA Astrophysics Data System (ADS)

    Cannella, Marco; Sciuto, Salvatore Andrea

    2001-04-01

    An evaluation of errors for a method for the determination of trajectories and velocities of supersonic objects is conducted. The analytical study of a cluster, composed of three pressure transducers and generally used as an apparatus for the kinematic determination of parameters of supersonic objects, is developed. Furthermore, a detailed investigation into the accuracy of this cluster in determining the slope of an incoming shock wave is carried out in order to optimize the device. In particular, a specific non-dimensional parameter is proposed in order to evaluate accuracies for various values of the parameters, and reference graphs are provided in order to properly design the sensor cluster. Finally, on the basis of the error analysis conducted, a discussion of the best estimate of the relative sensor distance as a function of the temporal resolution of the measuring system is presented.

  15. Ontology-based topic clustering for online discussion data

    NASA Astrophysics Data System (ADS)

    Wang, Yongheng; Cao, Kening; Zhang, Xiaoming

    2013-03-01

    With the rapid development of online communities, mining and extracting quality knowledge from online discussions has become very important for the industrial and marketing sectors, as well as for e-commerce applications and government. Most existing techniques model a discussion as a social network of users represented by a user-based graph, without considering the content of the discussion. In this paper we propose a new multilayered model to analyze online discussions, which combines user-based and message-based representations. A novel clustering method based on frequent concept sets is used to cluster the original online discussion network into topic space. Domain ontology is used to improve the clustering accuracy. Parallel methods are also used to make the algorithms scalable to very large data sets. Our experimental study shows that the model and algorithms are effective when analyzing large-scale online discussion data.

  16. Assessing population genetic structure via the maximisation of genetic distance

    PubMed Central

    2009-01-01

    Background: The inference of the hidden structure of a population is an essential issue in population genetics. Recently, several methods have been proposed to infer population structure in population genetics. Methods: In this study, a new method to infer the number of clusters and to assign individuals to the inferred populations, based on the maximisation of genetic distance (MGD), is proposed. This approach does not make any assumption on Hardy-Weinberg and linkage equilibrium. The implemented criterion is the maximisation (via a simulated annealing algorithm) of the averaged genetic distance between a predefined number of clusters. The performance of this method is compared with two Bayesian approaches: STRUCTURE and BAPS, using simulated data and also a real human data set. Results: The simulations show that with a reduced number of markers, BAPS overestimates the number of clusters and presents a reduced proportion of correct groupings. The accuracy of the new method is approximately the same as for STRUCTURE. Also, in Hardy-Weinberg and linkage disequilibrium cases, BAPS performs incorrectly. In these situations, STRUCTURE and the new method show an equivalent behaviour with respect to the number of inferred clusters, although the proportion of correct groupings is slightly better with the new method. Re-establishing equilibrium with the randomisation procedures improves the precision of the Bayesian approaches. All methods have a good precision for FST ≥ 0.03, but only STRUCTURE estimates the correct number of clusters for FST as low as 0.01. In situations with a high number of clusters or a more complex population structure, MGD performs better than STRUCTURE and BAPS. The results for a human data set analysed with the new method are congruent with the geographical regions previously found. Conclusion: This new method used to infer the hidden structure in a population, based on the maximisation of the genetic distance and not taking into consideration any assumption about Hardy-Weinberg and linkage equilibrium, performs well under different simulated scenarios and with real data. Therefore, it could be a useful tool to determine genetically homogeneous groups, especially in those situations where the number of clusters is high, with complex population structure and where Hardy-Weinberg and/or linkage disequilibrium are present. PMID:19900278
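
    To illustrate the core criterion, here is a hedged sketch of maximizing the averaged between-cluster distance with a tiny simulated annealing loop over a precomputed distance matrix. The distance matrix, cooling schedule, and parameters are arbitrary stand-ins, not the MGD implementation.

      import numpy as np

      def anneal_clusters(D, k, iters=5000, t0=1.0, seed=0):
          """Maximize mean between-cluster distance by simulated annealing."""
          rng = np.random.default_rng(seed)
          n = D.shape[0]
          labels = rng.integers(0, k, size=n)

          def score(lab):
              mask = lab[:, None] != lab[None, :]   # pairs in different clusters
              return D[mask].mean()

          current = score(labels)
          for step in range(iters):
              t = t0 * (1.0 - step / iters) + 1e-9   # linear cooling
              i = rng.integers(n)
              old = labels[i]
              labels[i] = rng.integers(k)            # propose a reassignment
              s = score(labels)
              if s >= current or rng.random() < np.exp((s - current) / t):
                  current = s                        # accept the move
              else:
                  labels[i] = old                    # reject and restore
          return labels, current

      D = np.abs(np.random.default_rng(1).standard_normal((60, 60)))
      D = (D + D.T) / 2.0
      np.fill_diagonal(D, 0.0)
      print("final score:", anneal_clusters(D, 3)[1])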

  17. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    NASA Astrophysics Data System (ADS)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide a new insight into time series analysis. The inheritance of the visibility graph from the original time series is further explored in this paper. We found that degree distributions of visibility graphs extracted from Pseudo Brownian Motion series obtained by the Frequency Domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited in the time series. Our simulations showed that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function are different for different series and that the visibility graph inherits some important features of the original time series. Further, we convert some quarterly macroeconomic series including the growth rates of value-added of three industry series and the growth rates of Gross Domestic Product series of China to graphs by the visibility algorithm and explore the topological properties of graphs associated with the four macroeconomic series, namely, the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis we find that degree distributions of associated networks from the growth rates of value-added of the three industry series are almost exponential and the degree distributions of associated networks from the growth rates of GDP series are scale free. We also discussed the assortativity and disassortativity of the four associated networks as they are related to the evolutionary process of the original macroeconomic series. All the constructed networks have "small-world" features. The community structures of associated networks suggest dynamic changes of the original macroeconomic series. We also detected the relationship among government policy changes, community structures of associated networks and macroeconomic dynamics. We find great influences of government policies in China on the changes of dynamics of GDP and the three industries adjustment. The work in our paper provides a new way to understand the dynamics of economic development.

  18. Evidence of new species for malaria vector Anopheles nuneztovari sensu lato in the Brazilian Amazon region.

    PubMed

    Scarpassa, Vera Margarete; Cunha-Machado, Antonio Saulo; Saraiva, José Ferreira

    2016-04-12

    Anopheles nuneztovari sensu lato comprises cryptic species in northern South America, and the Brazilian populations encompass distinct genetic lineages within the Brazilian Amazon region. This study investigated, based on two molecular markers, whether these lineages might actually deserve species status. Specimens were collected in five localities of the Brazilian Amazon, including Manaus, Careiro Castanho and Autazes, in the State of Amazonas; Tucuruí, in the State of Pará; and Abacate da Pedreira, in the State of Amapá, and analysed for the COI gene (Barcode region) and 12 microsatellite loci. Phylogenetic analyses were performed using the maximum likelihood (ML) approach. Intra and inter samples genetic diversity were estimated using population genetics analyses, and the genetic groups were identified by means of the ML, Bayesian and factorial correspondence analyses and the Bayesian analysis of population structure. The Barcode region dataset (N = 103) generated 27 haplotypes. The haplotype network suggested three lineages. The ML tree retrieved five monophyletic groups. Group I clustered all specimens from Manaus and Careiro Castanho, the majority of Autazes and a few from Abacate da Pedreira. Group II clustered most of the specimens from Abacate da Pedreira and a few from Autazes and Tucuruí. Group III clustered only specimens from Tucuruí (lineage III), strongly supported (97 %). Groups IV and V clustered specimens of A. nuneztovari s.s. and A. dunhami, strongly (98 %) and weakly (70 %) supported, respectively. In the second phylogenetic analysis, the sequences from GenBank, identified as A. goeldii, clustered to groups I and II, but not to group III. Genetic distances (Kimura-2 parameters) among the groups ranged from 1.60 % (between I and II) to 2.32 % (between I and III). Microsatellite data revealed very high intra-population genetic variability. Genetic distances showed the highest and significant values (P = 0.005) between Tucuruí and all the other samples, and between Abacate da Pedreira and all the other samples. Genetic distances, Bayesian (Structure and BAPS) analyses and FCA suggested three distinct biological groups, supporting the barcode region results. The two markers revealed three genetic lineages for A. nuneztovari s.l. in the Brazilian Amazon region. Lineages I and II may represent genetically distinct groups or species within A. goeldii. Lineage III may represent a new species, distinct from the A. goeldii group, and may be the most ancestral in the Brazilian Amazon. They may have differences in Plasmodium susceptibility and should therefore be investigated further.

  19. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate the building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups: ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees), while the white region describes the ground (earth). The second phase begins with the process of Connected Component Analysis (CCA), where the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph of the connected components is generated, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and a neighbouring black pixel, if any; no edge exists from a black pixel to a neighbouring white pixel. This produces a graph of disconnected components, where each component represents a prospective building or a patch of dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from multispectral imagery, around the graph components, where possible. In the fourth phase, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and occluded parts of isolated buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges. The proposed technique is evaluated using two Australian data sets: Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. The proposed technique detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better with 100% accuracy for buildings larger than 10 m2 in area.
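
    For illustration, the connected component labelling step of the second phase can be sketched with scipy.ndimage: each labelled component of a binary mask is a candidate building or vegetation block. The mask below is a hypothetical toy example, not the paper's data.

      import numpy as np
      from scipy import ndimage

      mask = np.zeros((60, 60), dtype=bool)    # stand-in primary building mask
      mask[5:20, 5:25] = True                  # building-like block
      mask[35:55, 30:50] = True                # vegetation-like block

      labels, n = ndimage.label(mask)          # 4-connectivity by default
      sizes = ndimage.sum(mask, labels, range(1, n + 1))
      print(n, "components with pixel areas", sizes)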

  20. Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster.

    PubMed

    Fan, Hangyu; Wang, Huandong; Li, Yong

    2018-01-23

    Decentralized clusters are widely adopted in modern information technology. One of the main reasons is their high availability and failure tolerance, which prevent a failure at a single point from bringing down the entire system. Recently, toolkits such as Akka have been commonly used to build this kind of cluster. However, clusters of this kind, which use Gossip as their membership protocol and rely on link-failure detection, cannot handle the scenario in which a node stochastically drops packets and thereby corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by data from the application layer, to solve these two problems respectively. Through simulations with statistical data and experiments with a real-world product, we demonstrate that our algorithm performs well.
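
    A hedged sketch of the clique step in this formulation: build a connectivity graph of node pairs whose estimated link quality clears a threshold, then find a maximum clique, the largest mutually well-connected subset (exact search is exponential in the worst case, as the problem is NP-complete). The link qualities and threshold below are hypothetical.

      import networkx as nx

      # Hypothetical measured link qualities between four cluster nodes.
      quality = {(0, 1): 0.99, (0, 2): 0.98, (1, 2): 0.97,
                 (2, 3): 0.55, (1, 3): 0.40, (0, 3): 0.30}

      # Keep only links whose quality clears the threshold, then search
      # the maximal cliques for the largest one.
      G = nx.Graph((u, v) for (u, v), q in quality.items() if q > 0.9)
      clique = max(nx.find_cliques(G), key=len)
      print("healthy members:", sorted(clique))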

  1. A Bayesian Method for Evaluating and Discovering Disease Loci Associations

    PubMed Central

    Jiang, Xia; Barmada, M. Michael; Cooper, Gregory F.; Becich, Michael J.

    2011-01-01

    Background A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can concern a million SNPs and may soon concern billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. Methodology/Findings We introduce the Bayesian network posterior probability (BNPP) method which addresses the difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings are found. Conclusions/Significance We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations. PMID:21853025

  2. Looking for robust properties in the growth of an academic network: the case of the Uruguayan biological research community.

    PubMed

    Cabana, Alvaro; Mizraji, Eduardo; Pomi, Andrés; Valle-Lisboa, Juan Carlos

    2008-04-01

    Graph-theoretical methods have recently been used to analyze certain properties of natural and social networks. In this work, we have investigated the early stages in the growth of a Uruguayan academic network, the Biology Area of the Programme for the Development of Basic Science (PEDECIBA). This transparent social network is a territory for the exploration of the reliability of clustering methods that can potentially be used when we are confronted with opaque natural systems that provide us with a limited spectrum of observables (as happens in research on the relations between brain, thought and language). From our social net, we constructed two different graph representations based on the relationships among researchers revealed by their co-participation in Master's thesis committees. We studied these networks at different times and found that they achieve connectedness early in their evolution and exhibit the small-world property (i.e. high clustering with short path lengths). The data seem compatible with power law distributions of connectivity, clustering coefficients and betweenness centrality. Evidence of preferential attachment of new nodes and of new links between old nodes was also found in both representations. These results suggest that there are topological properties observed throughout the growth of the network that do not depend on the representations we have chosen but reflect intrinsic properties of the academic collective under study. Researchers in PEDECIBA are classified according to their specialties. We analysed the community structure detected by a standard algorithm in both representations. We found that much of the pre-specified structure is recovered and part of the mismatches can be attributed to convergent interests between scientists from different sub-disciplines. This result shows the potentiality of some clustering methods for the analysis of partially known natural systems.

  3. Towards Breaking the Histone Code – Bayesian Graphical Models for Histone Modifications

    PubMed Central

    Mitra, Riten; Müller, Peter; Liang, Shoudan; Xu, Yanxun; Ji, Yuan

    2013-01-01

    Background: Histones are proteins around which DNA wraps to form small spherical structures called nucleosomes. Histone modifications (HMs) refer to the post-translational modifications to the histone tails. At a particular genomic locus, each of these HMs can either be present or absent, and the combinatory patterns of the presence or absence of multiple HMs, or the 'histone codes,' are believed to co-regulate important biological processes. We aim to use raw data on HM markers at different genomic loci to (1) decode the complex biological network of HMs in a single region and (2) demonstrate how the HM networks differ in different regulatory regions. We suggest that these differences in network attributes form a significant link between histones and genomic functions. Methods and Results: We develop a powerful graphical model under the Bayesian paradigm. Posterior inference is fully probabilistic, allowing us to compute the probabilities of distinct dependence patterns of the HMs using graphs. Furthermore, our model-based framework allows for easy but important extensions for inference on differential networks under various conditions, such as the different annotations of the genomic locations (e.g., promoters versus insulators). We applied these models to ChIP-Seq data based on CD4+ T lymphocytes. The results confirmed many existing findings and provided a unified tool to generate various promising hypotheses. Differential network analyses revealed new insights on co-regulation of HMs of transcriptional activities in different genomic regions. Conclusions: The use of Bayesian graphical models and borrowing strength across different conditions provide high power to infer histone networks and their differences. PMID:23748248

  4. A Bayesian Approach to Real-Time Earthquake Phase Association

    NASA Astrophysics Data System (ADS)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity has been in use now for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel, Bayesian association algorithm, which looks at the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one to many relations (one earthquake, many phases), during the association process the situation is quite different. Both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.

  5. Effective connectivity: Influence, causality and biophysical modeling

    PubMed Central

    Valdes-Sosa, Pedro A.; Roebroeck, Alard; Daunizeau, Jean; Friston, Karl

    2011-01-01

    This is the final paper in a Comments and Controversies series dedicated to “The identification of interacting networks in the brain using fMRI: Model selection, causality and deconvolution”. We argue that discovering effective connectivity depends critically on state-space models with biophysically informed observation and state equations. These models have to be endowed with priors on unknown parameters and afford checks for model Identifiability. We consider the similarities and differences among Dynamic Causal Modeling, Granger Causal Modeling and other approaches. We establish links between past and current statistical causal modeling, in terms of Bayesian dependency graphs and Wiener–Akaike–Granger–Schweder influence measures. We show that some of the challenges faced in this field have promising solutions and speculate on future developments. PMID:21477655

  6. "K"-Balance Partitioning: An Exact Method with Applications to Generalized Structural Balance and Other Psychological Contexts

    ERIC Educational Resources Information Center

    Brusco, Michael; Steinley, Douglas

    2010-01-01

    Structural balance theory (SBT) has maintained a venerable status in the psychological literature for more than 5 decades. One important problem pertaining to SBT is the approximation of structural or generalized balance via the partitioning of the vertices of a signed graph into "K" clusters. This "K"-balance partitioning problem also has more…

  7. A Probabilistic Atlas of Diffuse WHO Grade II Glioma Locations in the Brain

    PubMed Central

    Baumann, Cédric; Zouaoui, Sonia; Yordanova, Yordanka; Blonski, Marie; Rigau, Valérie; Chemouny, Stéphane; Taillandier, Luc; Bauchet, Luc; Duffau, Hugues; Paragios, Nikos

    2016-01-01

    Diffuse WHO grade II gliomas are diffusively infiltrative brain tumors characterized by an unavoidable anaplastic transformation. Their management is strongly dependent on their location in the brain due to interactions with functional regions and potential differences in molecular biology. In this paper, we present the construction of a probabilistic atlas mapping the preferential locations of diffuse WHO grade II gliomas in the brain. This is carried out through a sparse graph whose nodes correspond to clusters of tumors grouped by spatial proximity. The interest of such an atlas is illustrated via two applications. The first one correlates tumor location with the patient's age via a statistical analysis, highlighting the interest of the atlas for studying the origins and behavior of the tumors. The second exploits the fact that the tumors have preferential locations for automatic segmentation. Through a coupled decomposed Markov Random Field model, the atlas guides the segmentation process, and characterizes which preferential location the tumor belongs to and consequently which behavior it could be associated to. Leave-one-out cross validation experiments on a large database highlight the robustness of the graph, and yield promising segmentation results. PMID:26751577

  8. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    PubMed

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

    Visualizing data by dimensionality reduction is an important strategy in Bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied on biobricks datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated when discriminating biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks could be discriminated and inappropriately labeled biobricks could be determined, which could help to assess crowdsourcing-based synthetic biology databases' quality, and make biobricks selection.
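
    A minimal sketch of the enhancement described: compute a normalized edit distance between sequences and feed it to Isomap as a precomputed dissimilarity matrix. The toy sequences stand in for biobrick sequences, and the neighborhood size is arbitrary.

      import numpy as np
      from sklearn.manifold import Isomap

      def edit_distance(a, b):
          """Classic Levenshtein distance by dynamic programming."""
          m, n = len(a), len(b)
          d = np.zeros((m + 1, n + 1), dtype=int)
          d[:, 0], d[0, :] = np.arange(m + 1), np.arange(n + 1)
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                                d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
          return d[m, n]

      seqs = ["ATGCC", "ATGCA", "TTGCA", "GGCCT", "GGCAT", "ATGCG"]
      D = np.array([[edit_distance(a, b) / max(len(a), len(b), 1)
                     for b in seqs] for a in seqs])   # normalized edit distance
      emb = Isomap(n_neighbors=3, n_components=2,
                   metric="precomputed").fit_transform(D)
      print(emb.shape)   # 2-D coordinates for visualization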

  9. Rewiring the network. What helps an innovation to diffuse?

    NASA Astrophysics Data System (ADS)

    Sznajd-Weron, Katarzyna; Szwabiński, Janusz; Weron, Rafał; Weron, Tomasz

    2014-03-01

    A fundamental question related to innovation diffusion is how the structure of the social network influences the process. Empirical evidence regarding real-world networks of influence is very limited. On the other hand, agent-based modeling literature reports different, and at times seemingly contradictory, results. In this paper we study innovation diffusion processes for a range of Watts-Strogatz networks in an attempt to shed more light on this problem. Using the so-called Sznajd model as the backbone of opinion dynamics, we find that the published results are in fact consistent and allow us to predict the role of network topology in various situations. In particular, the diffusion of innovation is easier on more regular graphs, i.e. with a higher clustering coefficient. Moreover, in the case of uncertainty—which is particularly high for innovations connected to public health programs or ecological campaigns—a more clustered network will help the diffusion. On the other hand, when social influence is less important (i.e. in the case of perfect information), a shorter path will help the innovation to spread in the society and—as a result—the diffusion will be easiest on a random graph.

  10. Applications of graph theory to landscape genetics

    PubMed Central

    Garroway, Colin J; Bowman, Jeff; Carr, Denis; Wilson, Paul J

    2008-01-01

    We investigated the relationships among landscape quality, gene flow, and population genetic structure of fishers (Martes pennanti) in Ontario, Canada. We used graph theory as an analytical framework considering each landscape as a network node. The 34 nodes were connected by 93 edges. Network structure was characterized by a higher level of clustering than expected by chance, a short mean path length connecting all pairs of nodes, and a resiliency to the loss of highly connected nodes. This suggests that alleles can be efficiently spread through the system and that extirpations and conservative harvest are not likely to affect their spread. Two measures of node centrality were negatively related to both the proportion of immigrants in a node and node snow depth. This suggests that central nodes are producers of emigrants, contain high-quality habitat (i.e., deep snow can make locomotion energetically costly) and that fishers were migrating from high to low quality habitat. A method of community detection on networks delineated five genetic clusters of nodes suggesting cryptic population structure. Our analyses showed that network models can provide system-level insight into the process of gene flow with implications for understanding how landscape alterations might affect population fitness and evolutionary potential. PMID:25567802

  11. Cluster structure of EU-15 countries derived from the correlation matrix analysis of macroeconomic index fluctuations

    NASA Astrophysics Data System (ADS)

    Gligor, M.; Ausloos, M.

    2007-05-01

    The statistical distances between countries, calculated for various moving-average time windows, are mapped into the ultrametric subdominant space, as in classical Minimal Spanning Tree methods. The Moving Average Minimal Length Path (MAMLP) algorithm allows a decoupling of the fluctuations with respect to the mass center of the system from the movement of the mass center itself. A Hamiltonian representation given by a factor graph is used and plays the role of a cost function. The present analysis pertains to 11 macroeconomic (ME) indicators, namely GDP (x1), Final Consumption Expenditure (x2), Gross Capital Formation (x3), Net Exports (x4), Consumer Price Index (y1), Rates of Interest of the Central Banks (y2), Labour Force (z1), Unemployment (z2), GDP/hour worked (z3), GDP/capita (w1) and the Gini coefficient (w2). The target group is the 15 EU countries, with data taken between 1995 and 2004. By two different methods (Bipartite Factor Graph Analysis and Correlation Matrix Eigensystem Analysis) it is found that the countries strongly correlated with respect to macroeconomic indicator fluctuations can be partitioned into stable clusters.
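
    The classic step this analysis builds on, turning correlations between indicator time series into distances and extracting a minimal spanning tree per moving window, can be sketched as follows, assuming numpy and networkx (the series here are synthetic, not the EU-15 data):

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      series = rng.standard_normal((15, 40))     # 15 countries x 40 periods (synthetic)

      window = 20
      for start in range(0, series.shape[1] - window + 1, 10):
          corr = np.corrcoef(series[:, start:start + window])
          # Standard correlation distance; clip guards against float round-off.
          dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
          mst = nx.minimum_spanning_tree(nx.from_numpy_array(dist))
          print(f"window at t={start}: total MST length = {mst.size(weight='weight'):.2f}")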

  12. Phytoplankton community in lake Ebony, Pantai Indah Kapuk, North Jakarta

    NASA Astrophysics Data System (ADS)

    Pratiwi, NTM; Ayu, IP; Hariyadi, S.; Mulyawati, D.; Iswantari, A.

    2018-05-01

    Lake Ebony is an ornamental lake in the coastal area of North Jakarta, located at 6°6’18”S-6°6’35”S and 106°44’39”E-106°44’56”E. The phytoplankton community in Lake Ebony is exposed to the high load of organic material the lake receives from domestic waste. A spatio-temporal observation at five sites was carried out to understand the spatial distribution of phytoplankton within each group of observation times and the succession of phytoplankton. Spatial analysis was carried out to map the distribution pattern of plankton using ArcGIS 10.1 with the IDW (Inverse Distance Weighted) interpolation method. Spatial clustering was determined by the Canberra Index. The succession of phytoplankton was described by Frontier succession graphs, the rate of succession (SDI), and the SIMI index. There were two clustered groups of sites. Based on the Frontier succession graphs, phytoplankton in Lake Ebony was at Stages 2 and 3, with the rate of succession ranging between 0.003 and 0.008 and SIMI values ranging from 0.68 to 0.97. The spatial distribution pattern of phytoplankton differed across the three groups of observation time, with a low rate of succession.
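
    For reference, IDW interpolation estimates a value at a location as a weighted mean of the sampled values, with weights w_i = 1/d_i^p for distance d_i and power p. A minimal sketch (the site coordinates and abundances below are made up, not the Lake Ebony data):

      import numpy as np

      def idw(x, y, xi, yi, zi, power=2.0):
          """Interpolate a value at (x, y) from samples (xi, yi) with values zi."""
          d = np.hypot(xi - x, yi - y)
          if np.any(d == 0):                      # exactly on a sample point
              return zi[np.argmin(d)]
          w = 1.0 / d ** power                    # inverse-distance weights
          return np.sum(w * zi) / np.sum(w)

      xi = np.array([0.0, 1.0, 0.0, 1.0, 0.5])    # five hypothetical sampling sites
      yi = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
      zi = np.array([120.0, 80.0, 95.0, 60.0, 150.0])   # e.g. cell abundance
      print(idw(0.25, 0.25, xi, yi, zi))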

  13. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    NASA Astrophysics Data System (ADS)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all available data. However, classic inverse modeling frameworks typically make use only of the information contained in in-situ field measurements to provide estimates of hydrogeological parameters, and such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, transferring ex-situ information from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework for representing such ex-situ information through the prior distribution and for combining it with in-situ information from the target site. In this study, we present a data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection: clustering methods select sites similar to the target site based on observable hydrogeological features. The second step is data assimilation: data from the selected similar sites are assimilated into the informative prior using a Bayesian hierarchical model, which accounts for inter-site variability and allows the assimilation of multiple types of site-specific data. We present the application and validation of these methods on an established database of hydrogeological parameters. Data and methods are implemented as an open-source R package, facilitating their use by other practitioners.
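
    A simplified sketch of the two-step idea, assuming scikit-learn and numpy: step one clusters sites on observable features and keeps the cluster containing the target site; step two pools the retained sites' parameter values into a prior. A plain normal fit stands in for the full Bayesian hierarchical model, and all data below are synthetic.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      features = rng.standard_normal((30, 3))      # 30 sites x 3 observable features
      log_K = rng.normal(-4.0, 1.5, size=30)       # e.g. log hydraulic conductivity per site
      target = np.array([[0.2, -0.1, 0.3]])        # features of the target site

      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
      similar = km.labels_ == km.predict(target)[0]     # step 1: similar-site selection

      # Step 2 (stand-in): fit a normal "prior" to the similar sites' parameters.
      mu, sigma = log_K[similar].mean(), log_K[similar].std(ddof=1)
      print(f"prior: log K ~ Normal({mu:.2f}, {sigma:.2f}^2) from {similar.sum()} sites")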

  14. Listing triangles in expected linear time on a class of power law graphs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann

    Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis of the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM, each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self-loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has a finite 4/3 moment and no vertex has degree larger than √n. In fact, we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus, for this class of power law graphs with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range relevant for the degree distribution of the World-Wide Web.
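
    The bucket algorithm described above is short enough to state directly. A Python sketch (node labels are illustrative):

      # Cohen's triangle listing: each edge goes to the bucket of its lower-degree
      # endpoint (ties broken by node id); each node then tests the edge pairs in
      # its bucket for the adjacency that completes a triangle.
      from collections import defaultdict
      from itertools import combinations

      def list_triangles(edges):
          adj = defaultdict(set)
          for u, v in edges:
              adj[u].add(v)
              adj[v].add(u)

          rank = lambda x: (len(adj[x]), x)          # degree, then id as tie-breaker
          buckets = defaultdict(list)
          for u, v in edges:
              buckets[u if rank(u) <= rank(v) else v].append((u, v))

          triangles = set()
          for node, bucket in buckets.items():
              for e1, e2 in combinations(bucket, 2):
                  x, y = (set(e1) | set(e2)) - {node}   # the two non-shared endpoints
                  if y in adj[x]:                       # adjacency completes the triangle
                      triangles.add(frozenset((node, x, y)))
          return triangles

      print(list_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))  # {frozenset({1, 2, 3})}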

  15. RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT.

    PubMed

    Carlis, John; Bruso, Kelsey

    2012-03-01

    Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis in which we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT-predicted K and the Bayesian information criterion (BIC)-predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing.
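
    The abstract does not spell out the RSQRT formula itself. Purely as an illustration, and as an assumption rather than the authors' published rule, one literal reading of the name ("root of the square root") predicts K as the fourth root of the item count:

      # Assumed reading of RSQRT, for illustration only: K ~ n**(1/4).
      def rsqrt_k(n: int) -> int:
          return max(2, round(n ** 0.25))

      for n in (100, 10_000, 1_000_000):
          print(n, "->", rsqrt_k(n))   # 100 -> 3, 10000 -> 10, 1000000 -> 32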

  16. RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT

    PubMed Central

    Bruso, Kelsey

    2012-01-01

    Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis in which we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT-predicted K and the Bayesian information criterion (BIC)-predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing. PMID:22773923

  17. Simultaneous Force Regression and Movement Classification of Fingers via Surface EMG within a Unified Bayesian Framework.

    PubMed

    Baldacchino, Tara; Jacobs, William R; Anderson, Sean R; Worden, Keith; Rowson, Jennifer

    2018-01-01

    This contribution presents a novel methodology for myoelectric control using surface electromyographic (sEMG) signals recorded during finger movements. A multivariate Bayesian mixture of experts (MoE) model is introduced which provides a powerful method for modeling force regression at the fingertips, while also performing finger movement classification as a by-product of the modeling algorithm. Bayesian inference of the model allows uncertainties to be naturally incorporated into the model structure. The method is tested using data from the publicly released NinaPro database, which consists of sEMG recordings of 6-degree-of-freedom force activations for 40 intact subjects. The results demonstrate that the MoE model achieves performance similar to the benchmark set by the authors of NinaPro for finger force regression. Additionally, inherent to the Bayesian framework is the inclusion of uncertainty in the model parameters, naturally providing confidence bounds on the force regression predictions. Furthermore, the integrated clustering step allows a detailed investigation into the classification of finger movements without incurring any extra computational effort. Finally, a systematic approach to assessing how many electrodes are needed for accurate control is performed via sensitivity analysis: a slight degradation in regression performance is observed for a reduced number of electrodes, while classification performance is unaffected.
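
    A simplified, non-Bayesian analogue of the mixture-of-experts idea, assuming scikit-learn: a Gaussian mixture gates sEMG feature vectors to per-component linear "experts" for force regression, and the gate's component assignment doubles as the movement class. The data below are synthetic stand-ins, not NinaPro recordings, and the uncertainty quantification of the Bayesian treatment is omitted.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      X = rng.standard_normal((600, 12))                          # 12 sEMG channels (synthetic)
      y = X @ rng.standard_normal(12) + rng.normal(0, 0.1, 600)   # fingertip force (synthetic)

      gate = GaussianMixture(n_components=3, random_state=0).fit(X)
      assign = gate.predict(X)                    # movement classification "for free"

      experts = {k: Ridge().fit(X[assign == k], y[assign == k]) for k in range(3)}

      def predict_force(x):
          k = gate.predict(x[None, :])[0]         # the gate picks the expert
          return k, experts[k].predict(x[None, :])[0]

      print(predict_force(X[0]))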

  18. Predicting protein complexes from weighted protein-protein interaction graphs with a novel unsupervised methodology: Evolutionary enhanced Markov clustering.

    PubMed

    Theofilatos, Konstantinos; Pavlopoulou, Niki; Papasavvas, Christoforos; Likothanassis, Spiros; Dimitrakopoulos, Christos; Georgopoulos, Efstratios; Moschopoulos, Charalampos; Mavroudi, Seferina

    2015-03-01

    Proteins are considered to be the most important individual components of biological systems, and they combine to form physical protein complexes that are responsible for certain molecular functions. Despite the wide availability of protein-protein interaction (PPI) information, not much is known about protein complexes. Experimental methods are limited in terms of time, efficiency, cost and performance. Existing computational methods have provided encouraging preliminary results, but they face certain disadvantages: they require parameter tuning, some cannot handle weighted PPI data, and others do not allow a protein to participate in more than one protein complex. In the present paper, we propose a new, fully unsupervised methodology for predicting protein complexes from weighted PPI graphs. The proposed methodology, called evolutionary enhanced Markov clustering (EE-MC), is a hybrid combination of an adaptive evolutionary algorithm and a state-of-the-art clustering algorithm named enhanced Markov clustering. EE-MC was compared with state-of-the-art methodologies on datasets from human and the yeast Saccharomyces cerevisiae. Using publicly available datasets, EE-MC outperformed existing methodologies (in some datasets the separation metric was increased by 10-20%). Moreover, when applied to new human datasets, its performance was encouraging for the prediction of protein complexes consisting of proteins with high functional similarity. Specifically, 5737 protein complexes were predicted, and 72.58% of them are enriched for at least one gene ontology (GO) function term. EE-MC is by design able to overcome intrinsic limitations of existing methodologies, such as their inability to handle weighted PPI networks, their constraint of assigning every protein to exactly one cluster, and the difficulties they face with parameter tuning. This was validated experimentally, and new potentially true human protein complexes were suggested as candidates for further experimental validation.
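
    For reference, the Markov clustering core that EE-MC wraps can be sketched in a few lines of numpy; the evolutionary layer that tunes its parameters is omitted, and the toy weighted graph below is hypothetical, not a real PPI network.

      import numpy as np

      def mcl(adjacency, inflation=2.0, iters=50):
          """Plain MCL: alternate expansion (matrix squaring) and inflation
          (elementwise power plus column normalization)."""
          m = adjacency + np.eye(len(adjacency))   # add self-loops
          m /= m.sum(axis=0, keepdims=True)        # make columns stochastic
          for _ in range(iters):
              m = m @ m                            # expansion
              m = m ** inflation                   # inflation
              m /= m.sum(axis=0, keepdims=True)
          # Rows that retain mass are attractors; their support gives the clusters.
          return sorted({tuple(np.flatnonzero(row > 1e-6)) for row in m if row.max() > 1e-6})

      # Two weighted triangles joined by one weak edge.
      w = np.zeros((6, 6))
      for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
          w[a, b] = w[b, a] = 0.9
      w[2, 3] = w[3, 2] = 0.1
      print(mcl(w))                                # expect [(0, 1, 2), (3, 4, 5)]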

  19. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets and metabolic networks. In this framework, performing community detection and analyzing the cluster evolution is a critical task. Here we propose a new model for this purpose, in which the smoothness of the clustering results over time is treated as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well while remaining consistent with the recent history. We also propose new model selection criteria to carefully choose the hyper-parameters of the model, which is crucial for achieving good performance. We successfully test the model on four toy problems and on a real-world network. We also compare our model with Evolutionary Spectral Clustering, a state-of-the-art algorithm for community detection in evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.
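
    A simplified sketch of the memory idea, assuming scikit-learn: blend the current affinity matrix with the previous snapshot's before spectral clustering, so partitions change smoothly over time. (The paper's LS-SVM formulation and model selection criteria are more principled; this conveys the gist only, on synthetic drifting data.)

      import numpy as np
      from sklearn.cluster import SpectralClustering
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(0)
      base = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
      snapshots = [base + drift for drift in (0.0, 0.1, 0.2)]   # slowly drifting snapshots

      alpha, prev = 0.3, None                                   # memory weight
      for t, X in enumerate(snapshots):
          W = rbf_kernel(X)
          if prev is not None:
              W = (1 - alpha) * W + alpha * prev                # memory effect
          labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                      random_state=0).fit_predict(W)
          prev = W
          print(f"t={t}: cluster sizes {np.bincount(labels)}")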

  20. Spatial clustering of average risks and risk trends in Bayesian disease mapping.

    PubMed

    Anderson, Craig; Lee, Duncan; Dean, Nema

    2017-01-01

    Spatiotemporal disease mapping focuses on estimating the spatial pattern in disease risk across a set of nonoverlapping areal units over a fixed period of time. The key aim of such research is to identify areas that have a high average level of disease risk or where disease risk is increasing over time, thus allowing public health interventions to be focused on those areas. Such aims are well suited to the statistical approach of clustering, and while much research has been done in a purely spatial setting, only a handful of approaches have focused on spatiotemporal clustering of disease risk. This paper therefore outlines a new modeling approach for clustering spatiotemporal disease risk data, clustering areas based on both their mean risk levels and the behavior of their temporal trends. The efficacy of the methodology is established by a simulation study and illustrated by a study of respiratory disease risk in Glasgow, Scotland.
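
    The clustering target described above can be illustrated with a plain two-stage stand-in, assuming numpy and scikit-learn: summarize each areal unit by its mean log-risk and the slope of its temporal trend, then cluster on those two features. (The paper does this within a single Bayesian model; the risks below are synthetic.)

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      years = np.arange(10)
      # 50 areas x 10 years of log relative risk: noise plus area-level offsets and trends.
      risk = (rng.normal(0, 0.2, (50, 10)) + rng.normal(0, 0.5, (50, 1))
              + rng.normal(0, 0.05, (50, 1)) * years)

      means = risk.mean(axis=1)
      trends = np.polyfit(years, risk.T, 1)[0]        # per-area slope
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
          np.column_stack([means, trends]))
      print(np.bincount(labels))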
