Adaptive sampling in behavioral surveys.
Thompson, S K
1997-01-01
Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
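The "increase sampling effort near positive finds" rule described above can be sketched as a toy adaptive cluster sample on a one-dimensional grid of cells. This is an illustrative sketch only (the helper name and grid setup are invented, and none of the survey literature's estimators are shown): cells in an initial random sample whose observed value meets a threshold trigger sampling of their geographic neighbors, recursively.

```python
import random

def adaptive_cluster_sample(values, threshold, n_initial, seed=0):
    """Adaptive cluster sampling on a 1-D grid of cells: initial random
    cells whose value meets the threshold pull their neighbors into the
    sample, and so on until no newly sampled cell qualifies."""
    rng = random.Random(seed)
    n = len(values)
    sampled = set(rng.sample(range(n), n_initial))
    frontier = [i for i in sampled if values[i] >= threshold]
    while frontier:
        i = frontier.pop()
        for j in (i - 1, i + 1):          # geographic neighbors
            if 0 <= j < n and j not in sampled:
                sampled.add(j)
                if values[j] >= threshold:
                    frontier.append(j)
    return sorted(sampled)

# A rare, clustered trait: only cells 5-7 are "positive".
values = [0, 0, 0, 0, 0, 3, 4, 2, 0, 0]
print(adaptive_cluster_sample(values, threshold=1, n_initial=3, seed=1))
```

If the initial sample touches the positive cluster, the whole cluster (plus its zero-valued boundary) ends up in the sample, which is exactly the adaptive behavior the abstract describes.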
Physical Samples Linked Data in Action
NASA Astrophysics Data System (ADS)
Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.
2017-12-01
Most data and metadata related to physical samples currently reside in isolated relational databases driven by diverse data models. The challenge of sharing, interchanging, and integrating data across these different relational databases motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), the SPARQL query language, and the Web Ontology Language (OWL). In the last few years, we have released four knowledge graphs concentrated on physical samples, covering the System for Earth Sample Registration (SESAR), the USGS National Geochemical Database (NGDC), the Ocean Biogeographic Information System (OBIS), and the EarthChem Database. Currently the four knowledge graphs contain over 12 million facts (triples) about objects of interest to the geoscience domain. Choosing appropriate domain ontologies for representing the context of the data is the core of the whole work. The GeoLink ontology, developed by the EarthCube GeoLink project, was used at the top level to represent common concepts such as person, organization, and cruise. The physical sample ontology developed by the Interdisciplinary Earth Data Alliance (IEDA) and the Darwin Core vocabulary were used at the second level to describe details of geological samples and biological diversity. We also focused on finding and building the best tool chains to support the whole life cycle of publishing our linked data, including information retrieval, linked data browsing, and data visualization. Currently, Morph, Virtuoso Server, LodView, LodLive, and YASGUI are employed for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Persistent digital identifiers are another main point we concentrated on.
Open Researcher & Contributor IDs (ORCIDs), International Geo Sample Numbers (IGSNs), Global Research Identifier Database (GRID) and other persistent identifiers were used to link different resources from various graphs with person, sample, organization, cruise, etc. This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
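The linking idea this record describes — shared persistent identifiers (IGSNs, ORCIDs, GRIDs) joining otherwise isolated graphs — can be sketched with plain subject-predicate-object triples. All identifiers and predicate names below are invented for illustration; they are not from the actual SESAR or NGDC graphs.

```python
# Two tiny "knowledge graphs" as sets of (subject, predicate, object)
# triples. A shared IGSN subject joins them when the sets are merged.
sesar = {
    ("igsn:ABC123", "collectedBy", "orcid:0000-0001"),
    ("igsn:ABC123", "sampleType", "basalt"),
}
ngdc = {
    ("igsn:ABC123", "analyzedAt", "grid:usgs.1"),
    ("igsn:XYZ999", "sampleType", "granite"),
}

merged = sesar | ngdc  # shared IGSNs integrate the graphs automatically

def describe(graph, subject):
    """All predicate/object pairs for one subject (a SPARQL-like lookup)."""
    return sorted((p, o) for s, p, o in graph if s == subject)

# Three facts about one sample, drawn from two sources, joined on the IGSN.
print(describe(merged, "igsn:ABC123"))
```

In a real deployment the same join happens inside an RDF triplestore via SPARQL; the set union here is only a stand-in for that machinery.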
A Collection of Features for Semantic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliassi-Rad, T; Fodor, I K; Gallagher, B
2007-05-02
Semantic graphs are commonly used to represent data from one or more data sources. Such graphs extend traditional graphs by imposing types on both nodes and links. This type information defines permissible links among specified nodes and can be represented as a graph commonly referred to as an ontology or schema graph. Figure 1 depicts an ontology graph for data from the National Association of Securities Dealers. Each node type and link type may also have a list of attributes. To capture the increased complexity of semantic graphs, concepts derived for standard graphs have to be extended. This document briefly explains features commonly used to characterize graphs, and their extensions to semantic graphs. This document is divided into two sections. Section 2 contains the feature descriptions for static graphs. Section 3 extends the features to semantic graphs that vary over time.
Durand, Patrick; Labarre, Laurent; Meil, Alain; Divo, Jean-Louis; Vandenbrouck, Yves; Viari, Alain; Wojcik, Jérôme
2006-01-17
A large variety of biological data can be represented by graphs. These graphs can be constructed from heterogeneous data coming from genomic and post-genomic technologies, but there is still a need for tools for exploring and analysing such graphs. This paper describes GenoLink, a software platform for the graphical querying and exploration of graphs. GenoLink provides a generic framework for representing and querying data graphs. This framework provides a graph data structure, a graph query engine for retrieving sub-graphs from the entire data graph, and several graphical interfaces to express such queries and to further explore their results. A query consists of a graph pattern with constraints attached to the vertices and edges. A query result is the set of all sub-graphs of the entire data graph that are isomorphic to the pattern and satisfy the constraints. The graph data structure does not rely upon any particular data model but can dynamically accommodate any user-supplied data model. However, for genomic and post-genomic applications, we provide a default data model and several parsers for the most popular data sources. GenoLink does not require any programming skill, since all operations on graphs and the analysis of the results can be carried out graphically through several dedicated graphical interfaces. GenoLink is a generic and interactive tool that allows biologists to graphically explore various sources of information. GenoLink is distributed either as a standalone application or as a component of the Genostar/Iogma platform. Both distributions are free for academic research and teaching purposes and can be requested at academy@genostar.com. A commercial licence can be obtained by for-profit companies at info@genostar.com. See also http://www.genostar.org.
Durand, Patrick; Labarre, Laurent; Meil, Alain; Divo, Jean-Louis; Vandenbrouck, Yves; Viari, Alain; Wojcik, Jérôme
2006-01-01
Background A large variety of biological data can be represented by graphs. These graphs can be constructed from heterogeneous data coming from genomic and post-genomic technologies, but there is still a need for tools for exploring and analysing such graphs. This paper describes GenoLink, a software platform for the graphical querying and exploration of graphs. Results GenoLink provides a generic framework for representing and querying data graphs. This framework provides a graph data structure, a graph query engine for retrieving sub-graphs from the entire data graph, and several graphical interfaces to express such queries and to further explore their results. A query consists of a graph pattern with constraints attached to the vertices and edges. A query result is the set of all sub-graphs of the entire data graph that are isomorphic to the pattern and satisfy the constraints. The graph data structure does not rely upon any particular data model but can dynamically accommodate any user-supplied data model. However, for genomic and post-genomic applications, we provide a default data model and several parsers for the most popular data sources. GenoLink does not require any programming skill, since all operations on graphs and the analysis of the results can be carried out graphically through several dedicated graphical interfaces. Conclusion GenoLink is a generic and interactive tool that allows biologists to graphically explore various sources of information. GenoLink is distributed either as a standalone application or as a component of the Genostar/Iogma platform. Both distributions are free for academic research and teaching purposes and can be requested at academy@genostar.com. A commercial licence can be obtained by for-profit companies at info@genostar.com. PMID:16417636
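The query model both GenoLink records describe — a graph pattern with constraints on vertices and edges, answered by every isomorphic sub-graph of the data graph — can be sketched with a naive enumeration. The toy data model below (a gene/protein attribute dict) is invented for illustration; GenoLink's actual engine and data model are not shown in the abstract.

```python
from itertools import permutations

def match(data_nodes, data_edges, pat_nodes, pat_edges, constraints):
    """Naive pattern matching: return injective mappings from pattern
    vertices to data vertices that preserve edges and satisfy each
    per-vertex constraint. Exponential; toy scale only."""
    results = []
    for combo in permutations(data_nodes, len(pat_nodes)):
        m = dict(zip(pat_nodes, combo))
        if all(constraints[v](m[v]) for v in pat_nodes) and \
           all((m[a], m[b]) in data_edges for a, b in pat_edges):
            results.append(m)
    return results

# Toy data graph: genes link to the proteins they encode.
attrs = {"g1": "gene", "p1": "protein", "c1": "complex", "g2": "gene"}
edges = {("g1", "p1"), ("p1", "c1"), ("g2", "p1")}

# Pattern: any gene X linked to any protein Y.
hits = match(attrs, edges, ["X", "Y"], [("X", "Y")],
             {"X": lambda n: attrs[n] == "gene",
              "Y": lambda n: attrs[n] == "protein"})
print(sorted((m["X"], m["Y"]) for m in hits))  # [('g1', 'p1'), ('g2', 'p1')]
```

Real sub-graph matchers prune via backtracking rather than enumerating all vertex tuples, but the result set — all constrained isomorphic embeddings — is the same.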
The combination of direct and paired link graphs can boost repetitive genome assembly
Shi, Wenyu; Ji, Peifeng
2017-01-01
Abstract Currently, most paired-link-based scaffolding algorithms intrinsically mask the sequences between two linked contigs and bypass the direct link information embedded in the original de Bruijn assembly graph. This disadvantage substantially complicates the scaffolding process and leads to an inability to resolve repetitive contig assembly. Here we present a novel algorithm, inGAP-sf, for effectively generating high-quality and continuous scaffolds. inGAP-sf achieves this with a new strategy based on the combination of direct link and paired link graphs, in which the direct link is used to increase graph connectivity and decrease graph complexity, while the paired link is employed to supervise the traversal of the direct link graph. This advantage greatly facilitates the assembly of short-repeat-enriched regions. Moreover, a new comprehensive decision model is developed to eliminate the noise routes accompanying the introduced direct links. Through extensive evaluations on both simulated and real datasets, we demonstrate that inGAP-sf outperforms most genome scaffolding algorithms by generating more accurate and continuous assemblies, especially for short repetitive regions. PMID:27924003
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived: an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavare (1996), computational estimates of expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on the sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus, depending on a mutation-selection-recombination balance.
Proving relations between modular graph functions
NASA Astrophysics Data System (ADS)
Basu, Anirban
2016-12-01
We consider modular graph functions that arise in the low energy expansion of the four graviton amplitude in type II string theory. The vertices of these graphs are the positions of insertions of vertex operators on the toroidal worldsheet, while the links are the scalar Green functions connecting the vertices. Graphs with four and five links satisfy several non-trivial relations, which have been proved recently. We prove these relations by using elementary properties of Green functions and the details of the graphs. We also prove a relation between modular graph functions with six links.
Knowledge Representation Issues in Semantic Graphs for Relationship Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barthelemy, M; Chow, E; Eliassi-Rad, T
2005-02-02
An important task for Homeland Security is the prediction of threat vulnerabilities, such as through the detection of relationships between seemingly disjoint entities. A structure used for this task is a ''semantic graph'', also known as a ''relational data graph'' or an ''attributed relational graph''. These graphs encode relationships as typed links between a pair of typed nodes. Indeed, semantic graphs are very similar to semantic networks used in AI. The node and link types are related through an ontology graph (also known as a schema). Furthermore, each node has a set of attributes associated with it (e.g., ''age'' may be an attribute of a node of type ''person''). Unfortunately, the selection of types and attributes for both nodes and links depends on human expertise and is somewhat subjective and even arbitrary. This subjectiveness introduces biases into any algorithm that operates on semantic graphs. Here, we raise some knowledge representation issues for semantic graphs and provide some possible solutions using recently developed ideas in the field of complex networks. In particular, we use the concept of transitivity to evaluate the relevance of individual links in the semantic graph for detecting relationships. We also propose new statistical measures for semantic graphs and illustrate these semantic measures on graphs constructed from movies and terrorism data.
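The transitivity concept mentioned in this abstract — judging a link partly by whether a node's neighbors are themselves linked — reduces, in the untyped case, to the local clustering coefficient. A minimal sketch (toy graph; the paper's semantic, type-aware measures are not reproduced):

```python
def clustering(adj, v):
    """Local clustering coefficient of node v: the fraction of pairs
    of v's neighbors that are themselves linked."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

# Toy undirected graph (types omitted): a triangle plus a pendant node.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(clustering(adj, "c"))  # 1/3: of c's neighbor pairs, only (a, b) is linked
```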
Developing and evaluating Quilts for the depiction of large layered graphs.
Bae, Juhee; Watson, Ben
2011-12-01
Traditional layered graph depictions such as flow charts are in wide use. Yet as graphs grow more complex, these depictions can become difficult to understand. Quilts are matrix-based depictions for layered graphs designed to address this problem. In this research, we first improve Quilts by developing three design alternatives, and then compare the best of these alternatives to better-known node-link and matrix depictions. A primary weakness in Quilts is their depiction of skip links, links that do not simply connect to a succeeding layer. Therefore in our first study, we compare Quilts using color-only, text-only, and mixed (color and text) skip link depictions, finding that path finding with the color-only depiction is significantly slower and less accurate, and that in certain cases, the mixed depiction offers an advantage over the text-only depiction. In our second study, we compare Quilts using the mixed depiction to node-link diagrams and centered matrices. Overall results show that users can find paths through graphs significantly faster with Quilts (46.6 secs) than with node-link (58.3 secs) or matrix (71.2 secs) diagrams. This speed advantage is still greater in large graphs (e.g. in 200 node graphs, 55.4 secs vs. 71.1 secs for node-link and 84.2 secs for matrix depictions). © 2011 IEEE
Exact sampling of graphs with prescribed degree correlations
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Del Genio, Charo I.; Erdős, Péter L.; Miklós, István; Toroczkai, Zoltán
2015-08-01
Many real-world networks exhibit correlations between the node degrees. For instance, in social networks nodes tend to connect to nodes of similar degree and conversely, in biological and technological networks, high-degree nodes tend to be linked with low-degree nodes. Degree correlations also affect the dynamics of processes supported by a network structure, such as the spread of opinions or epidemics. The proper modelling of these systems, i.e., without uncontrolled biases, requires the sampling of networks with a specified set of constraints. We present a solution to the sampling problem when the constraints imposed are the degree correlations. In particular, we develop an exact method to construct and sample graphs with a specified joint-degree matrix, which is a matrix providing the number of edges between all the sets of nodes of a given degree, for all degrees, thus completely specifying all pairwise degree correlations, and additionally, the degree sequence itself. Our algorithm always produces independent samples without backtracking. The complexity of the graph construction algorithm is {O}({NM}) where N is the number of nodes and M is the number of edges.
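The constraint object itself — the joint-degree matrix — is straightforward to compute from a given graph, and the sketch below shows that construction on a toy edge list. The paper's exact-sampling algorithm for graphs realizing a prescribed matrix is substantially more involved and is not reproduced here.

```python
from collections import Counter

def joint_degree_matrix(edges):
    """Entry (j, k) counts the edges whose endpoint degrees are j and k
    (with j <= k), fully specifying pairwise degree correlations."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    jdm = Counter()
    for u, v in edges:
        j, k = sorted((deg[u], deg[v]))
        jdm[(j, k)] += 1
    return dict(jdm)

# A path a-b-c-d: end nodes have degree 1, inner nodes degree 2.
print(joint_degree_matrix([("a", "b"), ("b", "c"), ("c", "d")]))
# {(1, 2): 2, (2, 2): 1}
```

Note that the joint-degree matrix also determines the degree sequence (row/column sums), which is why fixing it constrains the ensemble so tightly.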
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique to learn models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
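A heavily simplified BPR-style sketch of the ranking idea: learn embeddings so that observed links score above sampled non-links. Everything here is an assumption for illustration (plain dot-product scores, one shared embedding space rather than per-predicate models, an invented toy graph); it is not the paper's Latent Feature Embedding model.

```python
import math
import random

def bpr_train(nodes, pos_edges, dim=4, steps=2000, lr=0.05, seed=0):
    """Toy Bayesian Personalized Ranking: stochastic updates push the
    score of each observed link above that of a random non-link."""
    rng = random.Random(seed)
    emb = {n: [rng.gauss(0, 0.1) for _ in range(dim)] for n in nodes}
    pos = list(pos_edges)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    for _ in range(steps):
        u, i = rng.choice(pos)           # an observed link (u, i)
        j = rng.choice(nodes)            # a sampled negative target j
        if (u, j) in pos_edges:
            continue
        x = dot(emb[u], emb[i]) - dot(emb[u], emb[j])
        g = 1.0 / (1.0 + math.exp(x))    # gradient of -log sigmoid(x)
        for d in range(dim):
            eu, ei, ej = emb[u][d], emb[i][d], emb[j][d]
            emb[u][d] += lr * g * (ei - ej)
            emb[i][d] += lr * g * eu
            emb[j][d] -= lr * g * eu
    return emb

nodes = ["a", "b", "c", "d"]
pos = {("a", "b"), ("b", "c"), ("a", "c")}
emb = bpr_train(nodes, pos)
score = lambda u, v: sum(x * y for x, y in zip(emb[u], emb[v]))
print(score("a", "b") > score("a", "d"))  # observed link ranked higher
```

The pairwise-ranking loss, rather than pointwise prediction, is the defining BPR ingredient: only the *order* of scores between linked and unlinked pairs is optimized.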
Overlapping community detection based on link graph using distance dynamics
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Jing; Cai, Li-Jun
2018-01-01
The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
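The first phase of L-Attractor — transforming the original graph into a link graph — is the classical line-graph construction: each original edge becomes a node, and two link-nodes are adjacent when their edges share an endpoint. A minimal sketch (the distance-dynamics phases are not reproduced):

```python
from itertools import combinations

def line_graph(edges):
    """Build the link graph: original edges become nodes; link-nodes
    are adjacent when the underlying edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):              # shared endpoint
            adj[e].add(f)
            adj[f].add(e)
    return adj

# Triangle a-b-c plus a pendant edge c-d.
lg = line_graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
print(sorted(lg[("c", "d")]))  # [('a', 'c'), ('b', 'c')]
```

Because a node of the original graph appears in several link-nodes, disjoint communities found on the link graph map back to *overlapping* communities on the original graph, which is the point of the construction.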
Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings.
Krenn, Mario; Gu, Xuemei; Zeilinger, Anton
2017-12-15
We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is in the #P-complete complexity class, thus it cannot be done efficiently. To strengthen the link further, theorems from graph theory-such as Hall's marriage problem-are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and how to potentially simulate properties of graphs and networks with quantum experiments (such as critical exponents and phase transitions).
Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings
NASA Astrophysics Data System (ADS)
Krenn, Mario; Gu, Xuemei; Zeilinger, Anton
2017-12-01
We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is in the #P-complete complexity class, thus it cannot be done efficiently. To strengthen the link further, theorems from graph theory—such as Hall's marriage problem—are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and how to potentially simulate properties of graphs and networks with quantum experiments (such as critical exponents and phase transitions).
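Counting the terms of the emerging superposition reduces to enumerating the perfect matchings of the graph. A brute-force recursive sketch, practical only at toy scale (consistent with the #P-completeness noted in the abstract):

```python
def perfect_matchings(nodes, edges):
    """Enumerate all perfect matchings of an undirected graph; each
    matching corresponds to one term of the quantum superposition."""
    if not nodes:
        return [[]]
    v = nodes[0]                         # v must be matched by some edge
    out = []
    for e in edges:
        if v in e:
            w = e[0] if e[1] == v else e[1]
            if w in nodes:
                rest = [n for n in nodes if n not in e]
                for m in perfect_matchings(rest, edges):
                    out.append([tuple(sorted(e))] + m)
    return out

# A 4-cycle a-b-c-d has exactly two perfect matchings.
ms = perfect_matchings(["a", "b", "c", "d"],
                       [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])
print(sorted(sorted(m) for m in ms))
# [[('a', 'b'), ('c', 'd')], [('a', 'd'), ('b', 'c')]]
```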
Efficient quantum walk on a quantum processor
Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.
2016-01-01
The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
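Two of the three sampling categories named in this record can be sketched in a few lines each; the toy graph and helper names are invented, and the evaluation methodology itself is not reproduced.

```python
import random

def node_sample(adj, k, rng):
    """Node sampling: keep k uniformly chosen nodes and the edges
    induced among them."""
    keep = set(rng.sample(sorted(adj), k))
    return {v: adj[v] & keep for v in keep}

def walk_sample(adj, k, rng):
    """Traversal-based sampling: random-walk the graph until k
    distinct nodes have been visited, then take the induced subgraph."""
    v = rng.choice(sorted(adj))
    seen = {v}
    while len(seen) < k:
        v = rng.choice(sorted(adj[v]))
        seen.add(v)
    return {v: adj[v] & seen for v in seen}

# Toy graph: a ring of 6 nodes with one chord.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
adj[0].add(3)
adj[3].add(0)
rng = random.Random(42)
print(sorted(node_sample(adj, 3, rng)))
print(sorted(walk_sample(adj, 3, rng)))
```

Edge sampling (keep a uniform subset of edges) is the third category; which of the three best preserves a given statistic is exactly the question the benchmark study addresses.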
Salem, Saeed; Ozcaglar, Cagri
2014-01-01
Advances in genomic technologies have enabled the accumulation of vast amounts of genomic data, including gene expression data for multiple species under various biological and environmental conditions. Integration of these gene expression datasets is a promising strategy to alleviate the challenges of protein functional annotation and biological module discovery based on a single gene expression dataset, which suffers from spurious coexpression. We propose a joint mining algorithm that constructs a weighted hybrid similarity graph whose nodes are the coexpression links. The weight of an edge between two coexpression links in this hybrid graph is a linear combination of the topological similarities and co-appearance similarities of the corresponding two coexpression links. Clustering the weighted hybrid similarity graph yields recurrent coexpression link clusters (modules). Experimental results on human gene expression datasets show that the reported modules are functionally homogeneous, as evidenced by their enrichment with biological process GO terms and KEGG pathways.
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph theoretic methods are applied to compute the reliability of several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. Ring networks and braided networks are compared, covering both the case where only the links fail and the case where both links and nodes fail. Two different failure modes for the links are considered: in one, the link no longer carries messages; in the other, the link delivers incorrect messages. Link-redundancy and path-redundancy are described and compared as methods of achieving reliability. All the computations are carried out by means of a fault tree program.
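The underlying reliability question — does the surviving network still connect all nodes when links fail independently — can also be estimated directly by Monte Carlo simulation. This is a sketch of that simpler alternative (invented toy ring, loss-of-message failure mode only); the paper itself uses exact fault-tree computation, not simulation.

```python
import random

def connected(nodes, edges):
    """Check connectivity of an undirected graph via depth-first search."""
    nodes = list(nodes)
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    stack, seen = [nodes[0]], set()
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def reliability(nodes, edges, p_fail, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that the surviving
    links still connect all nodes, links failing independently."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        alive = [e for e in edges if rng.random() > p_fail]
        ok += connected(nodes, alive)
    return ok / trials

# 4-node ring: survives any single link failure but no double failure.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(round(reliability(range(4), ring, p_fail=0.1), 2))
```

For the 4-ring the exact value is 0.9^4 + 4(0.9^3)(0.1) ≈ 0.948, so the estimate should land close to that.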
2014-01-01
Background Advances in genomic technologies have enabled the accumulation of vast amounts of genomic data, including gene expression data for multiple species under various biological and environmental conditions. Integration of these gene expression datasets is a promising strategy to alleviate the challenges of protein functional annotation and biological module discovery based on a single gene expression dataset, which suffers from spurious coexpression. Results We propose a joint mining algorithm that constructs a weighted hybrid similarity graph whose nodes are the coexpression links. The weight of an edge between two coexpression links in this hybrid graph is a linear combination of the topological similarities and co-appearance similarities of the corresponding two coexpression links. Clustering the weighted hybrid similarity graph yields recurrent coexpression link clusters (modules). Experimental results on human gene expression datasets show that the reported modules are functionally homogeneous, as evidenced by their enrichment with biological process GO terms and KEGG pathways. PMID:25221624
Sampled-data consensus in switching networks of integrators based on edge events
NASA Astrophysics Data System (ADS)
Xiao, Feng; Meng, Xiangyu; Chen, Tongwen
2015-02-01
This paper investigates the event-driven sampled-data consensus in switching networks of multiple integrators and studies both the bidirectional interaction and leader-following passive reaction topologies in a unified framework. In these topologies, each information link is modelled by an edge of the information graph and assigned a sequence of edge events, which activate the mutual data sampling and controller updates of the two linked agents. Two kinds of edge-event-detecting rules are proposed for the general asynchronous data-sampling case and the synchronous periodic event-detecting case. They are implemented in a distributed fashion, and their effectiveness in reducing communication costs and solving consensus problems under a jointly connected topology condition is shown by both theoretical analysis and simulation examples.
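A heavily simplified single-integrator sketch of edge-event-triggered sampling: each edge keeps the last states it sampled from its two agents and resamples only when the true states have drifted past a threshold. The fixed topology, step size, and drift rule below are invented for illustration; the paper's detection rules and switching-topology analysis are considerably more refined.

```python
def edge_event_consensus(x, edges, h=0.2, thresh=0.05, steps=200):
    """Toy edge-event consensus for integrator agents: control uses
    per-edge sampled states, refreshed only on an edge event."""
    last = {e: (x[e[0]], x[e[1]]) for e in edges}
    for _ in range(steps):
        u = [0.0] * len(x)
        for (i, j) in edges:
            xi, xj = last[(i, j)]
            # edge event: resample when true states drift from samples
            if abs(x[i] - xi) + abs(x[j] - xj) > thresh:
                last[(i, j)] = (x[i], x[j])
                xi, xj = x[i], x[j]
            u[i] += xj - xi
            u[j] += xi - xj
        x = [xi + h * ui for xi, ui in zip(x, u)]
    return x

# Three agents on a line graph 0-1-2; states should draw together.
x = edge_event_consensus([1.0, 0.0, -1.0], [(0, 1), (1, 2)])
print(max(x) - min(x) < 0.2)  # states converge near agreement
```

The communication saving comes from the event rule: between events, neighbors exchange nothing and the controller runs on stale samples.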
Building Scalable Knowledge Graphs for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick; Zhang, Jia; Duan, Xiaoyi; Miller, J. J.; Bugbee, Kaylin; Christopher, Sundar; Freitag, Brian
2017-01-01
Knowledge Graphs link key entities in a specific domain with other entities via relationships. From these relationships, researchers can query knowledge graphs for probabilistic recommendations to infer new knowledge. Scientific papers are an untapped resource which knowledge graphs could leverage to accelerate research discovery. Goal: Develop an end-to-end (semi) automated methodology for constructing Knowledge Graphs for Earth Science.
Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.
Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure that the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph, and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases, and present empirical results showing that the degradation follows a logistic function.
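Triangle counts, one of the sampling metrics named in this record, can be computed per node as a sampling score: nodes participating in many triangles anchor dense neighborhoods. A minimal sketch on a toy graph (the paper's sampling variations are not reproduced):

```python
from itertools import combinations

def triangle_counts(adj):
    """Number of triangles each node participates in, computed by
    checking whether pairs of a node's neighbors are linked."""
    t = {v: 0 for v in adj}
    for v in adj:
        for a, b in combinations(sorted(adj[v]), 2):
            if b in adj[a]:
                t[v] += 1
    return t

adj = {"a": {"b", "c"}, "b": {"a", "c", "d"},
       "c": {"a", "b"}, "d": {"b"}}
print(triangle_counts(adj))  # {'a': 1, 'b': 1, 'c': 1, 'd': 0}
```

Sampling by such a score keeps the dense, structure-bearing parts of a heterogeneous graph, which is consistent with the paper's finding that triangle-based metrics beat plain degree.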
Offdiagonal complexity: A computationally quick complexity measure for graphs and networks
NASA Astrophysics Data System (ADS)
Claussen, Jens Christian
2007-02-01
A vast variety of biological, social, and economic networks show topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated by the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond the link distribution, cluster coefficient, and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While OdC is zero both for regular lattices and fully connected networks, it takes a moderately low value for a random graph and shows high values for apparently complex structures such as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and to randomly rewired surrogates.
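One possible reading of the construction is sketched below: build the node-degree cross-distribution of linked pairs, collapse it along its diagonals (the degree difference of an edge's endpoints), and take the entropy. This is an interpretation for illustration only; the paper's exact normalization and diagonal weighting may differ.

```python
import math
from collections import Counter

def offdiagonal_complexity(edges):
    """Entropy of the degree-difference distribution over edges: one
    simplified reading of the OdC construction (not exact to paper)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    diag = Counter(abs(deg[u] - deg[v]) for u, v in edges)
    total = sum(diag.values())
    return -sum((c / total) * math.log(c / total) for c in diag.values())

# A regular ring (all degrees equal) puts all mass on the main
# diagonal, so this measure is zero, matching the abstract's claim
# that OdC vanishes for regular lattices.
ring = [(i, (i + 1) % 5) for i in range(5)]
print(offdiagonal_complexity(ring))
```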
From Many Records to One Graph: Heterogeneity Conflicts in the Linked Data Restructuring Cycle
ERIC Educational Resources Information Center
Tallerås, Kim
2013-01-01
Introduction: During the last couple of years the library community has developed a number of comprehensive metadata standardization projects inspired by the idea of linked data, such as the BIBFRAME model. Linked data is a set of best practice principles of publishing and exposing data on the Web utilizing a graph based data model powered with…
Dynamic graph system for a semantic database
Mizell, David
2016-04-12
A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
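The core idea, exposing a triple store as a compressed sparse adjacency matrix, can be sketched as follows (a toy CSR-style layout; the identifiers and encoding are illustrative, not the patented format):

```python
# Each triple (subject, predicate, object) is one link: subject and object
# identify the row and column, and the predicate is the element value.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksWith", "carol"),
]

# Assign dense integer ids to nodes.
nodes = sorted({t[0] for t in triples} | {t[2] for t in triples})
idx = {n: i for i, n in enumerate(nodes)}

# Build a CSR-like compressed form: row pointers, column ids, values.
by_row = sorted((idx[s], idx[o], p) for s, p, o in triples)
row_ptr = [0] * (len(nodes) + 1)
col, val = [], []
for r, c, p in by_row:
    row_ptr[r + 1] += 1
    col.append(c)
    val.append(p)
for i in range(len(nodes)):
    row_ptr[i + 1] += row_ptr[i]  # prefix sums give row offsets

def links_from(node):
    """All (neighbor, predicate) pairs for one row of the matrix."""
    i = idx[node]
    return [(nodes[col[j]], val[j]) for j in range(row_ptr[i], row_ptr[i + 1])]

print(links_from("alice"))  # [('bob', 'knows'), ('carol', 'worksWith')]
```

Only nonzero entries are stored, so the application sees matrix semantics while the storage stays proportional to the number of triples.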
A Research Graph dataset for connecting research data repositories using RD-Switchboard.
Aryani, Amir; Poblet, Marta; Unsworth, Kathryn; Wang, Jingbo; Evans, Ben; Devaraju, Anusuriya; Hausstein, Brigitte; Klas, Claus-Peter; Zapilko, Benjamin; Kaplun, Samuele
2018-05-29
This paper describes the open access graph dataset that shows the connections between Dryad, CERN, ANDS and other international data repositories to publications and grants across multiple research data infrastructures. The graph dataset was created using the Research Graph data model and the Research Data Switchboard (RD-Switchboard), a collaborative project by the Research Data Alliance DDRI Working Group (DDRI WG) with the aim to discover and connect the related research datasets based on publication co-authorship or jointly funded grants. The graph dataset allows researchers to trace and follow the paths to understanding a body of work. By mapping the links between research datasets and related resources, the graph dataset improves both their discovery and visibility, while avoiding duplicate efforts in data creation. Ultimately, the linked datasets may spur novel ideas, facilitate reproducibility and re-use in new applications, stimulate combinatorial creativity, and foster collaborations across institutions.
Architecture Aware Partitioning Algorithms
2006-01-19
follows: Given a graph G = (V, E), where V is the set of vertices, n = |V| is the number of vertices, and E is the set of edges in the graph, partition the...communication link l(p_i, p_j) is associated with a graph edge weight e*(p_i, p_j) that represents the communication cost per unit of communication between...one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e*(p_i, p_j
Linking of the BENSON graph-plotter with the Elektronika-100I computer
NASA Technical Reports Server (NTRS)
Valtts, I. Y.; Nilolaev, N. Y.; Popov, M. V.; Soglasnov, V. A.
1980-01-01
A device, developed by the Institute of Space Research of the Academy of Sciences of the USSR, for linking the Elektronika-100I computer with the BENSON graph-plotter is described. Programs are compiled which provide display of graphic and alphanumeric information. Instructions for their utilization are given.
Query optimization for graph analytics on linked data using SPARQL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Seokyong; Lee, Sangkeun; Lim, Seung -Hwan
2015-07-01
Triplestores that support query languages such as SPARQL are emerging as the preferred and scalable solution to represent data and meta-data as massive heterogeneous graphs using Semantic Web standards. With increasing adoption, the desire to conduct graph-theoretic mining and exploratory analysis has also increased. Addressing that desire, this paper presents a solution that is the marriage of Graph Theory and the Semantic Web. We present software that can analyze Linked Data using graph operations such as counting triangles, finding eccentricity, testing connectedness, and computing PageRank directly on triple stores via the SPARQL interface. We describe the process of optimizing performance of the SPARQL-based implementation of such popular graph algorithms by reducing the space overhead, simplifying iterative complexity and removing redundant computations by understanding query plans. Our optimized approach shows significant performance gains on triplestores hosted on stand-alone workstations as well as hardware-optimized scalable supercomputers such as the Cray XMT.
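To illustrate the kind of graph operation involved, counting triangles reduces to a three-way self-join on the link predicate, which is exactly what a SPARQL basic graph pattern expresses. A minimal in-memory analogue (predicate name and data invented; the paper's actual queries and optimizations are not reproduced here):

```python
# Edges as triples with a single "linksTo" predicate, as a triplestore would
# hold them. SPARQL analogue of the join below:
#   SELECT (COUNT(*) AS ?n)
#   WHERE { ?x :linksTo ?y . ?y :linksTo ?z . ?z :linksTo ?x . }
triples = {
    ("a", "linksTo", "b"), ("b", "linksTo", "c"), ("c", "linksTo", "a"),
    ("c", "linksTo", "d"),
}

def count_directed_triangles(triples):
    edges = {(s, o) for s, p, o in triples if p == "linksTo"}
    hits = sum(1 for x, y in edges
               for z in (o for s, o in edges if s == y)
               if (z, x) in edges)
    return hits // 3  # each directed 3-cycle is matched from all 3 rotations

print(count_directed_triangles(triples))  # 1
```

The division by three mirrors the kind of redundant computation (symmetric rotations of the same match) that a query optimizer can remove.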
A Development Testbed for ALPS-Based Systems
1988-10-01
allotted to the application because of size or power constraints). Given an underlying support ALPS architecture such as the d-ALPS architecture, a...resource on which it is assigned at runtime. A second representation problem is that most graph analysis algorithms treat either graphs with weighted links...subtask) associated with it but is treated like other links. In d-ALPS, as a priority precedence link, it would cause the binding of a processor; as a
Efficient dynamic graph construction for inductive semi-supervised learning.
Dornaika, F; Dahbi, R; Bosaghzadeh, A; Ruichek, Y
2017-10-01
Most graph construction techniques assume a transductive setting in which the whole data collection is available at construction time. Addressing graph construction in an inductive setting, in which data arrive sequentially, has received much less attention. For inductive settings, constructing the graph from scratch can be very time consuming. This paper introduces a generic framework that is able to make any graph construction method incremental. This framework yields an efficient and dynamic graph construction method that adds new samples (labeled or unlabeled) to a previously constructed graph. As a case study, we use the recently proposed Two Phase Weighted Regularized Least Square (TPWRLS) graph construction method. The paper has two main contributions. First, we use the TPWRLS coding scheme to represent new sample(s) with respect to an existing database. The representative coefficients are then used to update the graph affinity matrix. The proposed method not only appends the new samples to the graph but also updates the whole graph structure by discovering which nodes are affected by the introduction of new samples and by updating their edge weights. The second contribution of the article is the application of the proposed framework to the problem of graph-based label propagation using multiple observations for vision-based recognition tasks. Experiments on several image databases show that, without any significant loss in the accuracy of the final classification, the proposed dynamic graph construction is more efficient than the batch graph construction. Copyright © 2017 Elsevier Ltd. All rights reserved.
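The generic incremental step, coding a new sample against the existing database and splicing one row and column into the affinity matrix, can be sketched as below. A simple inverse-distance affinity stands in for the TPWRLS coding, and the sketch omits the paper's re-weighting of affected existing edges:

```python
def affinity(x, y):
    """Toy symmetric affinity: inverse distance (stand-in for TPWRLS coding)."""
    return 1.0 / (1.0 + abs(x - y))

def build_graph(samples):
    """Batch construction: full pairwise affinity matrix, zero diagonal."""
    n = len(samples)
    return [[affinity(samples[i], samples[j]) if i != j else 0.0
             for j in range(n)] for i in range(n)]

def add_sample(W, samples, x):
    """Incremental update: one new row and column; existing entries untouched."""
    row = [affinity(x, s) for s in samples]
    for W_i, a in zip(W, row):
        W_i.append(a)       # new column (symmetric affinity)
    W.append(row + [0.0])   # new row
    samples.append(x)
    return W

samples = [0.0, 1.0, 3.0]
W = build_graph(samples)
add_sample(W, samples, 2.0)
# The incremental result matches a batch reconstruction from scratch.
print(W == build_graph(samples))  # True
```

For a symmetric affinity the append-only update is exact, which is why the incremental route avoids the quadratic rebuild cost on every arrival.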
Limits on relief through constrained exchange on random graphs
NASA Astrophysics Data System (ADS)
LaViolette, Randall A.; Ellebracht, Lory A.; Gieseler, Charles J.
2007-09-01
Agents are represented by nodes on a random graph (e.g., “small world”). Each agent is endowed with a zero-mean random value that may be either positive or negative. All agents attempt to find relief, i.e., to reduce the magnitude of that initial value, to zero if possible, through exchanges. The exchange occurs only between the agents that are linked, a constraint that turns out to dominate the results. The exchange process continues until Pareto equilibrium is achieved. Only 40-90% of the agents achieved relief on small-world graphs with mean degree between 2 and 40. Even fewer agents achieved relief on scale-free-like graphs with a truncated power-law degree distribution. The rate at which relief grew with increasing degree was slow, only at most logarithmic for all of the graphs considered; viewed in reverse, the fraction of nodes that achieve relief is resilient to the removal of links.
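A toy version of constrained exchange makes the link-domination effect concrete (the pairwise cancellation rule below is our simplification, not the paper's exact mechanism):

```python
def relieve(values, edges, rounds=100):
    """Settle opposite-sign holdings across links until no link can trade.
    This pairwise cancellation rule is a simplified stand-in for the
    paper's exchange process."""
    values = list(values)
    for _ in range(rounds):
        moved = False
        for i, j in edges:
            if values[i] * values[j] < 0:  # opposite signs: cancel across link
                t = min(abs(values[i]), abs(values[j]))
                values[i] -= t if values[i] > 0 else -t
                values[j] -= t if values[j] > 0 else -t
                moved = True
        if not moved:
            break  # Pareto-style equilibrium: no linked pair can still trade
    return values

# Path 0-1-2-3 with zero-mean endowments; like-signed neighbours block relief.
edges = [(0, 1), (1, 2), (2, 3)]
final = relieve([1.0, 1.0, -1.0, -1.0], edges)
relieved = sum(1 for v in final if v == 0)
print(final, relieved)  # [1.0, 0.0, 0.0, -1.0] 2: only 2 of 4 reach zero
```

Even though the total endowment is zero, the two end agents stay stranded behind settled intermediaries, echoing the finding that the link constraint, not the endowments, dominates how much relief is achievable.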
Crichton, Gamal; Guo, Yufan; Pyysalo, Sampo; Korhonen, Anna
2018-05-21
Link prediction in biomedical graphs has several important applications, including the prediction of Drug-Target Interactions (DTI), Protein-Protein Interactions (PPI) and Literature-Based Discovery (LBD). It can be done using a classifier to output the probability of link formation between nodes. Recently several works have used neural networks to create node representations which allow rich inputs to neural classifiers. Preliminary work in this area reported promising results; however, it did not use realistic settings such as time-slicing, evaluate performance with comprehensive metrics, or explain when or why neural network methods outperform. We investigated how inputs from four node representation algorithms affect the performance of a neural link predictor on random- and time-sliced biomedical graphs of real-world sizes (∼ 6 million edges) containing information relevant to DTI, PPI and LBD. We compared the performance of the neural link predictor to those of established baselines and report performance across five metrics. In random- and time-sliced experiments, when the neural network methods were able to learn good node representations and there was a negligible number of disconnected nodes, those approaches outperformed the baselines. In the smallest graph (∼ 15,000 edges) and in larger graphs with approximately 14% disconnected nodes, baselines such as Common Neighbours proved a justifiable choice for link prediction. At low recall levels (∼ 0.3) the approaches were mostly equal, but at higher recall levels, across all nodes and in average performance at individual nodes, neural network approaches were superior. Analysis showed that neural network methods performed well on links between nodes with no previous common neighbours, potentially the most interesting links. Additionally, while neural network methods benefit from large amounts of data, they require considerable computational resources to utilise them.
Our results indicate that when there is enough data for the neural network methods to use and there is a negligible number of disconnected nodes, those approaches outperform the baselines. At low recall levels the approaches are mostly equal, but at higher recall levels and in average performance at individual nodes, neural network approaches are superior. Performance at nodes without common neighbours, which indicate more unexpected and perhaps more useful links, accounts for this.
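The Common Neighbours baseline that the study finds justifiable for small or sparse graphs is only a few lines; a minimal scorer on invented data:

```python
def common_neighbours(adj, u, v):
    """Score a candidate link (u, v) by the number of shared neighbours."""
    return len(adj.get(u, set()) & adj.get(v, set()))

# Toy interaction graph (e.g., proteins); e and f form a disconnected pair.
adj = {
    "a": {"b", "d"},
    "b": {"a", "c"},
    "c": {"b", "d"},
    "d": {"a", "c"},
    "e": {"f"},
    "f": {"e"},
}
candidates = [("a", "c"), ("a", "e"), ("b", "d")]
ranked = sorted(candidates,
                key=lambda p: common_neighbours(adj, *p), reverse=True)
print(ranked[0])  # ('a', 'c'), tied with ('b', 'd') at 2 shared neighbours
```

Note that any candidate touching the disconnected pair scores zero, which mirrors the abstract's point: such baselines cannot rank links between nodes with no shared neighbours, precisely where the neural representations paid off.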
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Supriya
Surface transportation road networks share structural properties with other complex networks (e.g., social networks, information networks, biological networks, and so on). This research investigates the structural properties of road networks for possible correlation with independently determined traffic characteristics such as link flows. Additionally, we define a criticality index for the links of the road network that identifies their relative importance in the network. We tested our hypotheses with two sample road networks. Results show that correlation exists between the link flows and the centrality measures of a link of the road network (a dual graph approach is followed), and that the criticality index is effective for one test network in identifying the vulnerable nodes.
Exploring and Making Sense of Large Graphs
2015-08-01
and bold) are n × n; vectors (lower-case bold) are n × 1 column vectors, and scalars (in lower-case plain font) typically correspond to strength of...graph is often denoted as |V| or n. Edges or Links: A finite set E of lines between objects in a graph. The edges represent relationships between the...Adjacency matrix of a simple, unweighted and undirected graph. Adjacency matrix: The adjacency matrix of a graph G is an n × n matrix A, whose element a_ij
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
Graph Structure in Three National Academic Webs: Power Laws with Anomalies.
ERIC Educational Resources Information Center
Thelwall, Mike; Wilkinson, David
2003-01-01
Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Adam
2007-05-22
MpiGraph consists of an MPI application called mpiGraph written in C to measure message bandwidth and an associated crunch_mpiGraph script written in Perl to process the application output into an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while under heavy load. This is useful to detect hardware and software problems in a system, such as slow nodes, links, switches, or contention in switch routing. It is also useful to characterize how interconnect performance changes with different settings or how one interconnect type compares to another.
A new measure based on degree distribution that links information theory and network graph analysis
2012-01-01
Background Detailed connection maps of human and nonhuman brains are being generated with new technologies, and graph metrics have been instrumental in understanding the general organizational features of these structures. Neural networks appear to have small world properties: they have clustered regions, while maintaining integrative features such as short average pathlengths. Results We captured the structural characteristics of clustered networks with short average pathlengths through our own variable, System Difference (SD), which is computationally simple and calculable for larger graph systems. SD is a Jaccardian measure generated by averaging all of the differences in the connection patterns between any two nodes of a system. We calculated SD over large random samples of matrices and found that high SD matrices have a low average pathlength and a larger number of clustered structures. SD is a measure of degree distribution with high SD matrices maximizing entropic properties. Phi (Φ), an information theory metric that assesses a system’s capacity to integrate information, correlated well with SD - with SD explaining over 90% of the variance in systems above 11 nodes (tested for 4 to 13 nodes). However, newer versions of Φ do not correlate well with the SD metric. Conclusions The new network measure, SD, provides a link between high entropic structures and degree distributions as related to small world properties. PMID:22726594
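Our reading of SD as an averaged Jaccard-style difference of connection patterns can be sketched as follows (the exact normalization in the paper may differ):

```python
from itertools import combinations

def system_difference(adj):
    """Sketch of SD: the average Jaccard distance between the connection
    patterns of every node pair (normalization is our assumption)."""
    pairs = list(combinations(sorted(adj), 2))
    total = 0.0
    for u, v in pairs:
        union = adj[u] | adj[v]
        if union:
            total += len(adj[u] ^ adj[v]) / len(union)
    return total / len(pairs)

# Star: the three leaves share one identical connection pattern.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
# 4-cycle: opposite nodes share patterns, adjacent nodes share none.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(round(system_difference(star), 3))   # 0.5
print(round(system_difference(cycle), 3))  # 0.667
```

Averaging over all pairs keeps the measure computationally simple, which is the property the abstract highlights for scaling SD to larger graph systems.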
The complex network of the Brazilian Popular Music
NASA Astrophysics Data System (ADS)
de Lima e Silva, D.; Medeiros Soares, M.; Henriques, M. V. C.; Schivani Alves, M. T.; de Aguiar, S. G.; de Carvalho, T. P.; Corso, G.; Lucena, L. S.
2004-02-01
We study the Brazilian Popular Music in a network perspective. We call the Brazilian Popular Music Network (BPMN) the graph whose vertices are the song writers, with links determined by the existence of at least one common singer. The linking degree distribution of such a graph shows power-law and exponential regions. The exponent of the power law is compatible with the values obtained by the evolving network algorithms seen in the literature. The average path length of the BPMN is similar to that of the corresponding random graph; its clustering coefficient, however, is significantly larger. These results indicate that the BPMN forms a small-world network.
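The BPMN construction, a projection in which two song writers are linked if at least one singer recorded both, can be sketched with invented data:

```python
from itertools import combinations

# Who recorded whom (invented data): singer -> song writers covered.
recorded = {
    "singer1": {"writerA", "writerB"},
    "singer2": {"writerB", "writerC"},
    "singer3": {"writerA"},
}

# Vertices are writers; a link means at least one common singer.
edges = set()
for writers in recorded.values():
    for u, v in combinations(sorted(writers), 2):
        edges.add((u, v))

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(sorted(edges), degree)
```

This is the standard one-mode projection of the bipartite singer-writer graph; the degree distribution of the projection is what the study fits against power-law and exponential forms.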
A Visual Evaluation Study of Graph Sampling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Wong, Pak C.
2017-01-29
We evaluate a dozen prevailing graph-sampling techniques with an ultimate goal to better visualize and understand big and complex graphs that exhibit different properties and structures. The evaluation uses eight benchmark datasets with four different graph types collected from Stanford Network Analysis Platform and NetworkX to give a comprehensive comparison of various types of graphs. The study provides a practical guideline for visualizing big graphs of different sizes and structures. The paper discusses results and important observations from the study.
KinLinks: Software Toolkit for Kinship Analysis and Pedigree Generation from NGS Datasets
2015-04-21
Retinitis pigmentosa families 2110 and 2111 of 52 individuals across 6 generations (Figure 5a), and 54 geographically diverse samples (Supplementary Table...relationships within the Retinitis pigmentosa family. Machine Learning Classifier for pairwise kinship prediction Ten features were identified for training...family (Figure 4b), and the Retinitis pigmentosa family (Figure 5b). The auto-generated pedigrees were graphed as well as in family-tree format using
Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.
2018-05-01
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
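A minimal uniform random geometric graph on the unit square, counting isolated nodes as in the model described (the parameters are illustrative):

```python
import random

def rgg_isolated(n, r, rng):
    """Uniform random geometric graph on the unit square: count nodes
    with no neighbour within linking range r."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    isolated = 0
    for i, (x, y) in enumerate(pts):
        if not any((x - u) ** 2 + (y - v) ** 2 <= r * r
                   for j, (u, v) in enumerate(pts) if j != i):
            isolated += 1
    return isolated

rng = random.Random(0)
# Small linking range leaves many isolated nodes; a large range leaves none.
print(rgg_isolated(200, 0.02, rng), rgg_isolated(200, 0.3, rng))
```

In the uniform limit discussed above, the isolated-node count is Poisson distributed and the probability of zero isolated nodes tracks the connection probability; swapping the uniform `rng.random()` pair for a nonuniform or fractal point process is the modification the paper analyzes.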
He, Chenlong; Feng, Zuren; Ren, Zhigang
2018-01-01
In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.
Interacting particle systems on graphs
NASA Astrophysics Data System (ADS)
Sood, Vishal
In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems that are drawn from physics, mathematics and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, are insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, emergence of fads, social cooperation or invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate.
This selective advantage leads to a dynamical competition in which selection dominates for large populations, while for small populations the dynamics are similar to the neutral case. The likelihood for the fitter mutants to drive the resident genotype to extinction is calculated.
OPEX: Optimized Eccentricity Computation in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, Keith
2011-11-14
Real-world graphs have many properties of interest, but often these properties are expensive to compute. We focus on eccentricity, radius and diameter in this work. These properties are useful measures of the global connectivity patterns in a graph. Unfortunately, computing eccentricity for all nodes is O(n^2) for a graph with n nodes. We present OPEX, a novel combination of optimizations which improves computation time of these properties by orders of magnitude in real-world experiments on graphs of many different sizes. We run OPEX on graphs with up to millions of links. OPEX gives either exact results or bounded approximations, unlike its competitors which give probabilistic approximations or sacrifice node-level information (eccentricity) to compute graph-level information (diameter).
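The baseline that OPEX improves on is one breadth-first search per node; a plain-Python version for a connected unweighted graph shows the quadratic-flavoured cost and how radius and diameter fall out of the eccentricities:

```python
from collections import deque

def eccentricities(adj):
    """Exact eccentricity of every node via one BFS per node. Cost is
    O(n * (n + m)), the quadratic baseline OPEX is designed to beat.
    Assumes the graph is connected."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())
    return ecc

# Path a-b-c-d: endpoints have eccentricity 3, the middle nodes 2.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
ecc = eccentricities(adj)
radius, diameter = min(ecc.values()), max(ecc.values())
print(ecc, radius, diameter)  # radius 2, diameter 3
```

Radius and diameter are just the extremes of the eccentricity vector, which is why methods that bound eccentricities per node can also bound both graph-level quantities.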
Fischer, Helen; Schütte, Stefanie; Depoux, Anneliese; Amelung, Dorothee; Sauerborn, Rainer
2018-04-27
Graphs are prevalent in the reports of the Intergovernmental Panel on Climate Change (IPCC), often depicting key points and major results. However, the popularity of graphs in the IPCC reports contrasts with a neglect of empirical tests of their understandability. Here we put the understandability of three graphs taken from the Health chapter of the Fifth Assessment Report to an empirical test. We present a pilot study where we evaluate objective understanding (mean accuracy in multiple-choice questions) and subjective understanding (self-assessed confidence in accuracy) in a sample of attendees of the United Nations Climate Change Conference in Marrakesh, 2016 (COP22), and a student sample. Results show a mean objective understanding of M = 0.33 for the COP sample, and M = 0.38 for the student sample. Subjective and objective understanding were unrelated for the COP22 sample, but associated for the student sample. These results suggest that (i) understandability of the IPCC health chapter graphs is insufficient, and that (ii) particularly COP22 attendees lacked insight into which graphs they did, and which they did not understand. Implications for the construction of graphs to communicate health impacts of climate change to decision-makers are discussed.
Literature Search through Mixed-Membership Community Discovery
NASA Astrophysics Data System (ADS)
Eliassi-Rad, Tina; Henderson, Keith
We introduce a new approach to literature search that is based on finding mixed-membership communities on an augmented co-authorship graph (ACA) with a scalable generative model. An ACA graph contains two types of edges: (1) coauthorship links and (2) links between researchers with substantial expertise overlap. Our solution eliminates the biases introduced by either looking at citations of a paper or doing a Web search. A case study on PubMed shows the benefits of our approach.
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width . We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates , which have bounded hierarchy width-regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
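A Gibbs sampler on the smallest possible factor graph (two binary variables and one agreement factor; the weight is our toy choice) shows the mechanics whose mixing time the paper analyzes:

```python
import math
import random

def gibbs(w, steps, rng):
    """Gibbs sampling on a two-variable factor graph with a single factor
    phi(x1, x2) = e^w if x1 == x2 else 1 (toy model of our own choosing)."""
    x = [0, 0]
    agree = 0
    # Conditional: P(x_i = x_other | x_other) = e^w / (e^w + 1).
    p_same = math.exp(w) / (math.exp(w) + 1.0)
    for _ in range(steps):
        for i in (0, 1):
            other = x[1 - i]
            x[i] = other if rng.random() < p_same else 1 - other
        agree += x[0] == x[1]
    return agree / steps

frac = gibbs(w=2.0, steps=20000, rng=random.Random(42))
print(round(frac, 2))  # close to the exact marginal e^2 / (e^2 + 1) ≈ 0.88
```

Raising `w` makes the two agreement states sticky and the chain slow to escape them, which is the poor-mixing regime where structural guarantees such as bounded hierarchy width become valuable.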
Growth and structure of the World Wide Web: Towards realistic modeling
NASA Astrophysics Data System (ADS)
Tadić, Bosiljka
2002-08-01
We simulate evolution of the World Wide Web from the dynamic rules incorporating growth, bias attachment, and rewiring. We show that the emergent double-hierarchical structure with distinct distributions of out- and in-links is comparable with the observed empirical data when the control parameter (average graph flexibility β) is kept in the range β=3-4. We then explore the Web graph by simulating (a) Web crawling to determine size and depth of connected components, and (b) a random walker that discovers the structure of connected subgraphs with dominant attractor and promoter nodes. A random walker that adapts its move strategy to mimic local node linking preferences is shown to have a short access time to "important" nodes on the Web graph.
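A stripped-down growth model with in-degree-biased attachment (a crude stand-in for the paper's growth, bias attachment and rewiring rules) already produces hub pages:

```python
import random

def grow_web(n, rng):
    """Toy directed growth: each new page links to one existing page chosen
    with probability proportional to its in-degree plus one."""
    in_links = {0: 0}
    edges = []
    for new in range(1, n):
        targets = list(in_links)
        weights = [in_links[t] + 1 for t in targets]  # +1 smoothing
        r = rng.random() * sum(weights)
        for t, w in zip(targets, weights):
            r -= w
            if r <= 0:
                break
        edges.append((new, t))
        in_links[t] += 1
        in_links[new] = 0
    return in_links, edges

in_links, edges = grow_web(2000, random.Random(7))
top = max(in_links.values())
print(len(edges), top)  # hubs emerge: max in-degree dwarfs the mean of ~1
```

The skew between in- and out-degree that this one-sided rule creates is the simplest version of the double-hierarchical structure described above; the paper's rewiring parameter β controls how strongly the bias reshapes the graph.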
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2016-01-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width—regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers. PMID:27279724
Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...
2017-06-01
Tracking graphs are a well established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.
Visualizing Internet routing changes.
Lad, Mohit; Massey, Dan; Zhang, Lixia
2006-01-01
Today's Internet provides a global data delivery service to millions of end users and routing protocols play a critical role in this service. It is important to be able to identify and diagnose any problems occurring in Internet routing. However, the Internet's sheer size makes this task difficult. One cannot easily extract out the most important or relevant routing information from the large amounts of data collected from multiple routers. To tackle this problem, we have developed Link-Rank, a tool to visualize Internet routing changes at the global scale. Link-Rank weighs links in a topological graph by the number of routes carried over each link and visually captures changes in link weights in the form of a topological graph with adjustable size. Using Link-Rank, network operators can easily observe important routing changes from massive amounts of routing data, discover otherwise unnoticed routing problems, understand the impact of topological events, and infer root causes of observed routing changes.
Information extraction and knowledge graph construction from geoscience literature
NASA Astrophysics Data System (ADS)
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen
2018-03-01
Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there has been limited work on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining generic terms and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study demonstrates the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
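The third step of the workflow above (linking content-words by adjacency) can be sketched with a small weighted bigram graph. This is a hedged illustration: the tokenizer, stop-word list, and example sentence are invented, and the real workflow's Chinese segmentation and statistical link analysis are omitted.

```python
from collections import Counter

def bigram_graph(tokens, stop_words=frozenset()):
    """Build a weighted bigram graph: nodes are content-words, edge weights
    count how often two words appear adjacently after stop-word removal."""
    content = [t for t in tokens if t not in stop_words]
    edges = Counter()
    for a, b in zip(content, content[1:]):
        if a != b:
            edges[tuple(sorted((a, b)))] += 1  # undirected edge
    return edges

tokens = "granite is an igneous rock and basalt is an igneous rock".split()
g = bigram_graph(tokens, stop_words={"is", "an", "and"})
```

The heaviest edges (here "igneous"–"rock") surface the key term pairs that a chord or node-link rendering would emphasize.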
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirdt, J.A.; Brown, D.A., E-mail: dbrown@bnl.gov
The EXFOR library contains the largest collection of experimental nuclear reaction data available as well as the data's bibliographic information and experimental details. We text-mined the REACTION and MONITOR fields of the ENTRYs in the EXFOR library in order to identify understudied reactions and quantities. Using the results of the text-mining, we created an undirected graph from the EXFOR datasets with each graph node representing a single reaction and quantity and graph links representing the various types of connections between these reactions and quantities. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. We use various graph theoretical tools to identify important yet understudied reactions and quantities in EXFOR. Although we identified a few cross sections relevant for shielding applications and isotope production, mostly we identified charged particle fluence monitor cross sections. As a side effect of this work, we learn that our abstract graph is typical of other real-world graphs.
NASA Astrophysics Data System (ADS)
Hirdt, J. A.; Brown, D. A.
2016-01-01
The EXFOR library contains the largest collection of experimental nuclear reaction data available as well as the data's bibliographic information and experimental details. We text-mined the REACTION and MONITOR fields of the ENTRYs in the EXFOR library in order to identify understudied reactions and quantities. Using the results of the text-mining, we created an undirected graph from the EXFOR datasets with each graph node representing a single reaction and quantity and graph links representing the various types of connections between these reactions and quantities. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. We use various graph theoretical tools to identify important yet understudied reactions and quantities in EXFOR. Although we identified a few cross sections relevant for shielding applications and isotope production, mostly we identified charged particle fluence monitor cross sections. As a side effect of this work, we learn that our abstract graph is typical of other real-world graphs.
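The graph construction described above (reactions as nodes, measured-reaction-to-monitor connections as edges) can be sketched in a few lines. The reaction strings and datasets below are invented examples, not actual EXFOR ENTRYs.

```python
from collections import defaultdict

def build_reaction_graph(datasets):
    """Each dataset pairs the reaction it measures (REACTION field) with the
    reference reactions it relies on (MONITOR field); nodes are reactions,
    and an undirected edge links a measurement to each of its monitors."""
    adj = defaultdict(set)
    for reaction, monitors in datasets:
        adj.setdefault(reaction, set())  # keep isolated reactions as nodes
        for m in monitors:
            adj[reaction].add(m)
            adj[m].add(reaction)
    return adj

# Hypothetical datasets: three measurements all monitored by Al-27(n,a).
datasets = [
    ("Fe-56(n,p)", ["Al-27(n,a)"]),
    ("Ni-58(n,p)", ["Al-27(n,a)"]),
    ("Au-197(n,g)", ["Al-27(n,a)"]),
    ("U-235(n,f)", []),
]
adj = build_reaction_graph(datasets)
hub = max(adj, key=lambda r: len(adj[r]))  # most-connected reaction
```

Degree is the simplest of the "graph theoretical tools" the abstract mentions; high-degree monitor reactions are exactly the heavily relied-upon references the authors look for.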
Ringo: Interactive Graph Analytics on Big-Memory Machines
Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure
2016-01-01
We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads. PMID:27081215
Ringo: Interactive Graph Analytics on Big-Memory Machines.
Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure
2015-01-01
We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
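The paper's kernel-graph generator is not reproduced here, but the core trick for beating quadratic run-time can be illustrated with the standard edge-skipping sampler for the homogeneous special case G(n, p), which kernel-based generators generalize. This is a sketch under that assumption; `sparse_gnp` is an illustrative name.

```python
import math
import random

def sparse_gnp(n, p, seed=0):
    """Sample an Erdos-Renyi graph G(n, p) in O(n + m) expected time.
    Instead of testing every pair, skip over runs of non-edges: the gap to
    the next edge is geometrically distributed with parameter p."""
    rng = random.Random(seed)
    edges = []
    v, w = 1, -1
    log_q = math.log(1.0 - p)
    while v < n:
        # Geometric skip to the next candidate pair (w, v) with w < v.
        w += 1 + int(math.log(1.0 - rng.random()) / log_q)
        while w >= v and v < n:
            w -= v
            v += 1
        if v < n:
            edges.append((w, v))
    return edges
```

For sparse graphs (m much smaller than n²) the loop executes roughly once per edge, which is the same scaling regime as the O(n(log n)²) bound quoted above.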
Measuring Graph Comprehension, Critique, and Construction in Science
NASA Astrophysics Data System (ADS)
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-08-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed items to measure graph comprehension, critique, and construction and developed scoring rubrics based on the knowledge integration (KI) framework. We administered the items to over 460 middle school students. We found that the items formed a coherent scale and had good reliability using both item response theory and classical test theory. The KI scoring rubric showed that most students had difficulty linking graphs features to science concepts, especially when asked to critique or construct graphs. In addition, students with limited access to computers as well as those who speak a language other than English at home have less integrated understanding than others. These findings point to the need to increase the integration of graphing into science instruction. The results suggest directions for further research leading to comprehensive assessments of graph understanding.
Use of graph theory measures to identify errors in record linkage.
Randall, Sean M; Boyd, James H; Ferrante, Anna M; Bauer, Jacqueline K; Semmens, James B
2014-07-01
Ensuring high linkage quality is important in many record linkage applications. Current methods for ensuring quality are manual and resource intensive. This paper seeks to determine the effectiveness of graph theory techniques in identifying record linkage errors. A range of graph theory techniques was applied to two linked datasets, with known truth sets. The ability of graph theory techniques to identify groups containing errors was compared to a widely used threshold setting technique. This methodology shows promise; however, further investigations into graph theory techniques are required. The development of more efficient and effective methods of improving linkage quality will result in higher quality datasets that can be delivered to researchers in shorter timeframes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A nonlinear q-voter model with deadlocks on the Watts-Strogatz graph
NASA Astrophysics Data System (ADS)
Sznajd-Weron, Katarzyna; Michal Suszczynski, Karol
2014-07-01
We study the nonlinear q-voter model with deadlocks on a Watts-Strogatz graph. Using Monte Carlo simulations, we obtain the so-called exit probability and exit time. We determine how network properties, such as randomness or density of links, influence the exit properties of the model.
Scanner Art and Links to Physics
ERIC Educational Resources Information Center
Russell, David
2005-01-01
A photocopier or scanner can be used to produce not only the standard motion graphs of physics, but a variety of other graphs that resemble gravitational and electrical fields. This article presents a starting point for exploring scanner graphics, which brings together investigation in art and design, physics, mathematics, and information…
Simple graph models of information spread in finite populations
Voorhees, Burton; Ryder, Bergerud
2015-01-01
We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call ‘single-link’ graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian and another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components and these are compared to the partial bipartite graphs. PMID:26064661
Weighted link graphs: a distributed IDS for secondary intrusion detection and defense
NASA Astrophysics Data System (ADS)
Zhou, Mian; Lang, Sheau-Dong
2005-03-01
While a firewall installed at the perimeter of a local network provides the first line of defense against the hackers, many intrusion incidents are the results of successful penetration of the firewalls. One computer's compromise often puts the entire network at risk. In this paper, we propose an IDS that provides a finer control over the internal network. The system focuses on the variations of connection-based behavior of each single computer, and uses a weighted link graph to visualize the overall traffic abnormalities. Our system functions as a distributed personal IDS that also provides centralized traffic analysis by graphical visualization. We use a novel weight assignment scheme for the local detection within each end agent. The local abnormalities are quantified by the node weight and link weight and further sent to the central analyzer to build the weighted link graph. Thus, we distribute the burden of traffic processing and visualization to each agent and make the overall intrusion detection more efficient. As LANs are more vulnerable to inside attacks, our system is designed as a reinforcement to prevent corruption from the inside.
A Constant-Factor Approximation Algorithm for the Link Building Problem
NASA Astrophysics Data System (ADS)
Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia
In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case that the new links must point to the given target node (backlinks). Previous work [7] shows that this problem has no fully polynomial time approximation schemes unless P = NP. We present a polynomial time algorithm yielding a PageRank value within a constant factor from the optimal. We also consider the naive algorithm where we choose backlinks from nodes with high PageRank values compared to the outdegree and show that the naive algorithm performs much worse on certain graphs compared to the constant factor approximation scheme.
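The naive heuristic mentioned at the end of the abstract above (pick backlinks from nodes with high PageRank relative to outdegree) can be sketched directly. The power-iteration PageRank below and the function names are illustrative assumptions; the paper's constant-factor algorithm is not reproduced.

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank; adj maps node -> list of out-neighbours.
    Dangling nodes spread their mass uniformly over all nodes."""
    nodes = list(adj)
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        dangling = sum(pr[u] for u in nodes if not adj[u])
        new = {u: (1 - d) / n + d * dangling / n for u in nodes}
        for u in nodes:
            for v in adj[u]:
                new[v] += d * pr[u] / len(adj[u])
        pr = new
    return pr

def naive_backlinks(adj, target, k):
    """Naive heuristic: add k backlinks to `target` from the nodes with the
    highest PageRank-to-outdegree ratio (+1 anticipates the new out-link)."""
    pr = pagerank(adj)
    candidates = [u for u in adj if u != target and target not in adj[u]]
    candidates.sort(key=lambda u: pr[u] / (len(adj[u]) + 1), reverse=True)
    for u in candidates[:k]:
        adj[u].append(target)
    return adj
```

Adding backlinks this way always raises the target's PageRank, but, as the abstract notes, the heuristic can perform much worse than the constant-factor scheme on adversarial graphs.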
Huang, Xiaoke; Zhao, Ye; Yang, Jing; Zhang, Chong; Ma, Chao; Ye, Xinyue
2016-01-01
We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which are smaller than the original street-level graph. Graph centralities, including PageRank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.
Diffusion-based recommendation with trust relations on tripartite graphs
NASA Astrophysics Data System (ADS)
Wang, Ximeng; Liu, Yun; Zhang, Guangquan; Xiong, Fei; Lu, Jie
2017-08-01
The diffusion-based recommendation approach is a vital branch in recommender systems, which successfully applies physical dynamics to make recommendations for users on bipartite or tripartite graphs. Trust links indicate users’ social relations and can provide the benefit of reducing data sparsity. However, traditional diffusion-based algorithms only consider rating links when making recommendations. In this paper, the complementarity of users’ implicit and explicit trust is exploited, and a novel resource-allocation strategy is proposed, which integrates these two kinds of trust relations on tripartite graphs. Through empirical studies on three benchmark datasets, our proposed method obtains better performance than most of the benchmark algorithms in terms of accuracy, diversity and novelty. According to the experimental results, our method is an effective and reasonable way to integrate additional features into the diffusion-based recommendation approach.
A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.
Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang
2016-04-01
Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
Data mining the EXFOR database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David A.; Hirdt, John; Herman, Michal
2013-12-13
The EXFOR database contains the largest collection of experimental nuclear reaction data available as well as this data's bibliographic information and experimental details. We created an undirected graph from the EXFOR datasets with graph nodes representing single observables and graph links representing the connections of various types between these observables. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. Analysing this abstract graph, we are able to address very specific questions such as 1) what observables are being used as reference measurements by the experimental community? 2) are these observables given the attention needed by various standards organisations? 3) are there classes of observables that are not connected to these reference measurements? In addressing these questions, we propose several (mostly cross section) observables that should be evaluated and made into reaction reference standards.
EAGLE: 'EAGLE Is an Algorithmic Graph Library for Exploration'
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-01-16
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. Today there are no tools for conducting "graph mining" on RDF standard data sets. We address that need through implementation of popular iterative graph mining algorithms (triangle count, connected component analysis, degree distribution, diversity degree, PageRank, etc.). We implement these algorithms as SPARQL queries, wrapped within Python scripts, and call our software tool EAGLE. In RDF style, EAGLE stands for "EAGLE Is an Algorithmic Graph Library for Exploration." EAGLE is like "MATLAB" for "Linked Data."
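One of the graph-mining primitives listed above, triangle counting, can be sketched in plain Python to show the logic that EAGLE's SPARQL queries encode. This is a generic stand-in, not EAGLE's actual implementation; the ordering trick mirrors the FILTER a SPARQL query would use to count each triangle once.

```python
def triangle_count(edges):
    """Count triangles in an undirected graph, enumerating each exactly once
    by requiring vertex order u < v < w."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u in adj:
        for v in adj[u]:
            if v <= u:
                continue
            # Common neighbours w with w > v close a triangle u-v-w.
            count += sum(1 for w in adj[u] & adj[v] if w > v)
    return count

# The complete graph K4 contains 4 triangles.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
```

In SPARQL the same count is a triple-pattern join over `?u ?p ?v`, `?v ?p ?w`, `?u ?p ?w` with an ordering filter on the node IRIs.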
Label-based routing for a family of small-world Farey graphs.
Zhai, Yinhu; Wang, Yinhe
2016-05-11
We introduce an informative labelling method for vertices in a family of Farey graphs, and deduce a routing algorithm on all the shortest paths between any two vertices in Farey graphs. The label of a vertex is composed of the precise locating position in graphs and the exact time linking to graphs. All the shortest paths routing between any pair of vertices, whose number is exactly the product of two Fibonacci numbers, are determined only by their labels, and the time complexity of the algorithm is O(n). It is the first algorithm to figure out all the shortest paths between any pair of vertices in a kind of deterministic graphs. For Farey networks, the existence of an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization and structural controllability) and also to understand the underlying mechanisms that have shaped their particular structure.
Label-based routing for a family of small-world Farey graphs
NASA Astrophysics Data System (ADS)
Zhai, Yinhu; Wang, Yinhe
2016-05-01
We introduce an informative labelling method for vertices in a family of Farey graphs, and deduce a routing algorithm on all the shortest paths between any two vertices in Farey graphs. The label of a vertex is composed of the precise locating position in graphs and the exact time linking to graphs. All the shortest paths routing between any pair of vertices, whose number is exactly the product of two Fibonacci numbers, are determined only by their labels, and the time complexity of the algorithm is O(n). It is the first algorithm to figure out all the shortest paths between any pair of vertices in a kind of deterministic graphs. For Farey networks, the existence of an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization and structural controllability) and also to understand the underlying mechanisms that have shaped their particular structure.
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
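The minimal growth model above is fully specified by the abstract, so it can be simulated directly. The function name and parameter values below are illustrative.

```python
import random

def grow_graph(t, delta, seed=0):
    """Simulate the growing network model: at each time step add a vertex,
    then with probability delta join two vertices chosen uniformly at
    random by an undirected edge."""
    rng = random.Random(seed)
    edges = []
    n = 0
    for _ in range(t):
        n += 1
        if n >= 2 and rng.random() < delta:
            u, v = rng.sample(range(n), 2)  # two distinct vertices
            edges.append((u, v))
    return n, edges
```

Averaging component sizes over many runs near delta = 1/8 would exhibit the infinite-order transition described in the abstract; a single run just produces one grown graph.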
Fisher metric, geometric entanglement, and spin networks
NASA Astrophysics Data System (ADS)
Chirco, Goffredo; Mele, Fabio M.; Oriti, Daniele; Vitale, Patrizia
2018-02-01
Starting from recent results on the geometric formulation of quantum mechanics, we propose a new information geometric characterization of entanglement for spin network states in the context of quantum gravity. For the simple case of a single-link fixed graph (Wilson line), we detail the construction of a Riemannian Fisher metric tensor and a symplectic structure on the graph Hilbert space, showing how these encode the whole information about separability and entanglement. In particular, the Fisher metric defines an entanglement monotone which provides a notion of distance among states in the Hilbert space. In the maximally entangled gauge-invariant case, the entanglement monotone is proportional to a power of the area of the surface dual to the link thus supporting a connection between entanglement and the (simplicial) geometric properties of spin network states. We further extend such analysis to the study of nonlocal correlations between two nonadjacent regions of a generic spin network graph characterized by the bipartite unfolding of an intertwiner state. Our analysis confirms the interpretation of spin network bonds as a result of entanglement and to regard the same spin network graph as an information graph, whose connectivity encodes, both at the local and nonlocal level, the quantum correlations among its parts. This gives a further connection between entanglement and geometry.
Resistance and relatedness on an evolutionary graph
Maciejewski, Wes
2012-01-01
When investigating evolution in structured populations, it is often convenient to consider the population as an evolutionary graph—individuals as nodes, and whom they may act with as edges. There has, in recent years, been a surge of interest in evolutionary graphs, especially in the study of the evolution of social behaviours. An inclusive fitness framework is best suited for this type of study. A central requirement for an inclusive fitness analysis is an expression for the genetic similarity between individuals residing on the graph. This has been a major hindrance for work in this area as highly technical mathematics are often required. Here, I derive a result that links genetic relatedness between haploid individuals on an evolutionary graph to the resistance between vertices on a corresponding electrical network. An example that demonstrates the potential computational advantage of this result over contemporary approaches is provided. This result offers more, however, to the study of population genetics than strictly computationally efficient methods. By establishing a link between gene transfer and electric circuit theory, conceptualizations of the latter can enhance understanding of the former. PMID:21849384
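The link stated above, between relatedness on an evolutionary graph and resistance on the corresponding electrical network, makes effective resistance the quantity worth computing. The sketch below computes it for a small unweighted graph by grounding one node and solving the reduced Laplacian system; the Gaussian-elimination solver and function name are generic choices, not the paper's method.

```python
def effective_resistance(edges, a, b):
    """Effective resistance between nodes a and b of an unweighted graph:
    inject unit current at a, ground node b, solve the reduced Laplacian
    system L v = i; the voltage at a equals the resistance."""
    nodes = sorted({u for e in edges for u in e})
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        i, j = idx[u], idx[v]
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    keep = [i for i in range(n) if i != idx[b]]  # ground b: drop row/column
    A = [[L[i][j] for j in keep] for i in keep]
    rhs = [1.0 if i == idx[a] else 0.0 for i in keep]
    m = len(A)
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    v = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = rhs[r] - sum(A[r][c] * v[c] for c in range(r + 1, m))
        v[r] = s / A[r][r]
    return v[keep.index(idx[a])]
```

For a two-edge path the resistances add in series (R = 2), and for a triangle two parallel routes give R = 2/3 between adjacent vertices, which is the kind of closed-form check the circuit analogy makes cheap.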
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend for discovering the intrinsic anatomical difference across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (called the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct those inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (called the backward step). After that, all subject images with detected correspondences are included into the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness, when compared with the state-of-the-art pair-wise correspondence detection methods as well as a similar method but using static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.
3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities
NASA Astrophysics Data System (ADS)
Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir
2016-03-01
Lung boundary image segmentation is important for many tasks including for example in development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though previously several researchers have used graph-cuts for image segmentation, to date, no systematic studies have been performed regarding the range of parameters that gives accurate results. The energy function in the graph-cuts algorithm requires three suitable parameter settings: K, a large constant for assigning seed points, c, the similarity coefficient for n-links, and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and values of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and furthermore that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
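The roles of the three parameters above can be made concrete with edge-weight formulas of the common Boykov–Jolly style. These exact formulas (and the sigma parameter) are an assumption for illustration, not necessarily the ones used in the paper.

```python
import math

def n_link_weight(ip, iq, c, sigma=30.0):
    """Boundary (n-link) capacity between neighbouring voxels with
    intensities ip, iq, scaled by the similarity coefficient c:
    similar intensities make the edge expensive to cut."""
    return c * math.exp(-((ip - iq) ** 2) / (2.0 * sigma ** 2))

def t_link_weights(i, obj_mean, bkg_mean, lam, K, seed=None):
    """Regional (t-link) capacities to the source (object) and sink
    (background), scaled by the terminal coefficient lam. Hard seed
    voxels get the large constant K so they cannot be mislabelled."""
    if seed == "object":
        return K, 0.0
    if seed == "background":
        return 0.0, K
    to_src = lam * abs(i - bkg_mean)  # cheap to cut if voxel looks like background
    to_snk = lam * abs(i - obj_mean)
    return to_src, to_snk
```

With these formulas, c much larger than lam makes every n-link dominate the t-links, which matches the abstract's observation that the boundary term then swamps the data term.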
Gambler's ruin problem on Erdős-Rényi graphs
NASA Astrophysics Data System (ADS)
Néda, Zoltán; Davidova, Larissa; Újvári, Szeréna; Istrate, Gabriel
2017-02-01
A multiagent ruin game is studied on Erdős-Rényi type graphs. Initially the players have the same wealth. At each time step a monopolist game is played on all active links (links that connect nodes with nonzero wealth). In such a game each player puts a unit of wealth in the pot, and the pot is won with equal probability by one of the players. The game ends when there are no connected players such that both have nonzero wealth. To characterize the final state for dense graphs, a compact formula is given for the expected number of remaining players with nonzero wealth and for the wealth distribution among these players. Theoretical predictions are given for the expected duration of the ruin game. The dynamics of the number of active players is also investigated. The validity of the theoretical predictions is checked by Monte Carlo experiments.
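The ruin dynamics described above can be checked with a few lines of Monte Carlo simulation. This is a minimal sketch under assumed parameters (n = 30 nodes, edge probability p = 0.3, unit starting wealth), not the authors' code; it verifies two invariants of the final state: total wealth is conserved, and no active link survives (every remaining edge has a ruined endpoint).

```python
import random

def er_graph(n, p, rng):
    # Erdos-Renyi G(n, p): each pair is linked independently with probability p
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def ruin_game(n=30, p=0.3, seed=0):
    rng = random.Random(seed)
    edges = er_graph(n, p, rng)
    wealth = [1] * n                     # every player starts with unit wealth
    while True:
        # active links join two players that both still have wealth
        active = [(i, j) for i, j in edges if wealth[i] > 0 and wealth[j] > 0]
        if not active:
            return wealth, edges
        for i, j in active:
            if wealth[i] == 0 or wealth[j] == 0:
                continue                 # a player may be ruined mid-round
            # monopolist game: both stake one unit, a fair coin takes the pot
            winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
            wealth[winner] += 1
            wealth[loser] -= 1

wealth, edges = ruin_game()
```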
ERIC Educational Resources Information Center
Kenney, Rachael H.
2014-01-01
This study examined ways in which students make use of a graphing calculator and how use relates to comfort and understanding with mathematical symbols. Analysis involved examining students' words and actions in problem solving to identify evidence of algebraic insight. Findings suggest that some symbols and symbolic structures had strong…
The Effects of Multiple Linked Representations on Student Learning in Mathematics.
ERIC Educational Resources Information Center
Ozgun-Koca, S. Asli
This study investigated the effects on student understanding of linear relationships using the linked representation software VideoPoint as compared to using semi-linked representation software. It investigated students' attitudes towards and preferences for mathematical representations--equations, tables, or graphs. An Algebra I class was divided…
Graphing Online Searches with Lotus 1-2-3.
ERIC Educational Resources Information Center
Persson, Olle
1986-01-01
This article illustrates how Lotus 1-2-3 software can be used to create graphs using downloaded online searches as raw material, notes most commands applied, and outlines three required steps: downloading, importing the downloaded file into the worksheet, and making graphs. An example in bibliometrics and sample graphs are included. (EJS)
Label Information Guided Graph Construction for Semi-Supervised Learning.
Zhuang, Liansheng; Zhou, Zihan; Gao, Shenghua; Yin, Jingwen; Lin, Zhouchen; Ma, Yi
2017-09-01
In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning method. Experimental results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks.
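The key idea, zeroing graph edges between labeled samples of different classes before propagation, can be sketched on a much simpler graph than LRR. Everything below (1-D features, a Gaussian similarity graph, plain iterative label propagation) is an invented stand-in for the paper's method, meant only to show where the label constraint enters graph construction.

```python
import math

# Two invented 1-D clusters; samples 0 and 3 carry labels (class 0 and 1).
x = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
labels = {0: 0, 3: 1}
n, k = len(x), 2

# Gaussian similarity graph; the label constraint zeroes edges between
# labeled samples of different classes before any propagation happens.
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        if i in labels and j in labels and labels[i] != labels[j]:
            continue                      # constrained to zero weight
        W[i][j] = math.exp(-((x[i] - x[j]) ** 2) / (2 * 0.3 ** 2))

# Plain iterative label propagation on the constrained graph.
F = [[1.0 if labels.get(i) == c else 0.0 for c in range(k)] for i in range(n)]
for _ in range(100):
    F = [[sum(W[i][j] * F[j][c] for j in range(n)) for c in range(k)]
         for i in range(n)]
    for i, c in labels.items():           # clamp the labeled samples
        F[i] = [1.0 if cc == c else 0.0 for cc in range(k)]
    F = [[v / (sum(row) or 1.0) for v in row] for row in F]  # keep rows bounded

pred = [max(range(k), key=lambda c: F[i][c]) for i in range(n)]
```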
Analysis of Return and Forward Links from STARS' Flight Demonstration 1
NASA Technical Reports Server (NTRS)
Gering, James A.
2003-01-01
Space-based Telemetry And Range Safety (STARS) is a Kennedy Space Center (KSC) led proof-of-concept demonstration, which utilizes NASA's space network of Tracking and Data Relay Satellites (TDRS) as a pathway for launch and mission related information streams. Flight Demonstration 1 concluded on July 15, 2003 with the seventh flight of a Low Power Transmitter (LPT), a Command and Data Handler (C&DH), a twelve channel GPS receiver, and associated power supplies and amplifiers. The equipment flew on NASA's F-15 aircraft at the Dryden Flight Research Center located at Edwards Air Force Base in California. During this NASA-ASEE Faculty Fellowship, the author participated in the collection and analysis of data from the seven flights comprising Flight Demonstration 1. Specifically, the author examined the forward and return links' bit energy E(sub b) (in Watt-seconds) divided by the ambient radio frequency noise N(sub 0) (in Watts / Hertz). E(sub b)/N(sub 0) is commonly thought of as a signal-to-noise parameter that characterizes a particular received radio frequency (RF) link. Outputs from the data analysis include the construction of time lines for all flights; graphs of range safety values for all seven flights; histograms of range safety E(sub b)/N(sub 0) values in five dB increments, with calculation of associated averages and standard deviations; graphs of range user E(sub b)/N(sub 0) values for all flights; and graphs of AGCs and E(sub b)/N(sub 0) estimates for flight 1 as recorded onboard, transmitted directly to the launch head, and transmitted through TDRS. The data and graphs are being used to draw conclusions related to a lower than expected signal strength seen in the range safety return link.
Scale-space measures for graph topology link protein network architecture to function.
Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen
2014-06-15
The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions: genetic, co-expression, and perturbation interactions. Moreover, we demonstrate that graph topological scale-spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA. © The Author 2014. Published by Oxford University Press.
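A diffusion kernel K(β) = exp(−βL) is the standard way to make topological measures scale-aware: small β probes direct interactions, large β probes module-level structure. The sketch below computes the kernel on a tiny invented path graph via a truncated Taylor series; it illustrates the general technique, not the authors' STM code.

```python
# Path graph 0-1-2-3 (invented); L = D - A is its Laplacian.
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
n = len(A)
deg = [sum(row) for row in A]
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    return [[sum(X[i][q] * Y[q][j] for q in range(n)) for j in range(n)]
            for i in range(n)]

def diffusion_kernel(beta, terms=40):
    # K(beta) = exp(-beta * L) via a truncated Taylor series,
    # sum_k (-beta*L)^k / k!  -- adequate for tiny graphs and moderate beta.
    M = [[-beta * L[i][j] for j in range(n)] for i in range(n)]
    K = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in K]
    for k in range(1, terms):
        P = [[v / k for v in row] for row in mat_mul(P, M)]
        K = [[K[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return K

K_small = diffusion_kernel(0.1)   # small scale: mostly direct interactions
K_large = diffusion_kernel(2.0)   # large scale: module-level communication
```

At the small scale, the end nodes 0 and 3 barely communicate; at the larger scale their kernel entry grows, which is the effect the scale-space measures exploit.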
Sampling Large Graphs for Anticipatory Analytics
2015-05-15
low. C. Random Area Sampling. Random area sampling [8] is a “snowball” sampling method in which a set of random seed vertices are selected and areas... Sampling Large Graphs for Anticipatory Analytics. Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges
Zhao, Jian; Glueck, Michael; Breslav, Simon; Chevalier, Fanny; Khan, Azam
2017-01-01
User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate insights between analysts. We present annotation graphs, a dynamic graph visualization that enables meta-analysis of data based on user-authored annotations. The annotation graph topology encodes annotation semantics, which describe the content of and relations between data selections, comments, and tags. We present a mixed-initiative approach to graph layout that integrates an analyst's manual manipulations with an automatic method based on similarity inferred from the annotation semantics. Various visual graph layout styles reveal different perspectives on the annotation semantics. Annotation graphs are implemented within C8, a system that supports authoring annotations during exploratory analysis of a dataset. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas.
Self-organizing maps for learning the edit costs in graph matching.
Neuhaus, Michel; Bunke, Horst
2005-06-01
Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.
ERIC Educational Resources Information Center
Boote, Stacy K.
2014-01-01
This study examined how 12- and 13-year-old students' mathematics and science background knowledge affected line graph interpretations and how interpretations were affected by graph question levels. A purposive sample of 14 students engaged in think aloud interviews while completing an excerpted Test of Graphing in Science. Data were…
The Role of Microcomputer-Based Laboratories in Learning To Make Graphs of Distance and Velocity.
ERIC Educational Resources Information Center
Brasell, Heather
Two questions about the effects of microcomputer-based laboratory (MBL) activities on graphing skills were addressed in this study: (1) the extent to which activities help students link their concrete experiences with motion with graphic representations of these experiences; and (2) the degree of importance of the real-time aspect of the MBL in…
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced on the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is considered to recover the optimal registration parameters. Therefore, the method is gradient free, can encode various similarity metrics (simple changes to the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the potential of our approach.
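The registration-as-labeling formulation can be miniaturized: control points choose among candidate displacement labels, with a unary (data) cost and a pairwise smoothness cost. The sketch below uses invented 1-D signals and brute-force search in place of the paper's linear-programming solver, only to show the structure of the objective.

```python
from itertools import product

# Invented 1-D "volumes": target equals source shifted left by two samples.
source = [0, 1, 4, 9, 16, 25, 36, 49]
target = [4, 9, 16, 25, 36, 49, 0, 0]
labels = [0, 1, 2, 3]        # candidate displacement labels
points = [1, 4]              # control-point positions on the grid
ALPHA = 0.1                  # smoothness weight between neighbouring points

def unary(p, d):
    # data cost: local mismatch around control point p under displacement d
    cost = 0.0
    for off in (-1, 0, 1):
        i, j = p + off, p + off + d
        if 0 <= i < len(target) and 0 <= j < len(source):
            cost += (target[i] - source[j]) ** 2
    return cost

def best_assignment():
    # brute force over the label space stands in for the LP solver
    def total(assign):
        u = sum(unary(p, d) for p, d in zip(points, assign))
        s = ALPHA * sum((a - b) ** 2 for a, b in zip(assign, assign[1:]))
        return u + s
    return min(product(labels, repeat=len(points)), key=total)
```

Both control points correctly select the displacement label 2, the known shift between the two signals.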
A tool for filtering information in complex systems
Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.
2005-01-01
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. PMID:16027373
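The first stage of this filtering hierarchy, the minimum spanning tree of the correlation-based graph, is easy to sketch. The correlation matrix below is invented; the transform d = sqrt(2(1 − ρ)) is the standard way to turn correlations into distances before running Kruskal's algorithm. The planar filtered graph (genus 0) then enriches this backbone with extra loops and cliques.

```python
import math

# Invented correlation matrix for four assets.
corr = [
    [1.0, 0.8, 0.3, 0.1],
    [0.8, 1.0, 0.4, 0.2],
    [0.3, 0.4, 1.0, 0.7],
    [0.1, 0.2, 0.7, 1.0],
]
n = len(corr)

# Standard correlation-to-distance transform: d_ij = sqrt(2 * (1 - rho_ij)).
edges = sorted((math.sqrt(2 * (1 - corr[i][j])), i, j)
               for i in range(n) for j in range(i + 1, n))

# Kruskal's algorithm with path-compressed union-find builds the MST.
parent = list(range(n))

def find(u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

mst = []
for d, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        mst.append((i, j))
```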
A novel time series link prediction method: Learning automata approach
NASA Astrophysics Data System (ADS)
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2017-09-01
Link prediction is a key social network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a traditional approach that calculates a similarity metric for each non-connected link, sorts the links by their similarity metrics, and labels the links with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of the social network may not be appropriate. In the time-series link prediction problem, time series of link occurrences are used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
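A single two-action learning automaton of the kind described can be sketched in a few lines. This is a generic linear reward-inaction (L_RI) automaton run on invented binary occurrence series, not the authors' algorithm: correct predictions are rewarded by moving the action probability toward the chosen action, and wrong ones leave it unchanged.

```python
import random

def predict_link(series, alpha=0.1, seed=42):
    # One two-action linear reward-inaction (L_RI) automaton for one link:
    # action 1 = "link present", action 0 = "link absent".
    rng = random.Random(seed)
    p = 0.5                              # probability of choosing action 1
    for actual in series:                # stages 1 .. T-1 of the chain
        action = 1 if rng.random() < p else 0
        if action == actual:             # reward: move p toward the action
            p = p + alpha * (1 - p) if action == 1 else p * (1 - alpha)
        # penalty case (wrong prediction): L_RI leaves p unchanged
    return p                             # final belief that the link occurs

# A frequently occurring link should end with a much higher p than a rare one.
p_hot = predict_link([1, 1, 0, 1, 1, 1, 1, 0, 1, 1] * 10)
p_cold = predict_link([0, 0, 1, 0, 0, 0, 0, 1, 0, 0] * 10)
```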
A tool for filtering information in complex systems
NASA Astrophysics Data System (ADS)
Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.
2005-07-01
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. This paper was submitted directly (Track II) to the PNAS office. Abbreviations: MST, minimum spanning tree; PMFG, Planar Maximally Filtered Graph; r-clique, clique of r elements.
Two classes of bipartite networks: nested biological and social systems.
Burgos, Enrique; Ceva, Horacio; Hernández, Laura; Perazzo, R P J; Devoto, Mariano; Medan, Diego
2008-10-01
Bipartite graphs have received some attention in the study of social networks and of biological mutualistic systems. A generalization of a previous model is presented that evolves the topology of the graph in order to optimally account for a given contact preference rule between the two guilds of the network. As a result, social and biological graphs are classified as belonging to two clearly different classes. Projected graphs, linking the agents of only one guild, are obtained from the original bipartite graph. The corresponding evolution of their statistical properties is also studied. An example of a biological mutualistic network is analyzed in detail, and it is found that the model provides a very good fit of all the main statistical features. The model also provides a proper qualitative description of the same features observed in social webs, suggesting possible reasons underlying the difference in the organization of these two kinds of bipartite networks.
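The projection step mentioned above, linking agents of one guild whenever they share a partner in the other guild, is straightforward to sketch. The plant-pollinator edges below are invented.

```python
# Invented plant-pollinator bipartite network: each edge links the two guilds.
edges = [("P1", "a"), ("P1", "b"), ("P2", "b"), ("P2", "c"), ("P3", "c")]

def project(edges, guild):
    # Two members of one guild are linked in the projected graph when they
    # share at least one partner in the other guild.
    partners = {}
    for u, v in edges:
        node, other = (u, v) if guild == 0 else (v, u)
        partners.setdefault(node, set()).add(other)
    nodes = sorted(partners)
    return {(a, b) for idx, a in enumerate(nodes) for b in nodes[idx + 1:]
            if partners[a] & partners[b]}

plants = project(edges, 0)        # projection onto the first guild
pollinators = project(edges, 1)   # projection onto the second guild
```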
ERIC Educational Resources Information Center
Varela Mato, Veronica; Yates, Thomas; Stensel, David; Biddle, Stuart; Clemes, Stacy A.
2017-01-01
This study explored the validity of ActiGraph-determined sedentary time (<50 cpm, <100 cpm, <150 cpm, <200 cpm, <250 cpm) compared with the activPAL in a free-living sample of bus drivers. Twenty-eight participants were recruited between November 2013 and February 2014. Participants wore an activPAL3 and an ActiGraph GT3X+ concurrently…
Building Knowledge Graphs for NASA's Earth Science Enterprise
NASA Astrophysics Data System (ADS)
Zhang, J.; Lee, T. J.; Ramachandran, R.; Shi, R.; Bao, Q.; Gatlin, P. N.; Weigel, A. M.; Maskey, M.; Miller, J. J.
2016-12-01
Inspired by the Google Knowledge Graph, we have been building a prototype Knowledge Graph for Earth scientists, connecting information and data in NASA's Earth science enterprise. Our primary goal is to advance the state of the art in NASA knowledge extraction capability by going beyond traditional catalog search and linking different distributed information (such as data, publications, services, tools, and people). This will enable a more efficient pathway to knowledge discovery. While the Google Knowledge Graph provides impressive semantic search and aggregation capabilities, it is limited to search topics for the general public. We use a similar knowledge graph approach to semantically link information gathered from a wide variety of sources within the NASA Earth science enterprise. Our prototype serves as a proof of concept of the viability of building an operational "knowledge base" system for NASA Earth science. Information is pulled from structured sources (such as the NASA CMR catalog, GCMD, and Climate and Forecast Conventions) and unstructured sources (such as research papers). Leveraging modern techniques of machine learning, information retrieval, and deep learning, we provide an integrated data mining and information discovery environment to help Earth scientists use the best data, tools, methodologies, and models available to answer a hypothesis. Our knowledge graph would be able to answer questions like: Which articles discuss topics investigating similar hypotheses? How have these methods been tested for accuracy? Which approaches have been highly cited within the scientific community? What variables were used for this method, and what datasets were used to represent them? What processing was necessary to use this data? These questions then lead researchers and citizen scientists to investigate the sources where data can be found, available user guides, information on how the data was acquired, and available tools and models to use with this data.
As a proof of concept, we focus on a well-defined domain, Hurricane Science, linking research articles and their findings, data, people, and tools/services. Modern information retrieval, natural language processing, machine learning, and deep learning techniques are applied to build the knowledge network.
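At its core, a knowledge graph of this kind is a set of subject-predicate-object triples queried by pattern matching. The sketch below uses invented paper/dataset identifiers and a Python set in place of a real RDF triplestore, to show how a question like "which articles discuss similar topics?" becomes a join over triple patterns.

```python
# Miniature knowledge graph: facts as (subject, predicate, object) triples.
# All identifiers below are invented examples, not real catalog entries.
triples = {
    ("paper:42", "discusses", "topic:hurricane-intensity"),
    ("paper:7", "discusses", "topic:hurricane-intensity"),
    ("paper:42", "uses_dataset", "dataset:GPM"),
    ("paper:7", "uses_dataset", "dataset:MERRA-2"),
    ("dataset:GPM", "has_variable", "var:precipitation"),
}

def query(s=None, p=None, o=None):
    # SPARQL-style triple pattern matching: None acts as a wildcard.
    return {t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])}

# "Which articles discuss similar topics?" as a join over two patterns.
same_topic = {(a[0], b[0]) for a in query(p="discusses")
              for b in query(p="discusses") if a[2] == b[2] and a[0] < b[0]}
```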
Conjunctive Conceptual Clustering: A Methodology and Experimentation.
1987-09-01
…observing a typical restaurant table on which there are such objects as food on a plate, a salad, utensils, salt and pepper, napkins, a vase with flowers, a… colored graph has nodes and links that match only if they have corresponding link-color and node-color labels… Input file for attribute-based clustering. The
Graph regularized nonnegative matrix factorization for temporal link prediction in dynamic networks
NASA Astrophysics Data System (ADS)
Ma, Xiaoke; Sun, Penggang; Wang, Yu
2018-04-01
Many networks derived from society and nature are temporal and incomplete. The temporal link prediction problem in networks is to predict links at time T + 1 based on a given temporal network from time 1 to T, which is essential to important applications. Current algorithms predict the temporal links either by collapsing the dynamic networks or by collapsing features derived from each network, and are criticized for ignoring the connections among slices. To overcome this issue, we propose a novel graph regularized nonnegative matrix factorization algorithm (GrNMF) for the temporal link prediction problem that does not collapse the dynamic networks. To obtain the features for each network from 1 to T, GrNMF factorizes the matrix associated with each network while setting the remaining networks as regularization, which provides a better way to characterize the topological information of temporal links. Then, the GrNMF algorithm collapses the feature matrices to predict temporal links. Compared with state-of-the-art methods, the proposed algorithm exhibits significantly improved accuracy by avoiding the collapse of temporal networks. Experimental results on a number of artificial and real temporal networks illustrate that the proposed method is not only more accurate but also more robust than state-of-the-art approaches.
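The graph-regularization idea can be illustrated with the classic graph-regularized NMF multiplicative updates (in the style of Cai et al.), a simpler relative of GrNMF rather than the paper's algorithm. All matrices below are invented; the check of correctness is that the regularized objective decreases under the updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: columns of X are samples; A is an assumed sample adjacency.
X = np.abs(rng.normal(size=(8, 6))) + 0.1
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
lam, r, eps = 0.5, 2, 1e-9

# Objective: ||X - U V^T||_F^2 + lam * tr(V^T (D - A) V)
U = np.abs(rng.normal(size=(8, r))) + 0.1
V = np.abs(rng.normal(size=(6, r))) + 0.1

def objective(U, V):
    return (np.linalg.norm(X - U @ V.T) ** 2
            + lam * np.trace(V.T @ (D - A) @ V))

before = objective(U, V)
for _ in range(200):
    # classic multiplicative updates for graph-regularized NMF
    U *= (X @ V) / (U @ (V.T @ V) + eps)
    V *= (X.T @ U + lam * A @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
after = objective(U, V)
```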
Maximum efficiency of state-space models of nanoscale energy conversion devices
NASA Astrophysics Data System (ADS)
Einax, Mario; Nitzan, Abraham
2016-07-01
The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Information Retrieval and Graph Analysis Approaches for Book Recommendation.
Benkoussas, Chahinez; Bellot, Patrice
2015-01-01
A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic models such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a related-document network composed of social links. We call a network constructed from documents and the social information provided by each of them a Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
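PageRank itself, the graph-analysis ingredient reused here, fits in a short function. The sketch below runs the power iteration on an invented miniature DGD; the book identifiers are hypothetical.

```python
def pagerank(links, d=0.85, iters=60):
    # links: document -> list of outgoing edges (a tiny DGD).
    nodes = sorted(set(links) | {v for out in links.values() for v in out})
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links.get(u, [])
            if out:
                for v in out:                # share rank along out-links
                    new[v] += d * pr[u] / len(out)
            else:                            # dangling node: spread uniformly
                for v in nodes:
                    new[v] += d * pr[u] / n
        pr = new
    return pr

# Hypothetical book network: every other book links to "b1".
pr = pagerank({"b2": ["b1"], "b3": ["b1"], "b4": ["b1", "b2"]})
```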
Unapparent Information Revelation: Text Mining for Counterterrorism
NASA Astrophysics Data System (ADS)
Srihari, Rohini K.
Unapparent information revelation (UIR) is a special case of text mining that focuses on detecting possible links between concepts across multiple text documents by generating an evidence trail explaining the connection. A traditional search involving, for example, two or more person names will attempt to find documents mentioning both of these individuals. This research focuses on a different interpretation of such a query: what is the best evidence trail across documents that explains a connection between these individuals? For example, all may be good golfers. A generalization of this task involves query terms representing general concepts (e.g., indictment, foreign policy). Previous approaches to this problem have focused on graph mining involving hyperlinked documents and on link analysis exploiting named entities. A new robust framework is presented, based on (i) generating concept chain graphs, a hybrid content representation, (ii) performing graph matching to select candidate subgraphs, and (iii) subsequently using graphical models to validate hypotheses using ranked evidence trails. We adapt the DUC data set for cross-document summarization to evaluate evidence trails generated by this approach.
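An evidence trail in the sense described is, at its simplest, a shortest chain of co-occurring concepts linking the two query terms. The concept graph below is invented; breadth-first search stands in for the paper's concept-chain-graph machinery.

```python
from collections import deque

# Invented concept graph: an edge means two concepts co-occur in a document.
cooccur = {
    "person_A": ["golf_club", "charity_X"],
    "golf_club": ["person_A", "person_B"],
    "charity_X": ["person_A", "indictment"],
    "person_B": ["golf_club"],
    "indictment": ["charity_X"],
}

def evidence_trail(src, dst):
    # Breadth-first search returns a shortest chain of co-occurring concepts.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in cooccur.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Here the trail linking the two people runs through the golf club, mirroring the "all may be good golfers" example above.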
Distributed MPC based consensus for single-integrator multi-agent systems.
Cheng, Zhaomeng; Fan, Ming-Can; Zhang, Hai-Tao
2015-09-01
This paper addresses model predictive control schemes for consensus in multi-agent systems (MASs) with discrete-time single-integrator dynamics under switching directed interaction graphs. The control horizon is extended to be greater than one, which endows the closed-loop system with extra degrees of freedom. We derive sufficient conditions on the sampling period and the interaction graph to achieve consensus by using the properties of infinite products of stochastic matrices. Consensus can be achieved asymptotically if the sampling period is selected such that the interaction graphs among agents jointly have a directed spanning tree. Significantly, if the interaction graph always has a spanning tree, one can select an arbitrarily large sampling period to guarantee consensus. Finally, several simulations are conducted to illustrate the effectiveness of the theoretical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
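The basic consensus mechanism under a fixed graph with a spanning tree can be sketched directly: iterate x(k+1) = A x(k) with a row-stochastic A. The three-agent chain below (agent 2 is the root) is an invented example, not the paper's MPC scheme; it only illustrates why a spanning tree forces agreement.

```python
# Row-stochastic weight matrix for an invented three-agent directed graph:
# agent 0 listens to agent 1, agent 1 to agent 2; agent 2 (the root of the
# spanning tree) keeps its own value.
A = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
x = [1.0, 5.0, 9.0]                     # initial states

for _ in range(200):                    # x(k+1) = A x(k)
    x = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
```

All agents converge to the root's initial value, since information flows down the spanning tree while the root is influenced by no one.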
GOGrapher: A Python library for GO graph representation and analysis.
Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua
2009-07-07
The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open-source library that provides tools not only to create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. An object-oriented approach was adopted to organize the hierarchy of graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve.
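Independently of GOGrapher's actual API (not reproduced here), the core data structure, a DAG of terms with parent links plus protein annotations, can be sketched with plain dictionaries. All term and protein identifiers below are invented.

```python
# Invented GO-like DAG: child term -> parent terms; plus one annotation.
parents = {
    "GO:apoptosis": ["GO:cell_death"],
    "GO:autophagy": ["GO:cell_death"],
    "GO:cell_death": ["GO:biological_process"],
    "GO:biological_process": [],
}
annotations = {"protein_P53": ["GO:apoptosis"]}   # hypothetical link

def ancestors(term):
    # every term reachable along parent edges (the "true path" closure)
    seen, stack = set(), [term]
    while stack:
        for parent in parents.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# a protein annotated to a term is implicitly annotated to all its ancestors
implied = {t for term in annotations["protein_P53"]
           for t in {term} | ancestors(term)}
```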
A New Graph for Understanding Colors of Mudrocks and Shales.
ERIC Educational Resources Information Center
Myrow, Paul Michael
1990-01-01
Reasons for color in sedimentary rocks are explored. Graphs relating rock color to organic content and to the oxidation state of iron, and graphs of the temporal evolution of a rock sample, are presented. The development of these graphs is discussed. (CW)
Connections between the Sznajd model with general confidence rules and graph theory
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2012-10-01
The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
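The basic one-dimensional Sznajd update (a pair of agreeing neighbors converts the sites on either side) can be sketched as follows; this is the standard two-state chain rule, not the generalized confidence rules studied in the paper:

```python
import random

def sznajd_step(s, rng):
    """One update on a chain with free boundaries: pick a neighboring pair;
    if they agree, they convert the sites adjacent to the pair."""
    n = len(s)
    i = rng.randrange(n - 1)
    if s[i] == s[i + 1]:
        if i - 1 >= 0:
            s[i - 1] = s[i]
        if i + 2 < n:
            s[i + 2] = s[i]
    return s

rng = random.Random(0)
s = [1] * 20
s[10] = -1             # a lone dissenter can never form an agreeing pair,
for _ in range(5000):  # so the -1 opinion may shrink but never spread
    sznajd_step(s, rng)
```

This tiny example already shows the model's defining feature noted in the abstract: only agreeing groups exert influence, so isolated opinions cannot propagate.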
Learning a Health Knowledge Graph from Electronic Medical Records.
Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David
2017-07-20
Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records, and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, a naive Bayes classifier, and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters, and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph, reaching a precision of 0.85 at a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
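The noisy OR parameterization behind the best-performing model combines per-disease failure probabilities multiplicatively; a minimal sketch with made-up weights (the disease names and probabilities are illustrative, not learned values from the paper):

```python
def noisy_or(active_diseases, weights, leak=0.01):
    """P(symptom present | active diseases) under a noisy OR gate.
    Each active disease i independently fails to cause the symptom with
    probability 1 - weights[i]; 'leak' covers unmodeled causes."""
    p_absent = 1.0 - leak
    for d in active_diseases:
        p_absent *= 1.0 - weights[d]
    return 1.0 - p_absent

# Hypothetical disease -> P(symptom caused by that disease alone) weights.
w = {"flu": 0.6, "pneumonia": 0.8}
p_none = noisy_or([], w)                   # leak only
p_both = noisy_or(["flu", "pneumonia"], w) # 1 - 0.99 * 0.4 * 0.2
```

The gate is monotone: adding an active parent can only raise the symptom probability, which is part of why it yields interpretable disease-symptom edge weights.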
Graph partitions and cluster synchronization in networks of oscillators
Schaub, Michael T.; O’Clery, Neave; Billeh, Yazan N.; Delvenne, Jean-Charles; Lambiotte, Renaud; Barahona, Mauricio
2017-01-01
Synchronization over networks depends strongly on the structure of the coupling between the oscillators. When the coupling presents certain regularities, the dynamics can be coarse-grained into clusters by means of External Equitable Partitions of the network graph and their associated quotient graphs. We exploit this graph-theoretical concept to study the phenomenon of cluster synchronization, in which different groups of nodes converge to distinct behaviors. We derive conditions and properties of networks in which such clustered behavior emerges, and show that the ensuing dynamics is the result of the localization of the eigenvectors of the associated graph Laplacians linked to the existence of invariant subspaces. The framework is applied to both linear and non-linear models, first for the standard case of networks with positive edges, before being generalized to the case of signed networks with both positive and negative interactions. We illustrate our results with examples of both signed and unsigned graphs for consensus dynamics and for partial synchronization of oscillator networks under the master stability function as well as Kuramoto oscillators. PMID:27781454
A sampling algorithm for segregation analysis
Tier, Bruce; Henshall, John
2001-01-01
Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Markov chain Monte Carlo (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled, or was not found, its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated. PMID:11742631
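The Metropolis-Hastings step used here accepts a proposed configuration with probability min(1, ratio of target probabilities). A generic sketch of that acceptance rule on a toy two-state target (the actual descent-graph sampler needs pedigree-specific proposals, which are not reproduced here):

```python
import random

def metropolis_hastings(target, states, n_samples, rng):
    """Generic Metropolis-Hastings with a uniform, symmetric proposal."""
    x = states[0]
    samples = []
    for _ in range(n_samples):
        y = rng.choice(states)                  # propose a new state
        accept = min(1.0, target[y] / target[x])
        if rng.random() < accept:
            x = y                               # accept the move
        samples.append(x)                       # otherwise keep x
    return samples

rng = random.Random(42)
target = {"A": 0.2, "B": 0.8}  # unnormalized target probabilities
samples = metropolis_hastings(target, ["A", "B"], 20000, rng)
freq_b = samples.count("B") / len(samples)
```

Because only a probability ratio is needed, the method works even when the joint pedigree probability cannot be normalized, which is exactly the situation iterative peeling struggles with in complex pedigrees.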
Phase-Space Detection of Cyber Events
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez Jimenez, Jarilyn M; Ferber, Aaron E; Prowell, Stacy J
Energy Delivery Systems (EDS) are a network of processes that produce, transfer and distribute energy. EDS are increasingly dependent on networked computing assets, as are many Industrial Control Systems. Consequently, cyber-attacks pose a real and pertinent threat, as evidenced by Stuxnet, Shamoon and Dragonfly. Hence, there is a critical need for novel methods to detect, prevent, and mitigate effects of such attacks. To detect cyber-attacks in EDS, we developed a framework for gathering and analyzing timing data that involves establishing a baseline execution profile and then capturing the effect of perturbations in the state from injecting various malware. The data analysis was based on nonlinear dynamics and graph theory to improve detection of anomalous events in cyber applications. The goal was the extraction of changing dynamics or anomalous activity in the underlying computer system. Takens' theorem in nonlinear dynamics allows reconstruction of topologically invariant, time-delay-embedding states from the computer data in a sufficiently high-dimensional space. The resultant dynamical states were nodes, and the state-to-state transitions were links in a mathematical graph. Alternatively, sequential tabulation of executing instructions provides the nodes with corresponding instruction-to-instruction links. Graph theorems guarantee graph-invariant measures to quantify the dynamical changes in the running applications. Results showed a successful detection of cyber events.
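The pipeline described (time-delay embedding of a measured sequence, then a graph whose nodes are the observed states and whose links are state-to-state transitions) can be sketched as follows; the toy sequence stands in for real execution-timing data:

```python
def delay_embed(series, dim, delay):
    """Time-delay embedding: state vectors (x_t, x_{t-delay}, ...)."""
    start = (dim - 1) * delay
    return [tuple(series[t - k * delay] for k in range(dim))
            for t in range(start, len(series))]

def transition_graph(states):
    """Nodes are distinct states; directed edges are observed transitions."""
    nodes = set(states)
    edges = set(zip(states, states[1:]))
    return nodes, edges

# Toy measurement sequence with a repeating pattern.
series = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
states = delay_embed(series, dim=2, delay=1)
nodes, edges = transition_graph(states)
```

Graph-invariant measures (node count, edge count, degree statistics) computed on such a graph change when malware perturbs the underlying dynamics, which is the detection signal the abstract refers to.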
High-order graph matching based feature selection for Alzheimer's disease identification.
Liu, Feng; Suk, Heung-Il; Wee, Chong-Yaw; Chen, Huafu; Shen, Dinggang
2013-01-01
One of the main limitations of l1-norm feature selection is that it estimates the target vector for each sample individually, without considering relations with other samples. However, it is believed that the geometrical relations among target vectors in the training set provide useful information, and it is natural to expect that the predicted vectors have geometric relations similar to those of the target vectors. To overcome these limitations, we formulate feature selection as a graph-matching problem between a predicted graph and a target graph. In the predicted graph, a node is represented by a predicted vector that may describe regional gray matter volume or cortical thickness features; in the target graph, a node is represented by a target vector that includes the class label and clinical scores. In particular, we devise new regularization terms in sparse representation to impose high-order graph matching between the target vectors and the predicted ones. Finally, the selected regional gray matter volume and cortical thickness features are fused in kernel space for classification. Using the ADNI dataset, we evaluate the effectiveness of the proposed method and obtain accuracies of 92.17% and 81.57% in AD and MCI classification, respectively.
An automatic graph-based approach for artery/vein classification in retinal images.
Dashtbozorg, Behdad; Mendonça, Ana Maria; Campilho, Aurélio
2014-03-01
The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links). Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. These results demonstrate that our method outperforms recent approaches for A/V classification.
Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function.
Reimann, Michael W; Nolte, Max; Scolamiero, Martina; Turner, Katharine; Perin, Rodrigo; Chindemi, Giuseppe; Dłotko, Paweł; Levi, Ran; Hess, Kathryn; Markram, Henry
2017-01-01
The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.
graphkernels: R and Python packages for graph comparison
Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-01-01
Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902
graphkernels: R and Python packages for graph comparison.
Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-02-01
Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
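The Weisfeiler-Lehman kernel mentioned above repeatedly relabels each node with its own label plus the sorted multiset of neighbor labels, then compares label histograms. A compact pure-Python sketch of that idea (not the package's C++ implementation or API):

```python
from collections import Counter

def wl_relabel(labels, adj):
    """One Weisfeiler-Lehman iteration: new label = (old, sorted neighbors)."""
    return {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
            for v in labels}

def histogram_kernel(l1, l2):
    """Dot product of label histograms (the base WL feature map)."""
    c1, c2 = Counter(l1.values()), Counter(l2.values())
    return sum(c1[k] * c2[k] for k in c1)

# Two small labeled path graphs given as adjacency lists.
adj1 = {0: [1], 1: [0, 2], 2: [1]}   # path a-a-b
lab1 = {0: "a", 1: "a", 2: "b"}
adj2 = {0: [1], 1: [0, 2], 2: [1]}   # path a-b-b
lab2 = {0: "a", 1: "b", 2: "b"}

k0 = histogram_kernel(lab1, lab2)            # iteration-0 label histogram kernel
lab1_1, lab2_1 = wl_relabel(lab1, adj1), wl_relabel(lab2, adj2)
k1 = k0 + histogram_kernel(lab1_1, lab2_1)   # WL kernel after one iteration
```

Summing the histogram kernels over iterations is what makes WL sensitive to increasingly large neighborhood structure while staying linear-time per iteration.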
GOGrapher: A Python library for GO graph representation and analysis
Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua
2009-01-01
Background: The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. Findings: An object-oriented approach was adopted to organize the hierarchy of graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. Conclusion: The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve. PMID:19583843
Men's interpretations of graphical information in a videotape decision aid
Pylar, Jan; Wills, Celia E.; Lillie, Janet; Rovner, David R.; Kelly‐Blake, Karen; Holmes‐Rovner, Margaret
2007-01-01
Objective: To examine men's interpretations of graphical information types viewed in a high-quality, previously tested videotape decision aid (DA). Setting, participants, design: A community-dwelling sample of men >50 years of age (N = 188), balanced by education (college/non-college) and race (Black/White), were interviewed just after viewing a videotape DA. A descriptive study design was used to examine men's interpretations of a representative sample of the types of graphs shown in the benign prostatic hyperplasia videotape DA. Main variables studied: Men provided their interpretations of graph information presented in three formats that varied in complexity: pictograph, line graph and horizontal bar graph. Audiotape transcripts of men's responses were coded for meaning- and content-related interpretation statements. Results: Men provided both meaning- and content-focused interpretations of the graphs. Accuracy of interpretation was lower than hypothesized on the basis of the literature review (85.4% for the pictograph, 65.7% for the line graph, 47.8% for the horizontal bar graph). Accuracy for the pictograph and line graph was associated with education level (3.94, P = 0.047, and 7.55, P = 0.006, respectively). Accuracy was not associated with men's reported liking of the graphs (2.00, P = 0.441). Conclusion: While men generally liked the DA, accuracy of graph interpretation was associated with format complexity and education level. Graphs are often recommended to improve comprehension of information in DAs. However, additional evaluation is needed in experimental and naturalistic observational settings to develop best practice standards for data representation. PMID:17524011
Publishing Data on Physical Samples Using the GeoLink Ontology and Linked Data Platforms
NASA Astrophysics Data System (ADS)
Ji, P.; Arko, R. A.; Lehnert, K. A.; Song, L.; Carter, M. R.; Hsu, L.
2015-12-01
Interdisciplinary Earth Data Alliance (IEDA), one of the partners in the EarthCube GeoLink project, seeks to explore the extent to which the use of GeoLink reusable Ontology Design Patterns (ODPs) and linked data platforms in the IEDA data infrastructure can make research data more easily accessible and valuable. Linked data for the System for Earth Sample Registration (SESAR) is IEDA's first effort to show how linked data can enhance the presentation of the IEDA data system architecture. SESAR Linked Data maps each table and column in the SESAR database to an RDF class and property based on the GeoLink view, which builds on top of the GeoLink ODPs. D2RQ is then used to dump the contents of the SESAR database into RDF triples on the basis of the mapping results, and the dumped triples are loaded into GraphDB, an RDF graph database, as permanent data in the form of atomic facts expressed as subjects, predicates and objects, providing support for semantic interoperability between IEDA and other GeoLink partners. Finally, an integrated browsing and searching interface built on Callimachus, a highly scalable platform for publishing linked data, is introduced to make sense of the data stored in the triplestore. Drill-down and drill-through features are built into the interface to help users locate content efficiently. The drill-down feature enables users to explore beyond the summary information in the instance list of a specific class and into the detail on a specific instance page. The drill-through feature enables users to jump from one instance to another by clicking a link to the second instance nested in the first instance's page. Additionally, an OpenLayers map is embedded in the interface to enhance the presentation of instances that have geospatial information. Furthermore, by linking instances in the SESAR datasets to matching or corresponding instances in external datasets, the presentation has been enriched with additional information about related classes such as person, cruise, etc.
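The table-and-column to class-and-property mapping at the heart of this workflow (performed there by D2RQ) amounts to emitting one triple per non-key column. A stdlib sketch with made-up URIs and sample identifiers, not the actual SESAR or GeoLink vocabularies:

```python
# Map relational rows to RDF-style (subject, predicate, object) triples.
# All URIs and sample values below are illustrative.
BASE = "http://example.org/sample/"
VOCAB = "http://example.org/vocab/"

rows = [
    {"igsn": "IECUR0001", "name": "Basalt 12", "material": "Rock"},
    {"igsn": "IECUR0002", "name": "Core 7",    "material": "Sediment"},
]

def rows_to_triples(rows, key="igsn"):
    triples = []
    for row in rows:
        subject = BASE + row[key]                 # table row -> RDF subject
        triples.append((subject, "rdf:type", VOCAB + "PhysicalSample"))
        for col, value in row.items():
            if col != key:                        # column -> RDF property
                triples.append((subject, VOCAB + col, value))
    return triples

triples = rows_to_triples(rows)
```

Once every table is expressed this way, records from different relational databases share one graph model, which is what enables the cross-repository interoperability the abstract describes.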
Scale-free characteristics of random networks: the topology of the world-wide web
NASA Astrophysics Data System (ADS)
Barabási, Albert-László; Albert, Réka; Jeong, Hawoong
2000-06-01
The world-wide web forms a large directed graph, whose vertices are documents and edges are links pointing from one document to another. Here we demonstrate that despite its apparent random character, the topology of this graph has a number of universal scale-free characteristics. We introduce a model that leads to a scale-free network, capturing in a minimal fashion the self-organization processes governing the world-wide web.
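The model referred to is preferential attachment: each new node links to m existing nodes chosen with probability proportional to their current degree. A compact sketch (seed graph and parameters are illustrative):

```python
import random

def barabasi_albert(n, m, rng):
    """Grow a graph by preferential attachment: each new node attaches to
    m distinct existing nodes chosen proportionally to their degree."""
    edges = [(0, 1)]       # seed: two connected nodes
    attachment = [0, 1]    # node list in which each node appears once per degree
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(attachment))  # degree-proportional pick
        for t in targets:
            edges.append((new, t))
            attachment += [new, t]               # both endpoints gain a degree
    return edges

rng = random.Random(1)
edges = barabasi_albert(n=200, m=2, rng=rng)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
```

Early nodes accumulate far more links than latecomers, producing the heavy-tailed, scale-free degree distribution the abstract reports for the web graph.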
Building a SuAVE browse interface to R2R's Linked Data
NASA Astrophysics Data System (ADS)
Clark, D.; Stocks, K. I.; Arko, R. A.; Zaslavsky, I.; Whitenack, T.
2017-12-01
The Rolling Deck to Repository program (R2R) is creating and evaluating a new browse portal based on the SuAVE platform and the R2R linked data graph. R2R manages the underway sensor data collected by the fleet of US academic research vessels, and provides a discovery and access point to those data at its website, www.rvdata.us. R2R has a database-driven search interface, but seeks a more capable and extensible browse interface built on its substantial linked data resources. R2R's Linked Data graph organizes its data holdings around key concepts (e.g. cruise, vessel, device type, operator, award, organization, publication), anchored by persistent identifiers where feasible. The "Survey Analysis via Visual Exploration" or SuAVE platform (suave.sdsc.edu) is a system for online publication, sharing, and analysis of images and metadata. It has been implemented as an interface to diverse data collections, but has not previously been driven by linked data. SuAVE supports several features of interest to R2R, including faceted searching, collaborative annotations, efficient subsetting, Google Maps-like navigation over an image gallery, and several types of data analysis. Our initial SuAVE-based implementation was through a CSV export from the R2R PostGIS-enabled PostgreSQL database. This served to demonstrate the utility of SuAVE but was static and required reloading as R2R data holdings grew. We are now working to implement a SPARQL ("SPARQL Protocol and RDF Query Language") based service that directly leverages the R2R Linked Data graph and offers the ability to subset and/or customize output. We will show examples of SuAVE faceted searches on R2R linked data concepts, and discuss our experience to date with this work in progress.
A linked GeoData map for enabling information access
Powell, Logan J.; Varanka, Dalia E.
2018-01-10
Overview: The Geospatial Semantic Web (GSW) is an emerging technology that uses the Internet for more effective knowledge engineering and information extraction. Among the aims of the GSW are to structure the semantic specifications of data to reduce ambiguity and to link those data more efficiently. The data are stored as triples, the basic data unit in graph databases, which are similar to the vector data model of geographic information systems (GIS); that is, a node-edge-node model that forms a graph of semantically related information. The GSW is supported by emerging technologies such as linked geospatial data, described below, that enable it to store and manage geographical data that require new cartographic methods for visualization. This report describes a map that can interact with linked geospatial data using a simulation of a data query approach called the browsable graph to find information that is semantically related to a subject of interest, visualized using the Data Driven Documents (D3) library. Such a semantically enabled map functions as a map knowledge base (MKB) (Varanka and Usery, 2017). An MKB differs from a database in an important way. The central element of a triple, alternatively called the edge or property, is composed of a logic formalization that structures the relation between the first and third parts, the nodes or objects. Node-edge-node represents the graphic form of the triple, and the subject-property-object terms represent the data structure. Object classes connect to build a federated graph, similar to a network in visual form. Because the triple property is a logical statement (a predicate), the data graph represents logical propositions or assertions accepted to be true about the subject matter.
These logical formalizations can be manipulated to calculate new triples, representing inferred logical assertions, from the existing data. To demonstrate an MKB system, a technical proof-of-concept is developed that uses geographically attributed Resource Description Framework (RDF) serializations of linked data for mapping. The proof-of-concept focuses on accessing triple data from visual elements of a geographic map as the interface to the MKB. The map interface is embedded with other essential functions such as SPARQL Protocol and RDF Query Language (SPARQL) data query endpoint services and reasoning capabilities of Apache Marmotta (Apache Software Foundation, 2017). An RDF database of the Geographic Names Information System (GNIS), which contains official names of domestic features in the United States, was linked to a county data layer from The National Map of the U.S. Geological Survey. The county data are part of a broader Government Units theme offered to the public as Esri shapefiles. The shapefile used to draw the map itself was converted to a geographic-oriented JavaScript Object Notation (JSON) (GeoJSON) format and linked through various properties with a linked geodata version of the GNIS database called "GNIS–LD" (Butler and others, 2016; B. Regalia and others, University of California-Santa Barbara, written commun., 2017). The GNIS–LD files originated in Terse RDF Triple Language (Turtle) format but were converted to a JSON format specialized in linked data, "JSON–LD" (Beckett and Berners-Lee, 2011; Sporny and others, 2014). The GNIS–LD database is composed of roughly three predominant triple data graphs: Features, Names, and History. The graphs include a set of namespace prefixes used by each of the attributes.
Predefining the prefixes made the conversion to the JSON–LD format simple to complete because Turtle and JSON–LD are variant specifications of the basic RDF concept. To convert a shapefile into GeoJSON format and capture the geospatial coordinate geometry objects, an online converter, Mapshaper, was used (Bloch, 2013). To convert the Turtle files, a custom converter written in Java reconstructs the files by parsing each grouping of attributes belonging to one subject and pasting the data into a new file that follows the syntax of JSON–LD. Additionally, the Features file contained its own set of geometries, which was exported into a separate JSON–LD file along with its elevation value to form a fourth file, named "features-geo.json." Extracted data from external files can be represented in HyperText Markup Language (HTML) path objects. The goal was to import multiple JSON–LD files using this approach.
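The grouping step the custom converter performs (collect all attributes of one subject, emit one JSON-LD node object) can be sketched with the stdlib json module; the subject identifiers, prefixes, and context URI here are illustrative, not actual GNIS–LD content:

```python
import json

# Parsed (subject, predicate, object) triples, e.g. from a Turtle file.
triples = [
    ("gnis:1802710", "rdfs:label", "Denver"),
    ("gnis:1802710", "gnis:featureClass", "Civil"),
    ("gnis:765199",  "rdfs:label", "Pikes Peak"),
]

def triples_to_jsonld(triples, context):
    """Group triples by subject into JSON-LD node objects under @graph."""
    nodes = {}
    for s, p, o in triples:
        nodes.setdefault(s, {"@id": s})[p] = o
    return {"@context": context, "@graph": list(nodes.values())}

doc = triples_to_jsonld(triples, {"gnis": "http://example.org/gnis/"})
text = json.dumps(doc, indent=2)
```

Because the prefixes are declared once in @context, each grouped node object stays compact while remaining a faithful re-serialization of the original triples.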
Graph Theoretic Foundations of Multibody Dynamics Part I: Structural Properties
Jain, Abhinandan
2011-01-01
This is the first part of two papers that use concepts from graph theory to obtain a deeper understanding of the mathematical foundations of multibody dynamics. The key contribution is the development of a unifying framework that shows that key analytical results and computational algorithms in multibody dynamics are a direct consequence of structural properties and require minimal assumptions about the specific nature of the underlying multibody system. This first part focuses on identifying the abstract graph theoretic structural properties of spatial operator techniques in multibody dynamics. The second paper exploits these structural properties to develop a broad spectrum of analytical results and computational algorithms. Towards this, we begin with the notion of graph adjacency matrices and generalize it to define block-weighted adjacency (BWA) matrices and their 1-resolvents. Previously developed spatial operators are shown to be special cases of such BWA matrices and their 1-resolvents. These properties are shown to hold broadly for serial and tree topology multibody systems. Specializations of the BWA and 1-resolvent matrices are referred to as spatial kernel operators (SKO) and spatial propagation operators (SPO). These operators and their special properties provide the foundation for the analytical and algorithmic techniques developed in the companion paper. We also use the graph theory concepts to study the topology-induced sparsity structure of these operators and the system mass matrix. Similarity transformations of these operators are also studied. While the detailed development is done for the case of rigid-link multibody systems, the extension of these techniques to a broader class of systems (e.g. deformable links) is illustrated. PMID:22102790
Saund, Eric
2013-10-01
Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice—a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching, instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.
Customized Corneal Cross-Linking-A Mathematical Model.
Caruso, Ciro; Epstein, Robert L; Ostacolo, Carmine; Pacente, Luigi; Troisi, Salvatore; Barbaro, Gaetano
2017-05-01
To improve the safety, reproducibility, and depth of effect of corneal cross-linking with the ultraviolet A (UV-A) exposure time and fluence customized according to the corneal thickness. Twelve human corneas were used for the experimental protocol. They were soaked using a transepithelial (EPI-ON) technique using riboflavin with the permeation enhancer vitamin E-tocopheryl polyethylene glycol succinate. The corneas were then placed on microscope slides and irradiated at 3 mW/cm² for 30 minutes. The UV-A output parameters were measured to build a new equation describing the time-dependent loss of endothelial protection induced by riboflavin during cross-linking, as well as a pachymetry-dependent and exposure time-dependent prescription for input UV-A fluence. The proposed equation was used to establish graphs prescribing the maximum UV-A fluence input versus exposure time that always maintains corneal endothelium exposure below toxicity limits. Analysis modifying the Lambert-Beer law for riboflavin oxidation leads to graphs of the maximum safe level of UV-A radiation fluence versus the time applied and thickness of the treated cornea. These graphs prescribe UV-A fluence levels below 1.8 mW/cm² for corneas of thickness 540 μm down to 1.2 mW/cm² for corneas of thickness 350 μm. Irradiation times are typically below 15 minutes. The experimental and mathematical analyses establish the basis for graphs that prescribe maximum safe fluence and UV-A exposure time for corneas of different thicknesses. Because this clinically tested protocol specifies a corneal surface clear of shielding riboflavin on the corneal surface during UV-A irradiation, it allows for shorter UV-A irradiation time and lower fluence than in the Dresden protocol.
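The analysis starts from the standard Beer-Lambert attenuation of UV-A in an absorbing medium, which the paper then modifies for time-dependent riboflavin oxidation; in generic symbols (not the paper's modified equation):

```latex
% UV-A irradiance remaining at depth d in a medium with absorption
% coefficient \mu (dependent on riboflavin concentration):
I(d) = I_0 \, e^{-\mu d}
% Endothelial safety requires the irradiance reaching the full corneal
% thickness T to stay below a toxicity threshold I_{\max}:
I_0 \, e^{-\mu T} \le I_{\max}
% which bounds the admissible input level I_0 as a function of T,
% yielding thickness-dependent dosing graphs of the kind described.
```

Letting the effective absorption decay over time (as riboflavin is consumed) is what turns this static bound into the exposure-time-dependent prescription the abstract reports.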
Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information
NASA Astrophysics Data System (ADS)
Jamshidpour, N.; Homayouni, S.; Safari, A.
2017-09-01
Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues which dramatically degrade the performance of supervised classification. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pines and Pavia University data sets respectively.
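The merged-Laplacian step can be sketched in a few lines. This is a minimal sketch, assuming an unnormalized Laplacian L = D - W for each graph, a convex combination weight alpha, and a quadratic label-fitting penalty; the paper's actual graph construction and classifier may differ.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized Laplacian L = D - W of a symmetric affinity matrix."""
    return np.diag(W.sum(axis=1)) - W

def joint_label_propagation(W_spec, W_spat, y, labeled, alpha=0.5, reg=1.0):
    """Semi-supervised labels from a weighted sum of two graph Laplacians.

    W_spec, W_spat -- spectral and spatial affinity matrices (n x n)
    y              -- label vector (+1/-1), meaningful only where `labeled`
    labeled        -- boolean mask of labeled samples
    alpha, reg     -- assumed hyperparameters (graph weight, fit penalty)
    """
    L = alpha * graph_laplacian(W_spec) + (1 - alpha) * graph_laplacian(W_spat)
    M = np.diag(labeled.astype(float))
    # Minimize f^T L f + reg * ||f - y||^2 over the labeled nodes.
    f = np.linalg.solve(L + reg * M, reg * M @ y)
    return np.sign(f)
```

On a toy 4-node graph with two components and one label per component, the two unlabeled nodes inherit the label of their component.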
Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies
NASA Technical Reports Server (NTRS)
McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.
2010-01-01
This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for naturally occurring dust, jet mill ground dust, and ball mill ground dust.
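The divergence between number and volume distributions follows from the cubic scaling of particle volume with diameter. A toy illustration (spherical particles assumed; the sizes and counts are made up):

```python
import numpy as np

def number_to_volume_fraction(diameters_um, counts):
    """Convert a number distribution of particles to a volume distribution.

    Assumes spherical particles, so each size bin's volume weight scales
    with diameter cubed."""
    volumes = counts * np.asarray(diameters_um, dtype=float) ** 3
    return volumes / volumes.sum()

# A handful of coarse grains can carry as much volume as a million fine
# particles: here each bin holds exactly one third of the total volume
# despite a million-to-one spread in particle counts.
d = [0.5, 5.0, 50.0]          # bin diameters in micrometers
n = np.array([1_000_000, 1_000, 1])
vf = number_to_volume_fraction(d, n)
```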
Mathematical modeling of the malignancy of cancer using graph evolution.
Gunduz-Demir, Cigdem
2007-10-01
We report a novel computational method based on a graph evolution process to model the malignancy of the brain cancer called glioma. In this work, we analyze the phases that a graph passes through during its evolution and demonstrate a strong relation between the malignancy of cancer and the phase of its graph. From the photomicrographs of tissues, which are diagnosed as normal, low-grade cancerous, and high-grade cancerous, we construct cell-graphs based on the locations of cells; we probabilistically generate an edge between every pair of cells depending on the Euclidean distance between them. For a cell-graph, we extract connectivity information, including the properties of its connected components, in order to analyze the phase of the cell-graph. Working with brain tissue samples surgically removed from 12 patients, we demonstrate that cell-graphs generated for different tissue types evolve differently and that they exhibit different phase properties, which distinguish one tissue type from another.
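A minimal sketch of the cell-graph construction described above. The abstract only states that edge probability depends on the inter-cell Euclidean distance; the specific decay form min(1, a * d^(-alpha)) and its parameters are assumptions for illustration.

```python
import math
import random
from collections import Counter

def build_cell_graph(cells, a=1.0, alpha=1.0, rng=None):
    """Probabilistic cell-graph: link cells i and j with probability
    P(i, j) = min(1, a * d(i, j)**(-alpha)), decaying with their
    Euclidean distance d.  The decay form is an illustrative assumption."""
    rng = rng or random.Random(0)
    edges = []
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            d = math.dist(cells[i], cells[j])
            if rng.random() < min(1.0, a * d ** (-alpha)):
                edges.append((i, j))
    return edges

def connected_components(n, edges):
    """Component sizes (largest first), the kind of connectivity summary
    analyzed to characterize a cell-graph's phase."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return sorted(Counter(find(i) for i in range(n)).values(), reverse=True)
```

Two tight clusters of cells placed far apart yield two dense components with (almost surely) no cross links, mimicking how local cell density shapes the graph.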
Feedback topology and XOR-dynamics in Boolean networks with varying input structure
NASA Astrophysics Data System (ADS)
Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.
2009-08-01
We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with an XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ, this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
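The parallel XOR update rule is simple to state concretely. A minimal sketch, with the assumption that nodes receiving no inputs hold their value fixed:

```python
def xor_step(state, inputs):
    """One parallel update: node i becomes the XOR (sum mod 2) of its inputs.

    state  -- tuple of 0/1 values
    inputs -- inputs[i] lists the nodes feeding node i; a node with no
              inputs is assumed frozen at its current value
    """
    return tuple(
        state[i] if not inp else sum(state[j] for j in inp) % 2
        for i, inp in enumerate(inputs)
    )

def find_period(state, inputs, max_steps=1024):
    """Length of the attractor cycle reached from `state`; XOR dynamics are
    linear over GF(2), so every trajectory is eventually periodic."""
    seen = {}
    t = 0
    while state not in seen and t < max_steps:
        seen[state] = t
        state = xor_step(state, inputs)
        t += 1
    return t - seen[state]
```

On a disjoint loop where each node copies its single predecessor (the feedback structure highlighted near the transition point), a nonzero state cycles with period equal to the loop length.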
Time series analysis of the developed financial markets' integration using visibility graphs
NASA Astrophysics Data System (ADS)
Zhuang, Enyu; Small, Michael; Feng, Gang
2014-09-01
A time series representing the developed financial markets' segmentation from 1973 to 2012 is studied. The time series reveals an obvious market integration trend. To further uncover the features of this time series, we divide it into seven windows and generate seven visibility graphs. The measuring capabilities of the visibility graphs provide means to quantitatively analyze the original time series. It is found that the important historical incidents that influenced market integration coincide with variations in the measured graphical node degree. Through the measure of neighborhood span, the frequencies of the historical incidents are disclosed. Moreover, it is also found that large "cycles" and significant noise in the time series are linked to large and small communities in the generated visibility graphs. For large cycles, how historical incidents significantly affected market integration is distinguished by density and compactness of the corresponding communities.
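For reference, the natural visibility graph construction (Lacasa et al.) that underlies this analysis links two time points whenever every intermediate point lies strictly below the straight line joining them. A brute-force O(n³) sketch:

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: points (i, y_i) and
    (j, y_j) are linked if all intermediate points lie strictly below
    the straight line joining them."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            ):
                edges.add((i, j))
    return edges

def degrees(n, edges):
    """Node degrees, the graph measure tracked against historical incidents."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg
```

In the series [1, 3, 2, 4], the peak at index 1 sees three other points, while the first point is blocked by that peak from seeing anything beyond its neighbor.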
Large-scale quantum networks based on graphs
NASA Astrophysics Data System (ADS)
Epping, Michael; Kampermann, Hermann; Bruß, Dagmar
2016-05-01
Society relies and depends increasingly on information exchange and communication. In the quantum world, security and privacy is a built-in feature for information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.
Proximity Networks and Epidemics
NASA Astrophysics Data System (ADS)
Guclu, Hasan; Toroczkai, Zoltán
2007-03-01
We presented the basis of a framework to account for the dynamics of contacts in epidemic processes, through the notion of dynamic proximity graphs. By varying the integration time-parameter T, which is the period of infectivity, one can give a simple account of some of the differences in the observed contact networks for different diseases, such as smallpox or AIDS. Our simplistic model also seems to shed some light on the shape of the degree distribution of the measured people-people contact network from the EPISIM data. We certainly do not claim that the simplistic graph integration model above is a good model for dynamic contact graphs. It only contains the essential ingredients for such processes to produce a qualitative agreement with some observations. We expect that further refinements and extensions of this picture, in particular deriving the link probabilities in the dynamic proximity graph from more realistic contact dynamics, should improve the agreement between models and data.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure, which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive to wide "depressions" in the input time series.
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1995-01-01
Intelligent systems require software incorporating probabilistic reasoning, and often learning. Networks provide a framework and methodology for creating this kind of software. This paper introduces network models based on chain graphs with deterministic nodes. Chain graphs are defined as a hierarchical combination of Bayesian and Markov networks. To support learning, plates are introduced on chain graphs to model independent samples. The paper concludes by discussing various operations that can be performed on chain graphs with plates, either as a simplification process or to generate learning algorithms.
Quantum gravity as an information network self-organization of a 4D universe
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-10-01
I propose a quantum gravity model in which the fundamental degrees of freedom are information bits for both discrete space-time points and the links connecting them. The Hamiltonian is a very simple network model consisting of a ferromagnetic Ising model for space-time vertices and an antiferromagnetic Ising model for the links. As a result of the frustration between these two terms, the ground state self-organizes as a new type of low-clustering graph with finite Hausdorff dimension 4. The spectral dimension is lower than the Hausdorff dimension: it coincides with the Hausdorff dimension 4 at a first quantum phase transition, corresponding to an IR fixed point, while at a second quantum phase transition, which describes small scales, space-time dissolves into disordered information bits. The large-scale dimension 4 of the universe is related to the upper critical dimension 4 of the Ising model. At finite temperatures the universe graph emerges without a big bang and without singularities from a ferromagnetic phase transition in which space-time itself forms out of a hot soup of information bits. When the temperature is lowered, the universe graph unfolds and expands by lowering its connectivity, a mechanism I have called topological expansion. The model admits topological black hole excitations corresponding to graphs containing holes with no space-time inside and with "Schwarzschild-like" horizons with a lower spectral dimension.
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
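The neutral baseline that the ancestral selection graph generalizes, Kingman's coalescent, can be simulated in a few lines; the branching events that selection adds are not modeled in this sketch.

```python
import random

def kingman_tmrca(n, rng=None):
    """Sample the time to the most recent common ancestor of n lineages
    under Kingman's neutral coalescent: while k lineages remain, the next
    merger occurs after an Exponential(k*(k-1)/2) waiting time (time in
    units of population size)."""
    rng = rng or random.Random()
    t = 0.0
    k = n
    while k > 1:
        rate = k * (k - 1) / 2
        t += rng.expovariate(rate)
        k -= 1
    return t

# The expected TMRCA is 2 * (1 - 1/n), so a Monte Carlo mean for n = 10
# should settle near 1.8.
```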
Verification of hypergraph states
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Takeuchi, Yuki; Hayashi, Masahito
2017-12-01
Hypergraph states are generalizations of graph states where controlled-Z gates on edges are replaced with generalized controlled-Z gates on hyperedges. Hypergraph states have several advantages over graph states. For example, certain hypergraph states, such as the Union Jack states, are universal resource states for measurement-based quantum computing with only Pauli measurements, while graph state measurement-based quantum computing needs non-Clifford basis measurements. Furthermore, it is impossible to classically efficiently sample measurement results on hypergraph states unless the polynomial hierarchy collapses to the third level. Although several protocols have been proposed to verify graph states with only sequential single-qubit Pauli measurements, there was no verification method for hypergraph states. In this paper, we propose a method for verifying a certain class of hypergraph states with only sequential single-qubit Pauli measurements. Importantly, no i.i.d. property of samples is assumed in our protocol: any artificial entanglement among samples cannot fool the verifier. As applications of our protocol, we consider verified blind quantum computing with hypergraph states, and quantum computational supremacy demonstrations with hypergraph states.
The structured ancestral selection graph and the many-demes limit.
Slade, Paul F; Wakeley, John
2005-02-01
We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, and we have also made the assumptions of island-type migration and that demes are equivalent in size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.
Pan, Yongke; Niu, Wenjia
2017-01-01
Semisupervised Discriminant Analysis (SDA) is a semisupervised dimensionality reduction algorithm, which can easily resolve the out-of-sample problem. Related works usually focus on the geometric relationships of data points, which are not obvious, to enhance the performance of SDA. Different from these related works, the regularized graph construction is researched here, which is important in graph-based semisupervised learning methods. In this paper, we propose a novel graph for Semisupervised Discriminant Analysis, which is called the combined low-rank and k-nearest neighbor (LRKNN) graph. In our LRKNN graph, we map the data to the LR feature space and then kNN is adopted to satisfy the algorithmic requirements of SDA. Since the low-rank representation can capture the global structure and the k-nearest neighbor algorithm can maximally preserve the local geometrical structure of the data, the LRKNN graph can significantly improve the performance of SDA. Extensive experiments on several real-world databases show that the proposed LRKNN graph is an efficient graph constructor, which can largely outperform other commonly used baselines. PMID:28316616
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
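For intuition about what the unit-cost edit distance measures, the sketch below computes it exactly for tiny unlabeled graphs by brute force over vertex bijections; the paper's binary linear program and its polynomial-time bounds are what make the computation practical at scale.

```python
from itertools import permutations

def edit_distance_unlabeled(n, edges1, edges2):
    """Exact unit-cost edit distance between two unlabeled undirected graphs
    on the same n vertices: minimize, over vertex bijections, the number of
    edge insertions plus deletions.  Brute force, exponential in n."""
    E1 = {frozenset(e) for e in edges1}
    best = None
    for perm in permutations(range(n)):
        E2 = {frozenset((perm[u], perm[v])) for u, v in edges2}
        cost = len(E1 ^ E2)  # symmetric difference = edits needed
        best = cost if best is None else min(best, cost)
    return best
```

A triangle and a 3-vertex path differ by exactly one edge deletion, so their distance is 1; the distance is symmetric and zero between identical graphs, consistent with the metric property proven in the paper.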
Graph reconstruction using covariance-based methods.
Sulaimanov, Nurgazy; Koeppl, Heinz
2016-12-01
Methods based on correlation and partial correlation are today employed in the reconstruction of a statistical interaction graph from high-throughput omics data. These dedicated methods work well even for the case when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related by using Neumann series and transitive closure and through discussing concrete small examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are directly estimated from the data.
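The basic contrast between the two graph-extraction routes can be sketched as follows: threshold the correlation matrix, or threshold partial correlations obtained from the inverse covariance (precision) matrix. The chain example and the threshold value are illustrative.

```python
import numpy as np

def correlation_graph(X, tau):
    """Edges where the absolute Pearson correlation exceeds tau."""
    C = np.corrcoef(X, rowvar=False)
    p = C.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p) if abs(C[i, j]) > tau}

def partial_correlation_graph(X, tau):
    """Edges from the precision matrix K: the partial correlation of
    variables i and j given all others is -K_ij / sqrt(K_ii * K_jj)."""
    K = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)
    p = K.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p) if abs(P[i, j]) > tau}

# Chain x1 -> x2 -> x3: x1 and x3 are marginally correlated but conditionally
# independent given x2, so only the partial-correlation graph drops edge (0, 2).
rng = np.random.default_rng(0)
x1 = rng.standard_normal(2000)
x2 = x1 + rng.standard_normal(2000)
x3 = x2 + rng.standard_normal(2000)
X = np.column_stack([x1, x2, x3])
edges_corr = correlation_graph(X, tau=0.3)
edges_partial = partial_correlation_graph(X, tau=0.3)
```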
Queues on a Dynamically Evolving Graph
NASA Astrophysics Data System (ADS)
Mandjes, Michel; Starreveld, Nicos J.; Bekker, René
2018-04-01
This paper considers a population process on a dynamically evolving graph, which can be alternatively interpreted as a queueing network. The queues are of infinite-server type, entailing that at each node all customers present are served in parallel. The links that connect the queues have the special feature that they are unreliable, in the sense that their status alternates between `up' and `down'. If a link between two nodes is down, with a fixed probability each of the clients attempting to use that link is lost; otherwise the client remains at the origin node and reattempts using the link (and jumps to the destination node when it finds the link restored). For these networks we present the following results: (a) a system of coupled partial differential equations that describes the joint probability generating function corresponding to the queues' time-dependent behavior (and a system of ordinary differential equations for its stationary counterpart), (b) an algorithm to evaluate the (time-dependent and stationary) moments, and procedures to compute user-perceived performance measures which facilitate the quantification of the impact of the links' outages, (c) a diffusion limit for the joint queue length process. We include explicit results for a series of relevant special cases, such as tandem networks and symmetric fully connected networks.
A graph model for preventing railway accidents based on the maximal information coefficient
NASA Astrophysics Data System (ADS)
Shao, Fubo; Li, Keping
2017-01-01
A number of factors influence railway safety. Identifying the important influencing factors and building the relationship between railway accidents and those factors is therefore important work. The maximal information coefficient (MIC) is a good measure of dependence for two-variable relationships, capable of capturing a wide range of associations. Employing MIC, a graph model is proposed for preventing railway accidents that avoids complex mathematical computation. In the graph, nodes denote influencing factors of railway accidents and edges represent the dependence between the two linked factors. As the dependence level increases, the graph changes from a globally coupled graph to isolated points. Moreover, the important influencing factors, which are the key quantities to monitor, are identified from among the many factors. Then the relationship between railway accidents and the important influencing factors is obtained by employing artificial neural networks. With this relationship, a warning mechanism is built by defining a dangerous zone. If the related factors fall into the dangerous zone during railway operations, the warning level should be raised. The warning mechanism can prevent railway accidents and promote railway safety.
Breaking of Ensemble Equivalence in Networks
NASA Astrophysics Data System (ADS)
Squartini, Tiziano; de Mol, Joey; den Hollander, Frank; Garlaschelli, Diego
2015-12-01
It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can manifest itself also in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity, (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general.
Reflecting on Graphs: Attributes of Graph Choice and Construction Practices in Biology
Angra, Aakanksha; Gardner, Stephanie M.
2017-01-01
Undergraduate biology education reform aims to engage students in scientific practices such as experimental design, experimentation, and data analysis and communication. Graphs are ubiquitous in the biological sciences, and creating effective graphical representations involves quantitative and disciplinary concepts and skills. Past studies document student difficulties with graphing within the contexts of classroom or national assessments without evaluating student reasoning. Operating under the metarepresentational competence framework, we conducted think-aloud interviews to reveal differences in reasoning and graph quality between undergraduate biology students, graduate students, and professors in a pen-and-paper graphing task. All professors planned and thought about data before graph construction. When reflecting on their graphs, professors and graduate students focused on the function of graphs and experimental design, while most undergraduate students relied on intuition and data provided in the task. Most undergraduate students meticulously plotted all data with scaled axes, while professors and some graduate students transformed the data, aligned the graph with the research question, and reflected on statistics and sample size. Differences in reasoning and approaches taken in graph choice and construction corroborate and extend previous findings and provide rich targets for undergraduate and graduate instruction. PMID:28821538
Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen
2017-01-01
An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
NASA Astrophysics Data System (ADS)
Nesvold, E.; Mukerji, T.
2017-12-01
River deltas display complex channel networks that can be characterized through the framework of graph theory, as shown by Tejedor et al. (2015). Deltaic patterns may also be useful in a Bayesian approach to uncertainty quantification of the subsurface, but this requires a prior distribution of the networks of ancient deltas. By considering subaerial deltas, one can at least obtain a snapshot in time of the channel network spectrum across deltas. In this study, the directed graph structure is semi-automatically extracted from satellite imagery using techniques from statistical processing and machine learning. Once the network is labeled with vertices and edges, spatial trends and width and sinuosity distributions can also be found easily. Since imagery is inherently 2D, computational sediment transport models can serve as a link between 2D network structure and 3D depositional elements; the numerous empirical rules and parameters built into such models makes it necessary to validate the output with field data. For this purpose we have used a set of 110 modern deltas, with average water discharge ranging from 10 - 200,000 m3/s, as a benchmark for natural variability. Both graph theoretic and more general distributions are established. A key question is whether it is possible to reproduce this deltaic network spectrum with computational models. Delft3D was used to solve the shallow water equations coupled with sediment transport. The experimental setup was relatively simple; incoming channelized flow onto a tilted plane, with varying wave and tidal energy, sediment types and grain size distributions, river discharge and a few other input parameters. Each realization was run until a delta had fully developed: between 50 and 500 years (with a morphology acceleration factor). It is shown that input parameters should not be sampled independently from the natural ranges, since this may result in deltaic output that falls well outside the natural spectrum. 
Since we are interested in studying the patterns occurring in nature, ideas are proposed for how to sample computer realizations that match this distribution. By establishing a link between surface based patterns from the field with the associated subsurface structure from physics-based models, this is a step towards a fully Bayesian workflow in subsurface simulation.
Analysis Tools for Interconnected Boolean Networks With Biological Applications.
Chaves, Madalena; Tournier, Laurent
2018-01-01
Boolean networks with asynchronous updates are a class of logical models particularly well adapted to describe the dynamics of biological networks with uncertain measures. The state space of these models can be described by an asynchronous state transition graph, which represents all the possible exits from every single state, and gives a global image of all the possible trajectories of the system. In addition, the asynchronous state transition graph can be associated with an absorbing Markov chain, further providing a semi-quantitative framework where it becomes possible to compute probabilities for the different trajectories. For large networks, however, such direct analyses become computationally intractable, given the exponential dimension of the graph. Exploiting the general modularity of biological systems, we have introduced the novel concept of an asymptotic graph, computed as an interconnection of several asynchronous transition graphs and recovering all asymptotic behaviors of a large interconnected system from the behavior of its smaller modules. From a modeling point of view, the interconnection of networks is very useful to address, for instance, the interplay between known biological modules and to test different hypotheses on the nature of their mutual regulatory links. This paper develops two new features of this general methodology: a quantitative dimension is added to the asymptotic graph, through the computation of relative probabilities for each final attractor, and a companion cross-graph is introduced to complement the method from a theoretical point of view.
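A toy sketch of an asynchronous state transition graph for a two-gene toggle switch; only fixed-point attractors are detected here, whereas attractors in general also include cycles.

```python
from itertools import product

def async_stg(update_fns):
    """Asynchronous state transition graph of a Boolean network: from each
    state, one outgoing transition per variable whose update would flip it."""
    n = len(update_fns)
    graph = {}
    for state in product((0, 1), repeat=n):
        succ = []
        for i, f in enumerate(update_fns):
            v = f(state)
            if v != state[i]:
                succ.append(state[:i] + (v,) + state[i + 1:])
        graph[state] = succ
    return graph

def fixed_points(graph):
    """States with no outgoing transitions (the simplest attractors)."""
    return [s for s, succ in graph.items() if not succ]

# Mutual repression: x0 = NOT x1, x1 = NOT x0 -- the classic toggle switch.
toggle = [lambda s: 1 - s[1], lambda s: 1 - s[0]]
```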
Graph-theoretic strengths of contextuality
NASA Astrophysics Data System (ADS)
de Silva, Nadish
2017-03-01
Cabello-Severini-Winter and Abramsky-Hardy (building on the framework of Abramsky-Brandenburger) both provide classes of Bell and contextuality inequalities for very general experimental scenarios using vastly different mathematical techniques. We review both approaches, carefully detail the links between them, and give simple, graph-theoretic methods for finding inequality-free proofs of nonlocality and contextuality and for finding states exhibiting strong nonlocality and/or contextuality. Finally, we apply these methods to concrete examples in stabilizer quantum mechanics relevant to understanding contextuality as a resource in quantum computation.
Wedge sampling for computing clustering coefficients and triangle counts on large graphs
Seshadhri, C.; Pinar, Ali; Kolda, Tamara G.
2014-05-08
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Despite the importance of these triadic measures, algorithms to compute them can be extremely expensive. We discuss the method of wedge sampling. This versatile technique allows for the fast and accurate approximation of various types of clustering coefficients and triangle counts. Furthermore, these techniques are extensible to counting directed triangles in digraphs. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state of the art, while providing nearly the accuracy of full enumeration.
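The core idea of wedge sampling can be sketched as follows. This is a simplified illustration under our own conventions, not the paper's implementation: a wedge is a path u-v-w centered at v, it is closed iff u and w are adjacent, and the global clustering coefficient is the fraction of closed wedges.

```python
import random

def wedge_sample_clustering(adj, n_samples=2000, seed=0):
    """Estimate the global clustering coefficient by sampling wedges
    uniformly.  adj: dict node -> set of neighbours."""
    rng = random.Random(seed)
    centres = [v for v in adj if len(adj[v]) >= 2]
    # Weight each centre by its wedge count d*(d-1)/2 so that sampling
    # a centre then a neighbour pair yields a uniform random wedge.
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in centres]
    closed = 0
    for _ in range(n_samples):
        v = rng.choices(centres, weights=weights)[0]
        u, w = rng.sample(sorted(adj[v]), 2)
        if w in adj[u]:          # wedge is closed -> it forms a triangle
            closed += 1
    return closed / n_samples

# In the complete graph K4 every wedge is closed, so the estimate is 1.
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
est = wedge_sample_clustering(k4)
```

Sampling avoids enumerating all wedges, which is what makes the technique fast on large graphs; the estimator's error shrinks with the number of samples.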
NASA Astrophysics Data System (ADS)
Chen, Zigang; Li, Lixiang; Peng, Haipeng; Liu, Yuhong; Yang, Yixian
2018-04-01
Community mining for complex social networks with link and attribute information plays an important role according to different application needs. In this paper, based on the general non-negative matrix factorization (GNMF) algorithm without dimension matching constraints proposed in our previous work, we propose the joint GNMF with graph Laplacian (LJGNMF) to implement community mining of complex social networks with link and attribute information according to different application needs. Theoretical derivation shows that the proposed LJGNMF is fully compatible with previous methods of integrating traditional NMF and symmetric NMF. In addition, experimental results show that the proposed LJGNMF can meet the needs of different community mining tasks by adjusting its parameters, and that it outperforms traditional NMF in terms of community vertex attribute entropy.
Social capital calculations in economic systems: Experimental study
NASA Astrophysics Data System (ADS)
Chepurov, E. G.; Berg, D. B.; Zvereva, O. M.; Nazarova, Yu. Yu.; Chekmarev, I. V.
2017-11-01
The paper describes a social capital study for a system whose actors are engaged in an economic activity. The focus is on the analysis of the structural parameters of communications (transactions) between the actors. Comparison between the transaction network graph structure and the structure of a random Bernoulli graph of the same dimension and density reveals specific structural features of the economic system under study. The structural analysis is based on SNA methodology (SNA - Social Network Analysis). It is shown that structural parameter values of the graph formed by agent relationship links may well characterize different aspects of the social capital structure. The research advocates distinguishing between each agent's social capital and the social capital of the system as a whole.
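The null-model comparison described above can be sketched in a few lines: match the density of the observed transaction graph, then generate a Bernoulli (Erdős–Rényi) graph of the same dimension. The six-actor edge list is illustrative, not data from the study.

```python
import random
from itertools import combinations

# Hypothetical transaction network on 6 actors (edge list is ours).
observed = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5)]}
n = 6

def density(n_nodes, edges):
    """Fraction of the n*(n-1)/2 possible links that are present."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

def bernoulli_graph(n_nodes, p, seed=0):
    """Bernoulli random graph: each pair is linked independently with prob p."""
    rng = random.Random(seed)
    return {frozenset(pair) for pair in combinations(range(n_nodes), 2)
            if rng.random() < p}

p_obs = density(n, observed)                 # match the observed density
reference = bernoulli_graph(n, p_obs, seed=1)  # null model, same dimension
```

Structural statistics (clustering, path lengths, degree spread) computed on `observed` and on an ensemble of such `reference` graphs can then be contrasted, which is the essence of the comparison the abstract describes.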
Spectra of Adjacency Matrices in Networks of Extreme Introverts and Extroverts
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Ezzatabadipour, Mohammadmehdi; Zia, R. K. P.
Many interesting properties were discovered in recent studies of preferred degree networks, suitable for describing the social behavior of individuals who tend to prefer a certain number of contacts. In an extreme version (coined the XIE model), introverts always cut links while extroverts always add them. While the intra-group links are static, the cross-links are dynamic and lead to an ensemble of bipartite graphs, with extraordinary correlations between elements of the incidence matrix, n_ij. In the steady state, this system can be regarded as one in thermal equilibrium with long-ranged interactions between the n_ij's, and it displays an extreme Thouless effect. Here, we report simulation studies of a different aspect of these networks, namely, the spectra associated with this ensemble of adjacency matrices {a_ij}. As a baseline, we first consider the spectra associated with a simple random (Erdős-Rényi) ensemble of bipartite graphs, where simulation results can be understood analytically. Work supported by the NSF through Grant DMR-1507371.
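The adjacency matrix of a bipartite graph has the block form A = [[0, B], [Bᵀ, 0]], where B is the incidence matrix between the two groups. As a checkable sketch (with a fixed complete bipartite graph K_{2,2} rather than a random ensemble, and a plain power iteration instead of a full diagonalization), the largest eigenvalue can be computed as follows:

```python
# Incidence matrix of K_{2,2}; its block adjacency matrix has largest
# eigenvalue 2 (eigenvalues of bipartite graphs come in +/- pairs).
B = [[1, 1], [1, 1]]
n1, n2 = len(B), len(B[0])
n = n1 + n2
A = [[0] * n for _ in range(n)]
for i in range(n1):
    for j in range(n2):
        A[i][n1 + j] = A[n1 + j][i] = B[i][j]

def power_iteration(M, steps=200):
    """Largest-magnitude eigenvalue of a symmetric matrix via power
    iteration and a final Rayleigh quotient."""
    size = len(M)
    v = [1.0] * size
    for _ in range(steps):
        w = [sum(M[i][j] * v[j] for j in range(size)) for i in range(size)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    num = sum(v[i] * sum(M[i][j] * v[j] for j in range(size)) for i in range(size))
    den = sum(x * x for x in v)
    return num / den

lam_max = power_iteration(A)
```

For the random ensembles studied in the abstract one would draw many incidence matrices B and histogram all eigenvalues; the construction of A and the spectral computation stay the same.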
Efficient Wide Baseline Structure from Motion
NASA Astrophysics Data System (ADS)
Michelini, Mario; Mayer, Helmut
2016-06-01
This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (an approximate) camera calibration is assumed to be known. The focus lies on a complete linking of images even in the case of large image distortions, e.g., caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed that formulates image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
Probabilistic generation of random networks taking into account information on motifs occurrence.
Bois, Frederic Y; Gayraud, Ghislaine
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli.
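A minimal sketch of the idea, not the authors' method: a Metropolis edge-toggle chain whose stationary distribution favors graphs with a prescribed motif count. Here the motif is the triangle and the target distribution π(G) ∝ exp(-β·|T(G) - target|) is our own illustrative choice.

```python
import math
import random
from itertools import combinations

def triangle_count(edges, n):
    """Count triangles in an undirected graph given as a set of frozensets."""
    return sum(1 for a, b, c in combinations(range(n), 3)
               if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= edges)

def sample_graph(n, target, beta=2.0, steps=3000, seed=0):
    """Metropolis chain: propose toggling one random edge, accept with
    probability min(1, exp(-beta * (new distance - old distance)))."""
    rng = random.Random(seed)
    pairs = [frozenset(p) for p in combinations(range(n), 2)]
    edges, t = set(), 0
    for _ in range(steps):
        e = rng.choice(pairs)
        prop = edges ^ {e}                    # toggle edge e
        t_prop = triangle_count(prop, n)
        accept = math.exp(-beta * (abs(t_prop - target) - abs(t - target)))
        if rng.random() < accept:
            edges, t = prop, t_prop
    return edges, t

edges, t_final = sample_graph(6, target=2)
```

The paper works with directed graphs, several motif types at once, and a formal probabilistic representation; this sketch only shows the MCMC skeleton on which such sampling rests.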
NASA Astrophysics Data System (ADS)
Weigel, T.; Toussaint, F.; Stockhause, M.; Höck, H.; Kindermann, S.; Lautenschlager, M.; Ludwig, T.
2012-12-01
We propose a wide adoption of structural elements (typed links, collections, trees) in the Handle System to improve identification and access of scientific data, metadata and software as well as traceability of data provenance. Typed links target the issue of data provenance as a means to assess the quality of scientific data. Data provenance is seen here as a directed acyclic graph with nodes representing data and edges representing derivative operations (Moreau 2010). Landing pages can allow a human user to explore the provenance graph back to the primary unprocessed data, thereby also giving credit to the original data producer. As in Earth System Modeling no single infrastructure with complete data lifecycle coverage exists, we propose to split the problem domain in two parts. Project-specific infrastructures such as the German project C3-Grid or the Earth System Grid Federation (ESGF) for CMIP5 data are aware of data and data operations (Toussaint et al. 2012) and can thus detect and accumulate single nodes and edges in the provenance graph, assigning Handles to data, metadata and software. With a common schema for typed links, the provenance graph is established as downstream infrastructures refer to incoming Handles. Data in this context is for example hierarchically structured Earth System model output data, which receives DataCite DOIs only for the most coarse-granular elements. Using Handle tree structures, the lower levels of the hierarchy can also receive Handles, allowing authors to more precisely identify the data they used (Lawrence et al. 2011). We can, e.g., define a DOI for just the 2m-temperature variable of CMIP5 data across many CMIP5 experiments, or a DOI for model and observational data coming from different sources. The structural elements should be implemented through Handle values at the Handle infrastructure level for two reasons.
First, Handle values are more durable than downstream websites or databases, and thus the provenance chain does not break if individual links become unavailable. Second, a single service cannot interpret links if downstream solutions differ in their implementation schemas. Emerging efforts driven by the European Persistent Identifier Consortium (EPIC) aim to establish a default mechanism for structural elements at the Handle level. We advocate making applications that take part in the data lifecycle aware of data derivation provenance and letting them contribute additional elements to the provenance graph. Since they are also Handles, DataCite DOIs can act as a cornerstone and provide an entry point to discover the provenance graph. References: B. Lawrence, C. Jones, B. Matthews, S. Pepler, and S. Callaghan, "Citation and peer review of data: Moving towards formal data publication," Int. J. of Digital Curation, vol. 6, no. 2, 2011. L. Moreau, "The foundations for provenance on the web," Foundations and Trends® in Web Science, vol. 2, no. 2-3, pp. 99-241, 2010. F. Toussaint, T. Weigel, H. Thiemann, H. Höck, M. Stockhause, "Application Examples for Handle System Usage," submitted to AGU 2012 session IN009.
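Following typed links upstream through such a provenance DAG can be sketched as a simple recursive traversal. The Handle strings and the link type below are illustrative placeholders, not real persistent identifiers:

```python
# Provenance DAG: handle -> list of (link_type, parent_handle).
# Handles and the "wasDerivedFrom" type are hypothetical examples.
provenance = {
    "hdl:900/derived-b": [("wasDerivedFrom", "hdl:900/derived-a")],
    "hdl:900/derived-a": [("wasDerivedFrom", "hdl:900/raw-1"),
                          ("wasDerivedFrom", "hdl:900/raw-2")],
}

def primary_sources(handle, graph):
    """Follow typed links upstream to the primary unprocessed data,
    i.e. the nodes with no incoming derivation links."""
    parents = graph.get(handle, [])
    if not parents:
        return {handle}
    out = set()
    for _link_type, parent in parents:
        out |= primary_sources(parent, graph)
    return out

roots = primary_sources("hdl:900/derived-b", provenance)
```

This is the operation a landing page would support: walking the graph back to the original data producers so that they can be credited.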
A graph-based approach to auditing RxNorm.
Bodenreider, Olivier; Peters, Lee B
2009-06-01
RxNorm is a standardized nomenclature for clinical drug entities developed by the National Library of Medicine. In this paper, we audit relations in RxNorm for consistency and completeness through the systematic analysis of the graph of its concepts and relationships. The representation of multi-ingredient drugs is normalized in order to make it compatible with that of single-ingredient drugs. All meaningful paths between two nodes in the type graph are computed and instantiated. Alternate paths are automatically compared and manually inspected in case of inconsistency. The 115 meaningful paths identified in the type graph can be grouped into 28 groups with respect to start and end nodes. Of the 19 groups of alternate paths (i.e., with two or more paths) between the start and end nodes, 9 (47%) exhibit inconsistencies. Overall, 28 (24%) of the 115 paths are inconsistent with other alternate paths. A total of 348 inconsistencies were identified in the April 2008 version of RxNorm and reported to the RxNorm team, of which 215 (62%) had been corrected in the January 2009 version of RxNorm. The inconsistencies identified involve missing nodes (93), missing links (17), extraneous links (237) and one case of mix-up between two ingredients. Our auditing method proved effective in identifying a limited number of errors that had defeated the quality assurance mechanisms currently in place in the RxNorm production system. Some recommendations for the development of RxNorm are provided.
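The auditing principle (instantiating alternate relation paths between the same start and end nodes and flagging disagreements) can be sketched on a toy relation graph. The node and relation names are illustrative, not actual RxNorm atoms or relationships:

```python
# Toy typed relation graph: (node, relation) -> set of target nodes.
# An extra ingredient on the component creates a deliberate inconsistency.
rel = {
    ("drugX", "has_ingredient"): {"ing1"},
    ("drugX", "consists_of"): {"compX"},
    ("compX", "has_ingredient"): {"ing1", "ing2"},
}

def follow(start, path):
    """Instantiate a relation path (sequence of relation names): the set
    of nodes reachable from `start` by applying the relations in order."""
    frontier = {start}
    for r in path:
        frontier = set().union(*(rel.get((n, r), set()) for n in frontier))
    return frontier

# Two alternate paths from the same start node to ingredient nodes.
direct = follow("drugX", ["has_ingredient"])
via_component = follow("drugX", ["consists_of", "has_ingredient"])
inconsistent = direct != via_component     # flags a missing/extraneous link
```

When two alternate paths yield different node sets, the difference points at a missing node, a missing link, or an extraneous link, which is exactly the kind of finding the audit reports.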
Hierarchical sequencing of online social graphs
NASA Astrophysics Data System (ADS)
Andjelković, Miroslav; Tadić, Bosiljka; Maletić, Slobodan; Rajković, Milan
2015-10-01
In online communications, patterns of conduct of individual actors and use of emotions in the process can lead to a complex social graph exhibiting multilayered structure and mesoscopic communities. Using simplicial complexes representation of graphs, we investigate in-depth topology of the online social network constructed from MySpace dialogs which exhibits original community structure. A simulation of emotion spreading in this network leads to the identification of two emotion-propagating layers. Three topological measures are introduced, referred to as the structure vectors, which quantify graph's architecture at different dimension levels. Notably, structures emerging through shared links, triangles and tetrahedral faces, frequently occur and range from tree-like to maximal 5-cliques and their respective complexes. On the other hand, the structures which spread only negative or only positive emotion messages appear to have much simpler topology consisting of links and triangles. The node's structure vector represents the number of simplices at each topology level in which the node resides and the total number of such simplices determines what we define as the node's topological dimension. The presented results suggest that the node's topological dimension provides a suitable measure of the social capital which measures the actor's ability to act as a broker in compact communities, the so called Simmelian brokerage. We also generalize the results to a wider class of computer-generated networks. Investigating components of the node's vector over network layers reveals that same nodes develop different socio-emotional relations and that the influential nodes build social capital by combining their connections in different layers.
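The node structure vector and topological dimension described above can be sketched by counting the cliques (simplices) of each size that contain a node. The toy graph below (a 4-clique with one pendant node) is our own example:

```python
from itertools import combinations

# Toy social graph: 4-clique {0,1,2,3} plus a pendant node 4 attached to 0.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}

def is_simplex(nodes, adj):
    """A node set spans a simplex iff every pair is linked (a clique)."""
    return all(b in adj[a] for a, b in combinations(nodes, 2))

def structure_vector(v, adj, max_size=4):
    """Number of simplices of each size 2..max_size that contain node v."""
    others = [u for u in adj if u != v]
    return [sum(1 for rest in combinations(others, k - 1)
                if is_simplex((v,) + rest, adj))
            for k in range(2, max_size + 1)]

def topological_dimension(v, adj):
    """Total number of simplices containing v, across all topology levels."""
    return sum(structure_vector(v, adj))

sv_core = structure_vector(0, adj)        # node embedded in the clique core
sv_peripheral = structure_vector(4, adj)  # pendant node
```

The core node participates in edges, triangles and a tetrahedron, while the pendant node only in a single edge, mirroring how the abstract's topological dimension separates brokers in compact communities from peripheral actors.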
Eronen, Lauri; Toivonen, Hannu
2012-06-06
Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. 
In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available. The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with a focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
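One simple proximity measure on such a weighted graph is the best-path reliability: the maximum, over paths, of the product of edge weights, computable as a shortest path in -log space. This is a sketch of the general idea, not Biomine's actual measure; the entities and weights are invented:

```python
import heapq
import math

# Illustrative integrated graph: edge weight = reliability in (0, 1].
edge_weights = {("geneA", "protA"): 0.9, ("protA", "disease1"): 0.8,
                ("geneA", "goTerm"): 0.5, ("goTerm", "disease1"): 0.5}
graph = {}
for (u, v), w in edge_weights.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def best_path_proximity(src, dst):
    """Max-product path reliability via Dijkstra on -log(weight)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return math.exp(-d)       # convert back from -log space
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d - math.log(w)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0

prox = best_path_proximity("geneA", "disease1")
```

Ranking candidate gene-disease pairs by such a proximity score is the skeleton of link prediction on an integrated graph; the paper additionally weights edges by type, reliability and informativeness and optimizes those weights.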
Han, Liang-Feng; Plummer, Niel; Aggarwal, Pradeep
2012-01-01
A graphical method is described for identifying geochemical reactions needed in the interpretation of radiocarbon age in groundwater systems. Graphs are constructed by plotting the measured 14C, δ13C, and concentration of dissolved inorganic carbon and are interpreted according to specific criteria to recognize water samples that are consistent with a wide range of processes, including geochemical reactions, carbon isotopic exchange, 14C decay, and mixing of waters. The graphs are used to provide a qualitative estimate of radiocarbon age, to deduce the hydrochemical complexity of a groundwater system, and to compare samples from different groundwater systems. Graphs of chemical and isotopic data from a series of previously-published groundwater studies are used to demonstrate the utility of the approach. Ultimately, the information derived from the graphs is used to improve geochemical models for adjustment of radiocarbon ages in groundwater systems.
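For context, the uncorrected radiocarbon age that such graphical methods adjust is computed from the measured 14C activity with the conventional Libby decay relation t = -8033 · ln(A/A0). A minimal worked example:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years, from the conventional Libby half-life of 5568 yr

def radiocarbon_age(a14_fraction):
    """Uncorrected (conventional) radiocarbon age in years from the
    measured 14C fraction (sample activity / modern activity), before
    any geochemical adjustment of the kind discussed above."""
    return -LIBBY_MEAN_LIFE * math.log(a14_fraction)

age_half = radiocarbon_age(0.5)   # one half-life: about 5568 years
```

Geochemical reactions and isotopic exchange dilute or alter the 14C content of dissolved inorganic carbon, which is why the apparent age from this formula must be adjusted using models informed by plots such as those described in the abstract.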
Large fluctuations in anti-coordination games on scale-free graphs
NASA Astrophysics Data System (ADS)
Sabsovich, Daniel; Mobilia, Mauro; Assaf, Michael
2017-05-01
We study the influence of the complex topology of scale-free graphs on the dynamics of anti-coordination games (e.g. snowdrift games). These reference models are characterized by the coexistence (evolutionary stable mixed strategy) of two competing species, say ‘cooperators’ and ‘defectors’, and, in finite systems, by metastability and large-fluctuation-driven fixation. In this work, we use extensive computer simulations and an effective diffusion approximation (in the weak selection limit) to determine under which circumstances, depending on the individual-based update rules, the topology drastically affects the long-time behavior of anti-coordination games. In particular, we compute the variance of the number of cooperators in the metastable state and the mean fixation time when the dynamics is implemented according to the voter model (death-first/birth-second process) and the link dynamics (birth/death or death/birth at random). For the voter update rule, we show that the scale-free topology effectively renormalizes the population size and as a result the statistics of observables depend on the network’s degree distribution. In contrast, such a renormalization does not occur with the link dynamics update rule and we recover the same behavior as on complete graphs.
Daianu, Madelaine; Mezher, Adam; Jahanshad, Neda; Hibar, Derrek P.; Nir, Talia M.; Jack, Clifford R.; Weiner, Michael W.; Bernstein, Matt A.; Thompson, Paul M.
2015-01-01
Our understanding of network breakdown in Alzheimer’s disease (AD) is likely to be enhanced through advanced mathematical descriptors. Here, we applied spectral graph theory to provide novel metrics of structural connectivity based on 3-Tesla diffusion weighted images in 42 AD patients and 50 healthy controls. We reconstructed connectivity networks using whole-brain tractography and examined, for the first time here, cortical disconnection based on the graph energy and spectrum. We further assessed supporting metrics - link density and nodal strength - to better interpret our results. Metrics were analyzed in relation to the well-known APOE-4 genetic risk factor for late-onset AD. The number of disconnected cortical regions increased with the number of copies of the APOE-4 risk gene in people with AD. Each additional copy of the APOE-4 risk gene may lead to more dysfunctional networks with weakened or abnormal connections, providing evidence for the previously hypothesized “disconnection syndrome”. PMID:26413205
FEDFacts: Information about the Federal Electronic Docket Facilities
Cleanup status information related to Federal Facilities contained in EPA's Federal Agency Hazardous Waste Compliance Docket. Information includes maps, lists of facilities, dashboard view with graphs, links to community resources, and news items.
A Bayesian method for inferring transmission chains in a partially observed epidemic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef M.; Ray, Jaideep
2008-10-01
We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.
Property Graph vs RDF Triple Store: A Comparison on Glycan Substructure Search
Alocci, Davide; Mariethoz, Julien; Horlacher, Oliver; Bolleman, Jerven T.; Campbell, Matthew P.; Lisacek, Frederique
2015-01-01
Resource description framework (RDF) and Property Graph databases are emerging technologies that are used for storing graph-structured data. We compare these technologies through a molecular biology use case: glycan substructure search. Glycans are branched tree-like molecules composed of building blocks linked together by chemical bonds. The molecular structure of a glycan can be encoded into a directed acyclic graph where each node represents a building block and each edge serves as a chemical linkage between two building blocks. In this context, Graph databases are possible software solutions for storing glycan structures and Graph query languages, such as SPARQL and Cypher, can be used to perform a substructure search. Glycan substructure searching is an important feature for querying structure and experimental glycan databases and retrieving biologically meaningful data. This applies for example to identifying a region of the glycan recognised by a glycan binding protein (GBP). In this study, 19,404 glycan structures were selected from GlycomeDB (www.glycome-db.org) and modelled for being stored into a RDF triple store and a Property Graph. We then performed two different sets of searches and compared the query response times and the results from both technologies to assess performance and accuracy. The two implementations produced the same results, but interestingly we noted a difference in the query response times. Qualitative measures such as portability were also used to define further criteria for choosing the technology adapted to solving glycan substructure search and other comparable issues. PMID:26656740
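Independent of the storage backend, a glycan substructure search amounts to rooted-tree pattern matching. A minimal sketch (with made-up monosaccharide labels, a greedy child assignment rather than a full bipartite matching, and no linkage-type attributes):

```python
# A glycan as a rooted tree: node = (building-block label, list of children).
def matches_at(gnode, pnode):
    """Pattern node matches here if the labels agree and every pattern
    child can be matched to a distinct child of the glycan node.
    (Greedy assignment; a complete search would use bipartite matching.)"""
    if gnode[0] != pnode[0]:
        return False
    used = set()
    for pc in pnode[1]:
        hit = next((i for i, gc in enumerate(gnode[1])
                    if i not in used and matches_at(gc, pc)), None)
        if hit is None:
            return False
        used.add(hit)
    return True

def substructure(glycan, pattern):
    """Does `pattern` occur rooted anywhere inside `glycan`?"""
    if matches_at(glycan, pattern):
        return True
    return any(substructure(child, pattern) for child in glycan[1])

# Illustrative structures (labels are examples, not real GlycomeDB entries).
glycan = ("GlcNAc", [("Man", [("Man", []), ("Man", [("Gal", [])])])])
pattern = ("Man", [("Gal", [])])
found = substructure(glycan, pattern)
```

SPARQL and Cypher express the same traversal declaratively over triples or property-graph nodes; the comparison in the paper is about how the two engines execute it, not about the matching semantics.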
A distributed algorithm to maintain and repair the trail networks of arboreal ants.
Chandrasekhar, Arjun; Gordon, Deborah M; Navlakha, Saket
2018-06-18
We study how the arboreal turtle ant (Cephalotes goniodontus) solves a fundamental computing problem: maintaining a trail network and finding alternative paths to route around broken links in the network. Turtle ants form a routing backbone of foraging trails linking several nests and temporary food sources. This species travels only in the trees, so their foraging trails are constrained to lie on a natural graph formed by overlapping branches and vines in the tangled canopy. Links between branches, however, can be ephemeral, easily destroyed by wind, rain, or animal movements. Here we report a biologically feasible distributed algorithm, parameterized using field data, that can plausibly describe how turtle ants maintain the routing backbone and find alternative paths to circumvent broken links in the backbone. We validate the ability of this probabilistic algorithm to circumvent simulated breaks in synthetic and real-world networks, and we derive an analytic explanation for why certain features are crucial to improve the algorithm's success. Our proposed algorithm uses fewer computational resources than common distributed graph search algorithms, and thus may be useful in other domains, such as for swarm computing or for coordinating molecular robots.
Focus-based filtering + clustering technique for power-law networks with small world phenomenon
NASA Astrophysics Data System (ADS)
Boutin, François; Thièvre, Jérôme; Hascoët, Mountaz
2006-01-01
Realistic interaction networks usually present two main properties: a power-law degree distribution and a small world behavior. Few nodes are linked to many nodes and adjacent nodes are likely to share common neighbors. Moreover, the graph structure usually presents a dense core that is difficult to explore with classical filtering and clustering techniques. In this paper, we propose a new filtering technique that accounts for a user focus. This technique extracts a tree-like graph that also has a power-law degree distribution and small world behavior. The resulting structure is easily drawn with classical force-directed drawing algorithms. It is also quickly clustered and displayed as a multi-level silhouette tree (MuSi-Tree) from any user focus. We built a new graph filtering + clustering + drawing API and report a case study.
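One simple way to extract a focus-rooted tree-like structure from a dense graph (a sketch of the general idea, not the paper's specific filter) is a BFS spanning tree from the user-focus node, which preserves shortest-path distances from the focus while discarding the dense core's redundant links:

```python
from collections import deque

def focus_tree(adj, focus):
    """BFS spanning tree rooted at the user focus: each reached node is
    mapped to its parent (the focus maps to None)."""
    parent = {focus: None}
    queue = deque([focus])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

# Small dense example graph (illustrative).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
tree = focus_tree(adj, 0)
```

A tree like this is what force-directed layouts and hierarchical clustering handle easily, which is the motivation the abstract gives for filtering before drawing.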
Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.
Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang
2017-01-01
Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations can address this problem from different aspects: visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through the propagation of a graph random walk. In particular, we construct multiple graphs, with each one corresponding to an individual view, and present a cross-view fusion approach based on graph random walk to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged into the graph random walk to balance the views. However, such a strategy may lead to an over-smooth similarity metric where affinities between dissimilar samples are enlarged by excessively conducting cross-view fusion. Thus, we devise a heuristic approach to controlling the iteration number in the fusion process in order to avoid over-smoothing. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
Entropy of spatial network ensembles
NASA Astrophysics Data System (ADS)
Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis
2018-04-01
We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
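Given fixed node locations, each potential link is an independent Bernoulli variable with success probability given by the pair connection function, so the conditional entropy of the ensemble is just a sum of binary entropies over node pairs. A minimal sketch (the Gaussian-decay connection function is an illustrative choice, not necessarily the one analyzed in the paper):

```python
import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) edge state."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def pair_connection(r, r0=1.0):
    """Soft pair connection function: link probability decays with
    distance (illustrative form exp(-(r/r0)^2))."""
    return math.exp(-(r / r0) ** 2)

def conditional_graph_entropy(points):
    """Entropy of the edge states given the node positions: independent
    links mean the binary entropies simply add over all pairs."""
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            r = math.dist(points[i], points[j])
            total += binary_entropy(pair_connection(r))
    return total

H = conditional_graph_entropy([(0, 0), (0, 1), (1, 0)])
```

Hard connection functions (p ∈ {0, 1}) contribute zero conditional entropy, so all randomness then comes from the node positions, matching the distinction the abstract draws between hard and soft models.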
Toward the optimization of normalized graph Laplacian.
Xie, Bo; Wang, Meng; Tao, Dacheng
2011-04-01
Normalized graph Laplacian has been widely used in many practical machine learning algorithms, e.g., spectral clustering and semisupervised learning. However, all of them use the Euclidean distance to construct the graph Laplacian, which does not necessarily reflect the inherent distribution of the data. In this brief, we propose a method to directly optimize the normalized graph Laplacian by using pairwise constraints. The learned graph is consistent with equivalence and nonequivalence pairwise relationships, and thus it can better represent similarity between samples. Meanwhile, our approach, unlike metric learning, automatically determines the scale factor during the optimization. The learned normalized Laplacian matrix can be directly applied in spectral clustering and semisupervised learning algorithms. Comprehensive experiments demonstrate the effectiveness of the proposed approach.
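For reference, the standard construction the brief starts from (affinities from Euclidean distances via a Gaussian kernel, then the symmetric normalized Laplacian) can be sketched as follows; the toy data and the kernel width sigma = 1 are arbitrary choices of ours:

```python
import numpy as np

def normalized_laplacian(W):
    """Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    for a symmetric affinity matrix W with zero diagonal."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

# Affinities from Euclidean distances via a Gaussian kernel -- the fixed
# a-priori construction whose limitations motivate the brief.
X = np.array([[0.0], [0.1], [5.0]])
W = np.exp(-((X - X.T) ** 2))   # sigma = 1 (arbitrary)
np.fill_diagonal(W, 0.0)
L = normalized_laplacian(W)
```

The eigenvalues of `L` always lie in [0, 2], which is what spectral clustering and semisupervised propagation rely on.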
Face recognition based on two-dimensional discriminant sparse preserving projection
NASA Astrophysics Data System (ADS)
Zhang, Dawei; Zhu, Shanan
2018-04-01
In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. To accurately model the manifold structure of the data, 2DDSPP constructs a within-class affinity graph and a between-class affinity graph by constrained least squares (LS) and an l1-norm minimization problem, respectively. Operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of samples while keeping samples from different classes apart. Experimental results on the PIE and AR face databases show that 2DDSPP achieves better recognition performance.
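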
Mathematics of Web science: structure, dynamics and incentives.
Chayes, Jennifer
2013-03-28
Dr Chayes' talk described how, to a discrete mathematician, 'all the world's a graph, and all the people and domains merely vertices'. A graph is represented as a set of vertices V and a set of edges E, so that, for instance, in the World Wide Web, V is the set of pages and E the directed hyperlinks; in a social network, V is the people and E the set of relationships; and in the autonomous system Internet, V is the set of autonomous systems (such as AOL, Yahoo! and MSN) and E the set of connections. This means that mathematics can be used to study the Web (and other large graphs in the online world) in the following way: first, we can model online networks as large finite graphs; second, we can sample pieces of these graphs; third, we can understand and then control processes on these graphs; and fourth, we can develop algorithms for these graphs and apply them to improve the online experience.
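The vertex-and-edge formalism is easy to make concrete. In this toy sketch, the three autonomous systems named in the talk serve as V; the link set and the helper functions are invented for illustration:

```python
# "All the world's a graph": V = pages (or people, or autonomous systems),
# E = directed links between them.
V = {"aol.com", "yahoo.com", "msn.com"}
E = {("aol.com", "yahoo.com"), ("yahoo.com", "msn.com"), ("msn.com", "aol.com")}

def out_degree(v, edges):
    """Number of directed links leaving vertex v."""
    return sum(1 for (u, _) in edges if u == v)

def induced_subgraph(nodes, edges):
    """Sample a piece of the graph: keep only edges between chosen nodes."""
    return {(u, w) for (u, w) in edges if u in nodes and w in nodes}
```

`induced_subgraph` corresponds to the second step in the talk's programme (sampling pieces of a large finite graph) in its simplest possible form.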
Phase-locked patterns of the Kuramoto model on 3-regular graphs
NASA Astrophysics Data System (ADS)
DeVille, Lee; Ermentrout, Bard
2016-09-01
We consider the existence of non-synchronized fixed points to the Kuramoto model defined on sparse networks: specifically, networks where each vertex has degree exactly three. We show that "most" such networks support multiple attracting phase-locked solutions that are not synchronized and study the depth and width of the basins of attraction of these phase-locked solutions. We also show that it is common in "large enough" graphs to find phase-locked solutions where one or more of the links have angle difference greater than π/2.
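A phase-locked state is a fixed point of the phase dynamics. A minimal simulation is sketched below; the choices (identical natural frequencies, explicit Euler integration, the triangular prism as the 3-regular graph, and the initial condition) are ours, not the paper's:

```python
import math

def kuramoto_step(theta, adj, dt=0.01, K=1.0):
    """One Euler step of the Kuramoto model with identical frequencies:
    dtheta_i/dt = K * sum_{j in adj[i]} sin(theta_j - theta_i)."""
    n = len(theta)
    dtheta = [K * sum(math.sin(theta[j] - theta[i]) for j in adj[i])
              for i in range(n)]
    return [theta[i] + dt * dtheta[i] for i in range(n)], dtheta

# A 3-regular graph: the triangular prism (every vertex has degree three).
adj = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
       3: [0, 4, 5], 4: [1, 3, 5], 5: [2, 3, 4]}

theta = [0.1 * i for i in range(6)]   # arbitrary initial phases
for _ in range(20000):
    theta, dtheta = kuramoto_step(theta, adj)
```

At a fixed point (synchronized or phase-locked) all phase velocities vanish; checking the angle differences across links against pi/2 would classify the state, as the paper does for "large enough" graphs.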
linkedISA: semantic representation of ISA-Tab experimental metadata.
González-Beltrán, Alejandra; Maguire, Eamonn; Sansone, Susanna-Assunta; Rocca-Serra, Philippe
2014-01-01
Reporting and sharing experimental metadata, such as the experimental design, characteristics of the samples, and procedures applied, along with the analysis results, in a standardised manner ensures that datasets are comprehensible and, in principle, reproducible, comparable and reusable. Furthermore, sharing datasets in formats designed for consumption by humans and machines will also maximize their use. The Investigation/Study/Assay (ISA) open source metadata tracking framework facilitates standards-compliant collection, curation, visualization, storage and sharing of datasets, leveraging other platforms to enable analysis and publication. The ISA software suite includes several components used in an increasingly diverse set of life science and biomedical domains; it is underpinned by a general-purpose format, ISA-Tab, and conversions exist into formats required by public repositories. While ISA-Tab works well mainly as a human-readable format, we have also implemented a linked data approach to semantically define the ISA-Tab syntax. We present a semantic web representation of the ISA-Tab syntax that complements ISA-Tab's syntactic interoperability with semantic interoperability. We introduce the linkedISA conversion tool from ISA-Tab to the Resource Description Framework (RDF), supporting mappings from the ISA syntax to multiple community-defined, open ontologies and capitalising on user-provided ontology annotations in the experimental metadata. We describe insights from the implementation and how annotations can be expanded driven by the metadata. We applied the conversion tool as part of Bio-GraphIIn, a web-based application supporting integration of semantically-rich experimental descriptions.
Designed in a user-friendly manner, the Bio-GraphIIn interface hides most of the complexity from users, exposing a familiar tabular view of the experimental description to allow seamless interaction with the RDF representation, and visualising descriptors to drive queries over the semantic representation of the experimental design. In addition, we defined queries over the linkedISA RDF representation and demonstrated its use on the linkedISA conversion of datasets from Nature's Scientific Data online publication. Our linked data approach has allowed us to: 1) make the ISA-Tab semantics explicit and machine-processable; 2) exploit the existing ontology-based annotations in the ISA-Tab experimental descriptions; 3) augment the ISA-Tab syntax with new descriptive elements; 4) visualise and query elements related to the experimental design. Reasoning over ISA-Tab metadata and associated data will facilitate data integration and knowledge discovery.
Analysis of graphical representation among freshmen in undergraduate physics laboratory
NASA Astrophysics Data System (ADS)
Adam, A. S.; Anggrayni, S.; Kholiq, A.; Putri, N. P.; Suprapto, N.
2018-03-01
Physics concept understanding is important in the physics laboratory for freshmen in the undergraduate program. This includes the ability to interpret the meaning of a graph in order to draw an appropriate conclusion. This particular study analyses graphical representation among freshmen in an undergraduate physics laboratory, using an empirical study with a quantitative approach. The graphical representation covers 3 physics topics: velocity of sound, the simple pendulum and the spring system. The results show that most of the freshmen (90% of the sample) can make a graph based on data from the physics laboratory, meaning that the process of transferring raw data presented in a table into a physics graph can be categorised as achieved. Most freshmen use the proportionality principle of the variables in graph analysis. However, freshmen cannot choose appropriate variables for the graph to gain more information, and cannot analyse the graph to obtain useful information from the slope.
NASA Astrophysics Data System (ADS)
Volkov, Sergey
2017-11-01
This paper presents a new method of numerical computation of the mass-independent QED contributions to the electron anomalous magnetic moment which arise from Feynman graphs without closed electron loops. The method is based on a forestlike subtraction formula that removes all ultraviolet and infrared divergences in each Feynman graph before integration in Feynman-parametric space. The integration is performed by an importance sampling Monte Carlo algorithm with a probability density function that is constructed for each Feynman graph individually. The method is fully automated at any order of the perturbation series. The results of applying the method to 2-loop, 3-loop, 4-loop Feynman graphs, and to some individual 5-loop graphs are presented, as well as a comparison of this method with others with respect to Monte Carlo convergence speed.
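The core numerical ingredient, importance sampling with a density matched to the integrand, can be illustrated on a one-dimensional toy integral. The integrand and density below are ours for illustration; the paper constructs a separate multidimensional density per Feynman graph:

```python
import random

def importance_sample(f, pdf, sampler, n=100_000, seed=7):
    """Monte Carlo estimate of an integral as E_p[f(X)/p(X)] with X ~ p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sampler(rng)
        total += f(x) / pdf(x)
    return total / n

# Integral of 3x^2 over [0, 1] (exact value 1), sampled from p(x) = 2x,
# which concentrates points where the integrand is large and so reduces
# the variance of the estimator.
estimate = importance_sample(
    f=lambda x: 3 * x * x,
    pdf=lambda x: 2 * x,
    sampler=lambda rng: rng.random() ** 0.5,  # inverse-CDF sampling of p
)
```

The closer `pdf` tracks the shape of `f`, the smaller the variance of `f(x)/pdf(x)` and the faster the Monte Carlo convergence, which is why a per-graph density pays off.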
Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli
2016-01-01
Previous research supports the claim that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Moreover, the literature shows that different types of graphical information can help or harm the decision-making accuracy of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts’ accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graphs enhanced accuracy in decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample (295 financial analysts, rather than a smaller sample of students) that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Second, it uses text as a baseline comparison to test how different ways of disclosing information (line and column graphs, and tables) can enhance the understandability of information. Third, it brings an internal factor into this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts’ decision-making accuracy regarding numerical information presented in graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters. PMID:27508519
Route Network Construction with Location-Direction-Enabled Photographs
NASA Astrophysics Data System (ADS)
Fujita, Hideyuki; Sagara, Shota; Ohmori, Tadashi; Shintani, Takahiko
2018-05-01
We propose a method for constructing a geometric graph for generating routes that summarize a geographical area and also have visual continuity, using a set of location-direction-enabled photographs. A location-direction-enabled photograph is a photograph that has information about the location (position of the camera at the time of shooting) and the direction (direction of the camera at the time of shooting). Each node of the graph corresponds to a location-direction-enabled photograph. The location of each node is the location of the corresponding photograph, and a route on the graph corresponds to both a route in the geographic area and a sequence of photographs. The proposed graph is constructed to represent characteristic spots and the paths linking them, and can be regarded as a kind of spatial summarization of the area with the photographs; we therefore call routes on the graph spatial summary routes. Each route on the proposed graph also has visual continuity, which means that we can understand the spatial relationship among consecutive photographs on the route, such as moving forward, moving backward, turning right, etc. In this study, a route was defined to have visual continuity when the changes in shooting position and shooting direction satisfy a given threshold. By presenting the photographs in order along the generated route, information can be presented sequentially while largely maintaining visual continuity.
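The visual-continuity test between two photographs reduces to two thresholded comparisons. A minimal sketch, in which the photo representation `(x, y, bearing_degrees)` and the threshold values are our assumptions rather than the paper's:

```python
import math

def visually_continuous(p, q, max_dist=50.0, max_turn=45.0):
    """Decide whether two location-direction-enabled photos p and q can be
    consecutive on a route: the changes in shooting position and shooting
    direction must each stay below a threshold.

    A photo is (x, y, bearing_degrees); thresholds are illustrative."""
    dist = math.hypot(q[0] - p[0], q[1] - p[1])
    turn = abs((q[2] - p[2] + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    return dist <= max_dist and turn <= max_turn
```

Graph edges would then connect exactly the photo pairs passing this test, so every route on the graph inherits visual continuity by construction.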
Affinity learning with diffusion on tensor product graph.
Yang, Xingwei; Prasad, Lakshman; Latecki, Longin Jan
2013-01-01
In many applications, we are given a finite set of data points sampled from a data manifold and represented as a graph with edge weights determined by pairwise similarities of the samples. Often the pairwise similarities (which are also called affinities) are unreliable due to noise or due to intrinsic difficulties in estimating similarity values of the samples. As observed in several recent approaches, more reliable similarities can be obtained if the original similarities are diffused in the context of other data points, where the context of each point is a set of points most similar to it. Compared to the existing methods, our approach differs in two main aspects. First, instead of diffusing the similarity information on the original graph, we propose to utilize the tensor product graph (TPG) obtained by the tensor product of the original graph with itself. Since TPG takes into account higher order information, it is not a surprise that we obtain more reliable similarities. However, it comes at the price of higher order computational complexity and storage requirement. The key contribution of the proposed approach is that the information propagation on TPG can be computed with the same computational complexity and the same amount of storage as the propagation on the original graph. We prove that a graph diffusion process on TPG is equivalent to a novel iterative algorithm on the original graph, which is guaranteed to converge. After its convergence we obtain new edge weights that can be interpreted as new, learned affinities. We stress that the affinities are learned in an unsupervised setting. We illustrate the benefits of the proposed approach for data manifolds composed of shapes, images, and image patches on two very different tasks of image retrieval and image segmentation. With learned affinities, we achieve the bull's eye retrieval score of 99.99 percent on the MPEG-7 shape dataset, which is much higher than the state-of-the-art algorithms. 
When the data points are image patches, the NCut with the learned affinities not only significantly outperforms the NCut with the original affinities, but also outperforms state-of-the-art image segmentation methods.
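The paper's key result is that diffusion on the tensor product graph can be computed as an iteration on the original n×n matrix. A sketch of that fixed-point iteration follows; the toy similarity matrix and the crude scaling used to guarantee convergence are our assumptions (the paper's normalization differs):

```python
import numpy as np

W = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]])   # toy pairwise similarities

# Scale so the spectral radius is below 1 and the iteration converges
# (a crude illustrative choice, not the paper's normalization).
S = W / (np.abs(W).sum() + 1.0)

# Equivalent, on the original graph, of the diffusion process on the
# tensor product graph: Q_{t+1} = S Q_t S^T + I.
Q = np.eye(3)
for _ in range(60):
    Q = S @ Q @ S.T + np.eye(3)
```

After convergence, the entries of `Q` serve as the new, learned affinities; the iteration costs the same per step as diffusion on the original graph, which is the computational point of the paper.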
Assessing the role of landscape connectivity on Opisthorchis viverrini transmission dynamics.
Wang, Yi-Chen; Yuen, Roy; Feng, Chen-Chieh; Sithithaworn, Paiboon; Kim, Ick-Hoi
2017-08-01
Opisthorchis viverrini (Ov) is one of the most important human parasitic diseases in Southeast Asia. Although the concept of connectivity is widely used to comprehend disease dispersal, knowledge of the influences of landscape connectivity on Ov transmission is still rudimentary. This study aimed to investigate the role of landscape connectivity in Ov transmission between the human and the first intermediate snail hosts. Fieldwork was conducted in three villages respectively in Kamalasai District, Kalasin Province, Phu Wiang District, Khon Kaen Province, and Nong Saeng District, Udon Thani Province. Bithynia snails were collected to examine parasitic infections, water samples were analyzed for fecal contamination, and locations of septic tanks and connections between habitat patches with observable water movement were surveyed. Euclidean distance, topological link and distance, and graph measures were employed to quantify the connectivity between human and snail habitats. The findings showed that snail patches with higher fecal contents were generally located nearer to septic tanks. The statistically significant results for the topological link and distance measures highlighted the importance of water in functionally facilitating Ov transmission. Graph measures revealed differences in landscape connectivity across the sites. The site with the largest landscape component size and the most mutually connected snail patches coincided with the presence of Ov parasite, reinforcing its higher risk for human to snail transmission. The site with the dissected landscape structure potentially limited the transmission. This study underscored the potential effect of landscape connectivity on Ov transmission, contributing to the understanding of the spatial variation of Ov infection risk. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because input data stream may be under-sampled or skewed from time to time, building connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
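The incremental step (link a new point into the neighborhood graph, then update all-pairs geodesic distances by relaxing every pair through the new node) can be sketched as below. This is a simplified illustration: the paper maintains k-edge-connected and k-connected graphs, which requires more care than plain k-nearest-neighbor linking.

```python
import math

def add_point(points, dist, new_pt, k=2):
    """Insert new_pt into a k-NN neighborhood graph and update the
    all-pairs shortest (geodesic) distance matrix incrementally."""
    n = len(points)
    d_new = [math.dist(p, new_pt) for p in points]
    nbrs = sorted(range(n), key=lambda i: d_new[i])[:k]   # k nearest neighbors
    # geodesic distance from the new node to every old node, via a neighbor
    g = [min(d_new[u] + dist[u][i] for u in nbrs) for i in range(n)]
    # relax existing pairs through the new node
    for i in range(n):
        for j in range(n):
            if g[i] + g[j] < dist[i][j]:
                dist[i][j] = g[i] + g[j]
    # grow the matrix by one row and one column
    for i in range(n):
        dist[i].append(g[i])
    dist.append(g + [0.0])
    points.append(new_pt)
    return dist

# Three collinear points whose 2-NN graph is a chain, then one more point.
points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
dist = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
dist = add_point(points, dist, (3.0, 0.0), k=2)
```

The updated distances then feed incremental classical multidimensional scaling, as in the incremental Isomap pipeline the abstract describes.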
Keller, Carmen; Junghans, Alex
2017-11-01
Individuals with low numeracy have difficulties with understanding complex graphs. Combining the information-processing approach to numeracy with graph comprehension and information-reduction theories, we examined whether high numerates' better comprehension might be explained by their closer attention to task-relevant graphical elements, from which they would expect numerical information to understand the graph. Furthermore, we investigated whether participants could be trained in improving their attention to task-relevant information and graph comprehension. In an eye-tracker experiment (N = 110) involving a sample from the general population, we presented participants with 2 hypothetical scenarios (stomach cancer, leukemia) showing survival curves for 2 treatments. In the training condition, participants received written instructions on how to read the graph. In the control condition, participants received another text. We tracked participants' eye movements while they answered 9 knowledge questions. The sum constituted graph comprehension. We analyzed visual attention to task-relevant graphical elements by using relative fixation durations and relative fixation counts. The mediation analysis revealed a significant (P < 0.05) indirect effect of numeracy on graph comprehension through visual attention to task-relevant information, which did not differ between the 2 conditions. Training had a significant main effect on visual attention (P < 0.05) but not on graph comprehension (P = 0.07). Individuals with high numeracy have better graph comprehension due to their greater attention to task-relevant graphical elements than individuals with low numeracy. With appropriate instructions, both groups can be trained to improve their graph-processing efficiency. Future research should examine (e.g., motivational) mediators between visual attention and graph comprehension to develop appropriate instructions that also result in higher graph comprehension.
EdgeMaps: visualizing explicit and implicit relations
NASA Astrophysics Data System (ADS)
Dörk, Marian; Carpendale, Sheelagh; Williamson, Carey
2011-01-01
In this work, we introduce EdgeMaps as a new method for integrating the visualization of explicit and implicit data relations. Explicit relations are specific connections between entities already present in a given dataset, while implicit relations are derived from multidimensional data based on shared properties and similarity measures. Many datasets include both types of relations, which are often difficult to represent together in information visualizations. Node-link diagrams typically focus on explicit data connections, while not incorporating implicit similarities between entities. Multi-dimensional scaling considers similarities between items, however, explicit links between nodes are not displayed. In contrast, EdgeMaps visualize both implicit and explicit relations by combining and complementing spatialization and graph drawing techniques. As a case study for this approach we chose a dataset of philosophers, their interests, influences, and birthdates. By introducing the limitation of activating only one node at a time, interesting visual patterns emerge that resemble the aesthetics of fireworks and waves. We argue that the interactive exploration of these patterns may allow the viewer to grasp the structure of a graph better than complex node-link visualizations.
Visualization of Documents and Concepts in Neuroinformatics with the 3D-SE Viewer
Naud, Antoine; Usui, Shiro; Ueda, Naonori; Taniguchi, Tatsuki
2007-01-01
A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic contents) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics are visualized, allowing interactive browsing of various fields of research featured by keywords, topics, or research teams. A typical use of the 3D-SE viewer is quick browsing of topics displayed on a sphere, then selecting one or several item(s) displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database, such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources. PMID:18974802
Tune the topology to create or destroy patterns
NASA Astrophysics Data System (ADS)
Asllani, Malbor; Carletti, Timoteo; Fanelli, Duccio
2016-12-01
We consider the dynamics of a reaction-diffusion system on a multigraph. The species share the same set of nodes but can access different links to explore the embedding spatial support. By acting on the topology of the networks we can control the ability of the system to self-organise into macroscopic patterns, emerging as a symmetry-breaking instability of a homogeneous fixed point. Two different case studies are considered: on the one hand, we produce a global modification of the networks, starting from the limiting setting where the species are hosted on the same graph. On the other, we consider the effect of inserting just one additional link to differentiate the two graphs. In both cases, patterns can be generated or destroyed following the imposed, small, topological perturbation. Approximate analytical formulae allow one to grasp the essence of the phenomenon and can potentially inspire innovative control strategies to shape the macroscopic dynamics on multigraph networks.
Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.
2014-01-01
We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or of shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional intersequences inclusion constraint by adding directed infinite links between pixels of dependent image structures.
Heuristic-driven graph wavelet modeling of complex terrain
NASA Astrophysics Data System (ADS)
Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François
2015-03-01
We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested using two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.
Graph Curvature for Differentiating Cancer Networks
Sandhu, Romeil; Georgiou, Tryphon; Reznik, Ed; Zhu, Liangjia; Kolesov, Ivan; Senbabaoglu, Yasin; Tannenbaum, Allen
2015-01-01
Cellular interactions can be modeled as complex dynamical systems represented by weighted graphs. The functionality of such networks, including measures of robustness, reliability, performance, and efficiency, are intrinsically tied to the topology and geometry of the underlying graph. Utilizing recently proposed geometric notions of curvature on weighted graphs, we investigate the features of gene co-expression networks derived from large-scale genomic studies of cancer. We find that the curvature of these networks reliably distinguishes between cancer and normal samples, with cancer networks exhibiting higher curvature than their normal counterparts. We establish a quantitative relationship between our findings and prior investigations of network entropy. Furthermore, we demonstrate how our approach yields additional, non-trivial pair-wise (i.e. gene-gene) interactions which may be disrupted in cancer samples. The mathematical formulation of our approach yields an exact solution to calculating pair-wise changes in curvature which was computationally infeasible using prior methods. As such, our findings lay the foundation for an analytical approach to studying complex biological networks. PMID:26169480
Eccentric connectivity index of chemical trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haoer, R. S., E-mail: raadsehen@gmail.com; Department of Mathematics, Faculty of Computer Sciences and Mathematics, University Of Kufa, Najaf; Atan, K. A., E-mail: kamel@upm.edu.my
Let G = (V, E) be a simple connected molecular graph in which vertices and edges represent atoms and chemical bonds, respectively; we denote the vertex set by V(G) and the edge set by E(G). The distance d(u, v) between two vertices u, v ∈ V(G) is defined as the length of a shortest path joining them. The eccentric connectivity index (ECI) of a molecular graph G is ξ(G) = ∑_{v∈V(G)} d(v) ec(v), where d(v) is the degree of a vertex v ∈ V(G) and ec(v), the eccentricity of v, is the length of a greatest shortest path from v to any other vertex. In this study, we derive the general formula for the eccentric connectivity index of some chemical trees such as alkenes.
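The index defined above is straightforward to compute directly: one breadth-first search per vertex gives the eccentricities. A minimal Python sketch (the function names are ours, not from the paper), for a graph given as an adjacency list:

```python
from collections import deque

def eci(adj):
    """Eccentric connectivity index: sum over v of deg(v) * ecc(v)."""
    def ecc(v):
        # BFS from v; eccentricity is the largest shortest-path distance reached
        dist = {v: 0}
        q = deque([v])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return max(dist.values())
    return sum(len(adj[v]) * ecc(v) for v in adj)
```

For the path on four vertices, degrees are 1, 2, 2, 1 and eccentricities 3, 2, 2, 3, so ξ = 3 + 4 + 4 + 3 = 14.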
Quist, Daniel A [Los Alamos, NM]; Gavrilov, Eugene M [Los Alamos, NM]; Fisk, Michael E [Jemez, NM]
2008-01-15
A method enables the topology of an acyclic fully propagated network to be discovered. A list of switches that comprise the network is formed and the MAC address cache for each one of the switches is determined. For each pair of switches, from the MAC address caches the remaining switches that see the pair of switches are located. For each pair of switches the remaining switches are determined that see one of the pair of switches on a first port and the second one of the pair of switches on a second port. A list of insiders is formed for every pair of switches. It is determined whether the insider for each pair of switches is a graph edge and adjacent ones of the graph edges are determined. A symmetric adjacency matrix is formed from the graph edges to represent the topology of the data link network.
The graphical brain: Belief propagation and active inference
Friston, Karl J.; Parr, Thomas; de Vries, Bert
2018-01-01
This paper considers functional integration in the brain from a computational perspective. We ask what sort of neuronal message passing is mandated by active inference—and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models. Crucially, these models can entertain both discrete and continuous states, leading to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to elucidate the requisite message passing in terms of its form and scheduling. To accommodate mixed generative models (of discrete and continuous states), one also has to consider link nodes or factors that enable discrete and continuous representations to talk to each other. When mapping the implicit computational architecture onto neuronal connectivity, several interesting features emerge. For example, Bayesian model averaging and comparison, which link discrete and continuous states, may be implemented in thalamocortical loops. These and other considerations speak to a computational connectome that is inherently state dependent and self-organizing in ways that yield to a principled (variational) account. We conclude with simulations of reading that illustrate the implicit neuronal message passing, with a special focus on how discrete (semantic) representations inform, and are informed by, continuous (visual) sampling of the sensorium. Author Summary This paper considers functional integration in the brain from a computational perspective. We ask what sort of neuronal message passing is mandated by active inference—and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models that can entertain both discrete and continuous states. 
This leads to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to characterize the requisite message passing, and link this formal characterization to canonical microcircuits and extrinsic connectivity in the brain. PMID:29417960
Unsupervised active learning based on hierarchical graph-theoretic clustering.
Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve
2009-10-01
Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples.
Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu
2007-01-01
This paper proposes a graph-based evolutionary algorithm called Genetic Network Programming (GNP). Our goal is to develop GNP, which can deal with dynamic environments efficiently and effectively, based on the distinguished expression ability of the graph (network) structure. The characteristics of GNP are as follows. 1) GNP programs are composed of a number of nodes that execute simple judgment/processing, and these nodes are connected to each other by directed links. 2) The graph structure enables GNP to reuse nodes, so the structure can be very compact. 3) Node transitions in GNP are executed according to the node connections without any terminal nodes, so the past history of node transitions affects which node is used next; this characteristic works as an implicit memory function. These structural characteristics are useful for dealing with dynamic environments. Furthermore, we propose an extended algorithm, "GNP with Reinforcement Learning (GNPRL)", which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments. In this paper, we applied GNP to the problem of determining agents' behavior in order to evaluate its effectiveness, using Tileworld as the simulation environment. The results show some advantages of GNP over conventional methods.
Building Scalable Knowledge Graphs for Earth Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.
2017-12-01
Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers could mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable Knowledge graphs using existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing Knowledge graphs for Earth Science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth Science uses external resources (including metadata catalogs and controlled vocabulary) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented as well as lessons learned.
Hindersin, Laura; Traulsen, Arne
2015-11-01
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
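For context on the amplifier/suppressor distinction above, the well-mixed population (complete graph) is the standard reference point: amplifiers raise the fixation probability of advantageous mutants above this baseline, suppressors lower it. The baseline itself is classical Moran-process theory, with a closed form sketched here in Python (a textbook formula, not the paper's graph-ensemble computations):

```python
def fixation_probability(r, n):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed Moran process of population size n."""
    if r == 1.0:
        return 1.0 / n                 # neutral mutant: uniform over individuals
    return (1.0 - 1.0 / r) / (1.0 - r ** (-n))
```

For example, a mutant with fitness r = 2 in a population of three individuals fixes with probability (1 - 1/2) / (1 - 1/8) = 4/7.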
The random fractional matching problem
NASA Astrophysics Data System (ADS)
Lucibello, Carlo; Malatesta, Enrico M.; Parisi, Giorgio; Sicuro, Gabriele
2018-05-01
We consider two formulations of the random-link fractional matching problem, a relaxed version of the more standard random-link (integer) matching problem. In one formulation, we allow each node to be linked to itself in the optimal matching configuration. In the other, such a link is forbidden. Both problems have the same asymptotic average optimal cost as the random-link matching problem on the complete graph. Using a replica approach and previous results of Wästlund (2010 Acta Mathematica 204 91-150), we analytically derive the finite-size corrections to the asymptotic optimal cost. We compare our results with numerical simulations and discuss the main differences between the random-link fractional matching problems and the random-link matching problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarocki, John Charles; Zage, David John; Fisher, Andrew N.
LinkShop is a software tool for applying the method of Linkography to the analysis of time-sequence data. LinkShop provides command line, web, and application programming interfaces (APIs) for the input and processing of time-sequence data, abstraction models, and ontologies. The software creates graph representations of the abstraction model, the ontology, and the derived linkograph. Finally, the tool allows the user to perform statistical measurements of the linkograph and to refine the ontology through direct manipulation of the linkograph.
NASA Astrophysics Data System (ADS)
Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji
Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve performance superior to Espresso and previous graph-based WSD methods, even though they have fewer parameters and are easy to calibrate.
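Of the two kernels mentioned, the von Neumann kernel has the simple series form K = Σ_{n≥0} αⁿAⁿ = (I − αA)⁻¹ for an adjacency matrix A, convergent when α is smaller than the reciprocal of the spectral radius of A. A pure-Python sketch by truncated series (illustrative only; a real implementation would invert the matrix directly):

```python
def von_neumann_kernel(A, alpha, terms=60):
    """Approximate K = sum_{n>=0} alpha^n A^n by truncating the series.
    Requires alpha < 1 / spectral_radius(A) for convergence."""
    n = len(A)
    K = [[float(i == j) for j in range(n)] for i in range(n)]  # n = 0 term: identity
    P = [row[:] for row in K]                                  # running power A^n
    coeff = 1.0
    for _ in range(terms):
        # P <- P @ A (naive matrix multiply, fine for small illustrative graphs)
        P = [[sum(P[i][k] * A[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        coeff *= alpha
        for i in range(n):
            for j in range(n):
                K[i][j] += coeff * P[i][j]
    return K
```

On the two-node graph with a single edge and α = 0.5, the series sums to K = [[4/3, 2/3], [2/3, 4/3]], matching (I − 0.5A)⁻¹.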
NASA Astrophysics Data System (ADS)
Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke
2017-05-01
Graph network models in dementia have become an important computational technique in neuroscience to study fundamental organizational principles of brain structure and function in neurodegenerative diseases such as dementia. The graph connectivity is reflected in the connectome, the complete set of structural and functional connections of the graph network, which is mostly based on simple Pearson correlation links. In contrast to simple Pearson correlation networks, the partial correlations (PC) only identify direct correlations, while indirect associations are eliminated. In addition, the state-of-the-art techniques in brain research are based on static graph theory, which is unable to capture the dynamic behavior of the brain connectivity as it alters with disease evolution. We propose a new research avenue in neuroimaging connectomics based on combining dynamic graph network theory and modeling strategies at different time scales. We present the theoretical framework for area aggregation and time-scale modeling in brain networks as they pertain to disease evolution in dementia. This novel paradigm is extremely powerful, since we can derive both static parameters pertaining to node and area parameters, as well as dynamic parameters, such as the system's eigenvalues. By implementing and analyzing dynamically both disease-driven PC-networks and regular concentration networks, we reveal differences in the structure of these networks that play an important role in the temporal evolution of this disease. The described research is key to advancing biomedical research on novel disease prediction trajectories and dementia therapies.
Correlation based networks of equity returns sampled at different time horizons
NASA Astrophysics Data System (ADS)
Tumminello, M.; di Matteo, T.; Aste, T.; Mantegna, R. N.
2007-01-01
We investigate the planar maximally filtered graphs of the portfolio of the 300 most capitalized stocks traded at the New York Stock Exchange during the period 2001-2003. Topological properties such as the average length of shortest paths, the betweenness and the degree are computed on different planar maximally filtered graphs generated by sampling the returns at time horizons ranging from 5 min up to one trading day. This analysis confirms that the selected stocks compose a hierarchical system that progressively structures as the sampling time horizon increases. Finally, a cluster formation, associated with economic sectors, is quantitatively investigated.
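Filtered graphs of this kind start from a correlation matrix. In this literature, a correlation ρ_ij is commonly mapped to a distance d_ij = sqrt(2(1 − ρ_ij)) (Mantegna's metric), and the minimum spanning tree, which the planar maximally filtered graph contains as a subgraph, is the simplest such filtered network. A hedged Python sketch using Kruskal's algorithm with union-find (helper names are ours):

```python
import math

def mantegna_distance(rho):
    """Map a correlation in [-1, 1] to a distance in [0, 2]."""
    return math.sqrt(2 * (1 - rho))

def mst(n, dist):
    """Kruskal's minimum spanning tree over nodes 0..n-1.
    dist maps edges (i, j) to distances; returns the list of MST edges."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    edges = []
    for (i, j), _ in sorted(dist.items(), key=lambda kv: kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                       # accept edge only if it joins two components
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

For three assets with corr(0,1) = 0.9, corr(0,2) = 0.5, corr(1,2) = 0.1, the weakest correlation gives the largest distance, so the MST keeps edges (0, 1) and (0, 2).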
Reflecting on Graphs: Attributes of Graph Choice and Construction Practices in Biology.
Angra, Aakanksha; Gardner, Stephanie M
2017-01-01
Undergraduate biology education reform aims to engage students in scientific practices such as experimental design, experimentation, and data analysis and communication. Graphs are ubiquitous in the biological sciences, and creating effective graphical representations involves quantitative and disciplinary concepts and skills. Past studies document student difficulties with graphing within the contexts of classroom or national assessments without evaluating student reasoning. Operating under the metarepresentational competence framework, we conducted think-aloud interviews to reveal differences in reasoning and graph quality between undergraduate biology students, graduate students, and professors in a pen-and-paper graphing task. All professors planned and thought about data before graph construction. When reflecting on their graphs, professors and graduate students focused on the function of graphs and experimental design, while most undergraduate students relied on intuition and data provided in the task. Most undergraduate students meticulously plotted all data with scaled axes, while professors and some graduate students transformed the data, aligned the graph with the research question, and reflected on statistics and sample size. Differences in reasoning and approaches taken in graph choice and construction corroborate and extend previous findings and provide rich targets for undergraduate and graduate instruction. © 2017 A. Angra and S. M. Gardner. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Semantic super networks: A case analysis of Wikipedia papers
NASA Astrophysics Data System (ADS)
Kostyuchenko, Evgeny; Lebedeva, Taisiya; Goritov, Alexander
2017-11-01
An algorithm for constructing very large semantic networks is developed in the current work and tested using the "Cosmos" category of the Internet encyclopedia Wikipedia as an example. During the implementation, a parser for the syntactic analysis of Wikipedia pages was developed, and a graph was formed from the list of articles and categories. On the basis of an analysis of the obtained graph, algorithms for finding domains of high connectivity in the graph were proposed and tested. Algorithms for constructing a domain based on the number of links and on the number of articles in the current subject area are considered. The shortcomings of these algorithms are shown and explained, and an algorithm based on their joint use is developed. The possibility of applying the combined algorithm to obtain the final domain is shown. A problem of instability of the resulting domain was discovered when the algorithm is started from two neighboring vertices belonging to the domain.
Graph Theory at the Service of Electroencephalograms.
Iakovidou, Nantia D
2017-04-01
The brain is one of the largest and most complex organs in the human body, and EEG is a noninvasive electrophysiological monitoring method that is used to record the electrical activity of the brain. Lately, the functional connectivity in the human brain has been regarded and studied as a complex network using EEG signals. This means that the brain is studied as a connected system where nodes, or units, represent different specialized brain regions and links, or connections, represent communication pathways between the nodes. Graph theory and the theory of complex networks provide a variety of measures, methods, and tools that can be useful to efficiently model, analyze, and study EEG networks. This article is addressed to computer scientists who wish to become acquainted with the study of EEG data and also to neuroscientists who would like to become familiar with graph-theoretic approaches and tools to analyze EEG data.
Dragicevic, Arnaud; Boulanger, Vincent; Bruciamacchie, Max; Chauchard, Sandrine; Dupouey, Jean-Luc; Stenger, Anne
2017-04-21
In order to unveil the value of network connectivity, we formalize the construction of ecological networks in forest environments as an optimal control dynamic graph-theoretic problem. The network is based on a set of bioreserves and patches linked by ecological corridors. The node dynamics, built upon the consensus protocol, form a time evolutive Mahalanobis distance weighted by the opportunity costs of timber production. We consider a case of complete graph, where the ecological network is fully connected, and a case of incomplete graph, where the ecological network is partially connected. The results show that the network equilibrium depends on the size of the reception zone, while the network connectivity depends on the environmental compatibility between the ecological areas. Through shadow prices, we find that securing connectivity in partially connected networks is more expensive than in fully connected networks, but should be undertaken when the opportunity costs are significant. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scalable Faceted Ranking in Tagging Systems
NASA Astrophysics Data System (ADS)
Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.
Nowadays, collaborative web tagging systems, which allow users to upload, comment on, and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but this is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
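The offline/online split in steps (i) and (ii) can be sketched with a basic power-iteration PageRank and a uniform-weight merge. This is our simplified illustration, assuming graphs with no dangling nodes and equal tag weights, not the authors' algorithms:

```python
def pagerank(adj, damping=0.85, iters=100):
    """Basic power-iteration PageRank over an adjacency-list digraph.
    Assumes every node has at least one out-link (no dangling nodes)."""
    nodes = list(adj)
    n = len(nodes)
    pr = {v: 1 / n for v in nodes}
    for _ in range(iters):
        pr = {v: (1 - damping) / n + damping * sum(
                  pr[u] / len(adj[u]) for u in nodes if v in adj[u])
              for v in nodes}
    return pr

def faceted_rank(tag_ranks, facet):
    """Step (ii): merge precomputed per-tag score dicts by averaging,
    then return users ordered by merged score."""
    merged = {}
    for tag in facet:
        for v, s in tag_ranks[tag].items():
            merged[v] = merged.get(v, 0.0) + s / len(facet)
    return sorted(merged, key=merged.get, reverse=True)
```

The per-tag rankings are the expensive offline part; the online merge over a facet's tags is a cheap pass over precomputed scores, which is the point of the two-step design.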
Visibility Graph Based Time Series Analysis.
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide us with rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
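The natural visibility criterion behind such mappings links samples i and j whenever every intermediate sample lies strictly below the straight line joining (i, y_i) and (j, y_j). A minimal Python sketch of a plain visibility graph (the paper's segment-wise, network-of-networks construction is not reproduced here):

```python
def visibility_graph(y):
    """Natural visibility graph of a time series.
    Returns the set of undirected edges (i, j) with i < j."""
    n = len(y)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # i sees j iff every sample in between is strictly below the line i-j
            if all(y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

For example, in the series [1, 3, 2] the peak at index 1 blocks the line of sight between indices 0 and 2, so only the adjacent pairs are connected; in [3, 1, 2] the dip at index 1 blocks nothing.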
Multi-Level Anomaly Detection on Time-Varying Graph Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A; Collins, John P; Ferragut, Erik M
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
Detecting black bear source–sink dynamics using individual-based genetic graphs
Draheim, Hope M.; Moore, Jennifer A.; Etter, Dwayne; Winterstein, Scott R.; Scribner, Kim T.
2016-01-01
Source–sink dynamics affects population connectivity, spatial genetic structure and population viability for many species. We introduce a novel approach that uses individual-based genetic graphs to identify source–sink areas within a continuously distributed population of black bears (Ursus americanus) in the northern lower peninsula (NLP) of Michigan, USA. Black bear harvest samples (n = 569, from 2002, 2006 and 2010) were genotyped at 12 microsatellite loci and locations were compared across years to identify areas of consistent occupancy over time. We compared graph metrics estimated for a genetic model with metrics from 10 ecological models to identify ecological factors that were associated with sources and sinks. We identified 62 source nodes, 16 of which represent important source areas (net flux > 0.7) and 79 sink nodes. Source strength was significantly correlated with bear local harvest density (a proxy for bear density) and habitat suitability. Additionally, resampling simulations showed our approach is robust to potential sampling bias from uneven sample dispersion. Findings demonstrate black bears in the NLP exhibit asymmetric gene flow, and individual-based genetic graphs can characterize source–sink dynamics in continuously distributed species in the absence of discrete habitat patches. Our findings warrant consideration of undetected source–sink dynamics and their implications on harvest management of game species. PMID:27440668
A functional analytic approach to computer-interactive mathematics.
Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M; Ninness, Sharon K
2005-01-01
Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed.
Local Difference Measures between Complex Networks for Dynamical System Model Evaluation
Lange, Stefan; Donges, Jonathan F.; Volkholz, Jan; Kurths, Jürgen
2015-01-01
A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [1], we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, as well as graphs based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required.
Generalizations to directed as well as edge- and node-weighted graphs are discussed. PMID:25856374
Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model
NASA Technical Reports Server (NTRS)
Segui, John S.; Jennings, Esther H.; Clare, Loren P.
2013-01-01
Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
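The earliest-arrival-time routing described above can be illustrated with a minimal sketch (not JPL's ECGR implementation): a Dijkstra search over a simplified contact plan, ignoring transmission delay, bundle size, and queueing. The contact representation and function names here are hypothetical.

```python
import heapq

def earliest_arrival(contacts, source, dest, t0=0.0):
    """Dijkstra over a contact plan using earliest arrival time.

    contacts: list of (from_node, to_node, start, end) contact windows.
    Returns the earliest time a bundle released at t0 can reach dest,
    or None if no contact path exists.  Simplified: zero transmission
    and propagation delay, unlimited capacity.
    """
    adj = {}
    for u, v, s, e in contacts:
        adj.setdefault(u, []).append((v, s, e))
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dest:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v, s, e in adj.get(u, []):
            if t <= e:                      # contact window not yet over
                arrive = max(t, s)          # wait for the window to open
                if arrive < best.get(v, float("inf")):
                    best[v] = arrive
                    heapq.heappush(pq, (arrive, v))
    return None

# A->B contact during [10, 20], B->C during [15, 30]: C is reached at t=15.
plan = [("A", "B", 10, 20), ("B", "C", 15, 30)]
print(earliest_arrival(plan, "A", "C"))  # 15
```

Because arrival time is monotonically nondecreasing along any path, the standard Dijkstra invariant holds, which is exactly the property the abstract credits for enabling the cheap path-selection step.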
Approaches to Linked Open Data at data.oceandrilling.org
NASA Astrophysics Data System (ADS)
Fils, D.
2012-12-01
The data.oceandrilling.org web application applies Linked Open Data (LOD) patterns to expose Deep Sea Drilling Project (DSDP), Ocean Drilling Program (ODP) and Integrated Ocean Drilling Program (IODP) data. Ocean drilling data is represented in a rich range of data formats: high-resolution images, file-based data sets and sample-based data. This richness of data types has been well met by semantic approaches and will be demonstrated. Data has been extracted from CSV, HTML and RDBMS sources through custom software and existing packages for loading into a SPARQL 1.1 compliant triple store. Practices have been developed to streamline the maintenance of the RDF graphs and properly expose them using LOD approaches like VoID and HTML-embedded structured data. Custom and existing vocabularies are used to allow semantic relations between resources. Use of the W3C draft RDF Data Cube Vocabulary and other approaches for encoding time scales, taxonomic fossil data and other graphs will be shown. A software layer written in Google Go mediates the RDF-to-web pipeline. The approach used is general and can be applied to other similar environments like node.js or Python Twisted. To facilitate communication, user interface software libraries such as D3 and packages such as S2S and LodLive have been used. Additionally, OpenSearch APIs, structured data in HTML and SPARQL endpoints provide various access methods for applications. data.oceandrilling.org is viewed not as a web site but as an application that communicates with a range of clients. This approach helps guide the development more along software practices than along web site authoring approaches.
Automatic extraction of protein point mutations using a graph bigram association.
Lee, Lawrence C; Horn, Florence; Cohen, Fred E
2007-02-02
Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations and the biological impacts of the changes are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from biomedical literature. The principal problem of point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method is different from other models for point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature in these three different protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with the word distance metric precision of 0.73. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction and to be applicable to text-mining applications requiring the association of words.
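The core idea of associating a point mutation with a protein through a word-adjacency graph, rather than raw linear word distance, can be sketched as follows. This is a toy illustration, not Mutation GraB's actual scoring; the regex, sentence, and helper names are invented for the example.

```python
import re
from collections import defaultdict, deque

def bigram_graph(tokens):
    """Undirected word-adjacency graph: an edge joins each consecutive pair."""
    g = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        g[a].add(b)
        g[b].add(a)
    return g

def graph_distance(g, src, dst):
    """BFS shortest-path length between two word types (None if unreachable)."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nxt in g[node]:
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return None

def associate_mutation(text, proteins):
    """Link each point-mutation token (e.g. 'D87N') to the graph-nearest protein."""
    tokens = text.split()
    g = bigram_graph(tokens)
    out = {}
    for tok in tokens:
        if re.fullmatch(r"[A-Z]\d+[A-Z]", tok):  # crude point-mutation pattern
            dists = {p: graph_distance(g, tok, p) for p in proteins if p in g}
            dists = {p: d for p, d in dists.items() if d is not None}
            if dists:
                out[tok] = min(dists, key=lambda p: dists[p])
    return out

text = "the D87N substitution in GyrA confers resistance while ParC is unaffected"
print(associate_mutation(text, ["GyrA", "ParC"]))  # {'D87N': 'GyrA'}
```

In longer articles where word types recur, graph distance over the bigram structure can differ substantially from surface proximity, which is the intuition behind preferring a graph metric for disambiguation.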
COST-EFFECTIVE SAMPLING FOR SPATIALLY DISTRIBUTED PHENOMENA
Various measures of sampling plan cost and loss are developed and analyzed as they relate to a variety of multidisciplinary sampling techniques. The sampling choices examined include methods from design-based sampling, model-based sampling, and geostatistics. Graphs and tables ar...
Assortative model for social networks
NASA Astrophysics Data System (ADS)
Catanzaro, Michele; Caldarelli, Guido; Pietronero, Luciano
2004-09-01
In this Brief Report we present a version of a network growth model, generalized in order to describe the behavior of social networks. The case study considered is the preprint archive at arXiv.org. Each node corresponds to a scientist, and a link is present whenever two authors wrote a paper together. This graph is a nice example of a degree-assortative network, that is to say, a network where sites with similar degree are connected to each other. The model presented is one of the few able to reproduce such behavior, giving some insight into the microscopic dynamics at the basis of the graph structure.
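Degree assortativity of the kind described, where sites of similar degree connect to each other, is commonly quantified as the Pearson correlation of the degrees at either end of each edge. A self-contained sketch (not the authors' model code; the example graph is invented):

```python
from collections import Counter

def degree_assortativity(edges):
    """Pearson correlation of the degrees at the two endpoints of each edge.

    Positive values indicate assortative mixing (high-degree nodes link to
    other high-degree nodes), as reported for coauthorship networks.
    """
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # each undirected edge contributes both (deg u, deg v) and (deg v, deg u)
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# A star graph is maximally disassortative: the hub (degree 3) links only to leaves.
star = [(0, 1), (0, 2), (0, 3)]
print(degree_assortativity(star))  # -1.0
```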
Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components
NASA Technical Reports Server (NTRS)
Reyes, A. L.
1975-01-01
A series of experiments was conducted to determine the dry heat resistance of microorganisms in soil obtained from Denver, Colorado; Pasadena, California; Kennedy Space Center, Florida; and Cincinnati, Ohio. The results of the KSC terminal sterilization cycle experiment are given in graphs. The average number of viable organisms per ml was calculated for 18 replicate soil samples for each sample area and points plotted equivalent to 30 hr exposure at 112 C. The results showed a reduction of 3 logs from the initial population for both KSC and Cincinnati soil samples. Results from other areas are given in graphs.
Graph-based segmentation for RGB-D data using 3-D geometry enhanced superpixels.
Yang, Jingyu; Gan, Ziqiao; Li, Kun; Hou, Chunping
2015-05-01
With the advances of depth sensing technologies, color image plus depth information (referred to as RGB-D data hereafter) is more and more popular for comprehensive description of 3-D scenes. This paper proposes a two-stage segmentation method for RGB-D data: 1) oversegmentation by 3-D geometry enhanced superpixels and 2) graph-based merging with label cost from superpixels. In the oversegmentation stage, 3-D geometrical information is reconstructed from the depth map. Then, a K-means-like clustering method is applied to the RGB-D data for oversegmentation using an 8-D distance metric constructed from both color and 3-D geometrical information. In the merging stage, treating each superpixel as a node, a graph-based model is set up to relabel the superpixels into semantically-coherent segments. In the graph-based model, RGB-D proximity, texture similarity, and boundary continuity are incorporated into the smoothness term to exploit the correlations of neighboring superpixels. To obtain a compact labeling, the label term is designed to penalize labels linking to similar superpixels that likely belong to the same object. Both the proposed 3-D geometry enhanced superpixel clustering method and the graph-based merging method from superpixels are evaluated by qualitative and quantitative results. By the fusion of color and depth information, the proposed method achieves superior segmentation performance over several state-of-the-art algorithms.
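The oversegmentation stage's K-means-like clustering over a joint color/geometry distance can be sketched roughly as below. This is a simplified illustration, not the paper's method: the paper uses an 8-D metric, whereas this toy uses 6-D features (RGB plus 3-D position) with invented weights, and seeds centers from the first k points for determinism.

```python
def kmeans_rgbd(points, k, w_color=1.0, w_geom=2.0, iters=10):
    """K-means-style clustering on joint color + 3-D position features.

    points: (r, g, b, x, y, z) tuples.  Color and geometry distances are
    weighted separately; the weights here are illustrative only.
    Returns the list of clusters (each a list of points).
    """
    def dist2(p, c):
        dc = sum((p[i] - c[i]) ** 2 for i in range(3))      # color term
        dg = sum((p[i] - c[i]) ** 2 for i in range(3, 6))   # geometry term
        return w_color * dc + w_geom * dg

    centers = [tuple(p) for p in points[:k]]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        # recompute centers; keep the old center if a cluster emptied
        centers = [
            tuple(sum(p[i] for p in cl) / len(cl) for i in range(6)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

# Two well-separated blobs (red near the origin, blue near (5,5,5)).
pts = [(255, 0, 0, 0.0, 0.0, 0.0), (0, 0, 255, 5.0, 5.0, 5.0),
       (250, 5, 0, 0.1, 0.0, 0.0), (5, 0, 250, 5.1, 5.0, 5.0)]
groups = kmeans_rgbd(pts, 2)
print([len(g) for g in groups])  # [2, 2]
```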
Searching social networks for subgraph patterns
NASA Astrophysics Data System (ADS)
Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises
2013-06-01
Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
Transformations of Mathematical and Stimulus Functions
Ninness, Chris; Barnes-Holmes, Dermot; Rumph, Robin; McCuller, Glen; Ford, Angela M; Payne, Robert; Ninness, Sharon K; Smith, Ronald J; Ward, Todd A; Elliott, Marc P
2006-01-01
Following a pretest, 8 participants who were unfamiliar with algebraic and trigonometric functions received a brief presentation on the rectangular coordinate system. Next, they participated in a computer-interactive matching-to-sample procedure that trained formula-to-formula and formula-to-graph relations. Then, they were exposed to 40 novel formula-to-graph tests and 10 novel graph-to-formula tests. Seven of the 8 participants showed substantial improvement in identifying formula-to-graph relations; however, in the test of novel graph-to-formula relations, participants tended to select equations in their factored form. Next, we manipulated contextual cues in the form of rules regarding mathematical preferences. First, we informed participants that standard forms of equations were preferred over factored forms. In a subsequent test of 10 additional novel graph-to-formula relations, participants shifted their selections to favor equations in their standard form. This preference reversed during 10 more tests when financial reward was made contingent on correct identification of formulas in factored form. Formula preferences and transformation of novel mathematical and stimulus functions are discussed. PMID:17020211
NASA Astrophysics Data System (ADS)
Sharma, Harshita; Zerbe, Norman; Heim, Daniel; Wienert, Stephan; Lohmann, Sebastian; Hellwich, Olaf; Hufnagl, Peter
2016-03-01
This paper describes a novel graph-based method for efficient representation and subsequent classification in histological whole slide images of gastric cancer. Her2/neu immunohistochemically stained and haematoxylin and eosin stained histological sections of gastric carcinoma are digitized. Immunohistochemical staining is used in practice by pathologists to determine extent of malignancy, however, it is laborious to visually discriminate the corresponding malignancy levels in the more commonly used haematoxylin and eosin stain, and this study attempts to solve this problem using a computer-based method. Cell nuclei are first isolated at high magnification using an automatic cell nuclei segmentation strategy, followed by construction of cell nuclei attributed relational graphs of the tissue regions. These graphs represent tissue architecture comprehensively, as they contain information about cell nuclei morphology as vertex attributes, along with knowledge of neighborhood in the form of edge linking and edge attributes. Global graph characteristics are derived and ensemble learning is used to discriminate between three types of malignancy levels, namely, non-tumor, Her2/neu positive tumor and Her2/neu negative tumor. Performance is compared with state of the art methods including four texture feature groups (Haralick, Gabor, Local Binary Patterns and Varma Zisserman features), color and intensity features, and Voronoi diagram and Delaunay triangulation. Texture, color and intensity information is also combined with graph-based knowledge, followed by correlation analysis. Quantitative assessment is performed using two cross validation strategies. On investigating the experimental results, it can be concluded that the proposed method provides a promising way for computer-based analysis of histopathological images of gastric cancer.
A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
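The aggregation of node-level probabilities into subgraph-level surprise scores can be sketched under a simple independent-edge model. This is a stand-in for the BTER-based hierarchical models in the paper; the function names and the default probability for unmodeled pairs are invented.

```python
import math

def edge_surprise(p, present):
    """Negative log-probability of an observed edge state under model prob p."""
    return -math.log(p if present else 1.0 - p)

def subgraph_score(model, observed, nodes):
    """Aggregate per-edge surprise over all node pairs inside `nodes`.

    model: dict (u, v) -> probability that edge exists (with u < v).
    observed: set of edges actually present.  Higher scores mean the
    subgraph deviates more from the model's expectation.
    """
    nodes = sorted(nodes)
    total = 0.0
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            p = model.get((u, v), 0.01)  # small default for unmodeled pairs
            total += edge_surprise(p, (u, v) in observed)
    return total

# A tight community whose internal edges suddenly vanish scores as anomalous.
model = {(0, 1): 0.9, (0, 2): 0.9, (1, 2): 0.9}
normal = {(0, 1), (0, 2), (1, 2)}
anomalous = {(0, 1)}
print(subgraph_score(model, anomalous, [0, 1, 2]) >
      subgraph_score(model, normal, [0, 1, 2]))  # True
```

Scoring coarse subgraphs this way, then drilling into the highest-scoring nodes, mirrors the graph-to-subgraph-to-node narrowing the abstract describes.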
CANCER CONTROL AND POPULATION SCIENCES FAST STATS
Fast Stats links to tables, charts, and graphs of cancer statistics for all major cancer sites by age, sex, race, and geographic area. The statistics include incidence, mortality, prevalence, and the probability of developing or dying from cancer. A large set of statistics is ava...
Brain Modulyzer: Interactive Visual Analysis of Functional Brain Connectivity
Murugesan, Sugeerth; Bouchard, Kristopher; Brown, Jesse A.; ...
2016-05-09
Here, we present Brain Modulyzer, an interactive visual exploration tool for functional magnetic resonance imaging (fMRI) brain scans, aimed at analyzing the correlation between different brain regions when resting or when performing mental tasks. Brain Modulyzer combines multiple coordinated views—such as heat maps, node link diagrams, and anatomical views—using brushing and linking to provide an anatomical context for brain connectivity data. Integrating methods from graph theory and analysis, e.g., community detection and derived graph measures, makes it possible to explore the modular and hierarchical organization of functional brain networks. Providing immediate feedback by displaying analysis results instantaneously while changing parameters gives neuroscientists a powerful means to comprehend complex brain structure more effectively and efficiently and supports forming hypotheses that can then be validated via statistical analysis. In order to demonstrate the utility of our tool, we also present two case studies—exploring progressive supranuclear palsy, as well as memory encoding and retrieval.
Software-defined Quantum Networking Ecosystem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Sadlier, Ronald
The software enables a user to perform modeling and simulation of software-defined quantum networks. The software addresses the problem of how to synchronize transmission of quantum and classical signals through multi-node networks and to demonstrate quantum information protocols such as quantum teleportation. The software approaches this problem by generating a graphical model of the underlying network and attributing properties to each node and link in the graph. The graphical model is then simulated using a combination of discrete-event simulators to calculate the expected state of each node and link in the graph at a future time. A user interacts with the software by providing an initial network model and instantiating methods for the nodes to transmit information with each other. This includes writing application scripts in Python that make use of the software library interfaces. A user then initiates the application scripts, which invokes the software simulation. The user then uses the built-in diagnostic tools to query the state of the simulation and to collect statistics on synchronization.
Teleradiology costs in a rural area
NASA Astrophysics Data System (ADS)
Chimiak, William J.
1994-05-01
There have been several excellent papers providing architectures for teleradiology. Effective teleradiology systems can be fielded today. However, cost issues arise which easily blur a decision to deploy a teleradiology system for a given hospital or regional hospital system. In this paper, a T1 infrastructure is assumed that is comprised of dedicated T1 links as well as fractional T1 links. The effects of teleconferencing are included in the analysis. Plots of the telecommunication costs provide visualization of the cost and performance issues as a function of varying degrees of teleradiology and teleconference utilization. 1993 tariffs in North Carolina will be used as a baseline to arrive at some basic teleradiology cost plots and metrics. The graphs are produced by gnuplot, which is freely available on many anonymous ftp sites and runs on Unix workstations as well as personal computers. The plotting commands used for the graphs are available at the Bowman Gray School of Medicine of Wake Forest University anonymous ftp site.
Böhlke, John Karl
2006-01-01
Atmospheric environmental tracers commonly used to date groundwater on timescales of years to decades include CFC-11, CFC-12, CFC-113, SF6, 85Kr, 3H, and 3H/3H0, where 3H0 refers to initial tritium (3H + tritiogenic 3He) (Cook and Herczeg, 2000). Interpretation of age from environmental tracer data may be relatively simple for a water sample with a single age, but the interpretation is more complex for a sample that is a mixture of waters of varying ages. A mixture can be a natural result of convergence of flow lines to a discharge area such as a spring or stream, or it can be an artefact of sampling a long-screen well. TRACERMODEL1 contains a worksheet that can be used to determine hypothetical concentrations of atmospheric environmental tracers in water samples with several different age distributions. It is designed to permit plotting of ages and tracer concentrations in a variety of different combinations to facilitate interpretation of measurements. TRACERMODEL1 includes several different types of graphs that are linked to the calculations. The spreadsheet and accompanying graphs can be modified for specific applications. For example, the selection of atmospheric environmental tracers can be changed to reflect analytes of interest, the input tracer data can be modified to reflect local conditions or different timescales, and the analytes of interest can include other types of non-point-source contaminants, such as nitrate (Böhlke, 2002). Previous versions of this workbook have been used to evaluate field data in studies of groundwater residence time and agricultural contamination (Böhlke and Denver, 1995; Focazio et al., 1998; Katz et al., 1999; Katz et al., 2001; Plummer et al., 2001; Böhlke and Krantz, 2003; Lindsey et al., 2003).
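The worksheet's central computation, a hypothetical tracer concentration for a sample with a given age distribution, amounts to a weighted sum of the atmospheric input history over contributing ages. A sketch with an invented input curve (not TRACERMODEL1's data or its spreadsheet layout):

```python
import math

def mixed_concentration(input_history, age_weights):
    """Tracer concentration in a mixed sample: weighted sum over ages.

    input_history[a] is the tracer level for water recharged `a` years ago;
    age_weights maps age -> fraction of the sample (should sum to 1).
    A single-age (piston-flow) sample is just age_weights = {age: 1.0}.
    """
    return sum(w * input_history[a] for a, w in age_weights.items())

def exponential_ages(mean_age, max_age):
    """Discrete exponential age distribution (well-mixed reservoir model)."""
    w = {a: math.exp(-a / mean_age) for a in range(max_age + 1)}
    s = sum(w.values())
    return {a: x / s for a, x in w.items()}

# Hypothetical atmospheric curve: tracer level declines with recharge age.
history = [250 - 3 * a for a in range(61)]
print(mixed_concentration(history, {10: 1.0}))  # 220
```

Comparing the piston-flow value with `mixed_concentration(history, exponential_ages(20, 60))` illustrates why a mixture's apparent age can differ from any single-age interpretation.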
PLACNETw: a web-based tool for plasmid reconstruction from bacterial genomes.
Vielva, Luis; de Toro, María; Lanza, Val F; de la Cruz, Fernando
2017-12-01
PLACNET is a graph-based tool for reconstruction of plasmids from next-generation sequencing paired-end datasets. PLACNET graphs contain two types of nodes (assembled contigs and reference genomes) and two types of edges (scaffold links and homology to references). Manual pruning of the graphs is a necessary requirement in PLACNET, but this is difficult for users without a solid bioinformatic background. PLACNETw, a webtool based on PLACNET, provides an interactive graphic interface, automates BLAST searches, and extracts the relevant information for decision making. It allows a user with domain expertise to visualize the scaffold graphs and related information of contigs as well as reference sequences, so that the pruning operations can be done interactively from a personal computer without the need for additional tools. After successful pruning, each plasmid becomes a separate connected component subgraph. The resulting data are automatically downloaded by the user. PLACNETw is freely available at https://castillo.dicom.unican.es/upload/. Contact: delacruz@unican.es. A tutorial video and several solved examples are available at https://castillo.dicom.unican.es/placnetw_video/ and https://castillo.dicom.unican.es/examples/. © The Author 2017. Published by Oxford University Press.
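After pruning, each plasmid corresponds to one connected component of the scaffold graph. A minimal sketch of that final extraction step (not PLACNETw's code; the contig names are invented):

```python
def connected_components(edges):
    """Connected components of an undirected graph via iterative flood fill.

    In a pruned PLACNET-style scaffold graph, each returned component
    would correspond to one putative replicon (plasmid or chromosome).
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Pruning has split the scaffold graph into two putative replicons.
comps = connected_components([("c1", "c2"), ("c2", "c3"), ("c4", "c5")])
print(len(comps))  # 2
```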
Graph analysis of functional brain networks: practical issues in translational neuroscience
De Vico Fallani, Fabrizio; Richiardi, Jonas; Chavez, Mario; Achard, Sophie
2014-01-01
The brain can be regarded as a network: a connected system where nodes, or units, represent different specialized regions and links, or connections, represent communication pathways. From a functional perspective, communication is coded by temporal dependence between the activities of different brain areas. In the last decade, the abstract representation of the brain as a graph has made it possible to visualize functional brain networks and describe their non-trivial topological properties in a compact and objective way. Nowadays, the use of graph analysis in translational neuroscience has become essential to quantify brain dysfunctions in terms of aberrant reconfiguration of functional brain networks. Despite its evident impact, graph analysis of functional brain networks is not a simple toolbox that can be blindly applied to brain signals. On the one hand, it requires the know-how of all the methodological steps of the pipeline that manipulate the input brain signals and extract the functional network properties. On the other hand, knowledge of the neural phenomenon under study is required to perform physiologically relevant analysis. The aim of this review is to provide practical indications to make sense of brain network analysis and contrast counterproductive attitudes. PMID:25180301
What can graph theory tell us about word learning and lexical retrieval?
Vitevitch, Michael S
2008-04-01
Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing.
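The small-world characteristics mentioned, short average path length together with high clustering, can be computed directly from such a graph. A self-contained sketch on a toy phonological-neighbor graph (the word list is invented; Pajek is not used here):

```python
from collections import deque

def build_adjacency(edges):
    """Undirected adjacency map from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return links / (k * (k - 1) / 2)

def avg_path_length(adj):
    """Mean BFS distance over all connected ordered node pairs."""
    total, count = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                count += 1
    return total / count

# Toy lexicon: an edge joins words differing by roughly one phoneme.
edges = [("cat", "hat"), ("cat", "cot"), ("cat", "cut"),
         ("hat", "hot"), ("cot", "hot"), ("cot", "cut")]
adj = build_adjacency(edges)
print(round(clustering_coefficient(adj, "cot"), 3))  # 0.333
print(avg_path_length(adj))  # 1.4
```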
Linear Algebra and Sequential Importance Sampling for Network Reliability
2011-12-01
first test case is an Erdős–Rényi graph with 100 vertices and 150 edges. Figure 1 depicts the relative variance of the three algorithms, with the TOP-DOWN algorithm shown as the solid curve. [Figure 1 caption: "Relative variance of various algorithms on Erdős–Rényi graph, 100 vertices 250 edges. Key: Solid = TOP-DOWN algorithm."]
Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets
2017-07-01
principled methodology for two-sample graph testing; designed a provably almost-surely perfect vertex clustering algorithm for block model graphs; proved… [report sections include 3.7 Semi-Supervised Clustering Methodology and 3.8 Robust Hypothesis Testing] …dimensional Euclidean space allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed for
ERIC Educational Resources Information Center
Instructional Objectives Exchange, Los Angeles, CA.
To help classroom teachers in grades K-9 construct mathematics tests, fifteen general objectives, corresponding sub-objectives, sample test items, and answers are presented. In general, sub-objectives are arranged in increasing order of difficulty. The objectives were written to comprehensively cover three categories. The first, graphs, covers the…
ERIC Educational Resources Information Center
Zhu, Zheng; Chen, Peijie; Zhuang, Jie
2013-01-01
Purpose: Many ActiGraph accelerometer cutoff points and equations have been developed to classify children and youth's physical activity (PA) into different intensity levels. Using a sample from the Chinese City Children and Youth Physical Activity Study, this study aimed to develop new ActiGraph cutoff points for moderate-to-vigorous physical…
Pre-incident Analysis using Multigraphs and Faceted Ontologies
2013-08-01
ontology for beverages, part of which is shown in the form of an entity-relationship (ER) graph in Figure 4. The entities Beer, Wine, etc. have is a… another from Beer to Grains. The terminology is suggestive: the is a type of link has already been defined (informally). The made from link… expressions derived from natural language such as Beer, is a, Grains, and made from. Labels alone are insufficient for a computer system for ontology and
What Causal Forces Shape Internet Connectivity at the AS-level?
2003-01-01
business “peering relationship.” By focusing on the AS subgraph ASPC whose links represent provider-customer relationships, we present an empirical… customer relationships may be determined in the actual Internet, we develop a new optimization-driven model for Internet growth at the ASPC level… among ASs. Two ASs are connected in an AS graph by a link only if they have a “peering relationship” between them, e.g., provider-customer or peer-to
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs and dramatically improves on the time and space requirements of the classical algorithm for single populations. Using the underlying random-graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., the height of the graph, the number of recombinations, the number of mutations, and the population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge, this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate that SimRA produces the ARG in compact form without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at: https://github.com/ComputationalGenomics/SimRA CONTACT: parida@us.ibm.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Linked data and provenance in biological data webs.
Zhao, Jun; Miles, Alistair; Klyne, Graham; Shotton, David
2009-03-01
The Web is now being used as a platform for publishing and linking life science data. The Web's linking architecture can be exploited to join heterogeneous data from multiple sources. However, as data are frequently being updated in a decentralized environment, provenance information becomes critical to providing reliable and trustworthy services to scientists. This article presents design patterns for representing and querying provenance information relating to mapping links between heterogeneous data from sources in the domain of functional genomics. We illustrate the use of named resource description framework (RDF) graphs at different levels of granularity to make provenance assertions about linked data, and demonstrate that these assertions are sufficient to support requirements including data currency, integrity, evidential support and historical queries.
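The named-graph pattern described here can be illustrated with a minimal sketch. The Python toy below is not the authors' actual RDF/SPARQL infrastructure; the graph names, predicates, identifiers, and dates are invented. It stores each mapping link in its own named graph and attaches provenance metadata to that graph, which is already enough to answer a simple data-currency query:

```python
# Toy quad store: each mapping link (a triple) lives in a named graph
# that carries provenance assertions (source, date) about that link.
store = {}          # graph name -> set of (subject, predicate, object)
provenance = {}     # graph name -> provenance metadata dict

def assert_link(graph, s, p, o, source, date):
    """Record a mapping link inside a named graph with its provenance."""
    store.setdefault(graph, set()).add((s, p, o))
    provenance[graph] = {"source": source, "date": date}

def links_current_as_of(cutoff):
    """Data-currency query: links whose provenance date >= cutoff
    (ISO date strings compare correctly as plain strings)."""
    return {t for g, triples in store.items()
            for t in triples
            if provenance[g]["date"] >= cutoff}

# Invented functional-genomics-style mapping links.
assert_link("g1", "gene:Adh", "maps_to", "uniprot:P00334",
            "FlyBase", "2008-06-01")
assert_link("g2", "gene:Adh", "maps_to", "embl:X78384",
            "FlyBase", "2005-01-15")
```

Because provenance is asserted about the graph rather than about each triple, the same mechanism extends to integrity, evidential-support, and historical queries by adding further metadata keys.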
Visual Routines for Extracting Magnitude Relations
ERIC Educational Resources Information Center
Michal, Audrey L.; Uttal, David; Shah, Priti; Franconeri, Steven L.
2016-01-01
Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure the optimal way to extract such relations from graphs in college students and young children (6- and 8-year-olds). Participants compared relational statements ("Are there more blueberries than oranges?") with simple…
Crocodile Mathematics 1.1. [CD-ROM].
ERIC Educational Resources Information Center
2002
This CD-ROM consists of software that allows both teachers and students to create and experiment with mathematical models by linking shapes, graphs, numbers, and equations. It is usable for demonstrations, home learning, reinforcing concepts, illustrating concepts that are difficult to visualize, further pupil investigations, and project work.…
Was Euclid an Unnecessarily Sophisticated Psychologist?
ERIC Educational Resources Information Center
Arabie, Phipps
1991-01-01
The current state of multidimensional scaling using the city-block metric is reviewed, with attention to (1) substantive and theoretical issues; (2) recent algorithmic developments and their implications for analysis; (3) isometries with other metrics; (4) links to graph-theoretic models; and (5) prospects for future development. (SLD)
Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.
ERIC Educational Resources Information Center
Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick
1999-01-01
Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitrani, J
Bayesian networks (BNs) are an excellent tool for modeling uncertainties in systems with several interdependent variables. A BN is a directed acyclic graph and consists of a structure, i.e., the set of directional links between variables that depend on other variables, and conditional probabilities (CPs) for each variable. In this project, we apply BNs to understand uncertainties in NIF ignition experiments. One can represent various physical properties of National Ignition Facility (NIF) capsule implosions as variables in a BN. A dataset containing simulations of NIF capsule implosions was provided. The dataset was generated from a radiation hydrodynamics code, and it contained 120 simulations of 16 variables. Relevant knowledge about the physics of NIF capsule implosions and greedy search algorithms were used to search for hypothetical structures for a BN. Our preliminary results found 6 links between variables in the dataset. However, we thought there should have been more links between the dataset variables based on the physics of NIF capsule implosions. Important reasons for the paucity of links are the relatively small size of the dataset and the sampling of the values for dataset variables. Another factor that might have caused the paucity of links is that in the dataset 20% of the simulations represented successful fusion and 80% did not (simulations of unsuccessful fusion are useful for measuring certain diagnostics), which skewed the distributions of several variables and possibly reduced the number of links. Nevertheless, by illustrating the interdependencies and conditional probabilities of several parameters and diagnostics, an accurate and complete BN built from an appropriate simulation set would provide uncertainty quantification for NIF capsule implosions.
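The factorization a BN encodes, the joint probability as the product of each variable's conditional probability given its parents, can be sketched generically. The example below is an invented two-node rain/wet network for illustration, not the NIF dataset or its variables:

```python
def joint_prob(assign, parents, cpt):
    """Joint probability of a full assignment in a Bayesian network:
    P(x1..xn) = product over i of P(xi | parents(xi))."""
    p = 1.0
    for var, val in assign.items():
        parent_vals = tuple(assign[q] for q in parents[var])
        p *= cpt[var][parent_vals][val]
    return p

# Hypothetical two-variable network with one directed link: rain -> wet.
parents = {"rain": (), "wet": ("rain",)}
cpt = {
    "rain": {(): {True: 0.2, False: 0.8}},
    "wet": {(True,): {True: 0.9, False: 0.1},
            (False,): {True: 0.1, False: 0.9}},
}
```

For instance, `joint_prob({"rain": True, "wet": True}, parents, cpt)` evaluates to 0.2 × 0.9 = 0.18, and the four possible assignments sum to 1, as any valid joint distribution must.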
GenomeGraphs: integrated genomic data visualization with R.
Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine
2009-01-06
Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest. There is a lack of methods to visualize these different genomic datasets in a versatile manner. In addition, genomic data analysis requires integrated visualization of experimental data along with constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, as an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these to gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types together in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package which can be used to visualize a multitude of genomic datasets within the statistical programming environment R.
Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold
2014-12-01
In order to deal with the sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximation policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions and the learning control performance can be improved for a variety of parameter settings.
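The construction described above can be sketched in simplified form. The following Python toy is a stand-in, not the authors' RPI implementation: it uses plain k-means, a hard distance threshold for linking cluster centers, and the unnormalized Laplacian L = D − W, and it omits the spectral basis extraction step:

```python
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mean(g):
    return (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: subsample k representative states from sampled
    MDP states (2-D tuples)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist(p, centers[j]))
            groups[i].append(p)
        centers = [mean(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def graph_laplacian(centers, radius):
    """Unnormalized graph Laplacian L = D - W over cluster centers,
    linking two centers when their distance is below `radius`."""
    k = len(centers)
    W = [[1.0 if i != j and dist(centers[i], centers[j]) < radius else 0.0
          for j in range(k)] for i in range(k)]
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(k)]
            for i in range(k)]
```

In the full scheme, eigenvectors of this Laplacian (restricted to the cluster centers) would then serve as the basis functions for value function approximation.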
Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.
Mutimbu, Lawrence; Robles-Kelly, Antonio
2016-08-31
This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate devoid of libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered via a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.
ERIC Educational Resources Information Center
Chan, Siu Y.
2001-01-01
Discussion of information overload focuses on a study of masters degree students at a Hong Kong university that investigated the effectiveness of graphs as decision aids to reduce adverse effects of information overload on decision quality. Results of a simulation of a business prediction task with a sample of business managers are presented.…
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
NASA Astrophysics Data System (ADS)
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and, in general, easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics for computations that use random walks on graphs that can be represented as Markov chains.
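For a finite Markov chain with row-stochastic transition matrix P and stationary distribution π, the KSE takes the standard entropy-rate form H = −Σᵢ πᵢ Σⱼ Pᵢⱼ log Pᵢⱼ. A minimal Python sketch, with π obtained by power iteration (the example matrices in the test are invented, not taken from the paper):

```python
import math

def stationary(P, iters=200):
    """Stationary distribution of a row-stochastic matrix P by
    repeatedly applying pi <- pi P from the uniform start."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate of the chain:
    H = -sum_i pi_i sum_j P_ij log P_ij (terms with P_ij = 0 vanish)."""
    pi = stationary(P)
    return -sum(pi[i] * P[i][j] * math.log(P[i][j])
                for i in range(len(P)) for j in range(len(P))
                if P[i][j] > 0.0)
```

For a two-state chain, H is maximized by the fully mixing matrix with all entries 1/2 (H = log 2), which is also the chain that mixes instantly, illustrating the KSE/mixing-time connection in the smallest possible case.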
Technology Tips: Fool Me Twice, Shame on Me
ERIC Educational Resources Information Center
Stohl, Hollylynne; Harper, Suzanne R.
2005-01-01
Todd Lee and colleagues share some of the common technology "pranks" and peculiarities from the three most common technology tools used in our classrooms: Microsoft Excel, graphing calculators, and The Geometer's Sketchpad. The "Surfing Note" includes a link to a collection of funny math cartoons from the Carolina Biological Supply Company.
Linking Models: Reasoning from Patterns to Tables and Equations
ERIC Educational Resources Information Center
Switzer, J. Matt
2013-01-01
Patterns are commonly used in middle years mathematics classrooms to teach students about functions and modelling with tables, graphs, and equations. Grade 6 students are expected to, "continue and create sequences involving whole numbers, fractions and decimals," and "describe the rule used to create the sequence." (Australian…
Time reversibility from visibility graphs of nonstationary processes
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Flanagan, Ryan
2015-08-01
Visibility algorithms are a family of methods to map time series into networks, with the aim of describing the structure of time series and their underlying dynamical properties in graph-theoretical terms. Here we explore some properties of both natural and horizontal visibility graphs associated with several nonstationary processes, and we pay particular attention to their capacity to assess time irreversibility. Nonstationary signals are (infinitely) irreversible by definition (independently of whether the process is Markovian or producing entropy at a positive rate), and thus the link between entropy production and time series irreversibility has only been explored in nonequilibrium stationary states. Here we show that the visibility formalism naturally induces a new working definition of time irreversibility, which allows us to quantify several degrees of irreversibility for stationary and nonstationary series, yielding finite values that can be used to efficiently assess the presence of memory and off-equilibrium dynamics in nonstationary processes without the need to differentiate or detrend them. We provide rigorous results complemented by extensive numerical simulations on several classes of stochastic processes.
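As a concrete illustration of the series-to-graph mapping, here is a minimal Python implementation of the natural visibility criterion (an O(n²) sketch for illustration, not the authors' code): two data points are linked when every point between them lies strictly below the straight line joining them.

```python
def natural_visibility_edges(series):
    """Edges (a, b) of the natural visibility graph of a time series:
    points (a, y_a) and (b, y_b) are linked iff every intermediate
    point (c, y_c) satisfies
        y_c < y_b + (y_a - y_b) * (b - c) / (b - a)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(series[c] < series[b] + (series[a] - series[b]) *
                   (b - c) / (b - a) for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

For a monotonically increasing series only consecutive points see each other, whereas a dip such as [3, 1, 2] lets the two outer points see over the valley, adding the long-range edge (0, 2).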
Visibility Graph Based Time Series Analysis
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states, and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks. PMID:26571115
NASA Technical Reports Server (NTRS)
Bokhari, S. H.; Raza, A. D.
1984-01-01
Three methods of augmenting computer networks by adding at most one link per processor are discussed: (1) A tree of N nodes may be augmented such that the resulting graph has diameter no greater than 4 log2((N+2)/3) - 2. This O(N^3) algorithm can be applied to any spanning tree of a connected graph to reduce the diameter of that graph to O(log N); (2) Given a binary tree T and a chain C of N nodes each, C may be augmented to produce C' so that T is a subgraph of C'. This algorithm is O(N) and may be used to produce augmented chains or rings that have diameter no greater than 2 log2((N+2)/3) and are planar; (3) Any rectangular two-dimensional 4 (8) nearest-neighbor array of size N = 2^k may be augmented so that it can emulate a single-step shuffle-exchange network of size N/2 in 3(t) time steps.
Role models for complex networks
NASA Astrophysics Data System (ADS)
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
A hierarchical graph neuron scheme for real-time pattern recognition.
Nasution, B B; Khan, A I
2008-02-01
The hierarchical graph neuron (HGN) implements a single-cycle memorization and recall operation through a novel algorithmic design. The HGN is an improvement on the previously published graph neuron (GN) algorithm. In this improved approach, it recognizes incomplete/noisy patterns. It also resolves the crosstalk problem, identified in previous publications, within closely matched patterns. To accomplish this, the HGN links multiple GN networks to filter noise and crosstalk out of pattern data inputs. Intrinsically, the HGN is a lightweight in-network processing algorithm that does not require expensive floating-point computations; hence, it is very suitable for real-time applications and tiny devices such as wireless sensor networks. This paper shows that the HGN's pattern-matching capability and small response time remain insensitive to increases in the number of stored patterns. Moreover, the HGN does not require the definition of rules or the setting of thresholds by the operator to achieve the desired results, nor does it require heuristics entailing iterative operations for memorization and recall of patterns.
Analysing the connectivity and communication of suicidal users on twitter
Colombo, Gualtiero B.; Burnap, Pete; Hodorog, Andrei; Scourfield, Jonathan
2016-01-01
In this paper we aim to understand the connectivity and communication characteristics of Twitter users who post content subsequently classified by human annotators as containing possible suicidal intent or thinking, commonly referred to as suicidal ideation. We achieve this understanding by analysing the characteristics of their social networks. Starting from a set of human annotated Tweets we retrieved the authors’ followers and friends lists, and identified users who retweeted the suicidal content. We subsequently built the social network graphs. Our results show a high degree of reciprocal connectivity between the authors of suicidal content when compared to other studies of Twitter users, suggesting a tightly-coupled virtual community. In addition, an analysis of the retweet graph has identified bridge nodes and hub nodes connecting users posting suicidal ideation with users who were not, thus suggesting a potential for information cascade and risk of a possible contagion effect. This is particularly emphasised by considering the combined graph merging friendship and retweeting links. PMID:26973360
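The "degree of reciprocal connectivity" reported above can be quantified as the fraction of directed links that are mutual. A minimal sketch (the follower edges below are invented for illustration, not the study's Twitter data):

```python
def reciprocity(edges):
    """Fraction of directed links (u, v) whose reverse (v, u) also
    exists. Returns 0.0 for an empty edge list."""
    edges = set(edges)
    if not edges:
        return 0.0
    return sum(1 for (u, v) in edges if (v, u) in edges) / len(edges)

# Hypothetical follower edges: (follower, followed).
follows = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"), ("b", "c")]
```

Here four of the five links are reciprocated, giving a reciprocity of 0.8; a value well above that of a comparable random directed graph is what suggests a tightly coupled community.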
Anderson, Anita L.; Campbell, David L.; Beanland, Shay
2001-01-01
Individual mine waste samples were collected and combined to form one composite sample at each of eight mine dump sites in Colorado and New Mexico. The samples were air-dried and sieved to determine the geochemical composition of their <2mm size fraction. Splits of the samples were then rehydrated and their electrical properties were measured in the US Geological Survey Petrophysical Laboratory, Denver, Colorado (PetLab). The PetLab measurements were done twice: in 1999, using convenient amounts of rehydration water ranging from 5% to 8%; and in 2000, using carefully controlled rehydrations to 5% and 10% water. This report gives geochemical analyses of the <2mm size fraction of the composite samples (Appendix A), PetLab graphs of the 1999 measurements (Appendix B), Petlab graphs of the 2000 measurements (Appendix C), and Cole-Cole models of the PetLab data from the 2000 measurements (Appendix D).
Large-scale automated histology in the pursuit of connectomes.
Kleinfeld, David; Bharioke, Arjun; Blinder, Pablo; Bock, Davi D; Briggman, Kevin L; Chklovskii, Dmitri B; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P; Lee, Wei-Chung Allen; Meyer, Hanno S; Micheva, Kristina D; Oberlaender, Marcel; Prohaska, Steffen; Reid, R Clay; Smith, Stephen J; Takemura, Shinya; Tsai, Philbert S; Sakmann, Bert
2011-11-09
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.
From brain topography to brain topology: relevance of graph theory to functional neuroscience.
Minati, Ludovico; Varotto, Giulia; D'Incerti, Ludovico; Panzica, Ferruccio; Chan, Dennis
2013-07-10
Although several brain regions show significant specialization, higher functions such as cross-modal information integration, abstract reasoning and conscious awareness are viewed as emerging from interactions across distributed functional networks. Analytical approaches capable of capturing the properties of such networks can therefore enhance our ability to make inferences from functional MRI, electroencephalography and magnetoencephalography data. Graph theory is a branch of mathematics that focuses on the formal modelling of networks and offers a wide range of theoretical tools to quantify specific features of network architecture (topology) that can provide information complementing the anatomical localization of areas responding to given stimuli or tasks (topography). Explicit modelling of the architecture of axonal connections and interactions among areas can furthermore reveal peculiar topological properties that are conserved across diverse biological networks, and highly sensitive to disease states. The field is evolving rapidly, partly fuelled by computational developments that enable the study of connectivity at fine anatomical detail and the simultaneous interactions among multiple regions. Recent publications in this area have shown that graph-based modelling can enhance our ability to draw causal inferences from functional MRI experiments, and support the early detection of disconnection and the modelling of pathology spread in neurodegenerative disease, particularly Alzheimer's disease. Furthermore, neurophysiological studies have shown that network topology has a profound link to epileptogenesis and that connectivity indices derived from graph models aid in modelling the onset and spread of seizures. Graph-based analyses may therefore significantly help understand the bases of a range of neurological conditions. 
This review is designed to provide an overview of graph-based analyses of brain connectivity and their relevance to disease aimed principally at general neuroscientists and clinicians.
Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.
Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu
2016-11-01
In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
F-RAG: Generating Atomic Coordinates from RNA Graphs by Fragment Assembly.
Jain, Swati; Schlick, Tamar
2017-11-24
Coarse-grained models represent attractive approaches to analyze and simulate ribonucleic acid (RNA) molecules, for example, for structure prediction and design, as they simplify the RNA structure to reduce the conformational search space. Our structure prediction protocol RAGTOP (RNA-As-Graphs Topology Prediction) represents RNA structures as tree graphs and samples graph topologies to produce candidate graphs. However, for more detailed study and analysis, construction of atomic models from the coarse-grained models is required. Here we present our graph-based fragment assembly algorithm (F-RAG) to convert candidate three-dimensional (3D) tree graph models produced by RAGTOP into atomic structures. We use our related RAG-3D utilities to partition graphs into subgraphs and search for structurally similar atomic fragments in a data set of RNA 3D structures. The fragments are edited and superimposed using common residues, full atomic models are scored using RAGTOP's knowledge-based potential, and the geometries of top-scoring models are optimized. To evaluate our models, we assess all-atom RMSDs and Interaction Network Fidelity (a measure of residue interactions) with respect to experimentally solved structures and compare our results to other fragment assembly programs. For a set of 50 RNA structures, we obtain atomic models with reasonable geometries and interactions, particularly good for RNAs containing junctions. Additional improvements to our protocol and databases are outlined. These results provide a good foundation for further work on RNA structure prediction and design applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhang, Pin; Liang, Yanmei; Chang, Shengjiang; Fan, Hailun
2013-08-01
Accurate segmentation of renal tissues in abdominal computed tomography (CT) image sequences is an indispensable step for computer-aided diagnosis and pathology detection in clinical applications. In this study, the goal is to develop a radiology tool to extract renal tissues in CT sequences for the management of renal diagnosis and treatments. In this paper, the authors propose a new graph-cuts-based active contours model with an adaptive width of narrow band for kidney extraction in CT image sequences. Based on graph cuts and contextual continuity, the segmentation is carried out slice-by-slice. In the first stage, the middle two adjacent slices in a CT sequence are segmented interactively based on the graph cuts approach. Subsequently, the deformable contour evolves toward the renal boundaries by the proposed model for the kidney extraction of the remaining slices. In this model, the energy function combining boundary with regional information is optimized in the constructed graph, and the adaptive search range is determined by contextual continuity and the object size. In addition, in order to reduce the complexity of the min-cut computation, the nodes in the graph have only n-links, yielding fewer edges. A total of 30 CT image sequences with normal and pathological renal tissues were used to evaluate the accuracy and effectiveness of our method. The experimental results reveal that the average Dice similarity coefficient of these image sequences is from 92.37% to 95.71% and the corresponding standard deviation for each dataset is from 2.18% to 3.87%. In addition, the average automatic segmentation time for one kidney in each slice is about 0.36 s. Integrating the graph-cuts-based active contours model with contextual continuity, the algorithm takes advantage of energy minimization and the characteristics of image sequences. The proposed method achieves effective results for kidney segmentation in CT sequences.
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Granda, Jose J.
2003-01-01
Conceptually, modeling of flexible, multi-body systems involves a formulation as a set of time-dependent partial differential equations. However, for practical engineering purposes, this modeling is usually done using the method of Finite Elements, which approximates the set of partial differential equations, thus generalizing the approach to all continuous media. This research investigates the links between the Bond Graph method and the classical methods used to develop system models, and advocates the Bond Graph methodology and current bond graph tools as alternate approaches that will lead to a quick and precise understanding of a flexible multi-body system under automatic control. For long-endurance, complex spacecraft, the model of the physical system may change frequently because of articulation and mission evolution, so a method of automatic generation and regeneration of system models that does not lead to implicit equations, as does the Lagrange equation approach, is desirable. The bond graph method has been shown to be amenable to automatic generation of equations with appropriate consideration of causality. Indeed, human-interactive software now exists that automatically generates both symbolic and numeric system models and evaluates causality as the user develops the model, e.g. the CAMP-G software package. In this paper, the CAMP-G package is used to generate a bond graph model of the International Space Station (ISS) at an early stage in its assembly, Zvezda. The ISS is an ideal example because it is a collection of bodies that are articulated, many of which are highly flexible. Also, many reaction jets are used to control translation and attitude, and many electric motors are used to articulate appendages, which consist of photovoltaic arrays and composite assemblies. The Zvezda bond graph model is compared to an existing model, which was generated by the NASA Johnson Space Center during the Verification and Analysis Cycle of Zvezda.
Surface-Water Conditions in Georgia, Water Year 2005
Painter, Jaime A.; Landers, Mark N.
2007-01-01
INTRODUCTION The U.S. Geological Survey (USGS) Georgia Water Science Center, in cooperation with Federal, State, and local agencies, collected surface-water streamflow, water-quality, and ecological data during the 2005 Water Year (October 1, 2004, through September 30, 2005). These data were compiled into layers of an interactive ArcReader™ published map document (pmf). ArcReader™ is a product of Environmental Systems Research Institute, Inc. (ESRI®). Datasets represented on the interactive map are:
* continuous daily mean streamflow
* continuous daily mean water levels
* continuous daily total precipitation
* continuous daily water quality (water temperature, specific conductance, dissolved oxygen, pH, and turbidity)
* noncontinuous peak streamflow
* miscellaneous streamflow measurements
* lake or reservoir elevation
* periodic surface-water quality
* periodic ecological data
* historical continuous daily mean streamflow discontinued prior to the 2005 water year
The map interface provides the ability to identify a station in spatial reference to the political boundaries of the State of Georgia and other features, such as major streams, major roads, and other collection stations. Each station is hyperlinked to a station summary showing seasonal and annual stream characteristics for the current year and for the period of record. For continuous discharge stations, the station summary includes a one-page graphical summary containing five graphs, a station map, and a photograph of the station. The graphs provide a quick overview of the current and period-of-record hydrologic conditions of the station: a daily mean discharge graph for the water year, a monthly statistics graph for the water year and period of record, an annual mean streamflow graph for the period of record, an annual minimum 7-day average streamflow graph for the period of record, and an annual peak streamflow graph for the period of record.
Additionally, data can be accessed through the layer's link to the National Water Information System Web (NWISWeb) interface.
Leung, S C; Fung, W K; Wong, K H
1999-01-01
The relative bit density variation graphs of 207 specimen credit cards processed by 12 encoding machines were examined first visually, and then classified by means of hierarchical cluster analysis. Twenty-nine credit cards, treated as 'questioned' samples, were tested by way of cluster analysis against 'controls' derived from known encoders. It was found that hierarchical cluster analysis provided a high accuracy of identification, with all 29 'questioned' samples classified correctly. On the other hand, although visual comparison of jitter graphs was less discriminating, it was nevertheless capable of giving a reasonably accurate result.
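The classification step described above, matching a 'questioned' bit-density profile against 'controls' from known encoders, can be illustrated with a nearest-centroid sketch. The profiles and encoder names below are hypothetical, and the paper's actual method is hierarchical cluster analysis, for which this is only a simplified stand-in:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(questioned, controls):
    """Assign a questioned bit-density profile to the encoder whose
    control centroid is nearest in Euclidean distance."""
    cents = {enc: centroid(vs) for enc, vs in controls.items()}
    return min(cents, key=lambda enc: math.dist(questioned, cents[enc]))

# hypothetical relative bit-density profiles for two known encoders
controls = {"encoder_A": [[1.0, 0.9, 1.1], [1.1, 0.8, 1.0]],
            "encoder_B": [[0.5, 1.5, 0.6], [0.6, 1.4, 0.5]]}
print(classify([1.05, 0.85, 1.05], controls))  # nearest to encoder_A's centroid
```

In the paper's setting each profile would be a full bit-density variation graph rather than a three-component toy vector.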
Graph mining for next generation sequencing: leveraging the assembly graph for biological insights.
Warnke-Sommer, Julia; Ali, Hesham
2016-05-06
The assembly of Next Generation Sequencing (NGS) reads remains a challenging task. This is especially true for the assembly of metagenomics data that originate from environmental samples potentially containing hundreds to thousands of unique species. The principal objective of current assembly tools is to assemble NGS reads into contiguous stretches of sequence, called contigs, while maximizing both accuracy and contig length. The end goal of this process is to produce longer contigs, with the major focus being on assembly only. Sequence read assembly is an aggregative process, during which read overlap relationship information is lost as reads are merged into longer sequences or contigs. The assembly graph is information rich and capable of capturing the genomic architecture of an input read data set. We have developed a novel hybrid graph in which nodes represent sequence regions at different levels of granularity. This model, utilized in the assembly and analysis pipeline Focus, presents a concise yet feature-rich view of a given input data set, allowing for the extraction of biologically relevant graph structures for graph mining purposes. Focus was used to create hybrid graphs to model metagenomics data sets obtained from the gut microbiomes of five individuals with Crohn's disease and eight healthy individuals. Repetitive and mobile genetic elements are found to be associated with hybrid graph structure. Using graph mining techniques, a comparative study of the Crohn's disease and healthy data sets was conducted with a focus on antibiotic resistance genes associated with transposase genes. Results demonstrated significant differences in the phylogenetic distribution of categories of antibiotic resistance genes in the healthy and diseased patients. Focus was also evaluated as a pure assembly tool and produced excellent results when compared against the Meta-velvet, Omega, and UD-IDBA assemblers.
Mining the hybrid graph can reveal biological phenomena captured by its structure. We demonstrate the advantages of considering assembly graphs as data-mining support in addition to their role as frameworks for assembly.
Computer-Based Mathematics Instructions for Engineering Students
NASA Technical Reports Server (NTRS)
Khan, Mustaq A.; Wall, Curtiss E.
1996-01-01
Almost every engineering course involves mathematics in one form or another. The analytical process of developing mathematical models is very important for engineering students. However, the computational process involved in the solution of some mathematical problems may be very tedious and time consuming. There is a significant amount of mathematical software, such as Mathematica, Mathcad, and Maple, designed to aid in the solution of these instructional problems. The use of these packages in classroom teaching can greatly enhance understanding and save time. Integration of computer technology in mathematics classes, without de-emphasizing the traditional analytical aspects of teaching, has proven very successful and is becoming almost essential. Sample computer laboratory modules are developed for presentation in the classroom setting. This is accomplished through the use of overhead projectors linked to graphing calculators and computers. Model problems are carefully selected from different areas.
What Can Graph Theory Tell Us About Word Learning and Lexical Retrieval?
Vitevitch, Michael S.
2008-01-01
Purpose Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Method Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. Results The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was better fit by an exponential than by a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. Conclusions The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing. PMID:18367686
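The two small-world diagnostics reported above, the clustering coefficient and the average path length, are straightforward to compute directly. A self-contained sketch on a toy phonological-neighbor graph (the mini-lexicon is invented; the study used Pajek on a full adult-lexicon database):

```python
from collections import deque

def clustering_coefficient(adj, v):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
    return 2 * links / (k * (k - 1))

def average_path_length(adj):
    """Mean shortest-path distance over all reachable ordered node pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

# toy lexicon: nodes are words, edges join phonological neighbors (invented)
adj = {"cat": {"bat", "cot", "can"},
       "bat": {"cat", "bad"},
       "cot": {"cat", "cut"},
       "can": {"cat", "cab"},
       "bad": {"bat"}, "cut": {"cot"}, "cab": {"can"}}
print(round(average_path_length(adj), 2))
```

Small-world networks combine a short average path length with a clustering coefficient well above that of a comparable random graph; this toy tree has no triangles, so every node's clustering coefficient is zero.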
Social and place-focused communities in location-based online social networks
NASA Astrophysics Data System (ADS)
Brown, Chloë; Nicosia, Vincenzo; Scellato, Salvatore; Noulas, Anastasios; Mascolo, Cecilia
2013-06-01
Thanks to widely available, cheap Internet access and the ubiquity of smartphones, millions of people around the world now use online location-based social networking services. Understanding the structural properties of these systems and their dependence upon users' habits and mobility has many potential applications, including resource recommendation and link prediction. Here, we construct and characterise social and place-focused graphs by using longitudinal information about declared social relationships and about users' visits to physical places collected from a popular online location-based social service. We show that although the social and place-focused graphs are constructed from the same data set, they have quite different structural properties. We find that the social and location-focused graphs have different global and meso-scale structure, and in particular that social and place-focused communities have negligible overlap. Consequently, group inference based on community detection performed on the social graph alone fails to isolate place-focused groups, even though these do exist in the network. By studying the evolution of tie structure within communities, we show that the time period over which location data are aggregated has a substantial impact on the stability of place-focused communities, and that information about place-based groups may be more useful for user-centric applications than that obtained from the analysis of social communities alone.
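The negligible overlap between social and place-focused communities can be quantified with a simple set measure such as the Jaccard index; the sketch below scores each place-focused community by its best match among the social communities. The communities shown are invented toy data, and the paper does not specify this exact overlap statistic:

```python
def jaccard(a, b):
    """Jaccard index: |intersection| / |union| of two node sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def best_overlap(social_comms, place_comms):
    """For each place-focused community, its best Jaccard match among social communities."""
    return [max(jaccard(p, s) for s in social_comms) for p in place_comms]

# hypothetical communities of user ids detected in the two graphs
social = [{1, 2, 3, 4}, {5, 6, 7}]
place = [{2, 5, 8}, {3, 4, 9}]
print(best_overlap(social, place))  # [0.2, 0.4]
```

Values near zero, as the paper reports for real data, mean that community detection on the social graph alone cannot recover place-focused groups.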
Functional Organization of the Action Observation Network in Autism: A Graph Theory Approach.
Alaerts, Kaat; Geerlings, Franca; Herremans, Lynn; Swinnen, Stephan P; Verhoeven, Judith; Sunaert, Stefan; Wenderoth, Nicole
2015-01-01
The ability to recognize, understand and interpret others' actions and emotions has been linked to the mirror system or action-observation network (AON). Although variations in these abilities are prevalent in the neurotypical population, persons diagnosed with autism spectrum disorders (ASD) have deficits in the social domain and exhibit alterations in this neural network. Here, we examined functional network properties of the AON using graph theory measures and region-to-region functional connectivity analyses of resting-state fMRI data from adolescents and young adults with ASD and typical controls (TC). Overall, our graph theory analyses provided convergent evidence that the network integrity of the AON is altered in ASD, and that reductions in network efficiency relate to reductions in overall network density (i.e., decreased overall connection strength). Compared to TC, individuals with ASD showed significant reductions in network efficiency and increased shortest path lengths and centrality. Importantly, when adjusting for overall differences in network density between the ASD and TC groups, participants with ASD continued to display reductions in network integrity, suggesting that network-level organizational properties of the AON are also altered in ASD. While differences in empirical connectivity contributed to reductions in network integrity, graph theoretical analyses provided indications that changes in the high-level network organization also reduced the integrity of the AON.
Model validation of simple-graph representations of metabolism
Holme, Petter
2009-01-01
The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
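The two best-performing representations, substrate-product networks and substance networks, differ only in which pairs of substances are linked for a given reaction. A minimal sketch of both reductions (the single toy reaction is illustrative, not from the model):

```python
def substrate_product_graph(reactions):
    """Edges link each substrate of a reaction to each of its products."""
    edges = set()
    for substrates, products in reactions:
        for s in substrates:
            for p in products:
                if s != p:
                    edges.add(frozenset((s, p)))
    return edges

def substance_graph(reactions):
    """Edges link all pairs of substances participating in the same reaction."""
    edges = set()
    for substrates, products in reactions:
        subs = list(substrates) + list(products)
        for i in range(len(subs)):
            for j in range(i + 1, len(subs)):
                if subs[i] != subs[j]:
                    edges.add(frozenset((subs[i], subs[j])))
    return edges

# toy reaction list (hypothetical): (substrates, products)
rxns = [(("glucose", "ATP"), ("G6P", "ADP"))]
print(len(substrate_product_graph(rxns)), len(substance_graph(rxns)))  # 4 6
```

For this one reaction the substance graph adds the substrate-substrate and product-product pairs (glucose-ATP, G6P-ADP) that the substrate-product reduction omits, which is exactly the kind of difference whose effect on recovered modularity the paper measures.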
GraphCrunch 2: Software tool for network modeling, alignment and clustering.
Kuchaiev, Oleksii; Stevanović, Aleksandar; Hayes, Wayne; Pržulj, Nataša
2011-01-19
Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarities far larger than any other existing tool. 
Finally, GraphCruch 2 implements an algorithm for clustering nodes within a network based solely on their topological similarities. Using GraphCrunch 2, we demonstrate that eukaryotic and viral PPI networks may belong to different graph model families and show that topology-based clustering can reveal important functional similarities between proteins within yeast and human PPI networks. GraphCrunch 2 is a software tool that implements the latest research on biological network analysis. It parallelizes computationally intensive tasks to fully utilize the potential of modern multi-core CPUs. It is open-source and freely available for research use. It runs under the Windows and Linux platforms.
Digitizing the Past: A History Book on CD-ROM.
ERIC Educational Resources Information Center
Rosenzweig, Roy
1993-01-01
Describes the development of an American history book with interactive CD-ROM technology that includes text, pictures, graphs and charts, audio, and film. Topics discussed include the use of HyperCard software to link information; access to primary sources of information; greater student control over learning; and the concept of collaborative…
Derivative, Maxima and Minima in a Graphical Context
ERIC Educational Resources Information Center
Rivera-Figueroa, Antonio; Ponce-Campuzano, Juan Carlos
2013-01-01
A deeper learning of the properties and applications of the derivative for the study of functions may be achieved when teachers present lessons within a highly graphic context, linking the geometric illustrations to formal proofs. Each concept is better understood and more easily retained when it is presented and explained visually using graphs.…
Developing Creativity and Abstraction in Representing Data
ERIC Educational Resources Information Center
South, Andy
2012-01-01
Creating charts and graphs is all about visual abstraction: the process of representing aspects of data with imagery that can be interpreted by the reader. Children may need help making the link between the "real" and the image. This abstraction can be achieved using symbols, size, colour and position. Where the representation is close to what…
Inertial Navigation: A Bridge between Kinematics and Calculus
ERIC Educational Resources Information Center
Sadler, Philip M.; Garfield, Eliza N.; Tremblay, Alex; Sadler, Daniel J.
2012-01-01
Those who come to Cambridge soon learn that the fastest route between Harvard and MIT is by the subway. For many students, this short ride is a quick and easy way to link physics and calculus. A simple, homemade accelerometer provides all the instrumentation necessary to produce accurate graphs of acceleration, velocity, and displacement position…
Research-Based Worksheets on Using Multiple Representations in Science Classrooms
ERIC Educational Resources Information Center
Hill, Matthew; Sharma, Manjula
2015-01-01
The ability to represent the world like a scientist is difficult to teach; it is more than simply knowing the representations (e.g., graphs, words, equations and diagrams). For meaningful science learning to take place, consideration needs to be given to explicitly integrating representations into instructional methods, linked to the content, and…
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
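The update scheme can be sketched as an event-driven simulation in which each link fires after a waiting time drawn from a chosen interevent distribution, and the firing link copies one endpoint's opinion onto the other. This is a simplified reading of the model (initial opinions are fixed half-and-half here, and the waiting-time distributions are illustrative):

```python
import heapq
import random

def consensus_time(n, draw_wait, seed=0):
    """Event-driven voter model on a ring of n nodes: each link fires after a
    waiting time drawn from draw_wait; on firing, a randomly chosen endpoint
    copies the other's opinion. Returns the time at which consensus is reached."""
    rng = random.Random(seed)
    state = [i % 2 for i in range(n)]           # deterministic half-and-half start
    links = [(i, (i + 1) % n) for i in range(n)]
    events = [(draw_wait(rng), k) for k in range(n)]
    heapq.heapify(events)
    t = 0.0
    while len(set(state)) > 1:
        t, k = heapq.heappop(events)
        i, j = links[k]
        if rng.random() < 0.5:
            state[i] = state[j]
        else:
            state[j] = state[i]
        heapq.heappush(events, (t + draw_wait(rng), k))
    return t

exponential = lambda rng: rng.expovariate(1.0)               # standard voter model
power_law = lambda rng: (1.0 - rng.random()) ** (-1.0 / 1.5)  # Pareto-tailed waits

t_exp = consensus_time(20, exponential, seed=1)
t_pl = consensus_time(20, power_law, seed=1)
print(t_exp > 0 and t_pl > 0)
```

Averaging the consensus time over many seeds for each waiting-time distribution reproduces the kind of comparison the paper makes; a single run, as here, only demonstrates the mechanics.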
Finite plateau in spectral gap of polychromatic constrained random networks
NASA Astrophysics Data System (ADS)
Avetisov, V.; Gorsky, A.; Nechaev, S.; Valba, O.
2017-12-01
We consider critical behavior in the ensemble of polychromatic Erdős-Rényi networks and regular random graphs, where network vertices are painted in different colors. The links can be randomly removed and added to the network subject to the condition of vertex degree conservation. In these constrained graphs we run the Metropolis procedure, which favors connected unicolor triads of nodes. Changing the chemical potential, μ, of such triads, we find, for a wide region of μ, the formation of a finite plateau in the number of intercolor links, which exactly matches the finite plateau in the network algebraic connectivity (the value of the first nonvanishing eigenvalue of the Laplacian matrix, λ2). We claim that at the plateau the spontaneously broken Z2 symmetry is restored by the mechanism of modes collectivization in clusters of different colors. The phenomenon of finite plateau formation also holds for polychromatic networks with M ≥ 2 colors. The behavior of polychromatic networks is analyzed via the spectral properties of their adjacency and Laplacian matrices.
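The algebraic connectivity λ2 tracked in the study is the second-smallest eigenvalue of the graph Laplacian; it is positive exactly when the graph is connected and drops to zero when intercolor links disappear and the graph splits into unicolor clusters. A small sketch (the six-node example is invented, and NumPy is assumed to be available):

```python
import numpy as np

def algebraic_connectivity(edges, n):
    """lambda_2: second-smallest eigenvalue of the graph Laplacian L = D - A."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return float(np.sort(np.linalg.eigvalsh(L))[1])

# two unicolor triangles joined by a single intercolor link (hypothetical toy graph)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
lam2 = algebraic_connectivity(edges, 6)           # connected graph -> lambda_2 > 0
lam2_cut = algebraic_connectivity(edges[:-1], 6)  # intercolor link removed -> ~0
print(lam2 > lam2_cut)
```

The plateau reported in the paper corresponds to λ2 staying pinned at a finite value over a range of μ while intercolor links persist.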
Gomez, Carlos; Poza, Jesus; Gomez-Pilar, Javier; Bachiller, Alejandro; Juan-Cruz, Celia; Tola-Arribas, Miguel A; Carreres, Alicia; Cano, Monica; Hornero, Roberto
2016-08-01
The aim of this pilot study was to analyze spontaneous electroencephalography (EEG) activity in Alzheimer's disease (AD) by means of Cross-Sample Entropy (Cross-SampEn) and two local measures derived from graph theory: the clustering coefficient (CC) and the characteristic path length (PL). Five minutes of EEG activity were recorded from 37 patients with dementia due to AD and 29 elderly controls. Our results showed that Cross-SampEn values were lower in the AD group than in the control one for all the interactions among EEG channels. This finding indicates that EEG activity in AD is characterized by a lower statistical dissimilarity among channels. Significant differences were found mainly for fronto-central interactions (p < 0.01, permutation test). Additionally, the application of graph theory measures revealed diverse neural network changes, i.e. lower CC and higher PL values in the AD group, leading to a less efficient brain organization. This study suggests the usefulness of our approach to provide further insights into the underlying brain dynamics associated with AD.
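Cross-SampEn between two channels can be sketched directly from its definition: the negative log-ratio of template-match counts at lengths m+1 and m. The signals below are toy sinusoids standing in for EEG channels, and the parameter choices (m = 2, r = 0.2) are conventional defaults rather than the study's settings:

```python
import math

def cross_sample_entropy(u, v, m=2, r=0.2):
    """Cross-SampEn: -ln(A/B), where B counts template matches of length m
    between the two series and A counts matches of length m+1, a match being
    a pair of windows with Chebyshev distance <= r. Lower values indicate
    higher statistical similarity between the two signals."""
    def matches(k):
        count = 0
        for i in range(len(u) - k + 1):
            for j in range(len(v) - k + 1):
                if max(abs(u[i + t] - v[j + t]) for t in range(k)) <= r:
                    count += 1
        return count
    B, A = matches(m), matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")

# toy stand-ins for two EEG channels (invented signals)
x = [math.sin(0.5 * i) for i in range(60)]
y = [math.sin(0.5 * i + 0.1) for i in range(60)]
print(round(cross_sample_entropy(x, y), 3))
```

Since every (m+1)-length match is also an m-length match, A <= B and the measure is always nonnegative; in practice r is often scaled by the signals' standard deviation, which this sketch omits.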
NASA Astrophysics Data System (ADS)
Afanasyev, Andrey
2017-04-01
Numerical modelling of multiphase flows in porous media is necessary in many applications concerning subsurface utilization. An incomplete list of those applications includes oil and gas field exploration, underground carbon dioxide storage, and geothermal energy production. The numerical simulations are conducted using complicated computer programs called reservoir simulators. A robust simulator should include a wide range of modelling options covering various exploration techniques, rock and fluid properties, and geological settings. In this work we present recent developments of new options in the MUFITS code [1]. The first option concerns modelling of multiphase flows in double-porosity, double-permeability reservoirs. We describe the internal representation of reservoir models in MUFITS, which are constructed as a 3D graph of grid blocks, pipe segments, interfaces, etc. In the case of a double-porosity reservoir, two linked nodes of the graph correspond to each grid cell. We simulate the 6th SPE comparative problem [2] and a five-spot geothermal production problem to validate the option. The second option concerns modelling of flows in porous media coupled with flows in horizontal wells, which are represented in the 3D graph as a sequence of pipe segments linked with pipe junctions. The well completions link the pipe segments with the reservoir. The hydraulics in the wellbore, i.e. the frictional pressure drop, is calculated in accordance with Haaland's formula. We validate the option against the 7th SPE comparative problem [3]. We acknowledge financial support by the Russian Foundation for Basic Research (project No RFBR-15-31-20585). References [1] Afanasyev, A. MUFITS Reservoir Simulation Software (www.mufits.imec.msu.ru). [2] Firoozabadi A. et al. Sixth SPE Comparative Solution Project: Dual-Porosity Simulators // J. Petrol. Tech. 1990. V.42. N.6. P.710-715. [3] Nghiem L., et al.
Seventh SPE Comparative Solution Project: Modelling of Horizontal Wells in Reservoir Simulation // SPE Symp. Res. Sim., 1991. DOI: 10.2118/21221-MS.
Neighborhood graph and learning discriminative distance functions for clinical decision support.
Tsymbal, Alexey; Zhou, Shaohua Kevin; Huber, Martin
2009-01-01
There are two essential reasons for the slow progress in the acceptance of clinical case retrieval and similarity search-based decision support systems: the special complexity of clinical data, which makes it difficult to define a meaningful and effective distance function on them, and the lack of transparency and explanation ability in many existing clinical case retrieval decision support systems. In this paper, we try to address these two problems by introducing a novel technique for visualizing inter-patient similarity based on a node-link representation with neighborhood graphs, and by considering two techniques for learning discriminative distance functions that help to combine the power of strong "black box" learners with the transparency of case retrieval and nearest neighbor classification.
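The neighborhood-graph visualization rests on a k-nearest-neighbor construction over inter-patient distances. A minimal sketch with invented feature vectors, where a learned discriminative distance function could replace the default Euclidean metric via the `dist` parameter:

```python
import math

def knn_graph(points, k=2, dist=math.dist):
    """Neighborhood graph: a directed edge from each case to its k nearest
    cases under the given distance function."""
    edges = []
    for i, p in enumerate(points):
        nbrs = sorted((j for j in range(len(points)) if j != i),
                      key=lambda j: dist(p, points[j]))[:k]
        edges.extend((i, j) for j in nbrs)
    return edges

# toy patient feature vectors (hypothetical), two well-separated groups
pts = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 1.0)]
print(knn_graph(pts, k=1))  # [(0, 1), (1, 0), (2, 3), (3, 2)]
```

With a discriminative distance learned from labeled outcomes, same-class cases are pulled together, so the resulting neighborhood graph visually separates clinically similar patients.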
Quantification of network structural dissimilarities.
Schieber, Tiago A; Carpi, Laura; Díaz-Guilera, Albert; Pardalos, Panos M; Masoller, Cristina; Ravetti, Martín G
2017-01-09
Identifying and quantifying dissimilarities among graphs is a fundamental and challenging problem of practical importance in many fields of science. Current methods of network comparison are limited to extract only partial information or are computationally very demanding. Here we propose an efficient and precise measure for network comparison, which is based on quantifying differences among distance probability distributions extracted from the networks. Extensive experiments on synthetic and real-world networks show that this measure returns non-zero values only when the graphs are non-isomorphic. Most importantly, the measure proposed here can identify and quantify structural topological differences that have a practical impact on the information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components.
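The core idea, comparing distance probability distributions extracted from two networks, can be sketched with shortest-path-length histograms and a standard divergence. The sketch below uses the Jensen-Shannon divergence on two toy graphs; the paper's actual measure combines several such terms, so this is only the basic ingredient:

```python
import math
from collections import deque

def distance_distribution(adj):
    """Probability distribution of shortest-path distances over ordered node pairs."""
    counts = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for t, d in dist.items():
            if t != s:
                counts[d] = counts.get(d, 0) + 1
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    support = set(p) | set(q)
    m = {d: 0.5 * (p.get(d, 0) + q.get(d, 0)) for d in support}
    def kl(a):
        return sum(a[d] * math.log2(a[d] / m[d]) for d in support if a.get(d, 0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}       # 6-node cycle
star = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}   # 6-node star
d = jensen_shannon(distance_distribution(ring), distance_distribution(star))
print(d > 0)
```

The divergence is zero only when the two distance distributions coincide, mirroring the paper's property that the full measure vanishes only for structurally indistinguishable graphs.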
NASA Astrophysics Data System (ADS)
Moissinac, Henri; Maitre, Henri; Bloch, Isabelle
1995-11-01
An image interpretation method is presented for the automatic processing of aerial pictures of an urban landscape. In order to improve the picture analysis, some a priori knowledge extracted from a geographic map is introduced. A coherent graph-based model of the city is built, starting with the road network. A global uncertainty management scheme has been designed in order to evaluate the confidence we can have in the final results. This model and the uncertainty management are designed to reflect the hierarchy of the available data and the interpretation levels. The symbolic relationships linking the different kinds of elements are taken into account while propagating and combining the confidence measures along the interpretation process.
A topo-graph model for indistinct target boundary definition from anatomical images.
Cui, Hui; Wang, Xiuying; Zhou, Jianlong; Gong, Guanzhong; Eberl, Stefan; Yin, Yong; Wang, Lisheng; Feng, Dagan; Fulham, Michael
2018-06-01
It can be challenging to delineate the target object in anatomical imaging when the object boundaries are difficult to discern due to low contrast or overlapping intensity distributions from adjacent tissues. We propose a topo-graph model to address this issue. The first step is to extract a topographic representation that reflects multiple levels of topographic information in an input image. We then define two types of node connections: nesting branches (NBs) and geodesic edges (GEs). NBs connect nodes corresponding to initial topographic regions and GEs link the nodes at a detailed level. The weights for NBs are defined to measure the similarity of regional appearance, and weights for GEs are defined with geodesic and local constraints. NBs contribute to the separation of topographic regions and the GEs assist the delineation of uncertain boundaries. Final segmentation is achieved by calculating the relevance of the unlabeled nodes to the labels by the optimization of a graph-based energy function. We test our model on 47 low-contrast CT studies of patients with non-small cell lung cancer (NSCLC), 10 contrast-enhanced CT liver cases, and 50 breast and abdominal ultrasound images. The validation criteria are the Dice similarity coefficient and the Hausdorff distance. Student's t-tests show that our model outperformed the graph models with pixel-only, pixel and regional, neighboring and radial connections (p-values < 0.05). Our findings show that the topographic representation and topo-graph model provide improved delineation and separation of objects from adjacent tissues compared to the tested models. Copyright © 2018 Elsevier B.V. All rights reserved.
Using graph theory to quantify coarse sediment connectivity in alpine geosystems
NASA Astrophysics Data System (ADS)
Heckmann, Tobias; Thiel, Markus; Schwanghart, Wolfgang; Haas, Florian; Becht, Michael
2010-05-01
Networks are a common object of study in various disciplines. Among others, informatics, sociology, transportation science, economics and ecology frequently deal with objects which are linked with other objects to form a network. Despite this wide thematic range, a coherent formal basis to represent, measure and model the relational structure of networks exists. The mathematical model for networks of all kinds is a graph, which can be analysed using the tools of mathematical graph theory. In a graph model of a generic system, system components are represented by graph nodes, and the linkages between them are formed by graph edges. The latter may represent all kinds of linkages, from matter or energy fluxes to functional relations. To some extent, graph theory has been used in geosciences and related disciplines; in hydrology and fluvial geomorphology, for example, river networks have been modeled and analysed as graphs. An important issue in hydrology is hydrological connectivity, which determines if runoff generated on some area reaches the channel network. In ecology, a number of graph-theoretical indices are applicable to describing the influence of habitat distribution and landscape fragmentation on population structure and species mobility. In these examples, the mobility of matter (water, sediment, animals) through a system is an important consequence of system structure, i.e. the location and topology of its components as well as of properties of linkages between them. In geomorphology, sediment connectivity relates to the potential of sediment particles to move through the catchment. As a system property, connectivity depends, for example, on the degree to which hillslopes within a catchment are coupled to the channel system (lateral coupling), and to which channel reaches are coupled to each other (longitudinal coupling).
In the present study, numerical GIS-based models are used to investigate the coupling of geomorphic process units by delineating the process domains of important geomorphic processes in a high-mountain environment (rockfall, slope-type debris flows, slope aquatic and fluvial processes). The results are validated by field mapping; they show that only small parts of a catchment are actually coupled to its outlet with respect to coarse (bedload) sediment. The models not only generate maps of the spatial extent and geomorphic activity of the aforementioned processes, they also output so-called edge lists that can be converted to adjacency matrices and graphs. Graph theory is then employed to explore 'local' (i.e. referring to single nodes or edges) and 'global' (i.e. system-wide, referring to the whole graph) measures that can be used to quantify coarse sediment connectivity. Such a quantification will complement the mainly qualitative appraisal of coupling and connectivity; the effect of connectivity on catchment properties such as specific sediment yield and catchment sensitivity will then be studied on the basis of quantitative measures.
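The edge-list-to-graph conversion and the coupling question described above can be made concrete with a short sketch. The example below is illustrative only (the node names and the toy catchment are invented, not taken from the study): it builds an adjacency map from a directed edge list and finds every unit coupled to the outlet by walking the reversed graph upstream.

```python
from collections import defaultdict

def to_adjacency(edge_list):
    """Build an adjacency map (graph) from a directed edge list."""
    adj = defaultdict(list)
    for src, dst in edge_list:
        adj[src].append(dst)
    return adj

def coupled_to_outlet(edge_list, outlet):
    """Return every node from which sediment can reach the outlet,
    by walking the reversed graph upstream from the outlet."""
    upstream = defaultdict(list)
    for src, dst in edge_list:
        upstream[dst].append(src)
    seen, stack = {outlet}, [outlet]
    while stack:
        for node in upstream[stack.pop()]:
            if node not in seen:
                seen.add(node)
                stack.append(node)
    return seen

# Invented toy catchment: two hillslopes feed channel reaches that drain
# to the outlet; a third hillslope drains to an internal sink (decoupled).
edges = [("hs1", "ch1"), ("hs2", "ch2"), ("ch1", "ch2"),
         ("ch2", "outlet"), ("hs3", "sink")]
print(sorted(coupled_to_outlet(edges, "outlet")))
# -> ['ch1', 'ch2', 'hs1', 'hs2', 'outlet']
```

The reversed-graph walk directly answers the lateral/longitudinal coupling question: any node absent from the result (here `hs3`) is decoupled from the outlet.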
Most recent common ancestor probability distributions in gene genealogies under selection.
Slade, P F
2000-12-01
A computational study is made of the conditional probability distribution for the allelic type of the most recent common ancestor in genealogies of samples of n genes drawn from a population under selection, given the initial sample configuration. Comparisons with the corresponding unconditional cases are presented. Such unconditional distributions differ from samples drawn from the unique stationary distribution of population allelic frequencies, known as Wright's formula, and these differences are quantified. Biallelic haploid and diploid models are considered. A simplified structure for the ancestral selection graph of S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237) is enhanced further, reducing the effective branching rate in the graph. This improves the efficiency of such a nonneutral analogue of the coalescent for use with computational likelihood-inference techniques.
Combinatorial Statistics on Trees and Networks
2010-09-29
interaction graph is drawn from the Erdős-Rényi model G(n, p), where each edge is present independently with probability p. For this model we establish a double... special interest is the behavior of Gibbs sampling on the Erdős-Rényi random graph G(n, d/n), where each edge is chosen independently with... which have no counterparts in the coloring setting. Our proof presented here exploits in novel ways the local treelike structure of Erdős-Rényi
Poodat, Fatemeh; Arrowsmith, Colin; Fraser, David; Gordon, Ascelin
2015-09-01
Connectivity among fragmented areas of habitat has long been acknowledged as important for the viability of biological conservation, especially within highly modified landscapes. Identifying important habitat patches for ecological connectivity is a priority for many conservation strategies, and the application of graph theory has been shown to provide useful information on connectivity. Despite the large number of connectivity metrics derived from graph theory, only a small number have been compared in terms of the importance they assign to nodes in a network. This paper presents a study that aims to define a new set of metrics and compares these with traditional graph-based metrics used in the prioritization of habitat patches for ecological connectivity. The metrics measured consist of "topological" metrics, "ecological" metrics, and "integrated" metrics; integrated metrics are a combination of topological and ecological metrics. Eight metrics were applied to the habitat network for the fat-tailed dunnart within Greater Melbourne, Australia. A non-directional network was developed in which nodes were linked to adjacent nodes. These links were then weighted by the effective distance between patches. By applying each of the eight metrics to the study network, nodes were ranked according to their contribution to the overall network connectivity. The structured comparison revealed the similarity and differences in the way the habitat for the fat-tailed dunnart was ranked based on different classes of metrics. Due to the differences in the way the metrics operate, a suitable metric should be chosen that best meets the objectives established by the decision maker.
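As a minimal illustration of ranking patches by a graph metric, the sketch below scores each patch by its weighted degree, taking link weight as the inverse of the effective distance between patches. This is a generic topological metric chosen for illustration, not necessarily one of the paper's eight; the patch names and distances are invented.

```python
def weighted_degree_ranking(links):
    """Rank patches by weighted degree, with link weight taken as the
    inverse of the effective distance (closer patches connect more strongly)."""
    score = {}
    for (a, b), distance in links.items():
        weight = 1.0 / distance
        score[a] = score.get(a, 0.0) + weight
        score[b] = score.get(b, 0.0) + weight
    return sorted(score, key=score.get, reverse=True)

# Invented patches with effective distances as link weights.
links = {("A", "B"): 2.0, ("B", "C"): 1.0, ("C", "D"): 4.0}
print(weighted_degree_ranking(links))  # -> ['B', 'C', 'A', 'D']
```

Different metrics (betweenness, closeness, ecological carrying-capacity weighting) would reorder this list, which is exactly the kind of divergence the paper's structured comparison measures.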
Graph Theoretical Framework of Brain Networks in Multiple Sclerosis: A Review of Concepts.
Fleischer, Vinzenz; Radetz, Angela; Ciolac, Dumitru; Muthuraman, Muthuraman; Gonzalez-Escamilla, Gabriel; Zipp, Frauke; Groppa, Sergiu
2017-11-01
Network science provides powerful access to essential organizational principles of the human brain. It has been applied in combination with graph theory to characterize brain connectivity patterns. In multiple sclerosis (MS), analysis of the brain networks derived from either structural or functional imaging provides new insights into pathological processes within the gray and white matter. Beyond focal lesions and diffuse tissue damage, network connectivity patterns could be important for closely tracking and predicting the disease course. In this review, we describe concepts of graph theory, highlight novel issues of tissue reorganization in acute and chronic neuroinflammation and address pitfalls with regard to network analysis in MS patients. We further provide an outline of functional and structural connectivity patterns observed in MS, spanning from disconnection and disruption on one hand to adaptation and compensation on the other. Moreover, we link network changes and their relation to clinical disability based on the current literature. Finally, we discuss the perspective of network science in MS for future research and postulate its role in the clinical framework. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Finding the Optimal Nets for Self-Folding Kirigami
NASA Astrophysics Data System (ADS)
Araújo, N. A. M.; da Costa, R. A.; Dorogovtsev, S. N.; Mendes, J. F. F.
2018-05-01
Three-dimensional shells can be synthesized from the spontaneous self-folding of two-dimensional templates of interconnected panels, called nets. However, some nets are more likely than others to self-fold into the desired shell under random movements. The optimal nets are the ones that maximize the number of vertex connections, i.e., vertices that have only two of their faces cut away from each other in the net. Previous methods for finding such nets are based on random search and thus do not guarantee the optimal solution. Here, we propose a deterministic procedure. We map the connectivity of the shell into a shell graph, where the nodes and links of the graph represent the vertices and edges of the shell, respectively. Identifying the nets that maximize the number of vertex connections corresponds to finding the set of maximum leaf spanning trees of the shell graph. This method allows us not only to design the self-assembly of much larger shell structures but also to apply additional design criteria, as a complete catalog of the maximum leaf spanning trees is obtained.
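The reduction to maximum leaf spanning trees can be illustrated by exhaustive search on a tiny shell graph. The sketch below is illustrative only (the paper's deterministic procedure is more sophisticated than brute force): it enumerates all edge subsets of the right size for the shell graph of a tetrahedron and keeps the spanning trees with the most leaves.

```python
from itertools import combinations

def is_spanning_tree(nodes, edges):
    """Check that `edges` form a tree covering all `nodes`."""
    if len(edges) != len(nodes) - 1:
        return False
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:  # depth-first search for connectivity
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen == set(nodes)

def max_leaf_spanning_trees(nodes, edges):
    """Exhaustively find the spanning trees with the most leaves (degree-1 nodes)."""
    best, best_leaves = [], -1
    for subset in combinations(edges, len(nodes) - 1):
        if not is_spanning_tree(nodes, subset):
            continue
        deg = {n: 0 for n in nodes}
        for a, b in subset:
            deg[a] += 1
            deg[b] += 1
        leaves = sum(1 for n in nodes if deg[n] == 1)
        if leaves > best_leaves:
            best, best_leaves = [subset], leaves
        elif leaves == best_leaves:
            best.append(subset)
    return best_leaves, best

# Shell graph of a tetrahedron: 4 vertices, 6 edges.
nodes = {0, 1, 2, 3}
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
leaves, trees = max_leaf_spanning_trees(nodes, edges)
print(leaves, len(trees))  # -> 3 4 (the four "star" trees)
```

For the tetrahedron the maximum-leaf spanning trees are the four stars, one centered on each vertex; each corresponds to an unfolding that keeps the most vertex connections intact.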
Evolution of worldwide stock markets, correlation structure, and correlation-based graphs
NASA Astrophysics Data System (ADS)
Song, Dong-Ming; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N.
2011-08-01
We investigate the daily correlation present among market indices of stock exchanges located all over the world in the time period January 1996 to July 2009. We discover that the correlation among market indices presents both a fast and a slow dynamics. The slow dynamics reflects the development and consolidation of globalization. The fast dynamics is associated with critical events that originate in a specific country or region of the world and rapidly affect the global system. We provide evidence that the short-term time scale of correlation among market indices is less than 3 trading months (about 60 trading days). The average values of the nondiagonal elements of the correlation matrix, correlation-based graphs, and the spectral properties of the largest eigenvalues and eigenvectors of the correlation matrix carry information about the fast and slow dynamics of the correlation of market indices. We introduce a measure of mutual information based on link co-occurrence in networks in order to detect the fast dynamics of successive changes of correlation-based graphs in a quantitative way.
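The average of the nondiagonal correlation-matrix elements mentioned above can be computed directly from return series. Below is a minimal pure-Python sketch with invented toy returns; a real analysis would use many indices over long rolling windows.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_offdiagonal_correlation(returns):
    """Mean of the nondiagonal correlation-matrix elements,
    i.e. the average correlation over all distinct pairs of series."""
    m = len(returns)
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    return sum(pearson(returns[i], returns[j]) for i, j in pairs) / len(pairs)

# Invented daily returns for three market indices over five days.
series = [
    [0.010, -0.020, 0.015, 0.000, -0.010],
    [0.012, -0.018, 0.010, 0.002, -0.008],
    [-0.005, 0.010, -0.020, 0.004, 0.015],
]
print(round(mean_offdiagonal_correlation(series), 3))
```

Tracking this average over sliding windows is one simple way to separate the slow trend of globalization from fast, event-driven jumps in correlation.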
Modelling Chemical Reasoning to Predict and Invent Reactions.
Segler, Marwin H S; Waller, Mark P
2017-05-02
The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph and because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Spectral mapping of brain functional connectivity from diffusion imaging.
Becker, Cassiano O; Pequito, Sérgio; Pappas, George J; Miller, Michael B; Grafton, Scott T; Bassett, Danielle S; Preciado, Victor M
2018-01-23
Understanding the relationship between the dynamics of neural processes and the anatomical substrate of the brain is a central question in neuroscience. On the one hand, modern neuroimaging technologies, such as diffusion tensor imaging, can be used to construct structural graphs representing the architecture of white matter streamlines linking cortical and subcortical structures. On the other hand, temporal patterns of neural activity can be used to construct functional graphs representing temporal correlations between brain regions. Although some studies provide evidence that whole-brain functional connectivity is shaped by the underlying anatomy, the observed relationship between function and structure is weak, and the rules by which anatomy constrains brain dynamics remain elusive. In this article, we introduce a methodology to map the functional connectivity of a subject at rest from his or her structural graph. Using our methodology, we are able to systematically account for the role of structural walks in the formation of functional correlations. Furthermore, in our empirical evaluations, we observe that the eigenmodes of the mapped functional connectivity are associated with activity patterns associated with different cognitive systems.
Xiong, Zheng; He, Yinyan; Hattrick-Simpers, Jason R; Hu, Jianjun
2017-03-13
The creation of composition-processing-structure relationships currently represents a key bottleneck for data analysis for high-throughput experimental (HTE) material studies. Here we propose an automated phase diagram attribution algorithm for HTE data analysis that uses a graph-based segmentation algorithm and Delaunay tessellation to create a crystal phase diagram from high throughput libraries of X-ray diffraction (XRD) patterns. We also propose the sample-pair based objective evaluation measures for the phase diagram prediction problem. Our approach was validated using 278 diffraction patterns from a Fe-Ga-Pd composition spread sample with a prediction precision of 0.934 and a Matthews Correlation Coefficient score of 0.823. The algorithm was then applied to the open Ni-Mn-Al thin-film composition spread sample to obtain the first predicted phase diagram mapping for that sample.
NASA Astrophysics Data System (ADS)
Awasarmol, V. V.; Gaikwad, D. K.; Raut, S. D.; Pawar, P. P.
The mass attenuation coefficients (μm) of organic nonlinear optical materials were measured at photon energies of 122-1330 keV, investigated on the basis of the mixture rule, and compared with values obtained from the WinXCOM program. Good agreement is observed between the theoretical and experimental values for the samples. All samples were irradiated with six radioactive sources (57Co, 133Ba, 22Na, 137Cs, 54Mn and 60Co) using a transmission arrangement. Effective atomic and electron numbers or electron densities (Zeff and Neff), molar extinction coefficient (ε), mass energy absorption coefficient (μen/ρ) and effective atomic energy absorption cross section (σa,en) were determined experimentally and theoretically from the obtained μm values for the investigated samples, and graphs were plotted. The graphs show that μm for all samples decreases with increasing photon energy.
Default and Executive Network Coupling Supports Creative Idea Production
Beaty, Roger E.; Benedek, Mathias; Barry Kaufman, Scott; Silvia, Paul J.
2015-01-01
The role of attention in creative cognition remains controversial. Neuroimaging studies have reported activation of brain regions linked to both cognitive control and spontaneous imaginative processes, raising questions about how these regions interact to support creative thought. Using functional magnetic resonance imaging (fMRI), we explored this question by examining dynamic interactions between brain regions during a divergent thinking task. Multivariate pattern analysis revealed a distributed network associated with divergent thinking, including several core hubs of the default (posterior cingulate) and executive (dorsolateral prefrontal cortex) networks. The resting-state network affiliation of these regions was confirmed using data from an independent sample of participants. Graph theory analysis assessed global efficiency of the divergent thinking network, and network efficiency was found to increase as a function of individual differences in divergent thinking ability. Moreover, temporal connectivity analysis revealed increased coupling between default and salience network regions (bilateral insula) at the beginning of the task, followed by increased coupling between default and executive network regions at later stages. Such dynamic coupling suggests that divergent thinking involves cooperation between brain networks linked to cognitive control and spontaneous thought, which may reflect focused internal attention and the top-down control of spontaneous cognition during creative idea production. PMID:26084037
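Global efficiency, the graph measure applied to the divergent-thinking network above, is commonly defined as the mean inverse shortest-path length over all node pairs. Here is a small illustrative sketch on an invented toy unweighted network (real analyses use weighted functional connectivity matrices):

```python
from collections import deque

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered node pairs, where d is the
    shortest-path length; disconnected pairs contribute 0."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for source in nodes:
        dist = {source: 0}
        queue = deque([source])
        while queue:  # breadth-first search from source
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for target in nodes:
            if target != source:
                pairs += 1
                if target in dist:
                    total += 1.0 / dist[target]
    return total / pairs

# Invented toy network: a four-node path A-B-C-D.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(round(global_efficiency(adj), 3))  # -> 0.722
```

Adding shortcut edges raises this value toward 1, which is the sense in which higher global efficiency reflects more integrated communication in the network.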
Villandre, Luc; Günthard, Huldrych F.; Kouyos, Roger; Stadler, Tanja
2016-01-01
Background Transmission patterns of sexually-transmitted infections (STIs) could relate to the structure of the underlying sexual contact network, whose features are therefore of interest to clinicians. Conventionally, we represent sexual contacts in a population with a graph, which can reveal the existence of communities. Phylogenetic methods help infer the history of an epidemic and, incidentally, may help detect communities. In particular, phylogenetic analyses of HIV-1 epidemics among men who have sex with men (MSM) have revealed the existence of large transmission clusters, possibly resulting from within-community transmissions. Past studies have explored the association between contact networks and phylogenies, including transmission clusters, producing conflicting conclusions about whether network features significantly affect observed transmission history. As far as we know, however, none of them has thoroughly investigated the role of communities, defined with respect to the network graph, in the observation of clusters. Methods The present study investigates, through simulations, community detection from phylogenies. We simulate a large number of epidemics over both unweighted and weighted, undirected random interconnected-islands networks, with islands corresponding to communities. We use weighting to modulate distance between islands. We translate each epidemic into a phylogeny, which lets us partition our samples of infected subjects into transmission clusters, based on several common definitions from the literature. We measure similarity between subjects’ island membership indices and transmission cluster membership indices with the adjusted Rand index. Results and Conclusion Analyses reveal modest mean correspondence between communities in graphs and phylogenetic transmission clusters. We conclude that common methods often have limited success in detecting contact network communities from phylogenies. 
The rarely-fulfilled requirement that network communities correspond to clades in the phylogeny is their main drawback. Understanding the link between transmission clusters and communities in sexual contact networks could help inform policymaking to curb HIV incidence in MSMs. PMID:26863322
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Richard; Lemke, Peter
Water samples were collected from 36 locations at the New Rifle and Old Rifle, Colorado, Processing Sites. Duplicate samples were collected from New Rifle locations 0659 and 0855, and Old Rifle location 0304. One equipment blank was collected after decontamination of non-dedicated equipment used to collect one surface water sample. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). New Rifle Site: Samples were collected at the New Rifle site from 16 monitoring wells and 7 surface locations in compliance with the December 2008 Groundwater Compliance Action Plan [GCAP] for the New Rifle, Colorado, Processing Site (LMS/RFN/S01920), with one exception: New Rifle location 0635 could not be sampled because it was inaccessible; a fence installed by the Colorado Department of Transportation prevents access to this location. DOE is currently negotiating access with the Colorado Department of Transportation. Analytes measured at the New Rifle site included contaminants of concern (COCs) (arsenic, molybdenum, nitrate + nitrite as nitrogen, selenium, uranium, and vanadium), ammonia as nitrogen, major cations, and major anions. Field measurements of total alkalinity, oxidation-reduction potential, pH, specific conductance, turbidity, and temperature were made at each location, and the water level was measured at each sampled well. A proposed alternate concentration limit (ACL) for vanadium of 50 milligrams per liter (mg/L), specific to the point-of-compliance (POC) wells (RFN-0217, -0659, -0664, and -0669), is included in the New Rifle GCAP. Vanadium concentrations in the POC wells were below the proposed ACL, as shown in the time-concentration graphs in the Data Presentation section (Attachment 2). Time-concentration graphs from all other locations sampled are also included in Attachment 2. 
Sampling location RFN-0195 was misidentified for the June/August 2014 and November 2014 sampling events. (Well RFN-0609 was inadvertently sampled instead of RFN-0195 in 2014.) The results for RFN-0195 have been corrected and are included in the associated time-concentration graphs for this location. Recent results for RFN-0195 are consistent with established trends, with the possible exception of vanadium. The most recent result for vanadium showed an increase over recent values. Vanadium concentrations at RFN-0195 and other locations will continue to be evaluated in the future to determine the potential for deviations from established trends. The surface water locations were sampled to monitor the impact of groundwater discharge. COC concentrations at Colorado River surface water locations RFN-0324 and RFN-0326, downgradient of the site, remained low and were consistent with historical results, as shown in the time-concentration graphs. COC concentrations did not indicate any impacts related to groundwater discharge to the river. In many cases, elevated COC concentrations at the New Rifle site pond locations were observed, as shown in the time-concentration graphs. As noted in the GCAP, this indicates impacts from groundwater discharge to the ponds. Old Rifle Site: Samples were collected at the Old Rifle site from eight monitoring wells and five surface locations in compliance with the December 2001 Groundwater Compliance Action Plan for the Old Rifle, Colorado, UMTRA Project Site (GJ0-2000-177-TAR). Analytes measured at the Old Rifle site included COCs (selenium, uranium, and vanadium), major cations, and major anions. Field measurements of total alkalinity, oxidation-reduction potential, pH, specific conductance, turbidity, and temperature were made at each location, and the water level was measured at each sampled well. 
The monitoring strategy described in the GCAP is designed to determine the progress of the natural flushing process in meeting compliance standards for site COCs. Standards for selenium and vanadium are the proposed ACLs of 0.05 mg/L and 1.0 mg/L, respectively. For uranium, the cleanup goal is the UMTRA standard of 0.044 mg/L or background, whichever is higher. As shown in the time-concentration graphs, the uranium concentration exceeds the cleanup goal at groundwater monitoring locations RF0-0304, -0305, -0310, -0655, and -0656. The surface water locations were sampled to monitor the impact of groundwater discharge at Colorado River surface water locations adjacent to (RF0-0396) and downgradient of the site (RF0-0741). COC concentrations remain low and consistent with historical concentrations, as shown in the time-concentration graphs (Attachment 2), which indicate no impacts from groundwater discharge to the river.
Overlapping clusters for distributed computation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirrokni, Vahab; Andersen, Reid; Gleich, David F.
2010-11-01
Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex-partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication-avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and the PageRank communication volume decrease.
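As a rough illustration of the PageRank computation referenced above (link-following α = 0.85), here is a minimal power-iteration sketch on an invented three-node graph. It shows only the basic iteration, not the additive Schwarz or overlapping-cluster machinery of the paper.

```python
def pagerank(adj, alpha=0.85, iters=100):
    """Power iteration for PageRank with link-following probability alpha."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - alpha) / n for v in nodes}
        for u in nodes:
            out = adj[u]
            if not out:  # dangling node: spread its mass uniformly
                for v in nodes:
                    new[v] += alpha * rank[u] / n
            else:
                for v in out:
                    new[v] += alpha * rank[u] / len(out)
        rank = new
    return rank

# Invented toy graph: a links to b and c, b links to c, c links back to a.
adj = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
r = pagerank(adj)
print({k: round(v, 3) for k, v in r.items()})
```

In a distributed setting, each update of `new[v]` for a `v` owned by another partition is a message; overlapping clusters reduce how often those cross-partition updates are needed, which is the communication volume the paper measures.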
Functional Organization of the Action Observation Network in Autism: A Graph Theory Approach
Alaerts, Kaat; Geerlings, Franca; Herremans, Lynn; Swinnen, Stephan P.; Verhoeven, Judith; Sunaert, Stefan; Wenderoth, Nicole
2015-01-01
Background The ability to recognize, understand and interpret others' actions and emotions has been linked to the mirror system or action-observation network (AON). Although variations in these abilities are prevalent in the neuro-typical population, persons diagnosed with autism spectrum disorders (ASD) have deficits in the social domain and exhibit alterations in this neural network. Method Here, we examined functional network properties of the AON using graph theory measures and region-to-region functional connectivity analyses of resting-state fMRI data from adolescents and young adults with ASD and typical controls (TC). Results Overall, our graph theory analyses provided convergent evidence that the network integrity of the AON is altered in ASD, and that reductions in network efficiency relate to reductions in overall network density (i.e., decreased overall connection strength). Compared to TC, individuals with ASD showed significant reductions in network efficiency and increased shortest path lengths and centrality. Importantly, when adjusting for overall differences in network density between the ASD and TC groups, participants with ASD continued to display reductions in network integrity, suggesting that network-level organizational properties of the AON are also altered in ASD. Conclusion While differences in empirical connectivity contributed to reductions in network integrity, graph theoretical analyses provided indications that changes in the high-level network organization also reduced the integrity of the AON. PMID:26317222
Low-Rank Discriminant Embedding for Multiview Learning.
Li, Jingjing; Wu, Yue; Zhao, Jidong; Lu, Ke
2017-11-01
This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views, assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning that takes full advantage of both. By further considering the duality between data points and features of a multiview scene, i.e., data points can be grouped based on their distribution over features, while features can be grouped based on their distribution over the data points, LRDE not only deploys low-rank constraints at both the sample level and the feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview and pairwise settings demonstrate that LRDE performs much better than previous approaches proposed in the recent literature.
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
2011-01-01
Background Biology is rapidly becoming a data intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allow both automated integration and data-driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web, make it possible to deal with complicated heterogeneous data in new and interesting ways. Results This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created an RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions We provide an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario-derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and the resulting multi-relational directed graph define what is possible and then set the scope of the user requirements. 
Approaches like SDDM will be critical to the future of data intensive, data-driven science because they automate the process of converting massive data streams into usable knowledge. PMID:22165854
An efficient semi-supervised community detection framework in social networks.
Li, Zhen; Gong, Yong; Pan, Zhisong; Hu, Guyu
2017-01-01
Community detection is an important task across a number of research fields including social science, biology, and physics. In the real world, topology information alone is often inadequate to accurately find community structure due to its sparsity and noise. Potentially useful prior information, such as pairwise constraints containing must-link and cannot-link constraints, can be obtained from domain knowledge in many applications. Thus, combining network topology with prior information to improve community detection accuracy is promising. Previous methods mainly utilize the must-link constraints but cannot make full use of cannot-link constraints. In this paper, we propose a semi-supervised community detection framework which can effectively incorporate both types of pairwise constraints into the detection process. In particular, must-link and cannot-link constraints are represented as positive and negative links, and we encode them by adding different graph regularization terms to penalize the closeness of the nodes. Experiments on multiple real-world datasets show that the proposed framework significantly improves the accuracy of community detection.
Exactly solvable random graph ensemble with extensively many short cycles
NASA Astrophysics Data System (ADS)
Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.
2018-02-01
We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles’ control parameters relative to the number of nodes. A phase diagram is presented, showing a second order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamic limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues, representing short cycles, and a continuous part, representing cycles of diverging size.
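A 2-regular graph is a disjoint union of cycles, so one naive way to sample one is to take the cycle decomposition of a uniform random permutation and reject draws containing cycles shorter than 3. The sketch below does exactly that; note it yields plain uniform sampling over such configurations, not the paper's ensemble with a tuneable short-cycle distribution.

```python
import random

def random_two_regular(n, seed=None):
    """Sample a 2-regular graph on n nodes as the cycle decomposition of a
    uniform permutation, rejecting permutations with cycles shorter than 3
    (self-loops and double edges are not allowed in a simple graph)."""
    rng = random.Random(seed)
    while True:
        perm = list(range(n))
        rng.shuffle(perm)
        cycles, seen = [], set()
        for start in range(n):
            if start in seen:
                continue
            cycle, node = [], start
            while node not in seen:  # follow the permutation around a cycle
                seen.add(node)
                cycle.append(node)
                node = perm[node]
            cycles.append(cycle)
        if all(len(c) >= 3 for c in cycles):
            return cycles

cycles = random_two_regular(12, seed=1)
print(sorted(len(c) for c in cycles))
```

Biasing the acceptance step by a weight depending on the cycle lengths would be one crude way to tune the short-cycle statistics, which the paper instead controls analytically through the ensemble's control parameters.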
NASA Astrophysics Data System (ADS)
Camilo, Ana E. F.; Grégio, André; Santos, Rafael D. C.
2016-05-01
Malware detection may be accomplished through the analysis of malware infection behavior. To do so, dynamic analysis systems run malware samples and extract their operating system activities and network traffic. This traffic may represent malware accessing external systems, either to steal sensitive data from victims or to fetch other malicious artifacts (configuration files, additional modules, commands). In this work, we propose the use of visualization as a tool to identify compromised systems, based on correlating malware communications in the form of graphs and finding isomorphisms between them. We produced graphs from over 6,000 distinct network traffic files captured during malware execution and analyzed the existing relationships among malware samples and IP addresses.
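Finding isomorphisms between small communication graphs, as proposed above, can be sketched with a brute-force relabelling test. This is illustrative only (feasible for a handful of nodes; production systems would use an optimized matcher), and the sample/IP names are invented:

```python
from itertools import permutations

def are_isomorphic(nodes1, edges1, nodes2, edges2):
    """Brute-force isomorphism test: try every relabelling of nodes1 onto
    nodes2 and compare the undirected edge sets."""
    if len(nodes1) != len(nodes2) or len(edges1) != len(edges2):
        return False
    e2 = {frozenset(e) for e in edges2}
    for perm in permutations(nodes2):
        mapping = dict(zip(nodes1, perm))
        e1_mapped = {frozenset((mapping[a], mapping[b])) for a, b in edges1}
        if e1_mapped == e2:
            return True
    return False

# Invented example: two malware samples each contacting two IP addresses
# produce star-shaped communication graphs of the same shape.
g1 = (["s1", "ip_a", "ip_b"], [("s1", "ip_a"), ("s1", "ip_b")])
g2 = (["s2", "ip_x", "ip_y"], [("s2", "ip_x"), ("s2", "ip_y")])
print(are_isomorphic(*g1, *g2))  # -> True
```

Two isomorphic communication graphs suggest two infections exhibiting the same contact pattern even when the concrete IP addresses differ, which is the intuition behind correlating samples this way.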
Resolution of ranking hierarchies in directed networks.
Letizia, Elisa; Barucca, Paolo; Lillo, Fabrizio
2018-01-01
Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit.
Resolution of ranking hierarchies in directed networks
Barucca, Paolo; Lillo, Fabrizio
2018-01-01
Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit. PMID:29394278
LinkMind: link optimization in swarming mobile sensor networks.
Ngo, Trung Dung
2011-01-01
A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization of swarming mobile sensor networks. The new method is based on combination of the artificial potential force guaranteeing connectivities of the mobile sensor nodes and the max-flow min-cut theorem of graph theory ensuring optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation.
LinkMind: Link Optimization in Swarming Mobile Sensor Networks
Ngo, Trung Dung
2011-01-01
A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization of swarming mobile sensor networks. The new method is based on combination of the artificial potential force guaranteeing connectivities of the mobile sensor nodes and the max-flow min-cut theorem of graph theory ensuring optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation. PMID:22164070
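The max-flow min-cut theorem invoked in the two abstracts above can be illustrated with a standard Edmonds-Karp maximum-flow computation. This is a generic textbook sketch, not the paper's swarm algorithm; the dictionary-based capacity format is an assumption made for the example.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow.  capacity: dict[u] -> dict[v] -> int."""
    # Build residual capacities, adding reverse edges with capacity 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path: flow equals the min-cut capacity
        # Find the bottleneck along the path, then push flow along it.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

By the max-flow min-cut theorem, the returned value also equals the capacity of the tightest cut separating source from sink, which is what bounds the transmission of gathered data in the swarm setting.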
NASA Astrophysics Data System (ADS)
Mali, P.; Manna, S. K.; Mukhopadhyay, A.; Haldar, P. K.; Singh, G.
2018-03-01
Multiparticle emission data in nucleus-nucleus collisions are studied with a graph-theoretical approach. The sandbox algorithm used to analyze complex networks is employed to characterize the multifractal properties of the visibility graphs associated with the pseudorapidity distribution of charged particles produced in high-energy heavy-ion collisions. Experimental data on the 28Si+Ag/Br interaction at laboratory energy Elab = 14.5A GeV, and the 16O+Ag/Br and 32S+Ag/Br interactions, both at Elab = 200A GeV, are used in this analysis. We observe a scale-free nature of the degree distributions of the visibility and horizontal visibility graphs associated with the event-wise pseudorapidity distributions. Equivalent event samples simulated with ultra-relativistic quantum molecular dynamics produce degree distributions that are almost identical to the respective experiments. However, the multifractal variables obtained with the sandbox algorithm for the experiments differ to some extent from the respective simulated results.
Some Data from Detection of Organics in a Rock on Mars
2014-12-16
Data graphed here are examples from the Sample Analysis at Mars (SAM) laboratory's detection of Martian organics in a sample of powder that the drill on NASA's Curiosity Mars rover collected from a rock target called Cumberland.
Unsupervised object segmentation with a hybrid graph model (HGM).
Liu, Guangcan; Lin, Zhouchen; Yu, Yong; Tang, Xiaoou
2010-05-01
In this work, we address the problem of performing class-specific unsupervised object segmentation, i.e., automatic segmentation without annotated training images. Object segmentation can be regarded as a special data clustering problem where both class-specific information and local texture/color similarities have to be considered. To this end, we propose a hybrid graph model (HGM) that can make effective use of both symmetric and asymmetric relationship among samples. The vertices of a hybrid graph represent the samples and are connected by directed edges and/or undirected ones, which represent the asymmetric and/or symmetric relationship between them, respectively. When applied to object segmentation, vertices are superpixels, the asymmetric relationship is the conditional dependence of occurrence, and the symmetric relationship is the color/texture similarity. By combining the Markov chain formed by the directed subgraph and the minimal cut of the undirected subgraph, the object boundaries can be determined for each image. Using the HGM, we can conveniently achieve simultaneous segmentation and recognition by integrating both top-down and bottom-up information into a unified process. Experiments on 42 object classes (9,415 images in total) show promising results.
The Mathematics of Motion, Sensors, and the Introduction of Function to Eighth Graders in Brazil.
ERIC Educational Resources Information Center
Borba, Marcelo C.; Scheffer, Nilce Fatima
This paper describes how 8th grade students used the CBR, a motion detector linked to a graphing calculator, as a way of generating mathematical ideas about the motion concepts surrounding their actions. Students were previously introduced to the calculators in the classroom, and teaching experiments were then carried out afterwards with a…
ERIC Educational Resources Information Center
Montoye, Alexander H. K.; Conger, Scott A.; Connolly, Christopher P.; Imboden, Mary T.; Nelson, M. Benjamin; Bock, Josh M.; Kaminsky, Leonard A.
2017-01-01
This study compared the accuracy of energy expenditure (EE) prediction models from accelerometer data collected in structured and simulated free-living settings. Twenty-four adults (mean age 45.8 years, 50% female) performed two sessions of 11 to 21 activities, wearing four ActiGraph GT9X Link activity monitors (right hip, ankle, both wrists) and a…
Intratheater Airlift Functional Needs Analysis (FNA)
2011-01-01
All RAND monographs undergo rigorous peer review to ensure high standards for research quality. The FNA assesses the ability of current assets to meet intratheater airlift needs in all operating environments.
A Study towards Building An Optimal Graph Theory Based Model For The Design of Tourism Website
NASA Astrophysics Data System (ADS)
Panigrahi, Goutam; Das, Anirban; Basu, Kajla
2010-10-01
An effective tourism website is key to attracting tourists from different parts of the world. Here we identify the factors that improve the effectiveness of a website by considering it as a graph, where web pages, including the homepage, are the nodes and hyperlinks are the edges between the nodes. In this model, the design constraints for building a tourism website are taken into consideration. Our objectives are to build a framework for an effective tourism website that provides an adequate level of information and service, and to enable users to reach the desired page with minimal loading time. We also propose an information hierarchy specifying the upper limit on outgoing links per page; following this hierarchy, a web developer can prepare an effective tourism website. Loading time depends on page size and network traffic; we assume network traffic is uniform, so loading time is directly proportional to page size. This approach quantifies the link structure of a tourism website, and we also propose a page size distribution pattern for a tourism website.
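The page-as-node, hyperlink-as-edge model above reduces "clicks to reach a page" to breadth-first search, and the proposed hierarchy to a simple fan-out check. A minimal sketch; the page names and the function names are hypothetical, not taken from the paper.

```python
from collections import deque

def click_distances(site, home):
    """BFS over a website graph (pages = nodes, hyperlinks = edges).
    Returns dict page -> minimum number of clicks from the homepage."""
    dist = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for link in site.get(page, ()):
            if link not in dist:
                dist[link] = dist[page] + 1
                queue.append(link)
    return dist

def respects_fanout(site, limit):
    """Check the hierarchy constraint: no page exceeds the upper
    limit on outgoing links."""
    return all(len(links) <= limit for links in site.values())
```

Bounding the fan-out while keeping BFS depth small is the trade-off the proposed hierarchy navigates: fewer links per page means deeper trees and more clicks.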
Node degree distribution in spanning trees
NASA Astrophysics Data System (ADS)
Pozrikidis, C.
2016-03-01
A method is presented for computing the number of spanning trees that involve one link or a specified group of links, and exclude another link or a specified group of links, in a network described by a simple graph. The computation is expressed in terms of derivatives of the spanning-tree generating function defined with respect to the eigenvalues of the Kirchhoff (weighted Laplacian) matrix. The method is applied to deduce the node degree distribution in a complete or randomized set of spanning trees of an arbitrary network. An important feature of the proposed method is that the explicit construction of spanning trees is not required. It is shown that the node degree distribution in the spanning trees of the complete network is described by the binomial distribution. Numerical results are presented for the node degree distribution in square, triangular, and honeycomb lattices.
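The Kirchhoff-matrix machinery above rests on the matrix-tree theorem: any cofactor of the graph Laplacian equals the number of spanning trees. A minimal exact-arithmetic sketch of that theorem (not the author's generating-function method), using an adjacency dict that lists each undirected edge in both directions:

```python
from fractions import Fraction

def count_spanning_trees(adj):
    """Matrix-tree theorem: the number of spanning trees of a simple
    undirected graph equals any cofactor of its Laplacian L = D - A."""
    nodes = sorted(adj)
    n = len(nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    L = [[Fraction(0)] * n for _ in range(n)]
    for u in adj:
        for v in adj[u]:          # each edge appears once per direction
            L[idx[u]][idx[u]] += 1
            L[idx[u]][idx[v]] -= 1
    # Delete the last row and column, then take the determinant by
    # exact Gaussian elimination over the rationals.
    M = [row[: n - 1] for row in L[: n - 1]]
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if M[r][col]), None)
        if pivot is None:
            return 0              # singular minor: graph is disconnected
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n - 1):
            factor = M[r][col] / M[col][col]
            for c in range(col, n - 1):
                M[r][c] -= factor * M[col][c]
    return int(det)
```

For the complete graph on n nodes this reproduces Cayley's formula n^(n-2), consistent with the paper's observation that no explicit tree construction is needed.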
[Environmental Education Units].
ERIC Educational Resources Information Center
Minneapolis Independent School District 275, Minn.
Two of these three pamphlets describe methods of teaching young elementary school children the principles of sampling. Tiles of five colors are added to a tub and children sample these randomly; using the tiles as units for a graph, they draw a representation of the population. Pooling results leads to a more reliable sample. Practice is given in…
Sampling from complex networks using distributed learning automata
NASA Astrophysics Data System (ADS)
Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza
2014-02-01
A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is considered as a graph of real-world phenomena such as biological networks, ecological networks, technological networks, information networks and, particularly, social networks. Recently, many studies have been reported on the characterization of social networks, owing to a growing trend in the analysis of online social networks as dynamic, complex, large-scale graphs. Because of the large scale of real networks and limited access to them, a network model is characterized using an appropriate part of the network obtained by sampling. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks, and the results are compared with several sampling methods in terms of different measures. The experimental results demonstrate the superiority of the proposed algorithm over the others.
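For context, a common baseline that the automata-based sampler is typically compared against is a random walk with restarts. The sketch below is that generic baseline, not the distributed-learning-automata algorithm of the paper; function and parameter names are illustrative.

```python
import random

def random_walk_sample(adj, seed_node, size, restart=0.15, rng=None):
    """Sample `size` nodes via a random walk with restarts.

    A generic baseline sampler for illustration only; the paper's method
    instead uses cooperating distributed learning automata to pick nodes.
    adj: dict node -> list of neighbors.
    """
    rng = rng or random.Random(0)
    sampled = {seed_node}
    current = seed_node
    steps = 0
    while len(sampled) < size and steps < 1000 * size:  # step cap as a guard
        steps += 1
        if rng.random() < restart or not adj[current]:
            current = seed_node          # restart at the seed node
        else:
            current = rng.choice(adj[current])
        sampled.add(current)
    return sampled
```

Such samplers can only reach the seed's connected component, one of the biases that more adaptive schemes aim to reduce.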
Graph and Network for Model Elicitation (GNOME Phase 2)
2013-02-01
The server-side service can run and generate data asynchronously, allowing a cluster of servers to run the sampling.
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
Abnormal functional global and local brain connectivity in female patients with anorexia nervosa
Geisler, Daniel; Borchardt, Viola; Lord, Anton R.; Boehm, Ilka; Ritschel, Franziska; Zwipp, Johannes; Clas, Sabine; King, Joseph A.; Wolff-Stephan, Silvia; Roessner, Veit; Walter, Martin; Ehrlich, Stefan
2016-01-01
Background Previous resting-state functional connectivity studies in patients with anorexia nervosa used independent component analysis or seed-based connectivity analysis to probe specific brain networks. Instead, modelling the entire brain as a complex network allows determination of graph-theoretical metrics, which describe global and local properties of how brain networks are organized and how they interact. Methods To determine differences in network properties between female patients with acute anorexia nervosa and pairwise matched healthy controls, we used resting-state fMRI and computed well-established global and local graph metrics across a range of network densities. Results Our analyses included 35 patients and 35 controls. We found that the global functional network structure in patients with anorexia nervosa is characterized by increases in both characteristic path length (longer average routes between nodes) and assortativity (more nodes with a similar connectedness link together). Accordingly, we found locally decreased connectivity strength and increased path length in the posterior insula and thalamus. Limitations The present results may be limited to the methods applied during preprocessing and network construction. Conclusion We demonstrated anorexia nervosa–related changes in the network configuration for, to our knowledge, the first time using resting-state fMRI and graph-theoretical measures. Our findings revealed an altered global brain network architecture accompanied by local degradations indicating wide-scale disturbance in information flow across brain networks in patients with acute anorexia nervosa. Reduced local network efficiency in the thalamus and posterior insula may reflect a mechanism that helps explain the impaired integration of visuospatial and homeostatic signals in patients with this disorder, which is thought to be linked to abnormal representations of body size and hunger. PMID:26252451
Abnormal functional global and local brain connectivity in female patients with anorexia nervosa.
Geisler, Daniel; Borchardt, Viola; Lord, Anton R; Boehm, Ilka; Ritschel, Franziska; Zwipp, Johannes; Clas, Sabine; King, Joseph A; Wolff-Stephan, Silvia; Roessner, Veit; Walter, Martin; Ehrlich, Stefan
2016-01-01
Previous resting-state functional connectivity studies in patients with anorexia nervosa used independent component analysis or seed-based connectivity analysis to probe specific brain networks. Instead, modelling the entire brain as a complex network allows determination of graph-theoretical metrics, which describe global and local properties of how brain networks are organized and how they interact. To determine differences in network properties between female patients with acute anorexia nervosa and pairwise matched healthy controls, we used resting-state fMRI and computed well-established global and local graph metrics across a range of network densities. Our analyses included 35 patients and 35 controls. We found that the global functional network structure in patients with anorexia nervosa is characterized by increases in both characteristic path length (longer average routes between nodes) and assortativity (more nodes with a similar connectedness link together). Accordingly, we found locally decreased connectivity strength and increased path length in the posterior insula and thalamus. The present results may be limited to the methods applied during preprocessing and network construction. We demonstrated anorexia nervosa-related changes in the network configuration for, to our knowledge, the first time using resting-state fMRI and graph-theoretical measures. Our findings revealed an altered global brain network architecture accompanied by local degradations indicating wide-scale disturbance in information flow across brain networks in patients with acute anorexia nervosa. Reduced local network efficiency in the thalamus and posterior insula may reflect a mechanism that helps explain the impaired integration of visuospatial and homeostatic signals in patients with this disorder, which is thought to be linked to abnormal representations of body size and hunger.
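Characteristic path length, the global metric found to be increased in the patient group above, is the average shortest-path length over all ordered node pairs. A minimal BFS-based sketch for unweighted connected graphs (illustrative only; the study derived its networks from resting-state fMRI and computed metrics across a range of network densities):

```python
from collections import deque

def characteristic_path_length(adj):
    """Average shortest-path length over all ordered node pairs of a
    connected, unweighted graph.  adj: dict node -> list of neighbors."""
    total, pairs = 0, 0
    for src in adj:
        # BFS from src gives shortest hop counts to every other node.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs
```

Longer characteristic path length, as reported for the anorexia nervosa group, means information must on average traverse more intermediate nodes.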
NASA Astrophysics Data System (ADS)
Arosio, Marcello; Martina, Mario L. V.
2017-04-01
The emergent behaviour of the contemporary complex, socio-technical, interconnected society makes the collective risk greater than the sum of its parts, and this calls for a holistic, systematic and integrated approach. Although there have been major improvements in recent years, there are still limitations in terms of a holistic approach able to include the emergent value hidden in the connections between exposed elements and the interactions between the different spheres of multi-hazards, vulnerability, exposure and resilience. To deal with these challenges it is necessary to consider the connections between the exposed elements (e.g. populations, schools, hospitals, etc.) and to quantify the relative importance of the elements and their interconnections (e.g. the need of injured people to reach a hospital or of children to reach a school). In a system (e.g. a road, hospital or ecological network) or in a system of systems (e.g. a socio-technical urban service), there are critical elements that, beyond their intrinsic vulnerability, can be characterized by greater or lower vulnerability because of their physical, geographical, cyber or logical connections. To this aim, we propose in this study a comparative analysis between the traditional reductionist approach and a new holistic approach to vulnerability assessment for natural hazards. The analysis considers a case study of a socio-economic complex system through an innovative approach based on the properties of a graph G=(N,L). A graph consists of two sets, N (nodes) and L (links): the nodes represent the single elements (physical, social, environmental, etc.) exposed to a hazard, while the links (or connections) represent the interactions between the elements. The final goal is to illustrate an application of this innovative approach to integrated collective vulnerability assessment.
Optimal Link Removal for Epidemic Mitigation: A Two-Way Partitioning Approach
Enns, Eva A.; Mounzer, Jeffrey J.; Brandeau, Margaret L.
2011-01-01
The structure of the contact network through which a disease spreads may influence the optimal use of resources for epidemic control. In this work, we explore how to minimize the spread of infection via quarantining with limited resources. In particular, we examine which links should be removed from the contact network, given a constraint on the number of removable links, such that the number of nodes no longer at risk for infection is maximized. We show how this problem can be posed as a non-convex quadratically constrained quadratic program (QCQP), and we use this formulation to derive a link removal algorithm. The performance of our QCQP-based algorithm is validated on small Erdős–Rényi and small-world random graphs, and then tested on larger, more realistic networks, including a real-world network of injection drug use. We show that our approach achieves near-optimal performance and outperforms other intuitive link removal algorithms, such as removing links in order of edge centrality. PMID:22115862
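The objective above — remove a budget of links so as to maximize the number of protected nodes — can be made concrete with a brute-force baseline on a tiny graph. This is an exhaustive sketch for illustration, not the paper's QCQP relaxation; all names are hypothetical.

```python
from itertools import combinations

def reachable(edges, removed, seed):
    """Nodes still reachable from the infection seed after link removal."""
    adj = {}
    for u, v in edges:
        if (u, v) in removed or (v, u) in removed:
            continue
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, stack = {seed}, [seed]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def best_link_removal(edges, nodes, seed, budget):
    """Exhaustively pick `budget` links whose removal protects the most
    nodes.  A brute-force baseline for tiny graphs only; the paper's
    QCQP algorithm scales to realistic networks."""
    best, protected = set(), -1
    for removed in combinations(edges, budget):
        saved = len(nodes) - len(reachable(edges, set(removed), seed))
        if saved > protected:
            best, protected = set(removed), saved
    return best, protected
```

The exhaustive search is exponential in the budget, which is exactly why a tractable relaxation such as the QCQP formulation is needed for real contact networks.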
Novel approaches to analysis by flow injection gradient titration.
Wójtowicz, Marzena; Kozak, Joanna; Kościelniak, Paweł
2007-09-26
Two novel procedures for flow injection gradient titration with the use of a single stock standard solution are proposed. In the multi-point single-line (MP-SL) method the calibration graph is constructed on the basis of a set of standard solutions, which are generated in a standard reservoir and subsequently injected into the titrant. In the single-point multi-line (SP-ML) procedure the standard solution and a sample are injected into the titrant stream from four loops of different capacities, so four calibration graphs can be constructed and the analytical result is calculated on the basis of a generalized slope of these graphs. Both approaches have been tested on the example of spectrophotometric acid-base titration of hydrochloric and acetic acids, using bromothymol blue and phenolphthalein as indicators, respectively, and sodium hydroxide as the titrant. Under optimized experimental conditions, analytical results with precision better than 1.8 and 2.5% (RSD) and accuracy better than 3.0 and 5.4% (relative error, RE) were obtained for the MP-SL and SP-ML procedures, respectively, in the ranges 0.0031-0.0631 mol L(-1) for samples of hydrochloric acid and 0.1680-1.7600 mol L(-1) for samples of acetic acid. The feasibility of both methods was illustrated by applying them to total acidity determination in vinegar samples, with precision better than 0.5 and 2.9% (RSD) for the MP-SL and SP-ML procedures, respectively.
Lin, Jimmy
2008-01-01
Background Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed® search interface, a MEDLINE® citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. Results We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. Conclusion The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain. PMID:18538027
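The PageRank scores combined with retrieval scores above come from the standard power-iteration algorithm. A minimal self-contained sketch (a generic PageRank, not the article's TREC reranking pipeline; the citation graph shown is invented for illustration):

```python
def pagerank(links, damping=0.85, iters=100):
    """Plain power-iteration PageRank.  links: dict node -> list of
    outgoing neighbors (e.g. related-citation links between records)."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:
                # Dangling node: spread its rank uniformly over all nodes.
                for v in nodes:
                    nxt[v] += damping * rank[u] / n
        rank = nxt
    return rank
```

In a reranking setup such as the one described, these graph-derived scores would then be interpolated with the retrieval engine's relevance scores.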
Reticulate classification of mosaic microbial genomes using NeAT website.
Lima-Mendez, Gipsi
2012-01-01
The tree of life is the classical representation of the evolutionary relationships between extant species. A tree is appropriate for displaying the divergence of species through mutation, i.e., by vertical descent. However, lateral gene transfer (LGT) is excluded from such representations. When the contribution of LGT to genome evolution cannot be neglected (e.g., for prokaryotes and mobile genetic elements), the tree becomes misleading. Networks appear as an intuitive way to represent both vertical and horizontal relationships, while overlapping groups within such graphs are more suitable for their classification. Here, we describe a method to represent both vertical and horizontal relationships. We start with a set of genomes whose encoded proteins have been grouped into families based on sequence similarity. Next, all pairs of genomes are compared, counting the number of proteins classified into the same family. From this comparison, we derive a weighted graph in which genomes with a significant number of similar proteins are linked. Finally, we apply a two-step clustering of this graph to produce a classification where nodes can be assigned to multiple clusters. The procedure can be performed using the Network Analysis Tools (NeAT) website.
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam
2008-02-01
Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content-based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method that treats coarse segmentation as an inverse problem. Second, we introduce a unique graph-theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph-theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph-based method has a precision of 98.1%. As of the fall of 2007, the working system had processed over 400,000 case images.
Edge grouping combining boundary and region information.
Stahl, Joachim S; Wang, Song
2007-10-01
This paper introduces a new edge-grouping method to detect perceptually salient structures in noisy images. Specifically, we define a new grouping cost function in a ratio form, where the numerator measures the boundary proximity of the resulting structure and the denominator measures the area of the resulting structure. This area term introduces a preference towards detecting larger-size structures and, therefore, makes the resulting edge grouping more robust to image noise. To find the optimal edge grouping with the minimum grouping cost, we develop a special graph model with two different kinds of edges and then reduce the grouping problem to finding a special kind of cycle in this graph with a minimum cost in ratio form. This optimal cycle-finding problem can be solved in polynomial time by a previously developed graph algorithm. We implement this edge-grouping method, test it on both synthetic data and real images, and compare its performance against several available edge-grouping and edge-linking methods. Furthermore, we discuss several extensions of the proposed method, including the incorporation of the well-known grouping cues of continuity and intensity homogeneity, introducing a factor to balance the contributions from the boundary and region information, and the prevention of detecting self-intersecting boundaries.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper overcomes the constraints of the original convolutional neural network framework, which requires large numbers of training samples of identical size. The input images are shifted and cropped to generate sub-graphs of the same size, and dropout is applied to the generated sub-graphs, increasing the diversity of samples and preventing overfitting. Proper subsets with the same number of elements are then randomly selected from the sub-graph set, with no two subsets identical, and are used as input layers for the convolutional neural network. Through the convolution layers, pooling, fully connected layer, and output layer, we obtain the classification loss rates of the test and training sets. In an experiment classifying red blood cells, white blood cells, and calcium oxalate crystals, the classification accuracy was 97% or more.
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning
Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron
2014-01-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474
NASA Technical Reports Server (NTRS)
Lindstrom, M. M.; Lindstrom, D. J.; Lum, R. K. L.; Schuhmann, P. J.; Nava, D. F.; Schuhmann, S.; Philpotts, J. A.; Winzer, S. R.
1977-01-01
The samples of the White Breccia Boulders obtained during the Apollo 16 mission and investigated in the reported study include an anorthositic breccia (67415), a dark matrix breccia (67435), a light matrix breccia (67455), and a large clast of dark matrix breccia (67475) taken from the 67455 boulder. The chemical analyses of bulk samples are listed in a table. A graph shows the lithophile trace element abundances. Another graph indicates the variation of Sm with Al2O3 content for samples from the White Breccia Boulders. The North Ray Crater breccias are found to be in general slightly more aluminous than breccias from the other stations at the Apollo 16 site. Analyses of eight Apollo 16 breccias cited in the literature range from 25% to 35% Al2O3. However, the North Ray Crater breccias are more clearly distinct from the other Apollo 16 breccias in their contents of lithophile trace elements.
NASA Astrophysics Data System (ADS)
Kole, Goutam Kumar; Kumar, Mukesh
2018-07-01
Thiourea is known to act as a template to preorganise a series of trans-1,2-bispyridyl ethylenes (bpe), where the thiourea molecules are present in an infinite zigzag chain with the R22(8) graph set (the β-tape), which offers three different types of hydrogen bonding [J. Am. Chem. Soc. 132 (2010) 13434]. This article reports a new cocrystal of thiourea with 3,4′-bpe that acts as a 'missing link' in the series. In this cocrystal, thiourea is present in an infinite corrugated chain with the R21(6) graph set, a rarely observed thiourea synthon, i.e. the α-tape. A comparative study is presented that demonstrates the various types of hydrogen bonding that exist in the series and their impact on the parallel stacking of the pyridyl-based olefins.
NASA Technical Reports Server (NTRS)
Bostian, C. W.; Holt, S. B., Jr.; Kauffman, S. R.; Manus, E. A.; Marshall, R. E.; Stuzman, W. L.; Wiley, P. H.
1977-01-01
The considered investigation made use of the Communications Technology Satellite (CTS) downlink and the beacons carried by the Comstar satellites. The general behavior of rain attenuation and depolarization is illustrated with the aid of data from a storm which took place on July 15, 1976. The effect of the rain on the copolarized signal is indicated in a graph. Another graph shows the behavior of the cross-polarized signal component. Phase effects are also considered together with statistical curves for attenuation. The considered data from CTS indicate that, at least during summer convective storms, attenuation at 11.7 GHz is much more severe than anticipated. Attenuation may be a more serious impediment to dual polarized satellite links at this frequency than is depolarization.
Graph-Theoretic Analysis of Monomethyl Phosphate Clustering in Ionic Solutions.
Han, Kyungreem; Venable, Richard M; Bryant, Anne-Marie; Legacy, Christopher J; Shen, Rong; Li, Hui; Roux, Benoît; Gericke, Arne; Pastor, Richard W
2018-02-01
All-atom molecular dynamics simulations combined with graph-theoretic analysis reveal that clustering of the monomethyl phosphate dianion (MMP2-) is strongly influenced by the types and combinations of cations in the aqueous solution. Although Ca2+ promotes the formation of stable and large MMP2- clusters, K+ alone does not. Nonetheless, clusters are larger and their link lifetimes are longer in mixtures of K+ and Ca2+. This "synergistic" effect depends sensitively on the Lennard-Jones interaction parameters between Ca2+ and the phosphorus oxygen and correlates with the hydration of the clusters. The pronounced MMP2- clustering effect of Ca2+ in the presence of K+ is confirmed by Fourier transform infrared spectroscopy. The characterization of the cation-dependent clustering of MMP2- provides a starting point for understanding cation-dependent clustering of phosphoinositides in cell membranes.
Stroganov, Oleg V; Novikov, Fedor N; Zeifman, Alexey A; Stroylov, Viktor S; Chilov, Ghermes G
2011-09-01
A new graph-theoretical approach called thermodynamic sampling of amino acid residues (TSAR) has been elaborated to explicitly account for the protein side chain flexibility in modeling conformation-dependent protein properties. In TSAR, a protein is viewed as a graph whose nodes correspond to structurally independent groups and whose edges connect the interacting groups. Each node has its set of states describing conformation and ionization of the group, and each edge is assigned an array of pairwise interaction potentials between the adjacent groups. By treating the obtained graph as a belief network (a well-established mathematical abstraction), the partition function of each node is found. In the current work we used TSAR to calculate partition functions of the ionized forms of protein residues. A simplified version of a semi-empirical molecular mechanical scoring function, borrowed from our Lead Finder docking software, was used for energy calculations. The accuracy of the resulting model was validated on a set of 486 experimentally determined pK(a) values of protein residues. The average correlation coefficient (R) between calculated and experimental pK(a) values was 0.80, ranging from 0.95 (for Tyr) to 0.61 (for Lys). It appeared that the hydrogen bond interactions and the exhaustiveness of side chain sampling made the most significant contribution to the accuracy of pK(a) calculations. Copyright © 2011 Wiley-Liss, Inc.
Comparative Financial Statistics for Public Two-Year Colleges: FY 1995 National Sample.
ERIC Educational Resources Information Center
Meeker, Bradley
Based on responses by 405 public two-year colleges in the United States to 2 surveys, this report provides comparative financial information for fiscal year 1994-95. The report provides space for colleges to compare their institutional statistics with national sample medians, quartile data for the national sample, and tables and graphs of…
Comparative Financial Statistics for Public Two-Year Colleges: FY 1994 National Sample.
ERIC Educational Resources Information Center
Dickmeyer, Nathan; Meeker, Bradley
Based on responses by 427 public two-year colleges in the United States to two surveys, this report provides comparative financial information for fiscal year 1993-94. The report provides space for colleges to compare their institutional statistics with national sample medians, quartile data for the national sample, and tables and graphs of…
People Patterns: Statistics. Environmental Module for Use in a Mathematics Laboratory Setting.
ERIC Educational Resources Information Center
Zastrocky, Michael; Trojan, Arthur
This module on statistics consists of 18 worksheets that cover such topics as sample spaces, mean, median, mode, taking samples, posting results, analyzing data, and graphing. The last four worksheets require the students to work with samples and use these to compare people's responses. A computer dating service is one result of this work.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
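The sampled-seed estimation can be sketched as Brandes-style dependency accumulation from a subset of source vertices, scaled by n/|S| (pure Python, unweighted, undirected; the report's alternate seed-selection strategies are not reproduced here):

```python
from collections import deque

def betweenness_estimate(adj, seeds):
    """Accumulate shortest-path dependencies from `seeds` only,
    then scale by n/|seeds| to estimate full betweenness."""
    bc = {v: 0.0 for v in adj}
    for s in seeds:
        # BFS: shortest-path counts (sigma) and predecessor lists
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # back-propagate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    scale = len(adj) / len(seeds)
    return {v: c * scale for v, c in bc.items()}

# Path graph a-b-c-d: interior vertices carry all the betweenness
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
est = betweenness_estimate(adj, seeds=list(adj))  # all seeds = exact
print(est["b"] > est["a"])  # True
```

Passing a strict subset as `seeds` gives the O(|S||E|) estimate the abstract describes.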
Sequential Monte Carlo for Maximum Weight Subgraphs with Application to Solving Image Jigsaw Puzzles
Adluru, Nagesh; Yang, Xingwei; Latecki, Longin Jan
2015-01-01
We consider a problem of finding maximum weight subgraphs (MWS) that satisfy hard constraints in a weighted graph. The constraints specify the graph nodes that must belong to the solution as well as mutual exclusions of graph nodes, i.e., pairs of nodes that cannot belong to the same solution. Our main contribution is a novel inference approach for solving this problem in a sequential Monte Carlo (SMC) sampling framework. Usually in an SMC framework there is a natural ordering of the states of the samples. The order typically depends on observations about the states or on the annealing setup used. In many applications (e.g., image jigsaw puzzle problems), all observations (e.g., puzzle pieces) are given at once and it is hard to define a natural ordering. Therefore, we relax the assumption of having ordered observations about states and propose a novel SMC algorithm for obtaining a maximum a posteriori estimate of a high-dimensional posterior distribution. This is achieved by exploring different orders of states and selecting the most informative permutations in each step of the sampling. Our experimental results demonstrate that the proposed inference framework significantly outperforms loopy belief propagation in solving the image jigsaw puzzle problem. In particular, our inference quadruples the accuracy of the puzzle assembly compared to that of loopy belief propagation. PMID:26052182
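The constrained objective can be illustrated with a brute-force toy (weights, must-include set, and exclusion pairs below are made up; the paper's contribution is the SMC inference, not this enumeration):

```python
from itertools import combinations

# Node weights, nodes that must be in the solution, and pairs of
# mutually exclusive nodes (hypothetical values for illustration).
weights = {"a": 3.0, "b": 2.0, "c": 2.5, "d": 1.0}
must_include = {"a"}
exclusions = {frozenset({"a", "b"}), frozenset({"c", "d"})}

def feasible(sel):
    """A selection is feasible if it contains every required node
    and no excluded pair appears together."""
    return must_include <= sel and not any(p <= sel for p in exclusions)

best = max(
    (set(sub) for r in range(len(weights) + 1)
     for sub in combinations(weights, r) if feasible(set(sub))),
    key=lambda s: sum(weights[v] for v in s))
print(sorted(best))  # ['a', 'c']: b conflicts with a, d conflicts with c
```

Enumeration is exponential in the node count; the SMC sampler in the paper is what makes the high-dimensional case tractable.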
Inexpensive portable drug detector
NASA Technical Reports Server (NTRS)
Dimeff, J.; Heimbuch, A. H.; Parker, J. A.
1977-01-01
Inexpensive, easy-to-use, self-scanning, self-calibrating, portable unit automatically graphs fluorescence spectrum of drug sample. Device also measures rate of movement through chromatographic column for forensic and medical testing.
Network dynamics: The World Wide Web
NASA Astrophysics Data System (ADS)
Adamic, Lada Ariana
Despite its rapidly growing and dynamic nature, the Web displays a number of strong regularities which can be understood by drawing on methods of statistical physics. This thesis finds power-law distributions in website sizes, traffic, and links, and more importantly, develops a stochastic theory which explains them. Power-law link distributions are shown to lead to network characteristics which are especially suitable for scalable localized search. It is also demonstrated that the Web is a "small world": to reach one site from any other takes an average of only 4 hops, while most related sites cluster together. Additional dynamical properties of the Web graph are extracted from diffusion processes.
Text and Structural Data Mining of Influenza Mentions in Web and Social Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corley, Courtney D.; Cook, Diane; Mikler, Armin R.
Text and structural data mining of Web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5-October-2008 to 21-March-2009. Link analysis reveals communities for targeted PHC. Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like-illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.
Seroprevalence screening for the West Nile virus in Malaysia's Orang Asli population.
Marlina, Suria; Radzi, Siti Fatimah Muhd; Lani, Rafidah; Sieng, Khor Chee; Rahim, Nurul Farhana Abdul; Hassan, Habibi; Li-Yen, Chang; AbuBakar, Sazaly; Zandi, Keivan
2014-12-17
West Nile virus (WNV) infection is an emerging zoonotic disease caused by an RNA virus of the genus Flavivirus. WNV is preserved in the environment through cyclic transmission, with mosquitoes, particularly Culex species, serving as a vector, birds as an amplifying host and humans and other mammals as dead-end hosts. To date, no studies have been carried out to determine the prevalence of the WNV antibody in Malaysia. The aim of this study was to screen for the seroprevalence of the WNV in Malaysia's Orang Asli population. Serum samples of 742 Orang Asli were collected in seven states in peninsular Malaysia. The samples were assessed to determine the seroprevalence of WNV immunoglobulin (Ig)G with the WNV IgG enzyme-linked immunosorbent assay (ELISA) method. For each individual, we documented the demographic factors. Anti-dengue and anti-tick-borne encephalitis virus IgG ELISA were also performed to rule out cross reactions. All statistical analyses were performed using GraphPad Prism 6 (GraphPad Software, Inc.); p values of less than 0.05 were considered significant. The serosurvey included 298 men (40.16%) and 444 women (59.84%) of Malaysia's Orang Asli. Anti-WNV IgG was found in 9 of the 742 samples (1.21%). The seroprevalence was 0.67% (2 of 298) in men and 1.58% (7 of 444) in women. The presence of anti-WNV IgG was found not to be associated with gender but did correlate with age. The peak seroprevalence was 2.06% (2 of 97) in individuals between 30 and 42 years of age. No previous studies have examined the seroprevalence of the WNV antibody in the human population in Malaysia, and no clinical reports of infections have been made. Screening for WNV seroprevalence is significant because many risk factors contribute to the presence of WNV in Malaysia, such as the abundance of Culex mosquitoes as the main vector and a high degree of biodiversity, including migratory birds that serve as a reservoir for the virus.
Wang, S C; Ding, M M; Wei, X L; Zhang, T; Yao, F
2016-06-01
To recognize the possibility of Y fragment deletion of the Amelogenin gene intuitively and simply from genotyping graphs. By calculating the ratio of total peak heights in the genotyping graphs, the equilibrium distribution between the Amelogenin and D3S1358 loci, between the Amelogenin X and Y genes, and among different alleles of the D3S1358 locus was analyzed statistically for 1,968 individuals after amplification with the PowerPlex® 21 detection kit. The sum of peak heights of the Amelogenin X allele was not less than 60% of that of the D3S1358 alleles in 90.8% of female samples, and not higher than 70% of that of the D3S1358 alleles in 94.9% of male samples. The genotyping results after amplification with the PowerPlex® 21 detection kit show that the possibility of Y fragment deletion should be considered when only the Amelogenin X gene is detected and its peak height is not higher than 70% of the total peak height of the D3S1358 locus. Copyright© by the Editorial Department of Journal of Forensic Medicine
Xu, Tingting; Cullen, Kathryn R.; Mueller, Bryon; Schreiner, Mindy W.; Lim, Kelvin O.; Schulz, S. Charles; Parhi, Keshab K.
2016-01-01
Borderline personality disorder (BPD) is associated with symptoms such as affect dysregulation, impaired sense of self, and self-harm behaviors. Neuroimaging research on BPD has revealed structural and functional abnormalities in specific brain regions and connections. However, little is known about the topological organizations of brain networks in BPD. We collected resting-state functional magnetic resonance imaging (fMRI) data from 20 patients with BPD and 10 healthy controls, and constructed frequency-specific functional brain networks by correlating wavelet-filtered fMRI signals from 82 cortical and subcortical regions. We employed graph-theory based complex network analysis to investigate the topological properties of the brain networks, and employed network-based statistic to identify functional dysconnections in patients. In the 0.03–0.06 Hz frequency band, compared to controls, patients with BPD showed significantly larger measures of global network topology, including the size of largest connected graph component, clustering coefficient, small-worldness, and local efficiency, indicating increased local cliquishness of the functional brain network. Compared to controls, patients showed lower nodal centrality at several hub nodes but greater centrality at several non-hub nodes in the network. Furthermore, an interconnected subnetwork in 0.03–0.06 Hz frequency band was identified that showed significantly lower connectivity in patients. The links in the subnetwork were mainly long-distance connections between regions located at different lobes; and the mean connectivity of this subnetwork was negatively correlated with the increased global topology measures. Lastly, the key network measures showed high correlations with several clinical symptom scores, and classified BPD patients against healthy controls with high accuracy based on linear discriminant analysis. 
The abnormal topological properties and connectivity found in this study may add new knowledge to the current understanding of functional brain networks in BPD. However, due to limitation of small sample sizes, the results of the current study should be viewed as exploratory and need to be validated on large samples in future works. PMID:26977400
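One of the global measures compared between groups, the local clustering coefficient, can be computed directly from an adjacency structure (toy graph below, not the fMRI-derived networks):

```python
def clustering_coefficient(adj, v):
    """Fraction of a node's neighbor pairs that are themselves linked;
    averaging over nodes gives the network's clustering coefficient."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Triangle a-b-c plus a pendant node d attached to a
adj = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
print(clustering_coefficient(adj, "a"))  # 0.333...: 1 of a's 3 pairs linked
print(clustering_coefficient(adj, "b"))  # 1.0
```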
Solving a Hamiltonian Path Problem with a bacterial computer
Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T
2009-01-01
Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node directed graph. 
This proof-of-concept experiment demonstrates that bacterial computing is a new way to address NP-complete problems using the inherent advantages of genetic systems. The results of our experiments also validate synthetic biology as a valuable approach to biological engineering. We designed and constructed basic parts, devices, and systems using synthetic biology principles of standardization and abstraction. PMID:19630940
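For comparison with the bacterial computer, the same three-node search is trivial in silico by exhaustive enumeration (the edge set below is illustrative, not the exact graph encoded in the plasmids):

```python
from itertools import permutations

def hamiltonian_paths(nodes, edges):
    """All orderings of nodes in which every consecutive pair is a
    directed edge, i.e. every node is visited exactly once."""
    return [p for p in permutations(nodes)
            if all((p[i], p[i + 1]) in edges for i in range(len(p) - 1))]

# A three-node directed graph, like the instance the bacteria searched
nodes = ["1", "2", "3"]
edges = {("1", "2"), ("2", "3"), ("3", "1")}
paths = hamiltonian_paths(nodes, edges)
print(paths)
```

Enumeration scales factorially, which is precisely why NP-complete instances motivate massively parallel substrates such as DNA and bacterial computing.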
Physics Lab Experiments and Correlated Computer Aids. Teacher Edition.
ERIC Educational Resources Information Center
Gottlieb, Herbert H.
Forty-nine physics experiments are included in the teacher's edition of this laboratory manual. Suggestions are given in margins for preparing apparatus, organizing students, and anticipating difficulties likely to be encountered. Sample data, graphs, calculations, and sample answers to leading questions are also given for each experiment. It is…
CLUSTAG: hierarchical clustering and graph methods for selecting tag SNPs.
Ao, S I; Yip, Kevin; Ng, Michael; Cheung, David; Fong, Pui-Yee; Melhado, Ian; Sham, Pak C
2005-04-15
Cluster and set-cover algorithms are developed to obtain a set of tag single nucleotide polymorphisms (SNPs) that can represent all the known SNPs in a chromosomal region, subject to the constraint that all SNPs must have a squared correlation R2>C with at least one tag SNP, where C is specified by the user. http://hkumath.hku.hk/web/link/CLUSTAG/CLUSTAG.html mng@maths.hku.hk.
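The set-cover step can be sketched greedily; the coverage sets below are hypothetical stand-ins for "SNPs correlated at R2 > C with a candidate tag" (not CLUSTAG's actual algorithm or data):

```python
# Hypothetical tag SNP -> set of SNPs it represents at R^2 > C
coverage = {
    "snp1": {"snp1", "snp2", "snp3"},
    "snp2": {"snp1", "snp2"},
    "snp4": {"snp4", "snp5"},
    "snp5": {"snp4", "snp5", "snp6"},
}
universe = {"snp1", "snp2", "snp3", "snp4", "snp5", "snp6"}

# Greedy set cover: repeatedly pick the tag covering the most
# still-uncovered SNPs until every SNP has a representative.
tags, uncovered = [], set(universe)
while uncovered:
    best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
    tags.append(best)
    uncovered -= coverage[best]
print(sorted(tags))  # ['snp1', 'snp5'] covers all six SNPs
```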
NASA Astrophysics Data System (ADS)
Zhou, Zongzheng; Tordesillas, Antoinette
2017-06-01
The underlying microstructure and dynamics of a dense granular material as it evolves towards the "critical state", a limit state in which the system deforms with an essentially constant volume and stress ratio, remains widely debated in the micromechanics of granular media community. Strain localization, a common mechanism in the large strain regime, further complicates the characterization of this limit state. Here we revisit the evolution to this limit state within the framework of modern percolation theory. Attention is paid to motion transfer: in this context, percolation translates to the emergence of a large-scale connectivity in graphs that embody information on individual grain displacements. We construct each graph G(r) by connecting nodes, representing the grains, within a distance r in the displacement-state-space. As r increases, we observe a percolation transition on G(r). The size of the jump discontinuity increases in the lead up to failure, indicating that the nature of percolation transition changes from continuous to explosive. We attribute this to the emergence of collective motion, which manifests in increasingly isolated communities in G(r). At the limit state, where the jump discontinuity is highest and invariant across the different unjamming cycles (drops in stress ratio), G(r) encapsulates multiple kinematically distinct communities that are mediated by nodes corresponding to those grains in the shear band. This finding casts light on the dual and opposing roles of the shear band: a mechanism that creates powder keg divisions in the sample, while simultaneously acting as a mechanical link that transfers motion through such subdivisions moving in relative rigid-body motion.
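The construction of G(r) can be sketched as a distance-threshold graph over displacement vectors, with union-find tracking the largest connected component as r grows (the displacements below are made up; real grain kinematics would replace them):

```python
import math

def largest_component(points, r):
    """Size of the largest connected component of G(r): nodes are
    linked when their displacement vectors are within distance r."""
    n = len(points)
    parent = list(range(n))
    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]; i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= r:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

# Made-up grain displacements: two tight communities far apart
disp = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
for r in (0.05, 0.2, 10.0):
    print(r, largest_component(disp, r))  # 1, then 3, then 5
```

Tracking how abruptly the largest component jumps as r increases is what distinguishes a continuous transition from an explosive one in this framework.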
Time-of-travel data for Nebraska streams, 1968 to 1977
Petri, L.R.
1984-01-01
This report documents the results of 10 time-of-travel studies, using 'dye-tracer' methods, conducted on five streams in Nebraska during the period 1968 to 1977. Streams involved in the studies were the North Platte, North Loup, Elkhorn, and Big Blue Rivers and Salt Creek. Rhodamine WT dye in a 20 percent solution was used as the tracer for all 10 time-of-travel studies. Water samples were collected at several points below each injection site. Concentrations of dye in the samples were measured by determining fluorescence of the sample and comparing that value to fluorescence-concentration curves. Stream discharges were measured before and during each study. Results of each time-of-travel study are shown in two tables and on graphs. The first table shows water discharge at injection and sampling sites, distance between sites, and time and rate of travel of the dye between sites. The second table provides descriptions of study sites, amounts of dye injected in the streams, actual sampling times, and actual concentrations of dye detected. The graphs for each time-of-travel study provide indications of changing travel rates between sampling sites, information on length of dye clouds, and times for dye passage past given points. (USGS)
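The tabulated rate of travel between sites is distance divided by the elapsed time of the dye peak; with hypothetical numbers (not taken from the report):

```python
# Made-up values for two consecutive sampling sites
distance_miles = 12.4     # distance between the sites
peak_at_site1_hr = 3.0    # dye-peak arrival time at upstream site
peak_at_site2_hr = 9.2    # dye-peak arrival time at downstream site

rate_mph = distance_miles / (peak_at_site2_hr - peak_at_site1_hr)
print(round(rate_mph, 2))  # 2.0 miles per hour
```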
Vehicle Technologies’ Fact of the Week 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Stacy Cagle; Diegel, Susan W.; Moore, Sheila A.
2014-04-01
Each week the U.S. Department of Energy’s Vehicle Technology Office (VTO) posts a Fact of the Week on their website: http://www1.eere.energy.gov/vehiclesandfuels/ . These Facts provide statistical information, usually in the form of charts and tables, on vehicle sales, fuel economy, gasoline prices, and other transportation-related trends. Each Fact is a stand-alone page that includes a graph, text explaining the significance of the data, the supporting information on which the graph was based, and the source of the data. A link to the current week’s Fact is available on the VTO homepage, but older Facts are archived and still available at: http://www1.eere.energy.gov/vehiclesandfuels/facts/. This report is a compilation of the Facts that were posted during calendar year 2013. The Facts were written and prepared by staff in Oak Ridge National Laboratory's Center for Transportation Analysis.
Vehicle Technologies' Fact of the Week 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Stacy Cagle; Diegel, Susan W; Moore, Sheila A
2013-02-01
Each week the U.S. Department of Energy's Vehicle Technology Office (VTO) posts a Fact of the Week on their website: http://www1.eere.energy.gov/vehiclesandfuels/ . These Facts provide statistical information, usually in the form of charts and tables, on vehicle sales, fuel economy, gasoline prices, and other transportation-related trends. Each Fact is a stand-alone page that includes a graph, text explaining the significance of the data, the supporting information on which the graph was based, and the source of the data. A link to the current week's Fact is available on the VTO homepage, but older Facts are archived and still available at: http://www1.eere.energy.gov/vehiclesandfuels/facts/. This report is a compilation of the Facts that were posted during calendar year 2012. The Facts were written and prepared by staff in Oak Ridge National Laboratory's Center for Transportation Analysis.
Vehicle Technologies Fact of the Week 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Stacy Cagle; Williams, Susan E.; Moore, Sheila A.
2014-03-01
Each week the U.S. Department of Energy’s Vehicle Technology Office (VTO) posts a Fact of the Week on their website: http://www1.eere.energy.gov/vehiclesandfuels/. These Facts provide statistical information, usually in the form of charts and tables, on vehicle sales, fuel economy, gasoline prices, and other transportation-related trends. Each Fact is a stand-alone page that includes a graph, text explaining the significance of the data, the supporting information on which the graph was based, and the source of the data. A link to the current week’s Fact is available on the VTO homepage, but older Facts are archived and still available at: http://www1.eere.energy.gov/vehiclesandfuels/facts/. This report is a compilation of the Facts that were posted during calendar year 2013. The Facts were written and prepared by staff in Oak Ridge National Laboratory's Center for Transportation Analysis.
Entraining the topology and the dynamics of a network of phase oscillators
NASA Astrophysics Data System (ADS)
Sendiña-Nadal, I.; Leyva, I.; Buldú, J. M.; Almendral, J. A.; Boccaletti, S.
2009-04-01
We show that the topology and dynamics of a network of unsynchronized Kuramoto oscillators can be simultaneously controlled by means of a forcing mechanism which yields a phase locking of the oscillators to that of an external pacemaker, in connection with the reshaping of the network’s degree distribution. The entrainment mechanism is based on the addition, at regular time intervals, of unidirectional links from oscillators that follow the dynamics of a pacemaker to oscillators in the pristine graph whose phases hold a prescribed phase relationship. Such a dynamically based rule in the attachment process leads to the emergence of a power-law shape in the final degree distribution of the graph whenever the network is entrained to the dynamics of the pacemaker. We show that the emergence of a scale-free distribution, in connection with the success of the entrainment process, is a robust feature across different initial network configurations and parameters.
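The entrainment step can be illustrated with a toy numerical sketch. The coupling strength, link-addition schedule, and the "closest current phase" attachment rule below are illustrative assumptions rather than the paper's exact protocol; the code only demonstrates that oscillators receiving unidirectional pacemaker links phase-lock to the pacemaker.

```python
import math, random

def simulate(n=20, k_pace=5.0, dt=0.01, steps=4000, link_every=200, seed=1):
    """Toy pacemaker entrainment: every `link_every` steps, a unidirectional
    link is added from the pacemaker to the free-running oscillator whose
    phase is currently closest to the pacemaker's (an assumed stand-in for
    the paper's prescribed phase-relationship rule)."""
    rng = random.Random(seed)
    omega = [1.0 + rng.uniform(-0.25, 0.25) for _ in range(n)]  # natural frequencies
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    linked = set()
    phi = 0.0  # pacemaker phase; pacemaker frequency is 1.0
    for t in range(steps):
        if t % link_every == 0 and len(linked) < n:
            free = [i for i in range(n) if i not in linked]
            # attach the oscillator closest in phase to the pacemaker
            linked.add(min(free, key=lambda i: abs(math.sin((theta[i] - phi) / 2))))
        for i in range(n):
            drive = k_pace * math.sin(phi - theta[i]) if i in linked else 0.0
            theta[i] += dt * (omega[i] + drive)
        phi += dt * 1.0
    # Kuramoto order parameter of the linked set, measured in the pacemaker frame
    re = sum(math.cos(theta[i] - phi) for i in linked) / len(linked)
    im = sum(math.sin(theta[i] - phi) for i in linked) / len(linked)
    return math.hypot(re, im)
```

With the coupling well above the spread of natural frequencies, each newly linked oscillator locks within a fraction of a period, so the final order parameter is close to 1.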
DiversePathsJ: diverse shortest paths for bioimage analysis.
Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael
2018-02-01
We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not tied to a specific biological problem and can be applied to a wide variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software makes it possible to obtain a collection of diverse shortest paths under user-defined constraints through a convenient and user-friendly interface, and it can be used alone or integrated into larger image analysis pipelines. http://bigwww.epfl.ch/algorithms/diversepathsj. michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
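The diverse-shortest-paths idea can be sketched in miniature. The grid costs, the three-neighbour transition rule, and the greedy "ban cells used by earlier paths" diversity constraint below are simplifying assumptions, not the plugin's actual layered-graph construction:

```python
def best_path(cost, banned=frozenset()):
    """Viterbi-style DP: cheapest top-to-bottom path through a cost grid,
    moving to the same, left, or right column at each row.  `banned`
    holds (row, col) cells excluded to force diversity between solutions."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * cols for _ in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        dp[0][c] = cost[0][c] if (0, c) not in banned else INF
    for r in range(1, rows):
        for c in range(cols):
            if (r, c) in banned:
                continue
            for pc in (c - 1, c, c + 1):
                if 0 <= pc < cols and dp[r - 1][pc] + cost[r][c] < dp[r][c]:
                    dp[r][c] = dp[r - 1][pc] + cost[r][c]
                    back[r][c] = pc
    c = min(range(cols), key=lambda j: dp[rows - 1][j])
    total, path = dp[rows - 1][c], [c]
    for r in range(rows - 1, 0, -1):  # backtrack
        c = back[r][c]
        path.append(c)
    return total, path[::-1]

def diverse_paths(cost, k=2):
    """Greedy diversity: re-solve after banning cells used by earlier paths."""
    banned, out = set(), []
    for _ in range(k):
        total, path = best_path(cost, frozenset(banned))
        out.append((total, path))
        banned.update((r, c) for r, c in enumerate(path))
    return out
```

On a grid with two cheap columns separated by an expensive one, the second call is forced onto the alternative cheap column, which is the essence of a diversity constraint.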
Forecasting Construction Cost Index based on visibility graph: A network approach
NASA Astrophysics Data System (ADS)
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI cause irrational estimations now and then. This paper aims at achieving more accurate predictions of CCI with a network approach in which the time series is first converted into a visibility graph and future values are forecast using link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and forecasts CCI with fewer errors. The results indicate that the proposed method can efficiently provide considerably accurate CCI predictions, contributing to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
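The conversion step this approach rests on is the natural visibility graph (Lacasa et al., 2008): two data points are linked when the straight line between them clears every intermediate point. A minimal quadratic-time sketch, without the subsequent link-prediction forecasting stage:

```python
def visibility_graph(series):
    """Natural visibility graph: points (a, ya) and (b, yb) are linked iff
    every intermediate point (c, yc) lies strictly below the straight line
    joining them:  yc < yb + (ya - yb) * (b - c) / (b - a)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(series[c] < series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

Adjacent points always see each other, so every time series maps to a connected graph; local maxima become hubs, which is what makes the graph's structure informative for forecasting.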
Dynamics for a 2-vertex quantum gravity model
NASA Astrophysics Data System (ADS)
Borja, Enrique F.; Díaz-Polo, Jacobo; Garay, Iñaki; Livine, Etera R.
2010-12-01
We use the recently introduced U(N) framework for loop quantum gravity to study the dynamics of spin network states on the simplest class of graphs: two vertices linked with an arbitrary number N of edges. Such graphs represent two regions, in and out, separated by a boundary surface. We study the algebraic structure of the Hilbert space of spin networks from the U(N) perspective. In particular, we describe the algebra of operators acting on that space and discuss their relation to the standard holonomy operator of loop quantum gravity. Furthermore, we show that it is possible to make the restriction to the isotropic/homogeneous sector of the model by imposing the invariance under a global U(N) symmetry. We then propose a U(N)-invariant Hamiltonian operator and study the induced dynamics. Finally, we explore the analogies between this model and loop quantum cosmology and sketch some possible generalizations of it.
NASA Astrophysics Data System (ADS)
Indelicato, G.; Burkhard, P.; Twarock, R.
2017-04-01
We introduce here a mathematical procedure for the structural classification of a specific class of self-assembling protein nanoparticles (SAPNs) that are used as a platform for repetitive antigen display systems. These SAPNs have distinctive geometries as a consequence of the fact that their peptide building blocks are formed from two linked coiled coils that are designed to assemble into trimeric and pentameric clusters. This allows a mathematical description of particle architectures in terms of bipartite (3,5)-regular graphs. Exploiting the relation with fullerene graphs, we provide a complete atlas of SAPN morphologies. The classification enables a detailed understanding of the spectrum of possible particle geometries that can arise in the self-assembly process. Moreover, it provides a toolkit for a systematic exploitation of SAPNs in bioengineering in the context of vaccine design, predicting the density of B-cell epitopes on the SAPN surface, which is critical for a strong humoral immune response.
Generalised power graph compression reveals dominant relationship patterns in complex networks
Ahnert, Sebastian E.
2014-01-01
We introduce a framework for the discovery of dominant relationship patterns in complex networks, by compressing the networks into power graphs with overlapping power nodes. When paired with enrichment analysis of node classification terms, the most compressible sets of edges provide a highly informative sketch of the dominant relationship patterns that define the network. In addition, this procedure also gives rise to a novel, link-based definition of overlapping node communities in which nodes are defined by their relationships with sets of other nodes, rather than through connections within the community. We show that this completely general approach can be applied to undirected, directed, and bipartite networks, yielding valuable insights into the large-scale structure of real-world networks, including social networks and food webs. Our approach therefore provides a novel way in which network architecture can be studied, defined and classified. PMID:24663099
Flexibility and rigidity of cross-linked Straight Fibrils under axial motion constraints.
Nagy Kem, Gyula
2016-09-01
Straight fibrils are stiff, rod-like filaments that play a significant role in cellular processes such as structural stability and intracellular transport. Introducing a 3D mechanical model for the motion of braced cylindrical fibrils under axial motion constraints, we provide a mechanism and a graph-theoretical model for fibril structures and characterize the flexibility and rigidity of this bar-and-joint spatial framework. The connectedness and circuits of the bracing graph characterize the flexibility of these structures. In this paper, we focus on the kinematical properties of hierarchical levels of fibrils and evaluate the number of bracing elements required for rigidity, along with the computational complexity of determining it. The presented model characterizes well the frameworks of bio-fibrils, such as microtubules and cellulose, which inspired this work. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rolling Deck to Repository (R2R): Linking and Integrating Data for Oceanographic Research
NASA Astrophysics Data System (ADS)
Arko, R. A.; Chandler, C. L.; Clark, P. D.; Shepherd, A.; Moore, C.
2012-12-01
The Rolling Deck to Repository (R2R) program is developing infrastructure to ensure the underway sensor data from NSF-supported oceanographic research vessels are routinely and consistently documented, preserved in long-term archives, and disseminated to the science community. We have published the entire R2R Catalog as a Linked Data collection, making it easily accessible to encourage linking and integration with data at other repositories. We are developing the R2R Linked Data collection with specific goals in mind: 1.) We facilitate data access and reuse by providing the richest possible collection of resources to describe vessels, cruises, instruments, and datasets from the U.S. academic fleet, including data quality assessment results and clean trackline navigation. We are leveraging or adopting existing community-standard concepts and vocabularies, particularly concepts from the Biological and Chemical Oceanography Data Management Office (BCO-DMO) ontology and terms from the pan-European SeaDataNet vocabularies, and continually re-publish resources as new concepts and terms are mapped. 2.) We facilitate data citation through the entire data lifecycle from field acquisition to shoreside archiving to (ultimately) global syntheses and journal articles. We are implementing globally unique and persistent identifiers at the collection, dataset, and granule levels, and encoding these citable identifiers directly into the Linked Data resources. 3.) We facilitate linking and integration with other repositories that publish Linked Data collections for the U.S. academic fleet, such as BCO-DMO and the Index to Marine and Lacustrine Geological Samples (IMLGS). We are initially mapping datasets at the resource level, and plan to eventually implement rule-based mapping at the concept level. We work collaboratively with partner repositories to develop best practices for URI patterns and consensus on shared vocabularies. 
The R2R Linked Data collection is implemented as a lightweight "virtual RDF graph" generated on-the-fly from our SQL database using the D2RQ (http://d2rq.org) package. In addition to the default SPARQL endpoint for programmatic access, we are developing a Web-based interface from open-source software components that offers user-friendly browse and search.
Network analysis of mesoscale optical recordings to assess regional, functional connectivity.
Lim, Diana H; LeDue, Jeffrey M; Murphy, Timothy H
2015-10-01
With modern optical imaging methods, it is possible to map structural and functional connectivity. Optical imaging studies that aim to describe large-scale neural connectivity often need to handle large and complex datasets. In order to interpret these datasets, new methods for analyzing structural and functional connectivity are being developed. Recently, network analysis, based on graph theory, has been used to describe and quantify brain connectivity in both experimental and clinical studies. We outline how to apply regional, functional network analysis to mesoscale optical imaging using voltage-sensitive-dye imaging and channelrhodopsin-2 stimulation in a mouse model. We include links to sample datasets and an analysis script. The analyses we employ can be applied to other types of fluorescence wide-field imaging, including genetically encoded calcium indicators, to assess network properties. We discuss the benefits and limitations of using network analysis for interpreting optical imaging data and define network properties that may be used to compare across preparations or other manipulations such as animal models of disease.
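As a minimal sketch of this kind of regional, functional network analysis, the snippet below thresholds an inter-regional correlation matrix into an undirected graph and computes two standard graph-theory metrics, degree and local clustering. The threshold value and the toy matrix in the test are assumptions for illustration, not values from the study:

```python
def functional_network(corr, threshold):
    """Build an undirected graph from a regional correlation matrix by
    keeping edges with correlation >= threshold, then report each node's
    degree and local clustering coefficient."""
    n = len(corr)
    adj = [[i != j and corr[i][j] >= threshold for j in range(n)] for i in range(n)]
    degree = [sum(row) for row in adj]

    def clustering(i):
        # fraction of neighbour pairs of node i that are themselves linked
        nbrs = [j for j in range(n) if adj[i][j]]
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if adj[nbrs[a]][nbrs[b]])
        return 2.0 * links / (k * (k - 1))

    return degree, [clustering(i) for i in range(n)]
```

Such metrics are what allow network properties to be compared across preparations or disease models, as the abstract describes.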
Fresh broad (Vicia faba) tissue homogenate-based biosensor for determination of phenolic compounds.
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2014-08-01
In this study, a novel fresh broad (Vicia faba) tissue homogenate-based biosensor for the determination of phenolic compounds was developed. The biosensor was constructed by immobilizing tissue homogenate of fresh broad (Vicia faba) onto a glassy carbon electrode. For the stability of the biosensor, general immobilization techniques were used to secure the fresh broad tissue homogenate in a gelatin-glutaraldehyde cross-linking matrix. In the optimization and characterization studies, the amounts of fresh broad tissue homogenate and gelatin, the glutaraldehyde percentage, optimum pH, optimum temperature, optimum buffer concentration, thermal stability, interference effects, linear range, storage stability, repeatability and sample applications (wine, beer, fruit juices) were investigated. In addition, the detection ranges of thirteen phenolic compounds were obtained from the calibration graphs. A typical calibration curve for the sensor revealed a linear range of 5-60 μM catechol. In reproducibility studies, the coefficient of variation (CV) and standard deviation (SD) were calculated as 1.59% and 0.64×10(-3) μM, respectively.
16 CFR 2.7 - Compulsory process in investigations.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., graphs, charts, photographs, sound recordings, images and other data or data compilations stored in any... other tangible things, for inspection, copying, testing, or sampling. (j) Manner and form of production...
16 CFR 2.7 - Compulsory process in investigations.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., graphs, charts, photographs, sound recordings, images and other data or data compilations stored in any... other tangible things, for inspection, copying, testing, or sampling. (j) Manner and form of production...
Graph Theory Approach for Studying Food Webs
NASA Astrophysics Data System (ADS)
Longjas, A.; Tejedor, A.; Foufoula-Georgiou, E.
2017-12-01
Food webs are complex networks of feeding interactions among species in ecological communities. Metrics describing food web structure have been proposed to compare and classify food webs, ranging from food chain length, connectance, degree distribution, and centrality measures to the presence of motifs (distinct compartments), among others. However, formal methodologies for studying both food web topology and the dynamic processes operating on them are still lacking. Here, we utilize a quantitative framework using graph theory within which a food web is represented by a directed graph, i.e., a collection of vertices (species, or trophic species defined as sets of species sharing the same predators and prey) and directed edges (predation links). This framework allows us to identify apex (environmental "source" node) to outlet (top predators) subnetworks and compute the steady-state flux (e.g., carbon, nutrients, or energy) in the food web. We use this framework to (1) construct vulnerability maps that quantify the relative change of flux delivery to the top predators in response to perturbations in prey species, (2) identify keystone species, whose loss would precipitate further species extinction, and (3) introduce a suite of graph-theoretic metrics to quantify the topologic (imposed by food web connectivity) and dynamic (dictated by the flux partitioning and distribution) components of a food web's complexity. By projecting food webs into a 2D Topodynamic Complexity Space whose coordinates are given by the number of alternative paths (topologic) and the Leakage Index (dynamic), we show that this space provides a basis for food web comparison and offers physical insights into their dynamic behavior.
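Two of the quantities above, the number of alternative apex-to-outlet paths and a steady-state flux, can be sketched on a toy food web represented as a directed acyclic graph. The equal-split partitioning rule and the example web in the test are illustrative assumptions, not the authors' exact flux model:

```python
from collections import deque

def topo_order(graph):
    """Kahn's algorithm; `graph` maps each node to its list of successors."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    indeg = {n: 0 for n in nodes}
    for vs in graph.values():
        for v in vs:
            indeg[v] += 1
    q = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while q:
        n = q.popleft()
        order.append(n)
        for v in graph.get(n, ()):
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order

def count_paths(graph, src, dst):
    """Number of distinct directed paths src -> dst in an acyclic web
    (the 'alternative paths' coordinate of the complexity space)."""
    order = topo_order(graph)
    count = {n: 0 for n in order}
    count[src] = 1
    for n in order:
        for v in graph.get(n, ()):
            count[v] += count[n]
    return count[dst]

def steady_flux(graph, src, inflow=1.0):
    """Flux reaching each node when every node splits its inflow equally
    among its out-links (a simple stand-in for flux partitioning)."""
    order = topo_order(graph)
    flux = {n: 0.0 for n in order}
    flux[src] = inflow
    for n in order:
        succ = graph.get(n, [])
        for v in succ:
            flux[v] += flux[n] / len(succ)
    return flux
```

Because the equal-split rule conserves mass, all flux injected at the environmental source node eventually reaches the top predator in this closed toy web.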
Groupwise Image Registration Guided by a Dynamic Digraph of Images.
Tang, Zhenyu; Fan, Yong
2016-04-01
For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
Networks of genetic loci and the scientific literature
NASA Astrophysics Data System (ADS)
Semeiks, J. R.; Grate, L. R.; Mian, I. S.
This work considers biological information graphs, networks in which nodes correspond to genetic loci (or "genes") and an (undirected) edge signifies that two genes are discussed in the same article(s) in the scientific literature ("documents"). Operations that utilize the topology of these graphs can assist researchers in the scientific discovery process. For example, a shortest path between two nodes defines an ordered series of genes and documents that can be used to explore the relationship(s) between genes of interest. This work (i) describes how topologies in which edges are likely to reflect genuine relationship(s) can be constructed from human-curated corpora of genes annotated with documents (or vice versa), and (ii) illustrates the potential of biological information graphs in synthesizing knowledge in order to formulate new hypotheses and generate novel predictions for subsequent experimental study. In particular, the well-known LocusLink corpus is used to construct a biological information graph consisting of 10,297 nodes and 21,910 edges. The large-scale statistical properties of this gene-document network suggest that it is a new example of a power-law network. The segregation of genes on the basis of species and encoded protein molecular function indicates the presence of assortativity, the preference for nodes with similar attributes to be neighbors in a network. The practical utility of a gene-document network is illustrated by using measures such as shortest paths and centrality to analyze a subset of nodes corresponding to genes implicated in aging. Each release of a curated biomedical corpus defines a particular static graph. The topology of a gene-document network changes over time as curators add and/or remove nodes and/or edges. Such a dynamic, evolving corpus provides both the foundation for analyzing the growth and behavior of large complex networks and a substrate for examining trends in biological research.
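The shortest-path operation described above reduces to breadth-first search on the gene graph. A minimal sketch; the gene names in the test are hypothetical illustrations, not taken from the LocusLink corpus:

```python
from collections import deque

def shortest_path(adj, start, goal):
    """Breadth-first search in an undirected gene-document-style graph;
    `adj` maps each node to the set of its neighbours.  Returns the list
    of nodes on one shortest path, or None if the nodes are disconnected."""
    parent = {start: None}
    q = deque([start])
    while q:
        node = q.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk parents back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in adj.get(node, ()):
            if nb not in parent:
                parent[nb] = node
                q.append(nb)
    return None
```

In a real gene-document network, the intermediate nodes on such a path name the genes (and, via edge annotations, the documents) that chain two genes of interest together.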
Automatic lung nodule graph cuts segmentation with deep learning false positive reduction
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei
2017-03-01
To automatically detect lung nodules from CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnostic consensus among all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. With false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false-positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
Tracking Research Data Footprints via Integration with Research Graph
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Wang, J.; Aryani, A.; Conlon, M.; Wyborn, L. A.; Choudhury, S. A.
2017-12-01
The researcher of today is likely to be part of a team that will use subsets of data from at least one, if not more, external repositories, and that same data could be used by multiple researchers for many different purposes. At best, the repositories that host this data will know who is accessing their data, but rarely what they are using it for. As a result, the funders of data-collecting programs and the data repositories that store the data are unlikely to know: 1) which research funding contributed to the collection and preservation of a dataset, and 2) which data contributed to high-impact research and publications. In times of funding shortages, there is a growing need to be able to trace the footprint of a data set from the originator that collected the data to the repository that stores the data and ultimately to any derived publications. The Research Data Alliance's Data Description Registry Interoperability Working Group (DDRIWG) has addressed this problem through the development of a distributed graph, called Research Graph, that can map each piece of the research interaction puzzle by building aggregated graphs. It can connect datasets on the basis of co-authorship or other collaboration models such as joint funding and grants, and can connect research datasets, publications, grants and researcher profiles across research repositories and infrastructures such as DataCite and ORCID. National Computational Infrastructure (NCI) in Australia is one of the early adopters of Research Graph. The graphic view and quantitative analysis help NCI track the usage of their national reference data collections, thus quantifying the role that these NCI-hosted data assets play within the funding-researcher-data-publication cycle. The graph can unlock the complex interactions of the research projects by tracking the contribution of datasets, the various funding bodies and the downstream data users.
The RMap Project is a similar initiative which aims to capture complex relationships among scholarly publications and their underlying data, including IEEE publications. It is hoped that RMap and Research Graph will be combined in the near future, and that physical samples will be added to Research Graph.
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2015-01-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM’s expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910
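The projectivity condition at issue can be checked exactly on tiny graphs by brute-force enumeration: under an edge-only ERGM the marginal probability that a fixed pair is linked is the same whether we model 2 or 3 nodes, while adding a triangle term breaks this agreement. The parameter values below are arbitrary illustrations:

```python
import math
from itertools import product

def ergm_edge_marginal(n, theta_edge, theta_tri):
    """Exact marginal probability that one fixed pair is linked under an
    edge+triangle ERGM on n nodes, by enumerating all 2^(n choose 2) graphs."""
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    z = hit = 0.0
    for bits in product((0, 1), repeat=len(pairs)):
        present = {p for p, b in zip(pairs, bits) if b}
        edges = len(present)
        tris = sum(1 for a in range(n) for b in range(a + 1, n)
                   for c in range(b + 1, n)
                   if {(a, b), (a, c), (b, c)} <= present)
        w = math.exp(theta_edge * edges + theta_tri * tris)  # unnormalized weight
        z += w
        if pairs[0] in present:
            hit += w
    return hit / z
```

With the triangle parameter at zero the model is an independent Bernoulli graph, so the marginal does not depend on n (a projective family); with a nonzero triangle parameter the 2-node and 3-node models disagree about the very same dyad, which is the consistency failure the paper analyzes.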
NASA Astrophysics Data System (ADS)
Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin
2011-03-01
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution function (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
Yadage and Packtivity - analysis preservation using parametrized workflows
NASA Astrophysics Data System (ADS)
Cranmer, Kyle; Heinrich, Lukas
2017-10-01
Preserving data analyses produced by the collaborations at LHC in a parametrized fashion is crucial in order to maintain reproducibility and re-usability. We argue for a declarative description in terms of individual processing steps - “packtivities” - linked through a dynamic directed acyclic graph (DAG) and present an initial set of JSON schemas for such a description and an implementation - “yadage” - capable of executing workflows of analysis preserved via Linux containers.
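The core execution model, processing steps linked through a DAG with each step consuming its dependencies' outputs, can be sketched in a few lines. The step names and the scheduling loop below are illustrative assumptions, not yadage's actual JSON-schema-driven implementation (which also runs each packtivity in a Linux container):

```python
def run_workflow(steps, deps):
    """Execute parametrized steps ('packtivities') in an order consistent
    with a directed acyclic dependency graph.  `steps` maps a name to a
    callable taking a dict of its dependencies' outputs; `deps` maps a
    name to the list of step names it depends on."""
    done = {}
    remaining = set(steps)
    while remaining:
        ready = [s for s in remaining if all(d in done for d in deps.get(s, ()))]
        if not ready:
            raise ValueError("cycle in workflow DAG")
        for s in ready:
            inputs = {d: done[d] for d in deps.get(s, ())}
            done[s] = steps[s](inputs)
            remaining.remove(s)
    return done
```

Because each step is a pure function of its declared inputs, the whole analysis can be re-run with different parameters, which is the reproducibility property the abstract argues for.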
A Survey of Visualization Tools Assessed for Anomaly-Based Intrusion Detection Analysis
2014-04-01
objective? • What vulnerabilities exist in the target system? • What damage or other consequences are likely? • What exploit scripts or other attack...languages C, R, and Python; no response capabilities. JUNG https://blogs.reucon.com/asterisk-java/tag/visualization/ Create custom layouts and can...annotate graphs, links, nodes with any Java data type. Must be familiar with coding in Java to call the routines; no monitoring or response
KOJAK: Scalable Semantic Link Discovery Via Integrated Knowledge-Based and Statistical Reasoning
2006-11-01
program can find interesting connections in a network without having to learn the patterns of interestingness beforehand. The key advantage of our...Interesting Instances in Semantic Graphs Below we describe how the UNICORN framework can discover interesting instances in a multi-relational dataset...We can now describe how UNICORN solves the first problem of finding the top interesting nodes in a semantic net by ranking them according to
2016-06-22
this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi...exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between
NASA Astrophysics Data System (ADS)
Keller, Stacy Kathryn
This study examined how intermediate elementary students' mathematics and science background knowledge affected their interpretation of line graphs and how their interpretations were affected by graph question levels. A purposive sample of 14 6th-grade students engaged in think aloud interviews (Ericsson & Simon, 1993) while completing an excerpted Test of Graphing in Science (TOGS) (McKenzie & Padilla, 1986). Hand gestures were video recorded. Student performance on the TOGS was assessed using an assessment rubric created from previously cited factors affecting students' graphing ability. Factors were categorized using Bertin's (1983) three graph question levels. The assessment rubric was validated by Padilla and a veteran mathematics and science teacher. Observational notes were also collected. Data were analyzed using Roth and Bowen's semiotic process of reading graphs (2001). Key findings from this analysis included differences in the use of heuristics, self-generated questions, science knowledge, and self-motivation. Students with higher prior achievement used a greater number and variety of heuristics and more often chose appropriate heuristics. They also monitored their understanding of the question and the adequacy of their strategy and answer by asking themselves questions. Most used their science knowledge spontaneously to check their understanding of the question and the adequacy of their answers. Students with lower and moderate prior achievement favored one heuristic even when it was not useful for answering the question and rarely asked their own questions. In some cases, if students with lower prior achievement had thought about their answers in the context of their science knowledge, they would have been able to recognize their errors. One student with lower prior achievement motivated herself when she thought the questions were too difficult. 
In addition, students answered the TOGS in one of three ways: as mathematics word problems, as science data to be analyzed, or, when confused, by guessing. A second set of findings corroborated how science background knowledge affected graph interpretation: correct science knowledge supported students' reasoning but was not necessary to answer any question correctly; correct science knowledge could not compensate for incomplete mathematics knowledge; and incorrect science knowledge often distracted students when they tried to use it while answering a question. Finally, using Roth and Bowen's (2001) two-stage semiotic model of reading graphs, representative vignettes showed emerging patterns from the study. This study added to our understanding of the role of science content knowledge during line graph interpretation, highlighted the importance of heuristics and mathematics procedural knowledge, and documented the importance of perceptual attention, motivation, and students' self-generated questions. Recommendations were made for future research in line graph interpretation in mathematics and science education and for improving instruction in this area.
Percolation threshold determines the optimal population density for public cooperation
NASA Astrophysics Data System (ADS)
Wang, Zhen; Szolnoki, Attila; Perc, Matjaž
2012-03-01
While worldwide census data provide statistical evidence that firmly link the population density with several indicators of social welfare, the precise mechanisms underlying these observations are largely unknown. Here we study the impact of population density on the evolution of public cooperation in structured populations and find that the optimal density is uniquely related to the percolation threshold of the host graph irrespective of its topological details. We explain our observations by showing that spatial reciprocity peaks in the vicinity of the percolation threshold, when the emergence of a giant cooperative cluster is hindered neither by vacancy nor by invading defectors, thus discovering an intuitive yet universal law that links the population density with social prosperity.
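The role of the percolation threshold can be made concrete with a toy simulation (not from the paper): the sketch below estimates the probability that a random n × n square lattice of open sites spans from top to bottom, using a union-find structure with virtual top and bottom nodes. The spanning probability jumps sharply near the known site threshold p_c ≈ 0.593.

```python
import random

def percolates(n, p, rng):
    """Check whether an n x n site-percolation sample spans top to bottom."""
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n + 2))       # two virtual nodes: TOP and BOT
    TOP, BOT = n * n, n * n + 1

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            if not open_site[i][j]:
                continue
            idx = i * n + j
            if i == 0: union(idx, TOP)                        # touches top row
            if i == n - 1: union(idx, BOT)                    # touches bottom row
            if i > 0 and open_site[i - 1][j]: union(idx, (i - 1) * n + j)
            if j > 0 and open_site[i][j - 1]: union(idx, i * n + j - 1)
    return find(TOP) == find(BOT)

rng = random.Random(42)
for p in (0.4, 0.59, 0.8):
    hits = sum(percolates(32, p, rng) for _ in range(50))
    print(p, hits / 50)   # near 0 below the threshold, near 1 above it
```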
Acar, Evrim; Plopper, George E.; Yener, Bülent
2012-01-01
The structure/function relationship is fundamental to our understanding of biological systems at all levels, and drives most, if not all, techniques for detecting, diagnosing, and treating disease. However, at the tissue level of biological complexity we encounter a gap in the structure/function relationship: having accumulated an extraordinary amount of detailed information about biological tissues at the cellular and subcellular level, we cannot assemble it in a way that explains the correspondingly complex biological functions these structures perform. To help close this information gap we define here several quantitative temperospatial features that link tissue structure to its corresponding biological function. Both histological images of human tissue samples and fluorescence images of three-dimensional cultures of human cells are used to compare the accuracy of in vitro culture models with their corresponding human tissues. To the best of our knowledge, there is no prior work on a quantitative comparison of histology and in vitro samples. Features are calculated from graph theoretical representations of tissue structures and the data are analyzed in the form of matrices and higher-order tensors using matrix and tensor factorization methods, with a goal of differentiating between cancerous and healthy states of brain, breast, and bone tissues. We also show that our techniques can differentiate between the structural organization of native tissues and their corresponding in vitro engineered cell culture models. PMID:22479315
DGEM--a microarray gene expression database for primary human disease tissues.
Xia, Yuni; Campen, Andrew; Rigsby, Dan; Guo, Ying; Feng, Xingdong; Su, Eric W; Palakal, Mathew; Li, Shuyu
2007-01-01
Gene expression patterns can reflect gene regulations in human tissues under normal or pathologic conditions. Gene expression profiling data from studies of primary human disease samples are particularly valuable since these studies often span many years in order to collect patient clinical information and achieve a large sample size. Disease-to-Gene Expression Mapper (DGEM) provides a beneficial community resource to access and analyze these data; it currently includes Affymetrix oligonucleotide array datasets for more than 40 human diseases and 1400 samples. The data are normalized to the same scale and stored in a relational database. A statistical-analysis pipeline was implemented to identify genes abnormally expressed in disease tissues or genes whose expressions are associated with clinical parameters such as cancer patient survival. Data-mining results can be queried through a web-based interface at http://dgem.dhcp.iupui.edu/. The query tool enables dynamic generation of graphs and tables that are further linked to major gene and pathway resources that connect the data to relevant biology, including Entrez Gene and Kyoto Encyclopedia of Genes and Genomes (KEGG). In summary, DGEM provides scientists and physicians a valuable tool to study disease mechanisms, to discover potential disease biomarkers for diagnosis and prognosis, and to identify novel gene targets for drug discovery. The source code is freely available for non-profit use, on request to the authors.
Exploring variation-aware contig graphs for (comparative) metagenomics using MaryGold
Nijkamp, Jurgen F.; Pop, Mihai; Reinders, Marcel J. T.; de Ridder, Dick
2013-01-01
Motivation: Although many tools are available to study variation and its impact in single genomes, there is a lack of algorithms for finding such variation in metagenomes. This hampers the interpretation of metagenomics sequencing datasets, which are increasingly acquired in research on the (human) microbiome, in environmental studies and in the study of processes in the production of foods and beverages. Existing algorithms often depend on the use of reference genomes, which pose a problem when a metagenome of a priori unknown strain composition is studied. In this article, we develop a method to perform reference-free detection and visual exploration of genomic variation, both within a single metagenome and between metagenomes. Results: We present the MaryGold algorithm and its implementation, which efficiently detects bubble structures in contig graphs using graph decomposition. These bubbles represent variable genomic regions in closely related strains in metagenomic samples. The variation found is presented in a condensed Circos-based visualization, which allows for easy exploration and interpretation of the found variation. We validated the algorithm on two simulated datasets containing three and seven Escherichia coli genomes, respectively, and showed that finding allelic variation in these genomes improves assemblies. Additionally, we applied MaryGold to publicly available real metagenomic datasets, enabling us to find within-sample genomic variation in the metagenomes of a kimchi fermentation process, the microbiome of a premature infant, and microbial communities living on acid mine drainage. Moreover, we used MaryGold for between-sample variation detection and exploration by comparing sequencing data sampled at different time points for both of these datasets. Availability: MaryGold has been written in C++ and Python and can be downloaded from http://bioinformatics.tudelft.nl/software Contact: d.deridder@tudelft.nl PMID:24058058
NASA Astrophysics Data System (ADS)
Arosio, Marcello; Martina, Mario L. V.
2017-04-01
In recent years, the relations and interactions among the multi-hazard, vulnerability, exposure, and resilience spheres have attracted increasing attention, and the scientific community has recognized that they are highly dynamic, complex, and interconnected. Traditional approaches define risk as the potential economic, social, and environmental consequences of a hazardous phenomenon in a specific period. Although there have been major improvements in recent years, there are still limitations in terms of a holistic approach able to capture the emergent value hidden in the relations and interactions between the different spheres. Furthermore, the emergent behaviour of a society makes the collective risk greater than the sum of its parts, which calls for a holistic, systematic, and integrated approach. For this reason, it is important to consider the connections between elements in order to properly assess the vulnerability of systems. In a system (e.g., a road, hospital, or ecological network) or in a system of systems (e.g., a socio-technical urban service), there are critical elements that, beyond their intrinsic vulnerability, can be characterized by greater or lower vulnerability because of their physical, geographical, cyber, or logical connections. To understand the system's response to a perturbation, and therefore its resilience, it is necessary not only to represent but also to quantify the relative importance of the elements and their interconnections. To this aim, we propose an innovative approach to natural risk assessment based on the properties of a graph G=(N,L). A graph consists of two sets, N (nodes) and L (links): the nodes represent the single elements (physical, social, environmental, etc.) exposed to a hazard, while the links (or connections) represent the interactions between the elements. This approach shifts risk assessment to a new perspective: from reductionist to holistic.
The final goal is to provide insight into how to quantify integrated collective vulnerability, resilience, and risk.
Overlapping communities from dense disjoint and high total degree clusters
NASA Astrophysics Data System (ADS)
Zhang, Hongli; Gao, Yang; Zhang, Yue
2018-04-01
Communities play an important role in sociology, biology, and especially in computer science, where systems are often represented as networks, making community detection a problem of great importance. A community is a dense subgraph of the whole graph, with more links among its members than between its members and outside nodes; nodes in the same community are likely to share common properties or play similar roles in the graph. Communities overlap when nodes in a graph belong to multiple communities. A wide variety of overlapping community detection methods have been proposed in the literature, and local expansion is among the most successful techniques for dealing with large networks. This paper presents a density-based seeding method in which dense disjoint local clusters are searched for and selected as seeds. The proposed method selects a seed by the total degree and density of local clusters, using only local structures of the network. Furthermore, this paper proposes a novel community refining phase that minimizes the conductance of each community, through which the quality of identified communities is greatly improved in linear time. Experimental results on synthetic networks show that the proposed seeding method outperforms other state-of-the-art seeding methods and that the proposed refining method greatly enhances the quality of the identified communities. Experimental results on real graphs with ground-truth communities show that the proposed approach outperforms other state-of-the-art overlapping community detection algorithms; in particular, it is more than two orders of magnitude faster than existing global algorithms while achieving higher quality, and it obtains much more accurate community structure than current local algorithms without any a priori information.
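The refining phase above hinges on conductance: the fraction of a community's edge volume that crosses its boundary. A minimal sketch on a hypothetical adjacency-list graph (the paper's exact formulation may differ):

```python
def conductance(adj, community):
    """Conductance of a node set: cut edges / min(volume inside, volume outside)."""
    comm = set(community)
    cut = sum(1 for u in comm for v in adj[u] if v not in comm)   # boundary edges
    vol_in = sum(len(adj[u]) for u in comm)                       # degree sum inside
    vol_out = sum(len(adj[u]) for u in adj if u not in comm)      # degree sum outside
    denom = min(vol_in, vol_out)
    return cut / denom if denom else 0.0

# Two triangles joined by a single bridge edge: a natural two-community graph.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
print(conductance(adj, {0, 1, 2}))   # 1 cut edge / volume 7 = 1/7
```

Lower conductance means a better-separated community, which is why the refining phase minimizes it.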
MMKG: An approach to generate metallic materials knowledge graph based on DBpedia and Wikipedia
NASA Astrophysics Data System (ADS)
Zhang, Xiaoming; Liu, Xin; Li, Xin; Pan, Dongyu
2017-02-01
The research and development of metallic materials play an important role in today's society, and meanwhile a great deal of metallic materials knowledge is generated and made available on the Web (e.g., Wikipedia) for materials experts. However, due to the diversity and complexity of metallic materials knowledge, utilizing this knowledge can be inconvenient. The idea of a knowledge graph (e.g., DBpedia) provides a good way to organize knowledge into a comprehensive entity network. Therefore, the motivation of our work is to generate a metallic materials knowledge graph (MMKG) using knowledge available on the Web. In this paper, an approach is proposed to build the MMKG based on DBpedia and Wikipedia. First, we use an algorithm based on directly linked sub-graph semantic distance (DLSSD) to preliminarily extract metallic materials entities from DBpedia according to predefined seed entities; then, based on the results of the preliminary extraction, we use an algorithm that considers both semantic distance and string similarity (SDSS) to achieve further extraction. Second, due to the absence of materials properties in DBpedia, we use an ontology-based method to extract property knowledge from the HTML tables of corresponding Wikipedia pages to enrich the MMKG. A materials ontology is used to locate materials property tables as well as to identify the structure of the tables. The proposed approach is evaluated by precision, recall, F1, and time performance, and the appropriate thresholds for the algorithms are determined through experiments. The experimental results show that our approach achieves the expected performance. A tool prototype has also been designed to facilitate the process of building the MMKG and to demonstrate the effectiveness of our approach.
NASA Astrophysics Data System (ADS)
Tejedor, A.; Longjas, A.; Foufoula-Georgiou, E.
2017-12-01
Previous work [e.g. Tejedor et al., 2016 - GRL] has demonstrated the potential of using graph theory to study key properties of the structure and dynamics of river delta channel networks. Although the distribution of fluxes in river deltas is mostly driven by the connectivity of the channel network, a significant part of the fluxes might also arise from connectivity between the channels and islands due to overland flow and seepage. This channel-island-subsurface interaction creates connectivity pathways that facilitate or inhibit transport depending on their degree of coupling. The question we pose here is how to collectively study system connectivity that emerges from the aggregated action of different processes (different in nature, intensity, and time scales). Single-layer graphs such as those introduced for delta channel networks are inadequate, as they lack the ability to represent coupled processes, and neglecting across-process interactions can lead to misrepresentation of the overall system dynamics. We present here a framework that generalizes the traditional representation of networks (single-layer graphs) to so-called multi-layer networks, or multiplexes. A multi-layer network conceptualizes the overall connectivity arising from different processes as distinct graphs (layers), while at the same time allowing interactions between layers to be represented by introducing interlayer links (across-process interactions). We illustrate this framework with a study of the joint connectivity that arises from the coupling of confined flow on the channel network and overland flow on islands in a prototype delta. We show the potential of the multi-layer framework to answer quantitative questions about the characteristic time scales to steady-state transport in the system as a whole when different levels of channel-island coupling are modulated by different magnitudes of discharge rates.
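The multiplex idea can be sketched with a supra-adjacency matrix: stack each layer's adjacency matrix on the diagonal and place interlayer coupling links off-diagonal. The toy example below (made-up values, not the paper's delta data) builds a two-layer, three-node multiplex and inspects the supra-Laplacian spectrum, whose second eigenvalue (algebraic connectivity) governs the time scale to steady state.

```python
import numpy as np

# Two layers over the same 3 nodes (e.g. channel flow vs. overland flow),
# coupled node-to-node by interlayer links of strength omega.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # layer 1 adjacency
A2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], float)   # layer 2 adjacency
omega = 0.5                                               # interlayer coupling

I = np.eye(3)
supra = np.block([[A1, omega * I], [omega * I, A2]])      # 6x6 supra-adjacency

# Supra-Laplacian; its smallest nonzero eigenvalue (algebraic connectivity)
# sets the slowest relaxation time scale of diffusion on the multiplex.
deg = supra.sum(axis=1)
L = np.diag(deg) - supra
eigvals = np.sort(np.linalg.eigvalsh(L))
print("algebraic connectivity:", eigvals[1])
```

Raising omega tightens the coupling between layers and speeds convergence to steady state, which mirrors the channel-island coupling discussed in the abstract.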
Impact damage in composite plates
NASA Technical Reports Server (NTRS)
Shahid, I.; Lee, S.; Chang, F. K.; Shah, B. M.
1995-01-01
The objective of this research paper was to link two computer codes, PDCOMP (for Progressive Damage Analysis for Laminated Composites) and 3DIMPACT (for the prediction of the extent of delaminations in laminated composites resulting from point impact loads), in order to predict impact damage by taking into account local damage and material degradation and to estimate residual stiffness of composites after impact. The resulting graphs and analysis versus test results are presented along with the conclusive results of the codes' performances.
Data Mining of Extremely Large Ad-Hoc Data Sets to Produce Reverse Web-Link Graphs
2017-03-01
in most of the MR cases. From these studies, we also learned that compute-optimized instances should be chosen for serialized/compressed input data... Data mining can be a valuable tool, particularly in the acquisition of military intelligence. As the second study within a larger Naval... open web crawler data set Common Crawl. Similar to previous studies, this research employs MapReduce (MR) for sorting and categorizing output value
2012-12-01
The term psychophysical refers to the link between physical stimulus and psychological changes. Rose McDermott, Risk Taking in International...Foreign Policy; Barbara Farnham, "Roosevelt and the Munich Crisis: Insights from Prospect Theory," Political Psychology 13, no. 2 (1992); Mark L. Haas... gains. Losses hurt worse from a psychological standpoint than gains feel good. Figure 9. Prospect Value Graph (From Jervis, 1992
A Comparison of a Relational and Nested-Relational IDEF0 Data Model
1990-03-01
develop, some of the problems inherent in the hierarchical model were circumvented by the more sophisticated network model. Like the hierarchical model...network database consists of a collection of records connected via links. Unlike the hierarchical model, the network model allows arbitrary graphs as...opposed to trees. Thus, each node may have several owners and may, in turn, own any number of other records. The network model provides a mechanism by
Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms
Rechner, Steffen; Berger, Annabell
2016-01-01
We present the software library marathon, which is designed to support the analysis of sampling algorithms based on the Markov chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound obtained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
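For a reversible chain, the spectral bound mentioned above brackets the mixing time using the second-largest eigenvalue modulus λ*: roughly (λ*/(1−λ*))·ln(1/(2ε)) ≤ t_mix(ε) ≤ (1/(1−λ*))·ln(1/(ε·π_min)). A minimal sketch for a lazy random walk on a 4-cycle (an illustration of the bound, not marathon's API):

```python
import numpy as np

# Lazy random walk on a 4-cycle: a small reversible Markov chain.
P = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
pi = np.full(4, 0.25)          # stationary distribution (doubly stochastic chain)

# Second-largest eigenvalue modulus controls the convergence rate.
spectrum = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam = spectrum[1]
eps = 0.25

lower = lam / (1 - lam) * np.log(1 / (2 * eps))
upper = 1 / (1 - lam) * np.log(1 / (eps * pi.min()))
print(f"spectral bounds on the mixing time: {lower:.2f} <= t_mix <= {upper:.2f}")
```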
Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973
Westfall, Arthur O.
1976-01-01
A time of travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel time and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
Constructing Temporally Extended Actions through Incremental Community Detection
Li, Ge
2018-01-01
Hierarchical reinforcement learning works on temporally extended actions or skills to facilitate learning. How to automatically form such abstraction is challenging, and many efforts tackle this issue in the options framework. While various approaches exist to construct options from different perspectives, few of them concentrate on options' adaptability during learning. This paper presents an algorithm to create options and enhance their quality online. Both aspects operate on detected communities of the learning environment's state transition graph. We first construct options from initial samples as the basis of online learning. Then a rule-based community revision algorithm is proposed to update graph partitions, based on which existing options can be continuously tuned. Experimental results in two problems indicate that options from initial samples may perform poorly in more complex environments, and our presented strategy can effectively improve options and get better results compared with flat reinforcement learning. PMID:29849543
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
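A toy illustration of ERGM-style MCMC (a plain Metropolis dyad-toggling sampler for the edge-count-only model, not the authors' auxiliary parameter algorithm): toggling one dyad changes the edge-count statistic by ±1, so the acceptance ratio is simply exp(±θ).

```python
import math
import random

def ergm_edge_sampler(n, theta, steps, rng):
    """Metropolis sampler for the edge-count-only ERGM: P(G) ∝ exp(theta * #edges).
    Each step proposes toggling one dyad and accepts with prob min(1, exp(theta*delta))."""
    edges = set()
    dyads = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for _ in range(steps):
        d = rng.choice(dyads)
        delta = -1 if d in edges else 1               # change in the edge count
        if math.log(rng.random()) < theta * delta:    # Metropolis acceptance test
            edges.symmetric_difference_update({d})    # toggle the dyad
    return edges

rng = random.Random(1)
g = ergm_edge_sampler(10, theta=-1.0, steps=20000, rng=rng)
# For the edge-only model each dyad is an independent Bernoulli draw with
# p = exp(theta) / (1 + exp(theta)) ≈ 0.27, so expect roughly 12 of 45 edges.
print(len(g))
```

Richer ERGM statistics (triangles, stars) only change how delta is computed; the real computational difficulty the paper addresses comes from those dependent terms.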
Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification
NASA Astrophysics Data System (ADS)
Wang, X. P.; Hu, Y.; Chen, J.
2018-04-01
Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple-graph-based label propagation method that combines an adjacency graph with a similarity graph. We propose constructing the similarity graph from similarity probabilities, which exploits the label similarity among examples. The adjacency graph is built with a common manifold learning method, which effectively improves the classification accuracy of hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites the adjacency graph and the similarity graph, produces superior classification results compared with other manifold-learning-based and sparse-representation-based graph Laplacians in the label propagation framework.
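A generic label propagation step of the kind referenced here iterates Y ← P·Y while clamping the labeled rows, where P is the row-normalized affinity matrix. A minimal single-graph sketch (a standard formulation on a toy affinity matrix, not the couple-graph method itself):

```python
import numpy as np

def propagate_labels(W, labels, n_iter=100):
    """Graph label propagation: iterate Y <- P @ Y with labeled rows clamped.
    W: symmetric affinity matrix; labels: -1 for unlabeled, else a class id."""
    classes = sorted(set(l for l in labels if l >= 0))
    Y = np.zeros((len(labels), len(classes)))
    for i, l in enumerate(labels):
        if l >= 0:
            Y[i, classes.index(l)] = 1.0
    P = W / W.sum(axis=1, keepdims=True)       # row-normalized transition matrix
    clamped = Y.copy()
    labeled = np.array([l >= 0 for l in labels])
    for _ in range(n_iter):
        Y = P @ Y
        Y[labeled] = clamped[labeled]          # reset the known labels each step
    return [classes[k] for k in Y.argmax(axis=1)]

# Two clusters of 3 nodes joined by one bridge edge; one labeled node per cluster.
W = np.array([
    [0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0],
], float)
print(propagate_labels(W, [0, -1, -1, -1, -1, 1]))   # -> [0, 0, 0, 1, 1, 1]
```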
Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred
Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.
Graphs, matrices, and the GraphBLAS: Seven good reasons
Kepner, Jeremy; Bader, David; Buluç, Aydın; ...
2015-01-01
The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istcbigdata.org/GraphBlas) is being developed to bring the potential of matrix-based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with the analysis of graphs.
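The core GraphBLAS idea, graph traversal as matrix algebra, can be sketched in a few lines of NumPy (a conceptual illustration, not the GraphBLAS API): one breadth-first-search level expansion is one matrix-vector product over a Boolean-style semiring.

```python
import numpy as np

# BFS as repeated matrix-vector products, the core idea behind
# GraphBLAS-style linear-algebraic graph algorithms.
A = np.array([            # A[i, j] = 1 means a directed edge i -> j
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])

frontier = np.array([1, 0, 0, 0, 0])          # start the search at vertex 0
visited = frontier.astype(bool)
levels, level = {0: 0}, 0
while frontier.any():
    # One BFS step: mat-vec reaches all out-neighbors of the frontier,
    # then masking with ~visited keeps only newly discovered vertices.
    frontier = (A.T @ frontier > 0) & ~visited
    visited |= frontier
    level += 1
    levels.update({int(v): level for v in np.flatnonzero(frontier)})
print(levels)   # -> {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

In GraphBLAS proper, the `> 0` threshold and the `~visited` mask are expressed directly as a Boolean semiring and an output mask, so the same pattern runs on sparse matrices at scale.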
Adjusting protein graphs based on graph entropy.
Peng, Sheng-Lung; Tsay, Yu-Wei
2014-01-01
Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling instead of looking for a minimum superimposition of pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified. Therefore, a criterion is needed to determine whether a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that graph entropy helps validate the conformation of a protein graph model. Furthermore, it indirectly contributes to protein structural comparison if a protein graph is solid.
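The abstract does not state the paper's exact entropy definition; one common notion of graph entropy is the Shannon entropy of the degree distribution, sketched below on two hypothetical 4-node graphs (a regular cycle has zero entropy, a star has a heterogeneous distribution and positive entropy).

```python
import math
from collections import Counter

def degree_entropy(adj):
    """Shannon entropy (bits) of a graph's degree distribution -- one common
    'graph entropy'; the paper's exact definition may differ."""
    degrees = [len(neigh) for neigh in adj.values()]
    total = len(degrees)
    counts = Counter(degrees)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A 4-cycle is perfectly regular: every vertex has degree 2, so entropy is 0.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
# A 4-node star is heterogeneous: degrees 3, 1, 1, 1.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(degree_entropy(cycle), degree_entropy(star))   # -> 0.0 ≈0.811
```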
NASA Technical Reports Server (NTRS)
Markert, Kel; Ashmall, William; Johnson, Gary; Saah, David; Mollicone, Danilo; Diaz, Alfonso Sanchez-Paus; Anderson, Eric; Flores, Africa; Griffin, Robert
2017-01-01
Collect Earth Online (CEO) is a free and open online implementation of the FAO Collect Earth system for collaboratively collecting environmental data through the visual interpretation of Earth observation imagery. The primary collection mechanism in CEO is human interpretation of land surface characteristics in imagery served via Web Map Services (WMS). However, interpreters may not have enough contextual information to classify samples by only viewing the imagery served via WMS, be they high resolution or otherwise. To assist in the interpretation and collection processes in CEO, SERVIR, a joint NASA-USAID initiative that brings Earth observations to improve environmental decision making in developing countries, developed the GeoDash system, an embedded and critical component of CEO. GeoDash leverages Google Earth Engine (GEE) by allowing users to set up custom browser-based widgets that pull from GEE's massive public data catalog. These widgets can be quick looks of other satellite imagery, time series graphs of environmental variables, and statistics panels of the same. Users can customize widgets with any of GEE's image collections, such as the historical Landsat collection with data available since the 1970s, select date ranges, image stretch parameters, graph characteristics, and create custom layouts, all on-the-fly to support plot interpretation in CEO. This presentation focuses on the implementation and potential applications, including the back-end links to GEE and the user interface with custom widget building. GeoDash takes large data volumes and condenses them into meaningful, relevant information for interpreters. While designed initially with national and global forest resource assessments in mind, the system will complement disaster assessments, agriculture management, project monitoring and evaluation, and more.
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and frequent unavailability of OSN population data. Sampling is perhaps the only feasible solution to these problems. How to draw samples that represent the underlying OSNs remains a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies of network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes in the same sampling process. Our results show that the SARW sampling method generates unbiased samples of OSNs with maximal precision and minimal cost. The study contributes to the practice of OSN research by providing a highly needed sampling tool, to the methodological development of large-scale network sampling through comparative evaluation of existing methods, and to the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
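The SARW design itself is not specified in the abstract, but the strongest baseline it is compared against can be sketched. Below is a minimal pure-Python sketch of the Metropolis-Hastings random walk (MHRW, the "revised RW" above), which corrects the degree bias of a plain random walk so node samples are asymptotically uniform; the toy graph, start node, and sample size are illustrative assumptions.

```python
import random

def metropolis_hastings_rw(adj, start, n_samples, rng):
    """Metropolis-Hastings random walk: propose a uniform neighbour v of the
    current node u, and accept the move with probability min(1, deg(u)/deg(v)).
    This corrects the degree bias of a plain random walk, whose stationary
    distribution is proportional to node degree."""
    samples = []
    u = start
    while len(samples) < n_samples:
        v = rng.choice(adj[u])
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v  # accept the proposed move; otherwise stay at u
        samples.append(u)
    return samples

# Toy undirected graph as an adjacency dict: node 0 is a degree-4 hub,
# the other nodes have degree 2. A plain RW would visit the hub ~1/3 of
# the time; MHRW should visit all five nodes ~1/5 of the time each.
adj = {
    0: [1, 2, 3, 4],
    1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3],
}
rng = random.Random(42)
sample = metropolis_hastings_rw(adj, start=0, n_samples=1000, rng=rng)
```

Because consecutive states of the walk are correlated, practical estimators usually thin the chain or discard a burn-in prefix before computing statistics.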
SBEToolbox: A Matlab Toolbox for Biological Network Analysis
Konganti, Kranti; Wang, Gang; Yang, Ence; Cai, James J.
2013-01-01
We present SBEToolbox (Systems Biology and Evolution Toolbox), an open-source Matlab toolbox for biological network analysis. It takes a network file as input, calculates a variety of centralities and topological metrics, clusters nodes into modules, and displays the network using different graph layout algorithms. Straightforward implementation and the inclusion of high-level functions allow the functionality to be easily extended or tailored through developing custom plugins. SBEGUI, a menu-driven graphical user interface (GUI) of SBEToolbox, enables easy access to various network and graph algorithms for programmers and non-programmers alike. All source code and sample data are freely available at https://github.com/biocoder/SBEToolbox/releases. PMID:24027418
Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matulef, Kevin Michael
The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail that estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating it in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges and graph structures that evolve over time. * An algorithm for maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
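The report's dynamic-weight subsampling algorithm is not described in detail here, but the static-weight version of the problem it generalizes has a well-known one-pass solution. As background, a minimal sketch of Efraimidis-Spirakis weighted reservoir sampling (A-Res); the stream contents and sample size are illustrative assumptions.

```python
import heapq
import random

def weighted_reservoir_sample(stream, k, rng):
    """Efraimidis-Spirakis A-Res: one pass over (item, weight) pairs, keeping
    the k items with the largest keys u**(1/w), where u ~ Uniform(0, 1).
    Items end up in the sample with probability increasing in their weight,
    using O(k) working storage regardless of stream length."""
    heap = []  # min-heap of (key, item); the root is the smallest key kept
    for item, w in stream:
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# A toy stream of 100 weighted items (weights cycle through 1, 2, 3).
rng = random.Random(7)
stream = [("item%d" % i, 1.0 + (i % 3)) for i in range(100)]
sample = weighted_reservoir_sample(stream, k=10, rng=rng)
```

Handling weights that change after an item has been seen is exactly what this scheme cannot do in one pass, which is the gap the report's second result addresses.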
Disconnection of network hubs and cognitive impairment after traumatic brain injury.
Fagerholm, Erik D; Hellyer, Peter J; Scott, Gregory; Leech, Robert; Sharp, David J
2015-06-01
Traumatic brain injury affects brain connectivity by producing traumatic axonal injury. This disrupts the function of large-scale networks that support cognition. The best way to describe this relationship is unclear, but one elegant approach is to view networks as graphs. Brain regions become nodes in the graph, and white matter tracts the connections. The overall effect of an injury can then be estimated by calculating graph metrics of network structure and function. Here we test which graph metrics best predict the presence of traumatic axonal injury, as well as which are most highly associated with cognitive impairment. A comprehensive range of graph metrics was calculated from structural connectivity measures for 52 patients with traumatic brain injury, 21 of whom had microbleed evidence of traumatic axonal injury, and 25 age-matched controls. White matter connections between 165 grey matter brain regions were defined using tractography, and structural connectivity matrices were calculated from skeletonized diffusion tensor imaging data. This technique estimates injury at the centre of tracts, but is insensitive to damage at tract edges. Graph metrics were calculated from the resulting connectivity matrices, and machine-learning techniques were used to select the metrics that best predicted the presence of traumatic brain injury. In addition, we used regularization and variable selection via the elastic net to predict patient behaviour on tests of information processing speed, executive function and associative memory. Support vector machines trained with graph metrics of white matter connectivity matrices from the microbleed group were able to identify patients with a history of traumatic brain injury with 93.4% accuracy, a result robust to different ways of sampling the data. Graph metrics were significantly associated with cognitive performance: information processing speed (R² = 0.64), executive function (R² = 0.56) and associative memory (R² = 0.25).
These results were then replicated in a separate group of patients without microbleeds. The most influential graph metrics were betweenness centrality and eigenvector centrality, which provide measures of the extent to which a given brain region connects other regions in the network. Reductions in betweenness centrality and eigenvector centrality were particularly evident within hub regions including the cingulate cortex and caudate. Our results demonstrate that betweenness centrality and eigenvector centrality are reduced within network hubs, due to the impact of traumatic axonal injury on network connections. The dominance of betweenness centrality and eigenvector centrality suggests that cognitive impairment after traumatic brain injury results from the disconnection of network hubs by traumatic axonal injury. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain.
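As an illustration of one of the two influential metrics, here is a minimal sketch of eigenvector centrality computed by shifted power iteration on a toy star graph; the graph and iteration count are assumptions, and betweenness centrality would additionally require a shortest-path algorithm such as Brandes'.

```python
def eigenvector_centrality(adj, iters=100):
    """Shifted power iteration: each node's score becomes its own score plus
    the sum of its neighbours' scores (iterating with A + I rather than A
    guarantees convergence even on bipartite graphs), renormalised each step."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        x_new = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        norm = sum(s * s for s in x_new.values()) ** 0.5
        x = {v: s / norm for v, s in x_new.items()}
    return x

# Star graph: the hub (node 0) should dominate the ranking, mirroring the
# role of network hubs (e.g. cingulate cortex, caudate) discussed above.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
c = eigenvector_centrality(adj)
```

For a star on one hub and three leaves, the dominant eigenvector assigns the hub √3 times the score of each leaf, so a damaged hub shows up as a large centrality drop.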
Data visualization, bar naked: A free tool for creating interactive graphics.
Weissgerber, Tracey L; Savic, Marko; Winham, Stacey J; Stanisavljevic, Dejana; Garovic, Vesna D; Milic, Natasa M
2017-12-15
Although bar graphs are designed for categorical data, they are routinely used to present continuous data in studies that have small sample sizes. This presentation is problematic, as many data distributions can lead to the same bar graph, and the actual data may suggest different conclusions from the summary statistics. To address this problem, many journals have implemented new policies that require authors to show the data distribution. This paper introduces a free, web-based tool for creating an interactive alternative to the bar graph (http://statistika.mfub.bg.ac.rs/interactive-dotplot/). This tool allows authors with no programming expertise to create customized interactive graphics, including univariate scatterplots, box plots, and violin plots, for comparing values of a continuous variable across different study groups. Individual data points may be overlaid on the graphs. Additional features facilitate visualization of subgroups or clusters of non-independent data. A second tool enables authors to create interactive graphics from data obtained with repeated independent experiments (http://statistika.mfub.bg.ac.rs/interactive-repeated-experiments-dotplot/). These tools are designed to encourage exploration and critical evaluation of the data behind the summary statistics and may be valuable for promoting transparency, reproducibility, and open science in basic biomedical research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
Feasibility of using the linac real-time log data for VMAT treatment verification
NASA Astrophysics Data System (ADS)
Midi, N. S.; Zin, Hafiz M.
2017-05-01
This study investigates the feasibility of using the real-time log data from a linac to verify Volumetric Modulated Arc Therapy (VMAT) treatment. The treatment log data for an Elekta Synergy linac can be recorded at a sampling rate of 4 Hz using the service graphing tool on the linac control computer. A treatment plan that simulates a VMAT treatment was delivered from the linac, and all the dynamic treatment parameters, including monitor units (MU), multileaf collimator (MLC) positions, jaw positions, gantry angle and collimator angle, were recorded in real-time using the service graphing tool. The recorded raw data were extracted and analysed using algorithms written in Matlab (MathWorks, Natick, MA). The actual treatment parameters logged using the service graphing tool were compared to the prescription and the deviations were analysed. The MLC position errors, for leaves travelling at speeds from -3.25 to 5.92 cm/s, were between -1.7 mm and 2.5 mm, well within the 3.5 mm tolerance value (AAPM TG-142). The discrepancies in the other delivery parameters were also within tolerance. The real-time linac parameters logged using the service graphing tool can therefore be used as supplementary data for patient-specific VMAT pre-treatment quality assurance.
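The authors' Matlab analysis code is not shown; as an illustration of the comparison step, here is a minimal Python sketch that checks logged leaf positions against planned ones using the 3.5 mm TG-142 tolerance cited above. The log format and all numerical values are hypothetical.

```python
def check_mlc_log(planned, logged, tol_mm=3.5):
    """Compare logged MLC leaf positions (mm) against the planned positions
    sample by sample and flag any deviation exceeding the tolerance
    (3.5 mm, the AAPM TG-142 value cited in the abstract)."""
    report = []
    for i, (p, a) in enumerate(zip(planned, logged)):
        dev = a - p
        report.append({"sample": i, "deviation_mm": dev,
                       "pass": abs(dev) <= tol_mm})
    return report

# Hypothetical planned positions and 4 Hz log samples (mm) for one leaf.
planned = [10.0, 12.5, 15.0, 17.5]
logged  = [10.2, 12.4, 18.9, 17.6]
report = check_mlc_log(planned, logged)
failures = [r for r in report if not r["pass"]]
```

A real implementation would also interpolate the 4 Hz log onto the plan's control points before differencing, since log and plan samples are generally not time-aligned.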
Adaptive graph-based multiple testing procedures
Klinglmueller, Florian; Posch, Martin; Koenig, Franz
2016-01-01
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, the adaptation rule does not need to be prespecified in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios, including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. An adjusted test needs to be applied only if adaptations are actually implemented. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733
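The (non-adaptive) graphical procedure being generalized can be sketched compactly: reject any hypothesis whose p-value falls below its weighted local level, then propagate the freed weight along the graph and repeat. Below is a minimal sketch following the standard update rules for weights and transition fractions; the two-hypothesis example is an assumption, chosen so that the graph reproduces the Holm procedure.

```python
def graphical_test(pvals, weights, G, alpha=0.05):
    """Sequentially rejective graphical procedure: reject any H_i with
    p_i <= w_i * alpha; on rejecting H_j, pass the fraction G[j][i] of its
    weight to each remaining H_i and update the transition fractions, then
    repeat until no further hypothesis can be rejected."""
    m = len(pvals)
    w = list(weights)
    g = [row[:] for row in G]
    active = set(range(m))
    rejected = set()
    while True:
        cand = [i for i in active if pvals[i] <= w[i] * alpha]
        if not cand:
            return rejected
        j = cand[0]
        active.discard(j)
        rejected.add(j)
        w_new, g_new = w[:], [row[:] for row in g]
        for i in active:
            w_new[i] = w[i] + w[j] * g[j][i]
            for k in active:
                if i == k:
                    continue
                denom = 1.0 - g[i][j] * g[j][i]
                g_new[i][k] = ((g[i][k] + g[i][j] * g[j][k]) / denom
                               if denom > 0 else 0.0)
        w, g = w_new, g_new

# Two hypotheses, equal weights, full weight transfer: this graph
# reproduces the Holm procedure at overall level alpha = 0.05.
rejected = graphical_test(pvals=[0.01, 0.04],
                          weights=[0.5, 0.5],
                          G=[[0.0, 1.0], [1.0, 0.0]])
```

Here H_1 is rejected at level 0.5 × 0.05 = 0.025; its weight passes to H_2, which is then tested (and rejected) at the full level 0.05.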
Feature Grouping and Selection Over an Undirected Graph.
Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping
2012-01-01
High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structure information on the features, has been considered to be promising in promoting regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation, when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which could introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise ℓ∞-norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves the first one by utilizing a non-convex function to reduce the estimation bias. The third one is the extension of the second method using a truncated ℓ1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, a synthetic molecule network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Predicting catastrophes of non-autonomous networks with visibility graphs and horizontal visibility
NASA Astrophysics Data System (ADS)
Zhang, Haicheng; Xu, Daolin; Wu, Yousheng
2018-05-01
Prediction of potential catastrophes in engineering systems is a challenging problem. We make a first attempt to construct a complex network to predict catastrophes of a multi-modular floating system in advance of their occurrence. Response time series of the system can be mapped into a virtual network by using the visibility graph or horizontal visibility algorithm. The topological characteristics of the networks can then be used to forecast catastrophes of the system. Numerical results show an obvious correspondence between the variation of topological characteristics and the onset of catastrophes. A Catastrophe Index (CI) is proposed as a numerical indicator of a qualitative change from a stable state to a catastrophic state. The two approaches, the visibility graph and horizontal visibility algorithms, are compared using the index in a reliability analysis with different data lengths and sampling frequencies. The virtual network technique is potentially extendable to catastrophe prediction in other engineering systems.
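The natural visibility criterion used to map a time series into a network is simple to state: two samples are linked if the straight line between them clears every intermediate sample. A minimal O(n²) sketch follows; the example series is illustrative.

```python
def visibility_graph(series):
    """Natural visibility graph: samples (t_a, y_a) and (t_b, y_b) are
    linked iff every intermediate sample (t_c, y_c) lies strictly below
    the line of sight connecting them, i.e.
    y_c < y_b + (y_a - y_b) * (t_b - t_c) / (t_b - t_a)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

# Toy series: consecutive samples are always mutually visible; sample 1
# (a local peak) also sees sample 3 over the dip at sample 2.
edges = visibility_graph([1.0, 3.0, 2.0, 4.0])
```

Graph metrics (degree distribution, clustering, etc.) computed on the resulting network are then tracked over sliding windows of the response series to flag the approach to a catastrophic state.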
A SPECTRAL GRAPH APPROACH TO DISCOVERING GENETIC ANCESTRY1
Lee, Ann B.; Luca, Diana; Roeder, Kathryn
2010-01-01
Mapping human genetic variation is fundamentally interesting in fields such as anthropology and forensic inference. At the same time, patterns of genetic diversity confound efforts to determine the genetic basis of complex disease. Due to technological advances, it is now possible to measure hundreds of thousands of genetic variants per individual across the genome. Principal component analysis (PCA) is routinely used to summarize the genetic similarity between subjects, with the eigenvectors interpreted as dimensions of ancestry. We build on this idea using a spectral graph approach, drawing on connections between multidimensional scaling and spectral kernel methods. Our approach, based on a spectral embedding derived from the normalized Laplacian of a graph, can produce a more meaningful delineation of ancestry than PCA. The method is stable to outliers and can more easily incorporate different similarity measures of genetic data than PCA. We illustrate a new algorithm for genetic clustering and association analysis on a large, genetically heterogeneous sample. PMID:20689656
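A minimal sketch of the core construction, assuming NumPy: build the symmetric normalized Laplacian from a similarity (here, simply adjacency) matrix and embed subjects using its lowest nontrivial eigenvectors. The two-clique example stands in for two ancestral clusters; the paper's actual similarity measures and algorithmic refinements are not reproduced here.

```python
import numpy as np

def laplacian_embedding(A, dim=2):
    """Spectral embedding from the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}: coordinates are the eigenvectors
    associated with the smallest nonzero eigenvalues."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    evals, evecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return evals, evecs[:, 1:dim + 1]    # skip the trivial eigenvector

# Two triangles joined by a single edge: the second eigenvector (a
# Fiedler-like coordinate) separates the two clusters, the analogue of
# delineating ancestry groups from pairwise genetic similarity.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
evals, coords = laplacian_embedding(A)
```

All eigenvalues of the normalized Laplacian lie in [0, 2], and the embedding coordinates inherit the sign structure of the underlying cut, which is what makes the delineation of clusters readable.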
Continuum Limit of Total Variation on Point Clouds
NASA Astrophysics Data System (ADS)
García Trillos, Nicolás; Slepčev, Dejan
2016-04-01
We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
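The graph functionals in question are easy to state concretely. A minimal sketch of graph total variation on a weighted graph; for an indicator function of a node set, it reduces to the cut capacity mentioned above. The example graph and weights are illustrative.

```python
def graph_total_variation(edges, u):
    """Graph total variation TV(u) = sum over edges (i, j) of w_ij * |u_i - u_j|.
    When u is the indicator function of a set S of nodes, only edges crossing
    the boundary of S contribute, so TV(u) equals the cut capacity of S."""
    return sum(w * abs(u[i] - u[j]) for i, j, w in edges)

# Weighted graph on 4 nodes; u is the indicator of S = {0, 1}, so the
# contributing (cut) edges are (1, 2) and (0, 3).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 2.0), (0, 3, 0.5)]
u = {0: 1.0, 1: 1.0, 2: 0.0, 3: 0.0}
tv = graph_total_variation(edges, u)
```

The consistency question studied above is when this discrete quantity, computed on a distance-weighted neighborhood graph of n sample points, converges (after rescaling) to the continuum perimeter as n grows.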
Characterizing Containment and Related Classes of Graphs,
1985-01-01
On spatial coalescents with multiple mergers in two dimensions.
Heuer, Benjamin; Sturm, Anja
2013-08-01
We consider the genealogy of a sample of individuals taken from a spatially structured population when the variance of the offspring distribution is relatively large. The space is structured into discrete sites of a graph G. If the population size at each site is large, spatial coalescents with multiple mergers, so-called spatial Λ-coalescents, for which ancestral lines migrate in space and coalesce according to some Λ-coalescent mechanism, are shown to be appropriate approximations to the genealogy of a sample of individuals. We then consider as the graph G the two-dimensional torus with side length 2L+1 and show that as L tends to infinity, and time is rescaled appropriately, the partition structure of spatial Λ-coalescents of individuals sampled far enough apart converges to the partition structure of a non-spatial Kingman coalescent. From a biological point of view this means that in certain circumstances both the spatial structure and the larger variances of the underlying offspring distribution are harder to detect from the sample. However, supplemental simulations show that for moderately large L the different structure is still evident. Copyright © 2012 Elsevier Inc. All rights reserved.
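The limiting object above can be simulated in a few lines. A minimal sketch of the non-spatial Kingman coalescent: with k lineages remaining, the waiting time to the next merger is exponential with rate k(k-1)/2 and the merging pair is chosen uniformly. The sample size and random seed are illustrative; the spatial Λ-coalescent itself is not implemented here.

```python
import random

def kingman_coalescent(n, rng):
    """Simulate a Kingman coalescent for a sample of n lineages: while k
    lineages remain, wait an Exp(k*(k-1)/2)-distributed time, then merge a
    uniformly chosen pair. Returns the list of (time, merged block) events."""
    lineages = [{i} for i in range(n)]
    t = 0.0
    events = []
    while len(lineages) > 1:
        k = len(lineages)
        t += rng.expovariate(k * (k - 1) / 2.0)
        i, j = rng.sample(range(k), 2)          # uniformly chosen pair
        merged = lineages[i] | lineages[j]
        lineages = [b for idx, b in enumerate(lineages) if idx not in (i, j)]
        lineages.append(merged)
        events.append((t, frozenset(merged)))
    return events

# Genealogy of a sample of 5 individuals: exactly 4 pairwise mergers,
# the last producing the most recent common ancestor of the whole sample.
events = kingman_coalescent(5, random.Random(1))
```

Comparing the simulated partition structure at fixed times against that of a spatial model is one way to probe how hard the spatial structure is to detect, as the supplemental simulations above do.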
ERIC Educational Resources Information Center
Joyner, Jeane; Leiva, Miriam
1988-01-01
Plastic Easter eggs are useful devices for teaching basic mathematics skills, from counting activities to graphing. Eggs are used to reinforce addition, subtraction, and multiplication skills; column addition, estimation, statistics, and other topics are introduced. Sample activities are described. (JL)
Hegarty, Peter; Lemieux, Anthony F; McQueen, Grant
2010-03-01
Graphs seem to connote facts more than words or tables do. Consequently, they seem unlikely places to spot implicit sexism at work. Yet, in 6 studies (N = 741), women and men constructed (Study 1) and recalled (Study 2) gender difference graphs with men's data first, and graphed powerful groups (Study 3) and individuals (Study 4) ahead of weaker ones. Participants who interpreted graph order as evidence of author "bias" inferred that the author graphed his or her own gender group first (Study 5). Women's, but not men's, preferences to graph men first were mitigated when participants graphed a difference between themselves and an opposite-sex friend prior to graphing gender differences (Study 6). Graph production and comprehension are affected by beliefs and suppositions about the groups represented in graphs to a greater degree than cognitive models of graph comprehension or realist models of scientific thinking have yet acknowledged.
Quantification of three-dimensional cell-mediated collagen remodeling using graph theory.
Bilgin, Cemal Cagatay; Lund, Amanda W; Can, Ali; Plopper, George E; Yener, Bülent
2010-09-30
Cell cooperation is a critical event during tissue development. We present the first precise metrics to quantify the interaction between mesenchymal stem cells (MSCs) and the extracellular matrix (ECM). In particular, we describe the cooperative collagen alignment process with respect to the spatio-temporal organization and function of mesenchymal stem cells in three dimensions. We defined two precise metrics, the Collagen Alignment Index and the Cell Dissatisfaction Level, for quantitatively tracking type I collagen and fibrillogenesis remodeling by mesenchymal stem cells over time. Computation of these metrics was based on graph theory and vector calculus. The cells and their three-dimensional type I collagen microenvironment were modeled by three-dimensional cell-graphs, and collagen fiber organization was calculated from gradient vectors. As mesenchymal stem cell differentiation was enhanced, acceleration through the different phases was quantitatively demonstrated. The phases were clustered in a statistically significant manner based on collagen organization, with late phases of remodeling by untreated cells clustering strongly with early phases of remodeling by differentiating cells. The experiments were repeated three times to conclude that the metrics could successfully identify critical phases of collagen remodeling that were dependent upon cooperativity within the cell population. Definition of early metrics that are able to predict long-term functionality by linking engineered tissue structure to function is an important step toward optimizing biomaterials for the purposes of regenerative medicine.
Small-World Brain Networks Revisited
Bassett, Danielle S.; Bullmore, Edward T.
2016-01-01
It is nearly 20 years since the concept of a small-world network was first quantitatively defined, by a combination of high clustering and short path length; and about 10 years since this metric of complex network topology began to be widely applied to analysis of neuroimaging and other neuroscience data as part of the rapid growth of the new field of connectomics. Here, we review briefly the foundational concepts of graph theoretical estimation and generation of small-world networks. We take stock of some of the key developments in the field in the past decade and we consider in some detail the implications of recent studies using high-resolution tract-tracing methods to map the anatomical networks of the macaque and the mouse. In doing so, we draw attention to the important methodological distinction between topological analysis of binary or unweighted graphs, which have provided a popular but simple approach to brain network analysis in the past, and the topology of weighted graphs, which retain more biologically relevant information and are more appropriate to the increasingly sophisticated data on brain connectivity emerging from contemporary tract-tracing and other imaging studies. We conclude by highlighting some possible future trends in the further development of weighted small-worldness as part of a deeper and broader understanding of the topology and the functional value of the strong and weak links between areas of mammalian cortex. PMID:27655008
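The two ingredients of the original small-world definition, high clustering and short path length, can be computed directly for an unweighted graph. A minimal sketch on a toy graph (illustrative); a small-world index would then compare these values against those of degree-matched random graphs.

```python
from collections import deque

def clustering_coefficient(adj):
    """Mean local clustering: the fraction of each node's neighbour pairs
    that are themselves connected, averaged over all nodes (nodes with
    fewer than two neighbours contribute zero)."""
    total = 0.0
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all reachable node pairs,
    computed by breadth-first search from each node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Toy unweighted graph: a triangle (nodes 0, 1, 2) with a pendant node 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
C = clustering_coefficient(adj)
L = average_path_length(adj)
```

The weighted small-worldness discussed above replaces these binary quantities with weighted clustering and weighted shortest paths, retaining connection-strength information that the binary versions discard.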
Holographic hierarchy in the Gaussian matrix model via the fuzzy sphere
NASA Astrophysics Data System (ADS)
Garner, David; Ramgoolam, Sanjaye
2013-10-01
The Gaussian Hermitian matrix model was recently proposed to have a dual string description with worldsheets mapping to a sphere target space. The correlators were written as sums over holomorphic (Belyi) maps from worldsheets to the two-dimensional sphere, branched over three points. We express the matrix model correlators by using the fuzzy sphere construction of matrix algebras, which can be interpreted as a string field theory description of the Belyi strings. This gives the correlators in terms of trivalent ribbon graphs that represent the couplings of irreducible representations of su(2), which can be evaluated in terms of 3j and 6j symbols. The Gaussian model perturbed by a cubic potential is then recognised as a generating function for Ponzano-Regge partition functions for 3-manifolds having the worldsheet as boundary, and equipped with boundary data determined by the ribbon graphs. This can be viewed as a holographic extension of the Belyi string worldsheets to membrane worldvolumes, forming part of a holographic hierarchy linking, via the large N expansion, the zero-dimensional QFT of the Matrix model to 2D strings and 3D membranes. Note that if, after removing the white vertices, the graph contains a blue edge connecting to the same black vertex at both ends, then the triangulation generated from the black edges will contain faces that resemble cut discs. These faces are triangles with two of the edges identified.
Can We Recognize an Innovation? Perspective from an Evolving Network Model
NASA Astrophysics Data System (ADS)
Jain, Sanjay; Krishna, Sandeep
"Innovations" are central to the evolution of societies and the evolution of life. But what constitutes an innovation? We can often agree after the event, when its consequences and impact over a long term are known, whether something was an innovation, and whether it was a "big" innovation or a "minor" one. But can we recognize an innovation "on the fly" as it appears? Successful entrepreneurs often can. Is it possible to formalize that intuition? We discuss this question in the setting of a mathematical model of evolving networks. The model exhibits self-organization, growth, stasis, and collapse of a complex system with many interacting components, reminiscent of real-world phenomena. A notion of "innovation" is formulated in terms of graph-theoretic constructs and other dynamical variables of the model. A new node in the graph gives rise to an innovation, provided it links up "appropriately" with existing nodes; in this view innovation necessarily depends upon the existing context. We show that innovations, as defined by us, play a major role in the birth, growth, and destruction of organizational structures. Furthermore, innovations can be categorized in terms of their graph-theoretic structure as they appear. Different structural classes of innovation have potentially different qualitative consequences for the future evolution of the system, some minor and some major. Possible general lessons from this specific model are briefly discussed.
NASA Astrophysics Data System (ADS)
Viseur, Sophie; Chiaberge, Christophe; Rhomer, Jérémy; Audigane, Pascal
2015-04-01
Fluvial systems generate highly heterogeneous reservoirs, and these heterogeneities have a major impact on fluid flow behaviour. However, such reservoirs are mainly modelled in under-constrained contexts: they include complex features, yet only sparse and indirect data are available. Stochastic modelling is the common strategy for solving such problems. Multiple 3D models are generated from the available subsurface dataset; the generated models represent a sampling of plausible subsurface structure representations. From this model sampling, statistical analyses of targeted parameters (e.g., reserve estimates, flow behaviours) and a posteriori uncertainties are performed to assess risks. However, on one hand, uncertainties may be huge, which requires many models to be generated to scan the space of possibilities. On the other hand, some computations performed on the generated models are time consuming and cannot, in practice, be applied to all of them. This issue is particularly critical in: 1) geological modelling from outcrop data only, as these data are generally sparse and mainly distributed in 2D at large scale, though they may locally include high-resolution descriptions (e.g., facies, local strata variability); 2) CO2 storage studies, as many scales of investigation are required, from the metre scale to regional ones, to estimate storage capacities and associated risks. Recent approaches propose to define distances between models so that sophisticated multivariate statistics can be applied to the space of uncertainties, and only sub-samples, representative of the initial set, are investigated in dynamic time-consuming studies. This work focuses on defining distances between models that characterize the topology of the reservoir rock network, i.e. its compactness or degree of connectivity. The proposed strategy relies on the study of the reservoir rock skeleton; the skeleton of an object corresponds to its median feature.
A skeleton is computed for each reservoir rock geobody and studied through a graph spectral analysis. To achieve this, the skeleton is converted into a graph structure. The spectral analysis applied on this graph structure allows a distance to be defined between pairs of graphs. Therefore, this distance is used as support for clustering analysis to gather models that share the same reservoir rock topology. To show the ability of the defined distances to discriminate different types of reservoir connectivity, a synthetic data set of fluvial models with different geological settings was generated and studied using the proposed approach. The results of the clustering analysis are shown and discussed.
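As an illustration of the kind of spectral distance described above (not the authors' implementation), here is a minimal sketch that compares two toy skeleton graphs by the Euclidean distance between their Laplacian eigenvalue spectra; the adjacency matrices and the zero-padding convention for graphs of unequal size are assumptions:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A, in ascending order."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

def spectral_distance(adj1, adj2):
    """Euclidean distance between Laplacian spectra, zero-padding the
    smaller graph so both spectra have equal length (an assumed convention)."""
    s1, s2 = laplacian_spectrum(adj1), laplacian_spectrum(adj2)
    n = max(len(s1), len(s2))
    s1 = np.pad(s1, (n - len(s1), 0))
    s2 = np.pad(s2, (n - len(s2), 0))
    return float(np.linalg.norm(s1 - s2))

# Two toy "skeletons": a 3-node path versus a triangle.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
tri  = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
d = spectral_distance(path, tri)   # spectra [0,1,3] vs [0,3,3] -> distance 2
```

A pairwise matrix of such distances can then feed any standard clustering routine to group models with similar reservoir topology.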
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge, especially when multiple latent factors are hidden in the data generation process. To represent multifactor images, multilinear models are widely used to parameterize the data, but most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE, to leverage manifold learning techniques in multilinear models. Our method theoretically links linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. In our experiments on face and gait recognition, the superior performance demonstrates that MGE represents multifactor images better than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
NASA Astrophysics Data System (ADS)
Szabó, György; Fáth, Gábor
2007-07-01
Game theory is one of the key paradigms behind many scientific disciplines, from biology to the behavioral sciences to economics. In its evolutionary form, and especially when the interacting agents are linked in a specific social network, the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first four sections introduce the necessary background in classical and evolutionary game theory, from the basic definitions to the most important results. The fifth section surveys the topological complications implied by non-mean-field-type social network structures in general. The next three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long-term behavioral patterns emerging in evolutionary games.
Towards the map of quantum gravity
NASA Astrophysics Data System (ADS)
Mielczarek, Jakub; Trześniewski, Tomasz
2018-06-01
In this paper we point out some possible links between different approaches to quantum gravity and theories of the Planck scale physics. In particular, connections between loop quantum gravity, causal dynamical triangulations, Hořava-Lifshitz gravity, asymptotic safety scenario, Quantum Graphity, deformations of relativistic symmetries and nonlinear phase space models are discussed. The main focus is on quantum deformations of the Hypersurface Deformations Algebra and Poincaré algebra, nonlinear structure of phase space, the running dimension of spacetime and nontrivial phase diagram of quantum gravity. We present an attempt to arrange the observed relations in the form of a graph, highlighting different aspects of quantum gravity. The analysis is performed in the spirit of a mind map, which represents the architectural approach to the studied theory, being a natural way to describe the properties of a complex system. We hope that the constructed graphs (maps) will turn out to be helpful in uncovering the global picture of quantum gravity as a particular complex system and serve as a useful guide for the researchers.
A model for the emergence of cooperation, interdependence, and structure in evolving networks.
Jain, S; Krishna, S
2001-01-16
Evolution produces complex and structured networks of interacting components in chemical, biological, and social systems. We describe a simple mathematical model for the evolution of an idealized chemical system to study how a network of cooperative molecular species arises and evolves to become more complex and structured. The network is modeled by a directed weighted graph whose positive and negative links represent "catalytic" and "inhibitory" interactions among the molecular species, and which evolves as the least populated species (typically those that go extinct) are replaced by new ones. A small autocatalytic set, appearing by chance, provides the seed for the spontaneous growth of connectivity and cooperation in the graph. A highly structured chemical organization arises inevitably as the autocatalytic set enlarges and percolates through the network in a short analytically determined timescale. This self organization does not require the presence of self-replicating species. The network also exhibits catastrophes over long timescales triggered by the chance elimination of "keystone" species, followed by recoveries.
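The dynamics described above can be caricatured in a few lines. The sketch below is a loose stand-in for the model, not the authors' formulation: populations are approximated by a normalized fixed-point iteration rather than the paper's differential equations, and the wiring probability, system size, and step counts are all assumptions:

```python
import random

def evolve(n=20, p=0.1, steps=50, seed=0):
    """Toy version of the evolving-network model: a sparse random directed
    graph in which the least 'populated' species is repeatedly replaced by
    a freshly wired newcomer."""
    rng = random.Random(seed)
    # A[i][j] = 1 if species j catalyzes species i (no self-links)
    A = [[1 if i != j and rng.random() < p else 0 for j in range(n)]
         for i in range(n)]
    for _ in range(steps):
        # crude population proxy: iterate x <- normalize(Ax + eps)
        x = [1.0] * n
        for _ in range(50):
            y = [sum(A[i][j] * x[j] for j in range(n)) + 1e-6
                 for i in range(n)]
            s = sum(y)
            x = [v / s for v in y]
        weakest = min(range(n), key=lambda i: x[i])
        # replace the weakest species with a randomly rewired one
        for j in range(n):
            if j != weakest:
                A[weakest][j] = 1 if rng.random() < p else 0
                A[j][weakest] = 1 if rng.random() < p else 0
    return A

A = evolve()
```

Tracking the total link count over the steps would show the spontaneous growth of connectivity once an autocatalytic seed appears.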
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Chinese Mainland Movie Network
NASA Astrophysics Data System (ADS)
Liu, Ai-Fen; Xue, Yu-Hua; He, Da-Ren
2008-03-01
We propose describing a broad class of cooperation-competition networks by bipartite graphs and their unipartite projections. In these graphs the topological structure describes the cooperation-competition configuration of the basic elements, and the vertex weights describe their different roles in cooperation or the results of competition. This complex network description may be helpful for finding and understanding common properties of cooperation-competition systems. As an example, we performed an empirical investigation of the movie cooperation-competition network of the past 80 years in the Chinese mainland. In the network the movies are defined as nodes, and two nodes are connected by a link if a common main actor performs in both. An edge represents the competition between two movies for a larger audience within a particular audience colony. We obtained statistical properties such as the degree distribution, act degree distribution, act size distribution, and the distribution of total node weight, and explored the factors influencing the competition intensity of Chinese mainland movies.
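The unipartite projection step can be sketched directly: two movies become linked, with a weight counting their shared main actors. The cast data below is purely hypothetical:

```python
from itertools import combinations
from collections import defaultdict

def unipartite_projection(memberships):
    """Project a bipartite actor-movie graph onto movies: two movies are
    linked with weight equal to the number of shared main actors."""
    weights = defaultdict(int)
    for actor, movies in memberships.items():
        for m1, m2 in combinations(sorted(movies), 2):
            weights[(m1, m2)] += 1
    return dict(weights)

casts = {  # hypothetical data: actor -> movies they starred in
    "actor_a": ["film1", "film2"],
    "actor_b": ["film2", "film3"],
    "actor_c": ["film1", "film2"],
}
edges = unipartite_projection(casts)
# film1-film2 share two actors; film2-film3 share one
```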
NASA Astrophysics Data System (ADS)
Arévalo, Germán. V.; Hincapié, Roberto C.; Sierra, Javier E.
2015-09-01
UDWDM PON is a leading technology oriented to providing ultra-high bandwidth to end users while exploiting the capacity of the physical channels. One of the main drawbacks of the UDWDM technique is that nonlinear effects, such as FWM, become stronger due to the close spectral proximity among channels. This work proposes a model for the optimal deployment of this type of network, taking into account the fiber length limitations imposed by physical restrictions on the fiber's data transmission as well as the asymmetric distribution of users in a given region. The proposed model employs the transmission-related effects in UDWDM PON as restrictions in the optimization problem, and also considers the asymmetric clustering of users and the subdivision of the user region through a Voronoi geometric partition technique. The dual graph of the Voronoi diagram, i.e., the Delaunay triangulation, is used as the planar graph for solving the minimum-weight fiber link problem.
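In practice the candidate edges would come from a Delaunay triangulation computed by a geometry library; for a self-contained sketch of the minimum-weight fiber link selection, Kruskal's algorithm is applied below to an assumed candidate edge list over hypothetical user-cluster centroids:

```python
import math

def kruskal_mst(nodes, edges):
    """Kruskal's algorithm: pick the minimum-total-length set of fiber
    links connecting all nodes, given candidate (u, v) point pairs."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst, total = [], 0.0
    for u, v in sorted(edges, key=lambda e: math.dist(*e)):
        ru, rv = find(u), find(v)
        if ru != rv:                       # avoid cycles
            parent[ru] = rv
            mst.append((u, v))
            total += math.dist(u, v)
    return mst, total

# Hypothetical user-cluster centroids and candidate (e.g. Delaunay) links.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
cand = [(pts[i], pts[j]) for i in range(4) for j in range(i + 1, 4)]
links, length = kruskal_mst(pts, cand)     # 3 unit-length links, total 3.0
```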
NASA Astrophysics Data System (ADS)
Criado, Regino; García, Esther; Pedroche, Francisco; Romance, Miguel
2013-12-01
In this paper, we show a new technique to analyze families of rankings. In particular, we focus on sports rankings and, more precisely, on soccer leagues. We consider that two teams compete when they exchange relative positions in consecutive rankings. This allows us to define a graph by linking teams that compete. We show how to use structural properties of this competitivity graph, namely the mean degree, the mean strength, and the clustering coefficient, to measure the extent to which the teams in a league compete. We give a generalization of Kendall's correlation coefficient to more than two rankings. We also show how to perform a dynamic analysis of a league and how to compare different leagues. We apply this technique to analyze the four major European soccer leagues: Bundesliga, Italian Lega, Spanish Liga, and Premier League. We compare our results with the classical analysis of sports rankings based on measures of competitive balance.
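The competitivity graph construction can be sketched directly from its definition: link two teams whenever their relative order flips between consecutive rankings. The standings below are hypothetical:

```python
from itertools import combinations

def competitivity_graph(rankings):
    """Link two teams whenever they swap relative order between two
    consecutive rankings (each ranking lists teams best-to-worst)."""
    edges = set()
    for prev, cur in zip(rankings, rankings[1:]):
        for a, b in combinations(prev, 2):
            before = prev.index(a) < prev.index(b)
            after = cur.index(a) < cur.index(b)
            if before != after:            # relative order flipped
                edges.add(frozenset((a, b)))
    return edges

# Hypothetical standings on three consecutive match days.
rounds = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]
g = competitivity_graph(rounds)            # edges {A,B} and {A,C}
mean_degree = 2 * len(g) / 3               # 2|E| / |V|
```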
NASA Astrophysics Data System (ADS)
Lin, Po-Chuan; Chen, Bo-Wei; Chang, Hangbae
2016-07-01
This study presents a human-centric technique for social video expansion based on semantic processing and graph analysis. The objective is to increase the metadata of an online video and to explore related information, thereby facilitating user browsing activities. To analyze the semantic meaning of a video, shots and scenes are first extracted from the video on the server side. Subsequently, this study uses annotations along with ConceptNet to establish the underlying framework. Detailed metadata, including visual objects and audio events among the predefined categories, are indexed using the proposed method. Furthermore, relevant online media associated with each category are also analyzed to enrich the existing content. With the above-mentioned information, users can easily browse and search the content according to the link analysis and its complementary knowledge. Experiments on a video dataset are conducted for evaluation. The results show that our system achieves satisfactory performance, demonstrating the feasibility of the proposed idea.
A model for the emergence of cooperation, interdependence, and structure in evolving networks
NASA Astrophysics Data System (ADS)
Jain, Sanjay; Krishna, Sandeep
2001-01-01
Evolution produces complex and structured networks of interacting components in chemical, biological, and social systems. We describe a simple mathematical model for the evolution of an idealized chemical system to study how a network of cooperative molecular species arises and evolves to become more complex and structured. The network is modeled by a directed weighted graph whose positive and negative links represent "catalytic" and "inhibitory" interactions among the molecular species, and which evolves as the least populated species (typically those that go extinct) are replaced by new ones. A small autocatalytic set, appearing by chance, provides the seed for the spontaneous growth of connectivity and cooperation in the graph. A highly structured chemical organization arises inevitably as the autocatalytic set enlarges and percolates through the network in a short analytically determined timescale. This self organization does not require the presence of self-replicating species. The network also exhibits catastrophes over long timescales triggered by the chance elimination of "keystone" species, followed by recoveries.
EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Qingya; Guo, Hanqi; Che, Limei
We present a novel visualization framework, EnsembleGraph, for analyzing ensemble simulation data, in order to help scientists understand behavioral similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views enables users to explore, locate, and compare regions that have similar behaviors, and then investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences on tropospheric ozone, based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies demonstrate the efficiency of our method.
Google matrix of business process management
NASA Astrophysics Data System (ADS)
Abel, M. W.; Shepelyansky, D. L.
2011-12-01
Development of efficient business process models and determination of their characteristic properties are the subject of intense interdisciplinary research. Here, we consider a business process model as a directed graph. Its nodes correspond to the units identified by the modeler, and the link direction indicates the causal dependencies between units. It is of primary interest to obtain the stationary flow on such a directed graph, which corresponds to the steady state of a firm during the business process. Following the ideas developed recently for the World Wide Web, we construct the Google matrix for our business process model and analyze its spectral properties. The importance of nodes is characterized by PageRank and by the recently proposed CheiRank and 2DRank. The results show that this two-dimensional ranking gives significant information about the influence and communication properties of business model units. We argue that the Google matrix method described here provides a new, efficient tool that helps companies decide how to evolve in the exceedingly dynamic global market.
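A minimal sketch of the two rankings on a toy process graph: PageRank by power iteration, and CheiRank as PageRank of the link-inverted graph. The process units and links below are hypothetical, and the damping factor 0.85 is the conventional choice:

```python
def pagerank(adj, alpha=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as
    {node: [nodes it links to]}."""
    nodes = list(adj)
    n = len(nodes)
    r = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - alpha) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:
                share = alpha * r[v] / len(out)
                for w in out:
                    nxt[w] += share
            else:                       # dangling node: spread uniformly
                for w in nodes:
                    nxt[w] += alpha * r[v] / n
        r = nxt
    return r

# Hypothetical business-process units and their causal links.
g = {"intake": ["review"], "review": ["approve", "intake"], "approve": ["intake"]}
pr = pagerank(g)                                               # PageRank
chei = pagerank({v: [u for u in g if v in g[u]] for v in g})   # CheiRank
```

Plotting each unit at (PageRank index, CheiRank index) gives the two-dimensional ranking the abstract refers to.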
Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B
2014-01-01
Although much attention has recently been focused on single-subject functional networks, using methods such as resting-state functional MRI, methods for constructing single-subject structural networks are in their infancy. Single-subject cortical networks aim to describe the self-similarity across the cortical structure, possibly signifying convergent developmental pathways. Previous methods for constructing single-subject cortical networks have used patch-based correlations and distance metrics based on curvature and thickness. We present here a method for constructing similarity-based cortical structural networks that utilizes a rotation-invariant representation of structure. The resulting graph metrics are closely linked to age and indicate an increasing degree of closeness throughout development in nearly all brain regions, perhaps corresponding to a more regular structure as the brain matures. The derived graph metrics demonstrate a four-fold increase in power for detecting age as compared to cortical thickness. This proof of concept study indicates that the proposed metric may be useful in identifying biologically relevant cortical patterns.
Loprinzi, Paul D; Edwards, Meghan
2015-09-01
Emerging work suggests an inverse association between physical activity and erectile dysfunction (ED). The majority of this cross-sectional research comes from convenience samples, and all studies on this topic have employed self-report physical activity methodology. Therefore, the purpose of this brief-report, confirmatory research study was to examine the association between objectively measured physical activity and ED in a national sample of Americans. Data from the 2003-2004 National Health and Nutrition Examination Survey were used. Six hundred ninety-two adults between the ages of 50 and 85 years (representing 33.2 million adults) constituted the analytic sample. Participants wore an ActiGraph 7164 accelerometer (ActiGraph, Pensacola, FL, USA) for up to 7 days, with ED assessed via self-report. The main outcome measure was ED assessed via self-report. After adjustments, for every 30 min/day increase in moderate-to-vigorous physical activity, participants had 43% reduced odds of having ED (adjusted odds ratio = 0.57; 95% confidence interval: 0.40-0.81; P = 0.004). This confirmatory study employing an objective measure of physical activity in a national sample suggests an inverse association between physical activity and ED. © 2015 International Society for Sexual Medicine.
ERIC Educational Resources Information Center
Yoder, Sharon K.
This book discusses four kinds of graphs that are taught in mathematics at the middle school level: pictographs, bar graphs, line graphs, and circle graphs. The chapters on each of these types of graphs contain information such as starting, scaling, drawing, labeling, and finishing the graphs using "LogoWriter." The final chapter of the…
NASA Technical Reports Server (NTRS)
Sung, C.-M.; Singer, R. B.; Parkin, K. M.; Burns, R. G.; Osborne, M.
1977-01-01
Results are reported of Fe(++) crystal field spectral measurements for olivines and pyroxenes up to 400 C. The results are correlated with crystal structure data at elevated temperatures, and the validity of remote-sensed identifications of minerals on hot surfaces of the moon and Mercury is assessed. Two techniques were used to obtain spectra of minerals at elevated temperatures using a spectrophotometer. One employed a diamond cell assembly or a specially designed sample holder to measure polarized absorption spectra of heated single crystals. For the other technique, a sample holder was designed to attach to a diffuse reflectance accessory to produce reflectance spectra of heated powdered samples. Polarized absorption spectra of forsterite at 20-400 C are shown in a graph. Other graphs show the temperature dependence of Fe(++) crystal field bands in olivines, the diffuse reflectance spectra of olivine at 40-400 C, the polarization absorption spectra of orthopyroxene at 30-400 C, the diffuse reflectance spectra of pigeonite at 40-400 C, and unpolarized absorption spectra of lunar pyroxene from Apollo 15 rock 15058.
Object segmentation using graph cuts and active contours in a pyramidal framework
NASA Astrophysics Data System (ADS)
Subudhi, Priyambada; Mukhopadhyay, Susanta
2018-03-01
Graph cuts and active contours are two very popular interactive object segmentation techniques in the fields of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently, giving globally optimal segmentation results, for smaller images; for larger images, huge graphs must be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, on the other hand, the selection of the initial contour plays an important role in the accuracy of the segmentation, so a proper initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome their drawbacks and develop a fast object segmentation technique. We use a pyramidal framework and apply the mincut/maxflow algorithm on the lowest-resolution image with the fewest seed points possible, which is very fast due to the small size of the image. The obtained segmentation contour is then super-sampled and used as the initial contour for the next higher-resolution image. As this initial contour is very close to the actual contour, fewer iterations are required for convergence. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory efficient than either graph cut or active contour segmentation alone.
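The coarse-to-fine flow can be sketched without a real min-cut solver. Below, a toy pyramid is built by 2x2 averaging; a simple threshold stands in for the min-cut at the coarsest level (an assumption, not the paper's method), and the resulting mask is upsampled to initialize the next level:

```python
def downsample(img):
    """Halve resolution by averaging 2x2 blocks (one pyramid level)."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def upsample(mask):
    """Nearest-neighbor upsampling of a binary mask to the next level."""
    return [[v for v in row for _ in (0, 1)] for row in mask for _ in (0, 1)]

# 4x4 toy image: bright object in the top-left quadrant.
img = [[9, 9, 1, 1],
       [9, 9, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
coarse = downsample(img)                       # 2x2 image at coarsest level
# Stand-in for min-cut at the coarsest level: simple thresholding.
mask = [[1 if v > 5 else 0 for v in row] for row in coarse]
init = upsample(mask)                          # initial contour for finer level
```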
Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.
Peng, Yong; Lu, Bao-Liang; Wang, Suhang
2015-05-01
Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among the existing graph-based learning models, low-rank representation (LRR) is a very competitive one, which has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, where the edge weights are calculated based on the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of the data, which has been shown to be beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representations. MLRR explicitly takes the local manifold structure of the data into consideration, identified via the geometric sparsity idea; specifically, the local tangent space of each data point is sought by solving a sparse representation objective. The graph depicting the relationships among data points can then be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines the global information emphasized by the low-rank property with the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR is an excellent method in comparison with several state-of-the-art graph construction approaches. Copyright © 2015 Elsevier Ltd. All rights reserved.
An algorithm for finding a similar subgraph of all Hamiltonian cycles
NASA Astrophysics Data System (ADS)
Wafdan, R.; Ihsan, M.; Suhaimi, D.
2018-01-01
This paper discusses an algorithm, called the findSimSubG algorithm, for finding a similar subgraph. A similar subgraph is a subgraph with a maximum number of edges that contains no isolated vertex and is contained in every Hamiltonian cycle of a Hamiltonian graph. The algorithm runs only on Hamiltonian graphs with at least two Hamiltonian cycles. It works by examining whether the initial subgraph of the first Hamiltonian cycle is a subgraph of the comparison graphs; if not, the algorithm removes the edges and vertices of the initial subgraph that are not in the comparison graphs. There are two main processes in the algorithm: changing a Hamiltonian cycle into a cycle graph, and removing the edges and vertices of the initial subgraph that are not in the comparison graphs. The findSimSubG algorithm finds the similar subgraph without using a backtracking method. The similar subgraph does not exist for certain graphs, such as the n-antiprism graph, complete bipartite graph, complete graph, 2n-crossed prism graph, n-crown graph, n-Möbius ladder, prism graph, and wheel graph. The complexity of this algorithm is O(m|V|), where m is the number of Hamiltonian cycles and |V| is the number of vertices of the Hamiltonian graph.
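Conceptually, the similar subgraph is the set of edges shared by every Hamiltonian cycle. The paper's algorithm avoids enumerating the intersection naively; the sketch below just illustrates the definition, assuming the Hamiltonian cycles are already given as vertex lists:

```python
def cycle_edges(cycle):
    """Undirected edge set of a Hamiltonian cycle given as a vertex list."""
    return {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
            for i in range(len(cycle))}

def similar_subgraph(cycles):
    """Edges contained in every given Hamiltonian cycle of the graph."""
    common = cycle_edges(cycles[0])
    for c in cycles[1:]:
        common &= cycle_edges(c)
    return common

# Hypothetical graph with two Hamiltonian cycles.
cycles = [[1, 2, 3, 4], [1, 2, 4, 3]]
shared = similar_subgraph(cycles)   # edges (1,2) and (3,4) appear in both
```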
DOE Office of Scientific and Technical Information (OSTI.GOV)
WESTRICH, HENRY; WILSON, ANDREW; STANTON, ERIC
LDRDView is a software tool for visualizing a collection of textual records and exploring relationships between them for the purpose of gaining new insights about the submitted information. By evaluating the content of the records and assigning coordinates to each based on its similarity to others, LDRDView graphically displays a corpus of records either as a landscape of hills and valleys or as a graph of nodes and links. A suite of data analysis tools facilitates in-depth exploration of the corpus as a whole and the content of each individual record.
NASA Astrophysics Data System (ADS)
Fessel, Adrian; Oettmeier, Christina; Bernitt, Erik; Gauthier, Nils C.; Döbereiner, Hans-Günther
2012-08-01
We study the formation of transportation networks of the true slime mold Physarum polycephalum after fragmentation by shear. Small fragments, called microplasmodia, fuse to form macroplasmodia in a percolation transition. At this topological phase transition, one single giant component forms, connecting most of the previously isolated microplasmodia. Employing the configuration model of graph theory for small link degree, we have found analytically an exact solution for the phase transition. It is generally applicable to percolation as seen, e.g., in vascular networks.
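The configuration-model picture used above can be sketched numerically: pair half-edge "stubs" at random according to a degree sequence, then measure the largest connected component. The degree sequence and size below are assumptions chosen to sit above the percolation threshold:

```python
import random
from collections import defaultdict

def configuration_model(degrees, seed=0):
    """Pair up half-edge 'stubs' uniformly at random (self-loops and
    multi-edges are possible, as in the standard configuration model)."""
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

def giant_component(n, edges):
    """Size of the largest connected component (depth-first search)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, best = set(), 0
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop(); comp += 1
            for w in adj[u] - seen:
                seen.add(w); stack.append(w)
        best = max(best, comp)
    return best

n = 200
degs = [3] * n    # mean degree 3: well above the percolation threshold
g = giant_component(n, configuration_model(degs))
```

Repeating this for decreasing mean degree would show the single giant component disappearing at the percolation transition.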
Mathematical foundations of the GraphBLAS
Kepner, Jeremy; Aaltonen, Peter; Bader, David; ...
2016-12-01
The GraphBLAS standard (GraphBlas.org) is being developed to bring the potential of matrix-based graph algorithms to the broadest possible audience. Mathematically, the GraphBLAS defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This study provides an introduction to the mathematics of the GraphBLAS. Graphs represent connections between vertices with edges. Matrices can represent a wide range of graphs using adjacency matrices or incidence matrices. Adjacency matrices are often easier to analyze, while incidence matrices are often better for representing data. Fortunately, the two are easily connected by matrix multiplication. A key feature of matrix mathematics is that a very small number of matrix operations can be used to manipulate a very wide range of graphs. This composability of a small number of operations is the foundation of the GraphBLAS. A standard such as the GraphBLAS can only be effective if it has low performance overhead. Finally, performance measurements of prototype GraphBLAS implementations indicate that the overhead is low.
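The core GraphBLAS pattern, a graph traversal expressed as matrix-vector products over a semiring, can be sketched in plain Python. Breadth-first search below uses the Boolean (OR, AND) semiring; a real GraphBLAS library would provide these operations natively:

```python
def bfs_levels(adj, source):
    """Breadth-first search expressed as repeated matrix-vector products
    over the Boolean (OR, AND) semiring, the core GraphBLAS pattern."""
    n = len(adj)
    frontier = [i == source for i in range(n)]
    visited = frontier[:]
    levels = {source: 0}
    level = 0
    while any(frontier):
        level += 1
        # y = A^T x over (OR, AND): reach any unvisited neighbor
        nxt = [not visited[j] and
               any(adj[i][j] and frontier[i] for i in range(n))
               for j in range(n)]
        for j, hit in enumerate(nxt):
            if hit:
                visited[j] = True
                levels[j] = level
        frontier = nxt
    return levels

# Adjacency matrix of a 4-node undirected path 0-1-2-3.
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
lv = bfs_levels(A, 0)   # {0: 0, 1: 1, 2: 2, 3: 3}
```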
2012-01-01
Background: Exposure to environmental tobacco smoke (ETS) leads to higher rates of pulmonary diseases and infections in children. To study the biochemical changes that may precede lung diseases, metabolomic effects on fetal and maternal lungs and plasma from rats exposed to ETS were compared to filtered-air control animals. Genome-reconstructed metabolic pathways may be used to map and interpret dysregulation in metabolic networks. However, mass spectrometry-based non-targeted metabolomics datasets often comprise many metabolites for which links to enzymatic reactions have not yet been reported. Hence, network visualizations that rely on current biochemical databases are incomplete and also fail to visualize novel, structurally unidentified metabolites. Results: We present a novel approach to integrate biochemical pathway and chemical relationships to map all detected metabolites in network graphs (MetaMapp), using the KEGG reactant pair database, Tanimoto chemical similarity scores, and NIST mass spectral similarity scores. In fetal and maternal lungs, and in maternal blood plasma from pregnant rats exposed to environmental tobacco smoke (ETS), 459 unique metabolites comprising 179 structurally identified compounds were detected by gas chromatography time-of-flight mass spectrometry (GC-TOF MS) and BinBase data processing. MetaMapp graphs in Cytoscape showed much clearer metabolic modularity and complete content visualization compared to conventional biochemical mapping approaches. Cytoscape visualization of differential statistics results using these graphs showed that, overall, fetal lung metabolism was more impaired than lung and blood metabolism in the dams. Fetuses from ETS-exposed dams expressed lower lipid and nucleotide levels and higher amounts of energy metabolism intermediates than control animals, indicating lower biosynthetic rates of the metabolites for cell division, structural proteins, and lipids that are critical for lung development.
Conclusions: MetaMapp efficiently visualizes mass spectrometry-based metabolomics datasets as network graphs in Cytoscape, and highlights metabolic alterations that can be associated with the higher rate of pulmonary diseases and infections in children prenatally exposed to ETS. The MetaMapp scripts can be accessed at http://metamapp.fiehnlab.ucdavis.edu. PMID:22591066
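The chemical-similarity edges MetaMapp adds can be sketched with the Tanimoto (Jaccard) coefficient on feature sets. The metabolite feature sets and the 0.3 edge threshold below are illustrative assumptions, not the paper's fingerprints or cutoff:

```python
def tanimoto(fp1, fp2):
    """Tanimoto (Jaccard) similarity between two molecular fingerprints
    represented as sets of 'on' bits / structural features."""
    inter = len(fp1 & fp2)
    union = len(fp1 | fp2)
    return inter / union if union else 0.0

fps = {  # hypothetical feature sets for three metabolites
    "glucose":  {"OH", "CHO", "ring6", "C6"},
    "fructose": {"OH", "C=O", "ring5", "C6"},
    "citrate":  {"OH", "COOH", "C6"},
}
# Draw a chemical-similarity edge when Tanimoto >= 0.3 (threshold assumed).
names = list(fps)
edges = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
         if tanimoto(fps[a], fps[b]) >= 0.3]
```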
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2011-01-01
Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle's destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm. The scalability of CGR to very large networks remains a research topic.
The information carried by CGR contact plan messages is useful not only for dynamic route computation, but also for the implementation of rate control, congestion forecasting, transmission episode initiation and termination, timeout interval computation, and retransmission timer suspension and resumption.
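The heart of contact-plan routing is an earliest-arrival search over timed contacts. The sketch below is a drastic simplification of CGR (it ignores one-way light time, queue lengths, priorities, and expiration) over a hypothetical contact plan:

```python
import heapq

def earliest_arrival(contacts, source, dest, t0=0):
    """Earliest-arrival route search over a contact plan. Each contact is
    (sender, receiver, start, end); transmission is only possible while a
    contact is open. Light time and queueing delays are ignored here."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue                      # stale heap entry
        for snd, rcv, start, end in contacts:
            if snd != node or t > end:
                continue
            arrive = max(t, start)        # wait for the contact to open
            if arrive <= end and arrive < best.get(rcv, float("inf")):
                best[rcv] = arrive
                heapq.heappush(heap, (arrive, rcv))
    return None                           # unreachable within the plan

# Hypothetical contact plan: lander -> orbiter -> ground station.
plan = [("lander", "orbiter", 10, 20),
        ("orbiter", "ground", 30, 40),
        ("lander", "ground", 100, 110)]   # rare direct window
t = earliest_arrival(plan, "lander", "ground")   # relay path arrives at 30
```

Starting the search at t0=50 instead would miss the relay contacts and fall back to the direct window at t=100, illustrating why routes depend on when each bundle is considered.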
Linked Metadata - lightweight semantics for data integration (Invited)
NASA Astrophysics Data System (ADS)
Hendler, J. A.
2013-12-01
The "Linked Open Data" cloud (http://linkeddata.org) is currently used to show how the linking of datasets, supported by SPARQL endpoints, is creating a growing set of linked data assets. This linked data space has been growing rapidly, and the last version collected is estimated to contain over 35 billion triples. As impressive as this may sound, there is an inherent flaw in the way the linked data story is conceived. The idea is that all of the data is represented in a linked format (generally RDF) and applications will essentially query this cloud and provide mashup capabilities between the various kinds of data that are found. The view of linking in the cloud is fairly simple: links are provided either by shared URIs or by URIs that are asserted to be owl:sameAs. This view of linking, which primarily focuses on shared objects and subjects in RDF's subject-predicate-object representation, misses a critical aspect of Semantic Web technology. Given triples such as (A:person1 foaf:knows A:person2), (B:person3 foaf:knows B:person4), and (C:person5 foaf:name 'John Doe'), this view would not consider them linked (barring other assertions) even though they share a common vocabulary. In fact, we get significant clues that there are commonalities in these data items from the shared namespaces and predicates, even if the traditional 'graph' view of RDF doesn't appear to join on these. Thus, it is the linking of the data descriptions, whether as metadata or other vocabularies, that provides the linking in these cases. This observation is crucial to scientific data integration, where the size of the datasets, or even the individual relationships within them, can be quite large. (Note that this is not restricted to scientific data: search engines, social networks, and massive multiuser games also create huge amounts of data.) Converting all the data into RDF triples and providing individual links is often unnecessary, and is both time and space intensive.
Those looking to do on-the-fly integration may prefer to run more traditional data queries and then convert and link the 'views' returned at retrieval time, providing another means of using the linked data infrastructure without having to convert whole datasets to triples to provide linking. Web companies have been taking advantage of 'lightweight' semantic metadata for search quality and optimization (cf. schema.org), for linking networks within and across web sites (cf. Facebook's Open Graph Protocol), and for various kinds of advertisement and user modeling across datasets. Scientific metadata, on the other hand, has traditionally been geared toward being large-scale and highly descriptive, and scientific ontologies have been aimed at high expressivity, essentially providing complex reasoning services rather than the less expressive vocabularies needed for data discovery and for simple mappings that can guide humans (or more complex systems) when full-scale integration is needed. Although this work is just the beginning for providing integration, as the community creates more and more datasets, discovery of these data resources on the Web becomes a crucial starting place. Simple descriptors that can be combined with textual fields and/or common community vocabularies can be a great starting place for bringing scientific data into the Web of Data that is growing in other communities. References: [1] Pouchard, Line C., et al. "A Linked Science investigation: enhancing climate change data discovery with semantic technologies." Earth Science Informatics 6.3 (2013): 175-185.
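The point about linking through shared descriptions rather than shared URIs can be sketched in a few lines. The triples below are the hypothetical ones from the abstract's example; grouping by predicate surfaces the commonality that a strict subject/object join would miss.

```python
from collections import defaultdict

# Hypothetical triples (namespaces A:, B:, C: as in the abstract).
triples = [
    ("A:person1", "foaf:knows", "A:person2"),
    ("B:person3", "foaf:knows", "B:person4"),
    ("C:person5", "foaf:name", "John Doe"),
]

# Index by predicate: the linking lives in the shared vocabulary.
by_predicate = defaultdict(list)
for s, p, o in triples:
    by_predicate[p].append((s, o))

# The first two triples join on the shared foaf:knows predicate even
# though they share no subject or object URI.
print(sorted(by_predicate))             # -> ['foaf:knows', 'foaf:name']
print(len(by_predicate["foaf:knows"]))  # -> 2
```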
ERIC Educational Resources Information Center
Schwartz, Richard
1992-01-01
Suggests that teachers use mathematics problems related to the "1992 World Population Data Sheet" to teach students about such population-related issues as hunger, resource scarcity, poverty, and pollution. Offers sample problems involving percents, ratios, basic calculations, sequences, variability, graphs, averages, and correlation. Includes a…
What Mathematical Competencies Are Needed for Success in College.
ERIC Educational Resources Information Center
Garofalo, Joe
1990-01-01
Identifies requisite math skills for a microeconomics course, offering samples of supply curves, demand curves, equilibrium prices, elasticity, and complex graph problems. Recommends developmental mathematics competencies, including problem solving, reasoning, connections, communication, number and operation sense, algebra, relationships,…
1990-01-09
data structures can easily be presented to the user interface. An emphasis of the Graph Browser was the realization of graph views and graph animation ... animation of the graph. Animation of the graph includes changing node shapes, changing node and arc colors, changing node and arc text, and making ... many graphs tend to be tree-like. Animation of a graph is a useful feature. One of the primary goals of GMB was to support animated graphs. For animation
ERIC Educational Resources Information Center
Phage, Itumeleng B.; Lemmer, Miriam; Hitge, Mariette
2017-01-01
Students' graph comprehension may be affected by the background of the students who are the readers or interpreters of the graph, their knowledge of the context in which the graph is set, and the inferential processes required by the graph operation. This research study investigated these aspects of graph comprehension for 152 first year…
NASA Astrophysics Data System (ADS)
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models have a need for a quick easy to use visualization tools to permit them to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy model result visualization. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Often times a given project will generate several thousands or even hundreds of thousands simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to try to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global samples statistics table (e.g. sum of squares error, sum of absolute differences etc.), top ten simulations table and graphs, graphs of an individual simulation using time step data, objective based dotty plots, threshold based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger) and 2D error surface graphs of the parameter space. IHM is ideal for the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time saving and money saving alternative to spending time producing graphs or conducting analysis that may not be informative or being forced to purchase or use expensive and proprietary software. IHM is a simple, free, method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
Llamas-Covarrubias, Mara Anaís; Valle, Yeminia; Navarro-Hernández, Rosa Elena; Guzmán-Guzmán, Iris Paola; Ramírez-Dueñas, María Guadalupe; Rangel-Villalobos, Héctor; Estrada-Chávez, Ciro; Muñoz-Valle, José Francisco
2012-08-01
Rheumatoid arthritis (RA) is an inflammatory autoimmune disease of unknown etiology. Many cytokines have been found to be associated with RA pathogenesis, among them macrophage migration inhibitory factor (MIF). The aim of this study was to determine whether MIF serum levels are associated with RA course, clinical activity, and clinical biomarkers of the disease. MIF levels were determined in serum samples of 54 RA patients and 78 healthy subjects (HS) by enzyme-linked immunosorbent assay (ELISA). Disease activity was evaluated using the DAS28 score. Patients were subgrouped according to disease activity and years of evolution of disease. Statistical analysis was carried out with SPSS 10.0 and GraphPad Prism 5 software. RA patients presented increased levels of MIF compared to HS. MIF levels were raised in early stages of RA and tended to decrease with years of evolution. Moreover, MIF levels positively correlated with rheumatoid factor in RA patients and with C-reactive protein in all individuals studied. Our findings suggest that MIF plays a role in early stages of RA.
Comparison and Enumeration of Chemical Graphs
Akutsu, Tatsuya; Nagamochi, Hiroshi
2013-01-01
Chemical compounds are usually represented as graph structured data in computers. In this review article, we overview several graph classes relevant to chemical compounds and the computational complexities of several fundamental problems for these graph classes. In particular, we consider the following problems: determining whether two chemical graphs are identical, determining whether one input chemical graph is a part of the other input chemical graph, finding a maximum common part of two input graphs, finding a reaction atom mapping, enumerating possible chemical graphs, and enumerating stereoisomers. We also discuss the relationship between the fifth problem and kernel functions for chemical compounds. PMID:24688697
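The first problem listed above (deciding whether two chemical graphs are identical) reduces to labeled graph isomorphism. A brute-force sketch for tiny graphs follows; it is an illustration of the problem statement, not one of the efficient algorithms the review surveys, and the ethanol example is our own.

```python
from itertools import permutations

def isomorphic(nodes1, edges1, nodes2, edges2):
    """nodes: {node: atom_label}; edges: set of frozensets {u, v}.
    Try every vertex bijection; feasible only for very small graphs."""
    if sorted(nodes1.values()) != sorted(nodes2.values()):
        return False  # atom multisets differ, cannot be identical
    ns1, ns2 = list(nodes1), list(nodes2)
    for perm in permutations(ns2):
        mapping = dict(zip(ns1, perm))
        if any(nodes1[n] != nodes2[mapping[n]] for n in ns1):
            continue  # atom labels must match under the mapping
        if {frozenset(mapping[v] for v in e) for e in edges1} == edges2:
            return True
    return False

# Ethanol heavy-atom skeleton written with two different atom orderings.
g1 = ({0: "C", 1: "C", 2: "O"}, {frozenset({0, 1}), frozenset({1, 2})})
g2 = ({0: "O", 1: "C", 2: "C"}, {frozenset({0, 1}), frozenset({1, 2})})
print(isomorphic(*g1, *g2))  # -> True
```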
Mean square cordial labelling related to some acyclic graphs and its rough approximations
NASA Astrophysics Data System (ADS)
Dhanalakshmi, S.; Parvathi, N.
2018-04-01
In this paper we show that the path Pn, the comb graph Pn⊙K1, the n-centipede graph, the centipede graph (n, 2), and the star Sn admit mean square cordial labelings. We also prove that the induced subgraph obtained from the upper approximation of any subgraph H of the above acyclic graphs admits a mean square cordial labeling.
Relating zeta functions of discrete and quantum graphs
NASA Astrophysics Data System (ADS)
Harrison, Jonathan; Weyand, Tracy
2018-02-01
We write the spectral zeta function of the Laplace operator on an equilateral metric graph in terms of the spectral zeta function of the normalized Laplace operator on the corresponding discrete graph. To do this, we apply a relation between the spectrum of the Laplacian on a discrete graph and that of the Laplacian on an equilateral metric graph. As a by-product, we determine how the multiplicity of eigenvalues of the quantum graph, that are also in the spectrum of the graph with Dirichlet conditions at the vertices, depends on the graph geometry. Finally we apply the result to calculate the vacuum energy and spectral determinant of a complete bipartite graph and compare our results with those for a star graph, a graph in which all vertices are connected to a central vertex by a single edge.
Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng
2015-12-01
Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that existing methods cannot adequately discover the topics hidden in graph-structured data, which could be beneficial for both unsupervised and supervised learning on graphs. Although topic models have proved to be very successful in discovering latent topics, standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms latent Dirichlet allocation on classification when the unveiled topics of the two models are used to represent graphs.
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
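The release pattern described above (learn parameters from the original graph, perturb them with calibrated noise, publish) can be sketched with a simpler statistic. Here a degree histogram stands in for the dK degree correlations, the Laplace sampler is hand-rolled from the inverse CDF, and the sensitivity bound is a crude illustrative assumption, not the paper's smooth-sensitivity calibration.

```python
import math
import random
from collections import Counter

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_degree_hist(edges, epsilon):
    """Release a noisy degree histogram of an undirected edge list."""
    degrees = Counter()
    for u, v in edges:
        degrees[u] += 1
        degrees[v] += 1
    hist = Counter(degrees.values())
    # Removing one edge changes two degrees, hence at most 4 histogram
    # cells; we use 4/epsilon as a crude sensitivity-based noise scale.
    scale = 4.0 / epsilon
    return {d: c + laplace_noise(scale) for d, c in hist.items()}

random.seed(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
noisy = private_degree_hist(edges, epsilon=1.0)
print(sorted(noisy))  # -> [2, 3]  (degree values present in the graph)
```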
A general method for computing Tutte polynomials of self-similar graphs
NASA Astrophysics Data System (ADS)
Gong, Helin; Jin, Xian'an
2017-10-01
Self-similar graphs have been widely studied in both combinatorics and statistical physics. Motivated by the construction of the well-known 3-dimensional Sierpiński gasket graphs, in this paper we introduce a family of recursively constructed self-similar graphs whose inner duals also have the self-similar property. By combining the dual property of the Tutte polynomial with a subgraph-decomposition trick, we show that the Tutte polynomial of this family of graphs can be computed iteratively, and in particular we derive the exact formula for the number of their spanning trees. Furthermore, we show that our method is a general one that easily extends to the computation of Tutte polynomials for other families of self-similar graphs such as Farey graphs, 2-dimensional Sierpiński gasket graphs, Hanoi graphs, modified Koch graphs, Apollonian graphs, the pseudofractal scale-free web, the fractal scale-free network, etc.
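For context, the Tutte polynomial the authors compute satisfies the classical deletion-contraction recurrence: T(G) = x·T(G/e) if e is a bridge, y·T(G−e) if e is a loop, and T(G−e) + T(G/e) otherwise. A naive exponential-time sketch makes this concrete; the paper's iterative method exists precisely because this recursion does not scale.

```python
from collections import Counter

def tutte(nodes, edges):
    """Tutte polynomial as {(i, j): coeff}, meaning sum c * x^i * y^j.
    edges is a list of (u, v) pairs; parallel edges and loops allowed."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:  # loop: multiply by y
        return {(i, j + 1): c for (i, j), c in tutte(nodes, rest).items()}
    if is_bridge(nodes, edges, (u, v)):  # bridge: multiply by x
        return {(i + 1, j): c
                for (i, j), c in tutte(*contract(nodes, rest, u, v)).items()}
    total = Counter(tutte(nodes, rest))          # deletion term
    for k, c in tutte(*contract(nodes, rest, u, v)).items():
        total[k] += c                            # contraction term
    return dict(total)

def contract(nodes, edges, u, v):
    """Merge v into u, keeping any resulting loops and parallel edges."""
    ren = lambda w: u if w == v else w
    return [n for n in nodes if n != v], [(ren(a), ren(b)) for a, b in edges]

def is_bridge(nodes, edges, e):
    """e = (u, v) is a bridge iff removing it disconnects u from v."""
    rest = list(edges)
    rest.remove(e)
    seen, stack = {e[0]}, [e[0]]
    while stack:
        n = stack.pop()
        for a, b in rest:
            for nxt in ((b,) if a == n else (a,) if b == n else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return e[1] not in seen

triangle = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
print(tutte(*triangle))  # -> {(2, 0): 1, (1, 0): 1, (0, 1): 1}, i.e. x^2 + x + y
```

Evaluating at x = y = 1 gives the number of spanning trees (3 for the triangle), the quantity whose exact formula the paper derives for its self-similar families.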
Bipartite separability and nonlocal quantum operations on graphs
NASA Astrophysics Data System (ADS)
Dutta, Supriyo; Adhikari, Bibhas; Banerjee, Subhashish; Srikanth, R.
2016-07-01
In this paper we consider the separability problem for bipartite quantum states arising from graphs. Earlier it was proved that the degree criterion is the graph-theoretic counterpart of the familiar positive partial transpose criterion for separability, although there are entangled states with positive partial transpose for which the degree criterion fails. Here we introduce the concept of partially symmetric graphs and degree symmetric graphs by using the well-known concept of partial transposition of a graph and degree criteria, respectively. Thus, we provide classes of bipartite separable states of dimension m ×n arising from partially symmetric graphs. We identify partially asymmetric graphs that lack the property of partial symmetry. We develop a combinatorial procedure to create a partially asymmetric graph from a given partially symmetric graph. We show that this combinatorial operation can act as an entanglement generator for mixed states arising from partially symmetric graphs.
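The partial transpose of a graph and the resulting notion of partial symmetry can be illustrated on toy graphs. Both examples below are hypothetical, and comparing per-vertex degrees before and after transposition is a sketch of the degree criterion, not the paper's full treatment.

```python
from collections import Counter

# Vertices of the bipartite-system graph are pairs (i, j); partially
# transposing an edge {(i, j), (k, l)} yields {(i, l), (k, j)}.
def partial_transpose(edges):
    return {frozenset({(i, l), (k, j)}) for (i, j), (k, l) in map(tuple, edges)}

def degree_map(edges):
    deg = Counter()
    for e in edges:
        for v in e:
            deg[v] += 1
    return deg

# Partially symmetric: the edge set is invariant under partial transpose.
sym = {frozenset({(0, 0), (1, 1)}), frozenset({(0, 1), (1, 0)})}
# Partially asymmetric: transposition shifts degree onto other vertices.
asym = {frozenset({(0, 0), (1, 1)}), frozenset({(0, 1), (1, 1)})}

print(degree_map(partial_transpose(sym)) == degree_map(sym))    # -> True
print(degree_map(partial_transpose(asym)) == degree_map(asym))  # -> False
```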
On the local edge antimagicness of m-splitting graphs
NASA Astrophysics Data System (ADS)
Albirri, E. R.; Dafik; Slamin; Agustin, I. H.; Alfarisi, R.
2018-04-01
Let G be a connected and simple graph. A splitting graph of G is derived by adding, for every vertex v of G, a new vertex v′ that is adjacent to every neighbor of v in G. An m-splitting graph, denoted mSpl(G), has m such v′-vertices for each vertex of G. A local edge antimagic coloring of a graph G = (V, E) is a bijection f : V(G) → {1, 2, 3, …, |V(G)|} such that any two adjacent edges e₁ and e₂ satisfy w(e₁) ≠ w(e₂), where the weight of an edge e = uv is w(e) = f(u) + f(v).
Survey of Approaches to Generate Realistic Synthetic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Powers, Sarah S
A graph is a flexible data structure that can represent relationships between entities. As with other data analysis tasks, the use of realistic graphs is critical to obtaining valid research results. Unfortunately, using actual ("real-world") graphs for research and new algorithm development is difficult due to the presence of sensitive information in the data or due to the scale of data. This results in practitioners developing algorithms and systems that employ synthetic graphs instead of real-world graphs. Generating realistic synthetic graphs that provide reliable statistical confidence to algorithmic analysis and system evaluation involves addressing technical hurdles in a broad set of areas. This report surveys the state of the art in approaches to generate realistic graphs that are derived from fitted graph models on real-world graphs.
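One generator family such surveys typically cover is preferential attachment, which reproduces the heavy-tailed degree distributions common in real-world graphs. A minimal stdlib sketch follows; the parameters are illustrative and this is a simplified Barabási–Albert-style variant, not a specific model from the report.

```python
import random

def preferential_attachment(n, m, seed=None):
    """Grow an n-node graph: each new node links to m distinct existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = []
    weighted = []  # each node appears here once per unit of degree
    for new in range(m, n):
        if not weighted:
            chosen = set(range(m))  # first arrival links to all seed nodes
        else:
            chosen = set()
            while len(chosen) < m:
                chosen.add(rng.choice(weighted))  # degree-biased pick
        for t in chosen:
            edges.append((new, t))
            weighted += [new, t]
    return edges

edges = preferential_attachment(200, 2, seed=42)
print(len(edges))  # -> 396, i.e. m * (n - m) links
```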
Apparatuses and Methods for Producing Runtime Architectures of Computer Program Modules
NASA Technical Reports Server (NTRS)
Abi-Antoun, Marwan Elia (Inventor); Aldrich, Jonathan Erik (Inventor)
2013-01-01
Apparatuses and methods for producing run-time architectures of computer program modules. One embodiment includes creating an abstract graph from the computer program module and from containment information corresponding to the computer program module, wherein the abstract graph has nodes including types and objects, and wherein the abstract graph relates an object to a type, and wherein for a specific object the abstract graph relates the specific object to a type containing the specific object; and creating a runtime graph from the abstract graph, wherein the runtime graph is a representation of the true runtime object graph, wherein the runtime graph represents containment information such that, for a specific object, the runtime graph relates the specific object to another object that contains the specific object.
A graph decomposition-based approach for water distribution network optimization
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.
2013-04-01
A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
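The decomposition step described above (splitting the network by the connectivity of its components) can be sketched by removing bridge links and taking connected components, each component becoming a candidate subnetwork. The toy network below is hypothetical, and the quadratic bridge test is written for clarity, not efficiency; the paper's graph-theoretic decomposition is more refined.

```python
def components(nodes, edges):
    """Connected components of an undirected graph, as a list of sets."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        seen.add(s)
        while stack:
            n = stack.pop()
            comp.add(n)
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        comps.append(comp)
    return comps

def bridges(nodes, edges):
    """Links whose removal increases the number of components."""
    return [e for e in edges
            if len(components(nodes, [f for f in edges if f is not e]))
            > len(components(nodes, edges))]

# Hypothetical toy network: two loops joined by a single pipe (3, 4).
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4)]
cut = bridges(nodes, edges)
print(cut)  # -> [(3, 4)]
sub = components(nodes, [e for e in edges if e not in cut])
print(sorted(sorted(c) for c in sub))  # -> [[1, 2, 3], [4, 5, 6]]
```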
Vehicle Technologies Fact of the Week 2015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Stacy C.; Diegel, Susan W.; Moore, Sheila A.
Each week the U.S. Department of Energy's Vehicle Technologies Office (VTO) posts a Fact of the Week on its website: http://www1.eere.energy.gov/vehiclesandfuels/ . These Facts provide statistical information, usually in the form of charts and tables, on vehicle sales, fuel economy, gasoline prices, and other transportation-related trends. Each Fact is a stand-alone page that includes a graph, text explaining the significance of the data, the supporting information on which the graph was based, and the source of the data. A link to the current week's Fact is available on the VTO homepage, but older Facts (back to 2009) are archived and still available at: http://energy.gov/eere/vehicles/current-and-past-years-facts-week. Each Fact of the Week page includes a link to an Excel file containing the data from the Supporting Information section of the page, so that researchers can easily use data from the Fact of the Week in their work. Beginning in August of 2015, a subscription list is available on the DOE website so that those interested can sign up for an email, sent each Monday, that includes the text and graphic from the current week's Fact. This report is a compilation of the Facts that were posted during calendar year 2015. The Facts were created, written and prepared by staff in Oak Ridge National Laboratory's Center for Transportation Analysis.
Knowledge Representation Standards and Interchange Formats for Causal Graphs
NASA Technical Reports Server (NTRS)
Throop, David R.; Malin, Jane T.; Fleming, Land
2005-01-01
In many domains, automated reasoning tools must represent graphs of causally linked events. These include fault-tree analysis, probabilistic risk assessment (PRA), planning, procedures, medical reasoning about disease progression, and functional architectures. Each of these fields has its own requirements for the representation of causation, events, actors and conditions. The representations include ontologies of function and cause, data dictionaries for causal dependency, failure and hazard, and interchange formats between some existing tools. In none of the domains has a generally accepted interchange format emerged. The paper makes progress towards interoperability across the wide range of causal analysis methodologies. We survey existing practice and emerging interchange formats in each of these fields. Setting forth a set of terms and concepts that are broadly shared across the domains, we examine the several ways in which current practice represents them. Some phenomena are difficult to represent or to analyze in several domains. These include mode transitions, reachability analysis, positive and negative feedback loops, conditions correlated but not causally linked and bimodal probability distributions. We work through examples and contrast the differing methods for addressing them. We detail recent work in knowledge interchange formats for causal trees in aerospace analysis applications in early design, safety and reliability. Several examples are discussed, with a particular focus on reachability analysis and mode transitions. We generalize the aerospace analysis work across the several other domains. We also recommend features and capabilities for the next generation of causal knowledge representation standards.
NASA Astrophysics Data System (ADS)
McIntire, John P.; Osesina, O. Isaac; Bartley, Cecilia; Tudoreanu, M. Eduard; Havig, Paul R.; Geiselman, Eric E.
2012-06-01
Ensuring the proper and effective ways to visualize network data is important for many areas of academia, applied sciences, the military, and the public. Fields such as social network analysis, genetics, biochemistry, intelligence, cybersecurity, neural network modeling, transit systems, communications, etc. often deal with large, complex network datasets that can be difficult to interact with, study, and use. There have been surprisingly few human factors performance studies on the relative effectiveness of different graph drawings or network diagram techniques to convey information to a viewer. This is particularly true for weighted networks which include the strength of connections between nodes, not just information about which nodes are linked to other nodes. We describe a human factors study in which participants performed four separate network analysis tasks (finding a direct link between given nodes, finding an interconnected node between given nodes, estimating link strengths, and estimating the most densely interconnected nodes) on two different network visualizations: an adjacency matrix with a heat-map versus a node-link diagram. The results should help shed light on effective methods of visualizing network data for some representative analysis tasks, with the ultimate goal of improving usability and performance for viewers of network data displays.
Community Structure of a Bank-Firm Credit Network in Japan
NASA Astrophysics Data System (ADS)
Iyetomi, Hiroshi; Matsuura, Yuki
2014-03-01
We study the temporal change of community structure in a Japanese credit network formed by banks and listed firms through their financial relations over the last 30 years. The credit connectedness is regarded as a potential source of systemic risk. Our network is a bipartite graph consisting of two species of nodes connected with bidirectional links. The direction of links is identified with that of risk flows, and their weights are relative credit/loan with respect to the targets. In a partial credit network obtained only with the links pointing from firms toward banks, the city banks form one major community in most of the time period, sharing risk when firms go wrong. On the other hand, a partial network only with the links from banks toward firms is decomposed into communities of similar size, each of which has its own city bank, reflecting the main-bank system in Japan. Finally we take overlapping parts of the two community sets to find cores of the risk concentration in the credit network. This work was supported by JSPS KAKENHI Grant Number 22300080.
Coevolution of game and network structure with adjustable linking
NASA Astrophysics Data System (ADS)
Qin, Shao-Meng; Zhang, Guo-Yong; Chen, Yong
2009-12-01
Most papers about evolutionary games on graphs assume a static network structure. However, in the real world, social interaction can change the relationships among people, and changes in social structure in turn affect people's strategies. We build a coevolution model of the prisoner's dilemma game and network structure to study this dynamic interaction in the real world. Differing from other coevolution models, players rewire their network connections according to the density of cooperation and other players' payoffs. We use a parameter α to control the effect of payoff in the process of rewiring. Based on an asynchronous update rule and Monte Carlo simulation, we find that, when players prefer to rewire their links to those who are richer, the temptation can increase the cooperation density.
Supercooperation in evolutionary games on correlated weighted networks.
Buesser, Pierre; Tomassini, Marco
2012-01-01
In this work we study the behavior of classical two-person, two-strategies evolutionary games on a class of weighted networks derived from Barabási-Albert and random scale-free unweighted graphs. Using customary imitative dynamics, our numerical simulation results show that the presence of link weights that are correlated in a particular manner with the degree of the link end points leads to unprecedented levels of cooperation in the whole games' phase space, well above those found for the corresponding unweighted complex networks. We provide intuitive explanations for this favorable behavior by transforming the weighted networks into unweighted ones with particular topological properties. The resulting structures help us to understand why cooperation can thrive and also give ideas as to how such supercooperative networks might be built.