Durand, Patrick; Labarre, Laurent; Meil, Alain; Divo, Jean-Louis; Vandenbrouck, Yves; Viari, Alain; Wojcik, Jérôme
2006-01-01
Background: A large variety of biological data can be represented by graphs. These graphs can be constructed from heterogeneous data coming from genomic and post-genomic technologies, but there is still a need for tools aimed at exploring and analysing such graphs. This paper describes GenoLink, a software platform for the graphical querying and exploration of graphs. Results: GenoLink provides a generic framework for representing and querying data graphs. This framework provides a graph data structure, a graph query engine for retrieving sub-graphs from the entire data graph, and several graphical interfaces to express such queries and to further explore their results. A query consists of a graph pattern with constraints attached to the vertices and edges. A query result is the set of all sub-graphs of the entire data graph that are isomorphic to the pattern and satisfy the constraints. The graph data structure does not rely upon any particular data model but can dynamically accommodate any user-supplied data model. However, for genomic and post-genomic applications, we provide a default data model and several parsers for the most popular data sources. GenoLink does not require any programming skill, since all operations on graphs and the analysis of the results can be carried out graphically through several dedicated graphical interfaces. Conclusion: GenoLink is a generic and interactive tool allowing biologists to graphically explore various sources of information. GenoLink is distributed either as a standalone application or as a component of the Genostar/Iogma platform. Both distributions are free for academic research and teaching purposes and can be requested at academy@genostar.com. A commercial licence can be obtained by for-profit companies at info@genostar.com. See also http://www.genostar.org. PMID:16417636
A Deformable Generic 3D Model of Haptoral Anchor of Monogenean
Teo, Bee Guan; Dhillon, Sarinder Kaur; Lim, Lee Hong Susan
2013-01-01
In this paper, a digital 3D model which allows for visualisation in three dimensions and interactive manipulation is explored as a tool to help us understand the structural morphology and elucidate the functions of morphological structures of fragile microorganisms which defy live studies. We developed a deformable generic 3D model of the haptoral anchor of dactylogyridean monogeneans that can subsequently be deformed into different desired anchor shapes by using a direct-manipulation deformation technique. We used point primitives to construct the rectangular building blocks to develop our deformable 3D model. Point primitives are manually marked on a 2D illustration of an anchor on a Cartesian graph paper and a set of Cartesian coordinates for each point primitive is manually extracted from the graph paper. A Python script is then written in Blender to construct 3D rectangular building blocks based on the Cartesian coordinates. The rectangular building blocks are stacked on top of or by the side of each other following the respective Cartesian coordinates of their point primitives. More point primitives are added at the sites in the 3D model where more structural variations are likely to occur, in order to generate complex anchor structures. We used the Catmull-Clark subdivision surface modifier to smooth the surfaces and edges of the generic 3D model to obtain a smoother and more natural 3D shape, and an antialiasing option to reduce the jagged edges of the 3D model. This deformable generic 3D model can be deformed into different desired 3D anchor shapes through a direct-manipulation deformation technique by aligning the vertices (pilot points) of the newly developed deformable generic 3D model onto the 2D illustrations of the desired shapes and moving the vertices until the desired 3D shapes are formed. In this generic 3D model all the vertices present are deployed for displacement during deformation. PMID:24204903
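The block-building step can be illustrated with a short sketch. This is not the authors' Blender script: the point-primitive coordinates and block dimensions below are invented placeholders, and only the vertex/face construction is shown; inside Blender the resulting lists could be handed to a mesh via bpy's from_pydata before applying the Catmull-Clark modifier.

```python
# Illustrative sketch only: build rectangular "building blocks" from 2D point
# primitives read off a Cartesian graph (hypothetical coordinates).
import numpy as np

def block_from_point(x, y, size=1.0, depth=1.0):
    """Return the 8 vertices and 6 quad faces of a rectangular block whose
    lower corner sits at the 2D point primitive (x, y)."""
    verts = np.array([(x + dx, y + dy, dz)
                      for dz in (0.0, depth)
                      for dy in (0.0, size)
                      for dx in (0.0, size)])
    faces = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]
    return verts, faces

# Point primitives marked on the 2D illustration (made-up values).
point_primitives = [(0, 0), (1, 0), (1, 1), (2, 1)]

all_verts, all_faces = [], []
for px, py in point_primitives:
    verts, faces = block_from_point(px, py)
    offset = len(all_verts)
    all_verts.extend(map(tuple, verts))
    all_faces.extend(tuple(i + offset for i in f) for f in faces)

# Inside Blender these lists could be turned into a mesh object with
# bpy.data.meshes.new(...).from_pydata(all_verts, [], all_faces).
print(len(all_verts), "vertices,", len(all_faces), "faces")
```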
Scaling Limits and Generic Bounds for Exploration Processes
NASA Astrophysics Data System (ADS)
Bermolen, Paola; Jonckheere, Matthieu; Sanders, Jaron
2017-12-01
We consider exploration algorithms of the random sequential adsorption type both for homogeneous random graphs and random geometric graphs based on spatial Poisson processes. At each step, a vertex of the graph becomes active and its neighboring nodes become blocked. Given an initial number of vertices N growing to infinity, we study statistical properties of the proportion of explored (active or blocked) nodes in time using scaling limits. We obtain exact limits for homogeneous graphs and prove an explicit central limit theorem for the final proportion of active nodes, known as the jamming constant, through a diffusion approximation for the exploration process, which can be described as a unidimensional process. We then focus on bounding the trajectories of such exploration processes on random geometric graphs, i.e., random sequential adsorption. As opposed to exploration processes on homogeneous random graphs, these do not allow for such a dimensional reduction. Instead, we derive a fundamental relationship between the number of explored nodes and the discovered volume in the spatial process, and we obtain generic bounds for the fluid limit and jamming constant: bounds that are independent of the dimension of space and the detailed shape of the volume associated with the discovered node. Lastly, using coupling techniques, we give trajectorial interpretations of the generic bounds.
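A minimal simulation makes the exploration process concrete. The sketch below is ours, not the paper's code: it runs the activate/block dynamics on an Erdős–Rényi graph and reports the empirical final proportion of active nodes, i.e. an estimate of the jamming constant; graph size and mean degree are arbitrary illustrative values.

```python
# Random-sequential-adsorption-style exploration on a homogeneous random graph.
import random
import networkx as nx

def explore(graph, seed=0):
    rng = random.Random(seed)
    state = {v: "unexplored" for v in graph}        # unexplored / active / blocked
    unexplored = set(graph)
    while unexplored:
        v = rng.choice(tuple(unexplored))           # a vertex becomes active ...
        state[v] = "active"
        unexplored.discard(v)
        for w in graph[v]:                          # ... and its neighbours are blocked
            if state[w] == "unexplored":
                state[w] = "blocked"
                unexplored.discard(w)
    return sum(s == "active" for s in state.values()) / graph.number_of_nodes()

N, mean_degree = 10_000, 5.0                        # illustrative parameters
g = nx.fast_gnp_random_graph(N, mean_degree / N, seed=1)
print("empirical jamming constant ~", round(explore(g), 3))
```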
Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time
NASA Technical Reports Server (NTRS)
Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan
2012-01-01
Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
MadDM: Computation of dark matter relic abundance
NASA Astrophysics Data System (ADS)
Backović, Mihailo; Kong, Kyoungchul; McCaskey, Mathew
2017-12-01
MadDM computes the dark matter relic abundance and dark matter-nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin-independent and spin-dependent dark matter-nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.
Using graph approach for managing connectivity in integrative landscape modelling
NASA Astrophysics Data System (ADS)
Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger
2013-04-01
In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (connections or elements can be modified). This graph approach allows a better genericity in landscape representation, the management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modellers, and several graph tools are available, such as graph traversal algorithms or graph displays. Graph representation can be managed i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format or ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). The OpenFLUID-landr library has been developed in order i) to be usable without GIS expert skills (common GIS formats can be read and simplified spatial management is provided), ii) to easily develop adapted rules of landscape discretization and graph creation that follow the requirements of spatialized models, and iii) to allow model developers to manage dynamic and complex spatial topology. Graph management in OpenFLUID is illustrated with i) examples of hydrological modelling on complex farmed landscapes and ii) the new implementation of the Geo-MHYDAS tool based on the OpenFLUID-landr library, which discretizes a landscape and creates the graph structure required by the MHYDAS model.
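The two connection types can be pictured with a small conceptual sketch. This is not the OpenFLUID API: it uses networkx, and the spatial-unit names are invented; the point is only that hierarchical (parent/child) and flux (up/downstream) relations can live in the same graph and be extracted as separate views.

```python
# Conceptual sketch of a landscape graph with two kinds of connections.
import networkx as nx

landscape = nx.MultiDiGraph()

# up/downstream connections carry water, mass and energy fluxes
landscape.add_edge("field_1", "ditch_A", kind="downstream")
landscape.add_edge("field_2", "ditch_A", kind="downstream")
landscape.add_edge("ditch_A", "outlet", kind="downstream")
# parent/child connections express the hierarchical decomposition
landscape.add_edge("catchment", "field_1", kind="child")
landscape.add_edge("catchment", "field_2", kind="child")

# Extract the flux view and get a traversal order for a simulation step.
flux_view = nx.DiGraph((u, v) for u, v, d in landscape.edges(data=True)
                       if d["kind"] == "downstream")
print(list(nx.topological_sort(flux_view)))
```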
BioGraph: unsupervised biomedical knowledge discovery via automated hypothesis generation
2011-01-01
We present BioGraph, a data integration and data mining platform for the exploration and discovery of biomedical information. The platform offers prioritizations of putative disease genes, supported by functional hypotheses. We show that BioGraph can retrospectively confirm recently discovered disease genes and identify potential susceptibility genes, outperforming existing technologies, without requiring prior domain knowledge. Additionally, BioGraph allows for generic biomedical applications beyond gene discovery. BioGraph is accessible at http://www.biograph.be. PMID:21696594
The ergodicity landscape of quantum theories
NASA Astrophysics Data System (ADS)
Ho, Wen Wei; Radičević, Đorđe
2018-02-01
This paper is a physicist’s review of the major conceptual issues concerning the problem of spectral universality in quantum systems. Here, we present a unified, graph-based view of all archetypical models of such universality (billiards, particles in random media, interacting spin or fermion systems). We find phenomenological relations between the onset of ergodicity (Gaussian-random delocalization of eigenstates) and the structure of the appropriate graphs, and we construct a heuristic picture of summing trajectories on graphs that describes why a generic interacting system should be ergodic. We also provide an operator-based discussion of quantum chaos and propose criteria to distinguish bases that can usefully diagnose ergodicity. The result of this analysis is a rough but systematic outline of how ergodicity changes across the space of all theories with a given Hilbert space dimension. As a particular example, we study the SYK model and report on the transition from maximal to partial ergodicity as the disorder strength is decreased.
Generic strategies for chemical space exploration.
Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F
2014-01-01
The chemical universe of molecules reachable from a set of start compounds by iterative application of a finite number of reactions is usually so vast that sophisticated and efficient exploration strategies are required to cope with the combinatorial complexity. A stringent analysis of (bio)chemical reaction networks, as approximations of these complex chemical spaces, forms the foundation for the understanding of functional relations in Chemistry and Biology. Graphs and graph rewriting are natural models for molecules and reactions. Borrowing the idea of partial evaluation from functional programming, we introduce partial applications of rewrite rules. A framework for the specification of exploration strategies in graph-rewriting systems is presented. Using key examples of complex reaction networks from carbohydrate chemistry we demonstrate the feasibility of this high-level strategy framework. While being designed for chemical applications, the framework can also be used to emulate higher-level transformation models, as illustrated by a small puzzle game.
Dynamic graph cuts for efficient inference in Markov Random Fields.
Kohli, Pushmeet; Torr, Philip H S
2007-12-01
In this paper we present a fast, new, fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time taken by it is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. It should be noted that the application of our algorithm is not limited to the above problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
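The core reuse idea can be illustrated with a toy warm-started max-flow. This is emphatically not Kohli and Torr's algorithm: it is a plain Edmonds-Karp solver that accepts a previously computed flow as a starting point, which is valid in the simplest case where edge capacities only increase (the general dynamic algorithm also handles decreases). Graph and capacities are made up.

```python
# Toy illustration of flow reuse after a small change in edge weights.
from collections import deque

def max_flow(cap, s, t, flow=None):
    n = len(cap)
    flow = [[0] * n for _ in range(n)] if flow is None else [row[:] for row in flow]

    def augmenting_path():
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        return parent if parent[t] != -1 else None

    value = sum(flow[s][v] for v in range(n))       # flow already carried over
    while True:
        parent = augmenting_path()
        if parent is None:
            break
        v, bottleneck = t, float("inf")
        while v != s:                               # bottleneck along the s-t path
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                               # push flow along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        value += bottleneck
    return value, flow

cap = [[0, 4, 3, 0], [0, 0, 10, 2], [0, 0, 0, 3], [0, 0, 0, 0]]  # tiny example graph
v1, f1 = max_flow(cap, 0, 3)
cap[2][3] += 2                                      # one edge weight increases
v2, _ = max_flow(cap, 0, 3, flow=f1)                # warm start from the previous flow
print(v1, v2)                                       # 5 7
```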
Discrete Mathematical Approaches to Graph-Based Traffic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Cowley, Wendy E.; Hogan, Emilie A.
2014-04-01
Modern cyber defense and analytics require general, formal models of cyber systems. Multi-scale network models are prime candidates for such formalisms, using discrete mathematical methods based on hierarchically structured directed multigraphs which also include rich sets of labels. An exemplar of an application of such an approach is traffic analysis, that is, observing and analyzing connections between clients, servers, hosts, and actors within IP networks, over time, to identify characteristic or suspicious patterns. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. In this paper, we consider traffic analysis of NetFlow using both basic graph statistics and two new mathematical measures involving labeled degree distributions and time interval overlap measures. We do all of this over the VAST test data set of 96M synthetic NetFlow graph edges, against which we can identify characteristic patterns of simulated ground-truth network attacks.
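To make the notion of a labeled degree distribution concrete, here is a small schematic computation on synthetic flow records (not the VAST data set, and not the authors' code): each edge carries a label such as a protocol, and the per-label out-degree of each client is tallied.

```python
# Labeled out-degree distribution from NetFlow-like edges (synthetic records).
from collections import Counter, defaultdict

flows = [  # (source IP, destination IP, label) -- made-up sample records
    ("10.0.0.1", "10.0.0.9", "http"),
    ("10.0.0.1", "10.0.0.9", "http"),
    ("10.0.0.2", "10.0.0.9", "ssh"),
    ("10.0.0.1", "10.0.0.7", "dns"),
]

labeled_out_degree = defaultdict(Counter)
for src, dst, label in flows:
    labeled_out_degree[src][label] += 1     # per-label out-degree of each client

for host, counts in labeled_out_degree.items():
    print(host, dict(counts))
```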
Graph theory and the Virasoro master equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obers, N.A.J.
1991-04-01
A brief history of affine Lie algebra, the Virasoro algebra and its culmination in the Virasoro master equation is given. By studying ansaetze of the master equation, we obtain exact solutions and gain insight into the structure of large slices of affine-Virasoro space. We find an isomorphism between the constructions in the ansatz SO(n)_diag, which is a set of unitary, generically irrational affine-Virasoro constructions on SO(n), and the unlabelled graphs, while, conversely, a group-theoretic and conformal field-theoretic identification is obtained for every graph of graph theory. We also define a class of "magic" Lie group bases in which the Virasoro master equation admits a simple metric ansatz (g_metric), whose structure is visible in the high-level expansion. When a magic basis is real on compact g, the corresponding g_metric is a large system of unitary, generically irrational conformal field theories. Examples in this class include the graph-theory ansatz SO(n)_diag in the Cartesian basis of SO(n), and the ansatz SU(n)_metric in the Pauli-like basis of SU(n). Finally, we define the "sine-area graphs" of SU(n), which label the conformal field theories of SU(n)_metric, and we note that, in similar fashion, each magic basis of g defines a generalized graph theory on g which labels the conformal field theories of g_metric. 24 figs., 4 tabs.
Learning topic models by belief propagation.
Zeng, Jia; Cheung, William K; Liu, Jiming
2013-05-01
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision and computational biology. This paper represents the collapsed LDA as a factor graph, which enables the classic loopy belief propagation (BP) algorithm to be used for approximate inference and parameter estimation. Although the two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document datasets. Furthermore, the BP algorithm has the potential to become a generic scheme for learning variants of LDA-based topic models in the collapsed space. To this end, we show how to learn two typical variants of LDA-based topic models, namely author-topic models (ATM) and relational topic models (RTM), using BP based on the factor graph representations.
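A schematic message update gives a feel for inference in the collapsed space. This sketch is not the authors' implementation: it is a synchronous update in the spirit of the factor-graph BP described above (it coincides with the zero-order collapsed variational update), and the corpus, number of topics and hyperparameters are toy placeholder values.

```python
# Schematic collapsed-space message updates for a tiny LDA problem.
import numpy as np

docs = [[0, 1, 1, 2], [2, 3, 3, 0]]          # documents as lists of word ids
V, K, alpha, beta = 4, 2, 0.1, 0.01
rng = np.random.default_rng(0)

tokens = [(d, w) for d, doc in enumerate(docs) for w in doc]
mu = rng.dirichlet(np.ones(K), size=len(tokens))   # one topic message per token

for _ in range(50):
    # expected topic counts accumulated from the current messages
    n_dk = np.zeros((len(docs), K))
    n_wk = np.zeros((V, K))
    for (d, w), m in zip(tokens, mu):
        n_dk[d] += m
        n_wk[w] += m
    n_k = n_wk.sum(axis=0)
    new_mu = np.empty_like(mu)
    for i, (d, w) in enumerate(tokens):
        m = mu[i]                                   # exclude the token's own message
        score = (n_dk[d] - m + alpha) * (n_wk[w] - m + beta) / (n_k - m + V * beta)
        new_mu[i] = score / score.sum()
    mu = new_mu

print(np.round(mu[:4], 2))   # topic responsibilities of the first few tokens
```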
Information extraction and knowledge graph construction from geoscience literature
NASA Astrophysics Data System (ADS)
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen
2018-03-01
Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there is limited work on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
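The graph-construction step of such a workflow can be sketched in a few lines. This is a simplified stand-in, not the paper's pipeline: English toy sentences replace segmented Chinese text, and raw bigram co-occurrence counts replace the paper's statistical measure of semantic links.

```python
# Content-words become nodes, bigram links between them become weighted edges.
import networkx as nx

sentences = [
    ["granite", "intrusion", "contact", "metamorphism"],
    ["granite", "intrusion", "copper", "mineralization"],
]
stop_words = {"the", "of", "and"}

kg = nx.Graph()
for words in sentences:
    content = [w for w in words if w not in stop_words]   # remove stop-words
    for a, b in zip(content, content[1:]):                # bigram links
        if kg.has_edge(a, b):
            kg[a][b]["weight"] += 1
        else:
            kg.add_edge(a, b, weight=1)

print(sorted(kg.edges(data="weight")))
```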
NASA Technical Reports Server (NTRS)
Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.
1990-01-01
A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.
Graph theory and the Virasoro master equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obers, N.A.J.
1991-01-01
A brief history of affine Lie algebra, the Virasoro algebra and its culmination in the Virasoro master equation is given. By studying ansaetze of the master equation, the author obtains exact solutions and gains insight into the structure of large slices of affine-Virasoro space. He finds an isomorphism between the constructions in the ansatz SO(n)_diag, which is a set of unitary, generically irrational affine-Virasoro constructions on SO(n), and the unlabeled graphs of order n. On the one hand, the conformal constructions are classified by the graphs, while, conversely, a group-theoretic and conformal field-theoretic identification is obtained for every graph of graph theory. He also defines a class of "magic" Lie group bases in which the Virasoro master equation admits a simple metric ansatz {g_metric}, whose structure is visible in the high-level expansion. When a magic basis is real on compact g, the corresponding g_metric is a large system of unitary, generically irrational conformal field theories. Examples in this class include the graph-theory ansatz SO(n)_diag in the Cartesian basis of SO(n), and the ansatz SU(n)_metric in the Pauli-like basis of SU(n). Finally, he defines the "sine-area graphs" of SU(n), which label the conformal field theories of SU(n)_metric, and he notes that, in similar fashion, each magic basis of g defines a generalized graph theory on g which labels the conformal field theories of g_metric.
A Generic Approach for Pen-Based User Interface Development
NASA Astrophysics Data System (ADS)
Macé, Sébastien; Anquetil, Éric
Pen-based interaction is an intuitive way to produce hand-drawn structured documents, but few applications take advantage of it. Indeed, the interpretation of the user's hand-drawn strokes in the context of the document is a complex problem. In this paper, we propose a new generic approach to develop such systems based on three independent components. The first one is a set of graphical and editing functions adapted to pen interaction. The second one is a rule-based formalism that models structured document composition and the corresponding interpretation process. The last one is a hand-drawn stroke analyzer that is able to interpret strokes progressively, directly while the user is drawing. We highlight in particular the human-computer interaction induced by this progressive interpretation process. Thanks to this generic approach, three pen-based system prototypes have already been developed, for musical score editing, for graph editing, and for UML class diagram editing.
BioJS DAGViewer: A reusable JavaScript component for displaying directed graphs
Micklem, Gos
2014-01-01
Summary: The DAGViewer BioJS component is a reusable JavaScript component made available as part of the BioJS project and intended to be used to display graphs of structured data, with a particular emphasis on Directed Acyclic Graphs (DAGs). It enables users to embed representations of graphs of data, such as ontologies or phylogenetic trees, in hyper-text documents (HTML). This component is generic, since it is capable (given the appropriate configuration) of displaying any kind of data that is organised as a graph. The features of this component which are useful for examining and filtering large and complex graphs are described. Availability: http://github.com/alexkalderimis/dag-viewer-biojs; http://github.com/biojs/biojs; http://dx.doi.org/10.5281/zenodo.8303. PMID:24627804
An integrated aerodynamic/propulsion study for generic aero-space planes based on waverider concepts
NASA Technical Reports Server (NTRS)
Rasmussen, M. L.; Emanuel, George
1989-01-01
The design of a unified aero-space plane based on waverider technology is analyzed. The overall aerodynamic design and performance of an aero-space plane are discussed in terms of the forebody, scramjet, and afterbody. Other subjects considered in the study are combustion/nozzle optimization, the idealized tip-to-tail waverider model, and the two-dimensional minimum length nozzle. Charts and graphs are provided to show the results of the preliminary investigations.
Using graph theory to quantify coarse sediment connectivity in alpine geosystems
NASA Astrophysics Data System (ADS)
Heckmann, Tobias; Thiel, Markus; Schwanghart, Wolfgang; Haas, Florian; Becht, Michael
2010-05-01
Networks are a common object of study in various disciplines. Among others, informatics, sociology, transportation science, economics and ecology frequently deal with objects which are linked with other objects to form a network. Despite this wide thematic range, a coherent formal basis exists to represent, measure and model the relational structure of such systems. The mathematical model for networks of all kinds is a graph, which can be analysed using the tools of mathematical graph theory. In a graph model of a generic system, system components are represented by graph nodes, and the linkages between them are formed by graph edges. The latter may represent all kinds of linkages, from matter or energy fluxes to functional relations. To some extent, graph theory has been used in geosciences and related disciplines; in hydrology and fluvial geomorphology, for example, river networks have been modeled and analysed as graphs. An important issue in hydrology is hydrological connectivity, which determines whether runoff generated on some area reaches the channel network. In ecology, a number of graph-theoretical indices are applicable to describing the influence of habitat distribution and landscape fragmentation on population structure and species mobility. In these examples, the mobility of matter (water, sediment, animals) through a system is an important consequence of system structure, i.e. the location and topology of its components as well as the properties of linkages between them. In geomorphology, sediment connectivity relates to the potential of sediment particles to move through the catchment. As a system property, connectivity depends, for example, on the degree to which hillslopes within a catchment are coupled to the channel system (lateral coupling), and to which channel reaches are coupled to each other (longitudinal coupling). In the present study, numerical GIS-based models are used to investigate the coupling of geomorphic process units by delineating the process domains of important geomorphic processes in a high-mountain environment (rockfall, slope-type debris flows, slope aquatic and fluvial processes). The results are validated by field mapping; they show that only small parts of a catchment are actually coupled to its outlet with respect to coarse (bedload) sediment. The models not only generate maps of the spatial extent and geomorphic activity of the aforementioned processes, they also output so-called edge lists that can be converted to adjacency matrices and graphs. Graph theory is then employed to explore 'local' (i.e. referring to single nodes or edges) and 'global' (i.e. system-wide, referring to the whole graph) measures that can be used to quantify coarse sediment connectivity. Such a quantification will complement the mainly qualitative appraisal of coupling and connectivity; the effect of connectivity on catchment properties such as specific sediment yield and catchment sensitivity will then be studied on the basis of quantitative measures.
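A toy version of the edge-list-to-graph step illustrates the kind of 'global' and 'local' measures meant here. The process units and their couplings below are invented, not the study area: reachability to the outlet serves as a global connectivity measure and node betweenness as a local one.

```python
# Coupling of geomorphic process units as a directed graph (invented example).
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("rockfall_1", "debris_flow_1"), ("debris_flow_1", "channel_1"),
    ("rockfall_2", "talus_2"),                 # decoupled: sediment stays on the slope
    ("channel_1", "channel_2"), ("channel_2", "outlet"),
])

coupled = nx.ancestors(g, "outlet")            # units with a path to the outlet
fraction_coupled = len(coupled) / (g.number_of_nodes() - 1)
print("coupled to outlet:", sorted(coupled), f"({fraction_coupled:.0%})")
print("betweenness:", nx.betweenness_centrality(g))
```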
Graph C*-algebras and Z2-quotients of quantum spheres
NASA Astrophysics Data System (ADS)
Hajac, Piotr M.; Matthes, Rainer; Szymański, Wojciech
2003-06-01
We consider two Z2-actions on the Podleś generic quantum spheres. They yield, as noncommutative quotient spaces, the Klimek-Lesniewski q-disc and the quantum real projective space, respectively. The C*-algebras of all these quantum spaces are described as graph C*-algebras. The K-groups of the thus presented C*-algebras are then easily determined from the general theory of graph C*-algebras. For the quantum real projective space, we also recall the classification of the classes of irreducible *-representations of its algebra and give a linear basis for this algebra.
Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P.; Gerstein, Mark
2010-01-01
The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers’ continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems. PMID:20439753
Jooya, Hossein Z.; Reihani, Kamran; Chu, Shih-I
2016-11-21
We propose a graph-theoretical formalism to study generic circuit quantum electrodynamics systems consisting of a two level qubit coupled with a single-mode resonator in arbitrary coupling strength regimes beyond rotating-wave approximation. We define colored-weighted graphs, and introduce different products between them to investigate the dynamics of superconducting qubits in transverse, longitudinal, and bidirectional coupling schemes. In conclusion, the intuitive and predictive picture provided by this method, and the simplicity of the mathematical construction, are demonstrated with some numerical studies of the multiphoton resonance processes and quantum interference phenomena for the superconducting qubit systems driven by intense ac fields.
Massive Scale Cyber Traffic Analysis: A Driver for Graph Database Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Choudhury, S.; Haglin, David J.
2013-06-19
We describe the significance and prominence of network traffic analysis (TA) as a graph- and network-theoretical domain for advancing research in graph database systems. TA involves observing and analyzing the connections between clients, servers, hosts, and actors within IP networks, both at particular times and as extended over time. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. IPFLOW databases are routinely interrogated statistically and visualized for suspicious patterns. But the ability to cast IPFLOW data as a massive graph and query it interactively, in order to, e.g., identify connectivity patterns, is less well advanced, due to a number of factors including scaling, and their hybrid nature combining graph connectivity and quantitative attributes. In this paper, we outline requirements and opportunities for graph-structured IPFLOW analytics based on our experience with real IPFLOW databases. Specifically, we describe real use cases from the security domain, cast them as graph patterns, show how to express them in two graph-oriented query languages, SPARQL and Datalog, and use these examples to motivate a new class of "hybrid" graph-relational systems.
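As a rough analogue of such a graph-pattern query (expressed in Python rather than SPARQL or Datalog, and on synthetic records), the sketch below matches the pattern "a host that serves web traffic and then opens an ssh connection to another host" via labeled subgraph isomorphism.

```python
# A small traffic-analysis pattern as a labeled subgraph-isomorphism query.
import networkx as nx
from networkx.algorithms import isomorphism

traffic = nx.DiGraph()                         # synthetic flow graph
traffic.add_edge("client", "hostA", kind="http")
traffic.add_edge("hostA", "hostB", kind="ssh")
traffic.add_edge("client", "hostC", kind="dns")

pattern = nx.DiGraph()                         # the graph pattern to look for
pattern.add_edge("X", "Y", kind="http")
pattern.add_edge("Y", "Z", kind="ssh")

matcher = isomorphism.DiGraphMatcher(
    traffic, pattern,
    edge_match=isomorphism.categorical_edge_match("kind", None))
print(list(matcher.subgraph_isomorphisms_iter()))
# e.g. [{'client': 'X', 'hostA': 'Y', 'hostB': 'Z'}]
```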
Decentralized and self-centered estimation architecture for formation flying of spacecraft
NASA Technical Reports Server (NTRS)
Kang, B. H.; Hadaegh, F. Y.; Scharf, D. P.; Ke, N. -P.
2001-01-01
Formation estimation methodologies for distributed spacecraft systems are formulated and analyzed. A generic form of the formation estimation problem is described by defining a common hardware configuration, observation graph, and feasible estimation topologies.
Efficient dynamic graph construction for inductive semi-supervised learning.
Dornaika, F; Dahbi, R; Bosaghzadeh, A; Ruichek, Y
2017-10-01
Most graph construction techniques assume a transductive setting in which the whole data collection is available at construction time. Addressing graph construction in the inductive setting, in which data arrive sequentially, has received much less attention. For inductive settings, constructing the graph from scratch can be very time consuming. This paper introduces a generic framework that is able to make any graph construction method incremental. This framework yields an efficient and dynamic graph construction method that adds new samples (labeled or unlabeled) to a previously constructed graph. As a case study, we use the recently proposed Two Phase Weighted Regularized Least Square (TPWRLS) graph construction method. The paper has two main contributions. First, we use the TPWRLS coding scheme to represent new sample(s) with respect to an existing database. The representative coefficients are then used to update the graph affinity matrix. The proposed method not only appends the new samples to the graph but also updates the whole graph structure by discovering which nodes are affected by the introduction of new samples and by updating their edge weights. The second contribution of the article is the application of the proposed framework to the problem of graph-based label propagation using multiple observations for vision-based recognition tasks. Experiments on several image databases show that, without any significant loss in the accuracy of the final classification, the proposed dynamic graph construction is more efficient than batch graph construction. Copyright © 2017 Elsevier Ltd. All rights reserved.
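The incremental principle can be illustrated with a much simpler affinity graph than the paper's. The sketch below uses a Gaussian-kernel kNN affinity, not the TPWRLS coding scheme, and it only appends the new node's edges (the paper's method additionally updates edges of affected existing nodes); data points and kernel width are arbitrary.

```python
# Adding a new sample to an existing affinity graph without rebuilding it.
import numpy as np

def knn_affinities(new_x, X, k=2, sigma=1.0):
    d2 = np.sum((X - new_x) ** 2, axis=1)
    idx = np.argsort(d2)[:k]                   # nearest existing samples
    w = np.exp(-d2[idx] / (2 * sigma ** 2))
    return idx, w

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # existing database
W = np.zeros((3, 3))                                  # existing affinity matrix

def add_sample(X, W, new_x):
    idx, w = knn_affinities(new_x, X)
    n = len(X) + 1
    W_new = np.zeros((n, n))
    W_new[:-1, :-1] = W                        # the old graph is kept as-is
    W_new[-1, idx] = W_new[idx, -1] = w        # only the new node's edges are added
    return np.vstack([X, new_x]), W_new

X, W = add_sample(X, W, np.array([0.1, 0.1]))
print(W.round(3))
```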
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve the time and space requirements of the classical single-population algorithm. Using the underlying random graph model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge, this is the first time closed-form expressions have been computed for the ARG properties. We show that the expected values closely match the empirical values through simulations. Finally, we demonstrate that SimRA produces the ARG in compact form without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at https://github.com/ComputationalGenomics/SimRA. CONTACT: parida@us.ibm.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Information fusion-based approach for studying influence on Twitter using belief theory.
Azaza, Lobna; Kirgizov, Sergey; Savonnet, Marinette; Leclercq, Éric; Gastineau, Nicolas; Faiz, Rim
2016-01-01
Influence in Twitter has recently become a hot research topic, since this micro-blogging service is widely used to share and disseminate information. Some users are more able than others to influence and persuade peers. Thus, studying the most influential users makes it possible to reach a large-scale information diffusion area, something very useful in marketing or political campaigns. In this study, we propose a new approach for multi-level influence assessment on multi-relational networks, such as Twitter. We define a social graph to model the relationships between users as a multiplex graph where users are represented by nodes, and links model the different relations between them (e.g., retweets, mentions, and replies). We explore how relations between nodes in this graph reveal the degree of influence and propose a generic computational model to assess the influence degree of a given node. This is based on the conjunctive combination rule from belief functions theory to combine different types of relations. We experiment the proposed method on a large amount of data gathered from Twitter during the European Elections 2014 and deduce the top influential candidates. The results show that our model is flexible enough to consider multiple interaction combinations according to social scientists' needs or requirements, and that the numerical results of the belief theory are accurate. We also evaluate the approach over the CLEF RepLab 2014 data set and show that our approach leads to quite interesting results.
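The conjunctive combination rule itself is compact enough to sketch. The example below is ours, not the authors' model: it combines two mass functions (one per relation type) defined over an invented frame of influence levels, using the unnormalised conjunctive rule.

```python
# Conjunctive combination of two belief mass functions.
from itertools import product

def conjunctive_combination(m1, m2):
    """Unnormalised conjunctive rule: mass of each intersection A ∩ B."""
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b                                  # frozenset intersection
        combined[inter] = combined.get(inter, 0.0) + ma * mb
    return combined

frame = frozenset({"low", "medium", "high"})           # frame of discernment
m_retweets = {frozenset({"high"}): 0.6, frame: 0.4}            # evidence from retweets
m_mentions = {frozenset({"medium", "high"}): 0.5, frame: 0.5}  # evidence from mentions

for focal, mass in conjunctive_combination(m_retweets, m_mentions).items():
    print(set(focal) or "conflict (empty set)", round(mass, 2))
```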
Loop expansion around the Bethe approximation through the M-layer construction
NASA Astrophysics Data System (ADS)
Altieri, Ada; Chiara Angelini, Maria; Lucibello, Carlo; Parisi, Giorgio; Ricci-Tersenghi, Federico; Rizzo, Tommaso
2017-11-01
For every physical model defined on a generic graph or factor graph, the Bethe M-layer construction allows building a different model for which the Bethe approximation is exact in the large M limit, and coincides with the original model for M=1 . The 1/M perturbative series is then expressed by a diagrammatic loop expansion in terms of so-called fat diagrams. Our motivation is to study some important second-order phase transitions that do exist on the Bethe lattice, but are either qualitatively different or absent in the corresponding fully connected case. In this case, the standard approach based on a perturbative expansion around the naive mean field theory (essentially a fully connected model) fails. On physical grounds, we expect that when the construction is applied to a lattice in finite dimension there is a small region of the external parameters, close to the Bethe critical point, where strong deviations from mean-field behavior will be observed. In this region, the 1/M expansion for the corrections diverges, and can be the starting point for determining the correct non-mean-field critical exponents using renormalization group arguments. In the end, we will show that the critical series for the generic observable can be expressed as a sum of Feynman diagrams with the same numerical prefactors of field theories. However, the contribution of a given diagram is not evaluated by associating Gaussian propagators to its lines, as in field theories: one has to consider the graph as a portion of the original lattice, replacing the internal lines with appropriate one-dimensional chains, and attaching to the internal points the appropriate number of infinite-size Bethe trees to restore the correct local connectivity of the original model. The actual contribution of each (fat) diagram is the so-called line-connected observable, which also includes contributions from sub-diagrams with appropriate prefactors. In order to compute the corrections near to the critical point, Feynman diagrams (with their symmetry factors) can be read directly from the appropriate field-theoretical literature; the computation of momentum integrals is also quite similar; the extra work consists of computing the line-connected observable of the associated fat diagram in the limit of all lines becoming infinitely long.
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
NASA Astrophysics Data System (ADS)
Hyman, J.; Hagberg, A.; Srinivasan, G.; Mohd-Yusof, J.; Viswanathan, H. S.
2017-12-01
We present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. Accurate estimates of first passage times are obtained with an order of magnitude reduction of CPU time and mesh size using the proposed method.
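A small networkx sketch conveys the reduction idea on a toy graph (a grid stand-in, not a DFN-derived graph): keep only the union of the k shortest paths between the inflow and outflow nodes and compare its size to the full graph.

```python
# Union-of-k-shortest-paths subnetwork of a toy weighted graph.
from itertools import islice
import networkx as nx

g = nx.grid_2d_graph(20, 20)                 # stand-in for a fracture-network graph
for u, v in g.edges:
    g.edges[u, v]["weight"] = 1.0
source, target = (0, 0), (19, 19)            # inflow / outflow boundaries

k = 5
paths = islice(nx.shortest_simple_paths(g, source, target, weight="weight"), k)
kept_edges = set()
for p in paths:
    kept_edges.update(zip(p, p[1:]))
subnetwork = g.edge_subgraph(kept_edges)

print(g.number_of_edges(), "edges in full graph ->",
      subnetwork.number_of_edges(), "edges in the k-shortest-path subnetwork")
```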
Gain-Sparsity and Symmetry-Forced Rigidity in the Plane.
Jordán, Tibor; Kaszanitzky, Viktória E; Tanigawa, Shin-Ichi
We consider planar bar-and-joint frameworks with discrete point group symmetry in which the joint positions are as generic as possible subject to the symmetry constraint. We provide combinatorial characterizations for symmetry-forced rigidity of such structures with rotation symmetry or dihedral symmetry of order 2k with odd k, unifying and extending previous work on this subject. We also explore the matroidal background of our results and show that the matroids induced by the row independence of the orbit matrices of the symmetric frameworks are isomorphic to gain sparsity matroids defined on the quotient graph of the framework, whose edges are labeled by elements of the corresponding symmetry group. The proofs are based on new Henneberg-type inductive constructions of the gain graphs that correspond to the bases of the matroids in question, which can also be seen as symmetry-preserving graph operations in the original graph.
Toward An Unstructured Mesh Database
NASA Astrophysics Data System (ADS)
Rezaei Mahdiraji, Alireza; Baumann, Peter Peter
2014-05-01
Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS, as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not provide a declarative query language, and require deep C++ knowledge for query implementations. Furthermore, due to the high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi-incidence relationships. We instrument the ImG model with sets of optional and application-specific constraints which can be used to check the validity of meshes for specific classes of objects such as manifolds, pseudo-manifolds, and simplicial manifolds. We conducted experiments to measure the performance of the graph database solution in processing mesh queries and compared it with the GrAL mesh library and the PostgreSQL database on synthetic and real mesh datasets. The experiments show that each system performs well on specific types of mesh queries, e.g., graph databases perform well on global path-intensive queries. In the future, we will investigate database operations for the ImG model and design a mesh query language.
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of the matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200x speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs, where our algorithms show good scaling on up to 16,384 cores.
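A schematic serial version conveys the "match many vertices per round" idea. This is not the paper's distributed sparse-matrix implementation: it uses dense numpy arrays, with all free row vertices proposing to an available column in each round and conflicts resolved per column, which yields a maximal (not maximum) matching.

```python
# Round-based maximal matching on a bipartite adjacency matrix (toy version).
import numpy as np

def maximal_matching(A):
    A = A.copy().astype(bool)
    n_rows, n_cols = A.shape
    row_match = -np.ones(n_rows, dtype=int)
    col_match = -np.ones(n_cols, dtype=int)
    while True:
        free_rows = np.where((row_match == -1) & A.any(axis=1))[0]
        if free_rows.size == 0:
            break
        proposals = A[free_rows].argmax(axis=1)     # each free row proposes a column
        for r, c in zip(free_rows, proposals):      # per-column conflict resolution:
            if col_match[c] == -1:                  # first proposer wins
                row_match[r], col_match[c] = c, r
                A[:, c] = False                     # matched column leaves the graph
                A[r, :] = False                     # matched row leaves the graph
    return row_match

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(maximal_matching(A))    # e.g. [0 2 1]: row i is matched to column row_match[i]
```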
Generic patterns in the evolution of urban water networks: Evidence from a large Asian city
NASA Astrophysics Data System (ADS)
Krueger, Elisabeth; Klinkhamer, Christopher; Urich, Christian; Zhan, Xianyuan; Rao, P. Suresh C.
2017-03-01
We examine high-resolution urban infrastructure data using every pipe for the water distribution network (WDN) and sanitary sewer network (SSN) in a large Asian city (≈4 million residents) to explore the structure as well as the spatial and temporal evolution of these infrastructure networks. Network data were spatially disaggregated into multiple subnets to examine intracity topological differences for functional zones of the WDN and SSN, and time-stamped SSN data were examined to understand network evolution over several decades as the city expanded. Graphs were generated using a dual-mapping technique (Hierarchical Intersection Continuity Negotiation), which emphasizes the functional attributes of these networks. Network graphs for WDNs and SSNs are characterized by several network topological metrics, and a double Pareto (power-law) model approximates the node-degree distributions of both water infrastructure networks (WDN and SSN), across spatial and hierarchical scales relevant to urban settings, and throughout their temporal evolution over several decades. These results indicate that generic mechanisms govern the networks' evolution, similar to those of scale-free networks found in nature. Deviations from the general topological patterns are indicative of (1) incomplete establishment of network hierarchies and functional network evolution, (2) capacity for growth (expansion) or densification (e.g., in-fill), and (3) likely network vulnerabilities. We discuss the implications of our findings for the (re-)design of urban infrastructure networks to enhance their resilience to external and internal threats.
Evolutionary optimization of biopolymers and sequence structure maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reidys, C.M.; Kopp, S.; Schuster, P.
1996-06-01
Searching for biopolymers having a predefined function is a core problem of biotechnology, biochemistry and pharmacy. On the level of RNA sequences and their corresponding secondary structures we show that this problem can be analyzed mathematically. The strategy will be to study the properties of the RNA sequence-to-secondary-structure mapping that is essential for the understanding of the search process. We show that to each secondary structure s there exists a neutral network consisting of all sequences folding into s. This network can be modeled as a random graph and has the following generic properties: it is dense and has a giant component within the graph of compatible sequences. The neutral network percolates sequence space and any two neutral nets come close in terms of Hamming distance. We investigate the distribution of the orders of neutral nets and show that above a certain threshold the topology of neutral nets allows one to find practically all frequent secondary structures.
A Network Design Architecture for Distribution of Generic Scene Graphs
1999-09-01
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyman, Jeffrey De'Haven; Hagberg, Aric Arild; Mohd-Yusof, Jamaludin
Here, we present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We also derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. We obtain accurate estimates of first passage times with an order of magnitude reduction of CPU time and mesh size using the proposed method.
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
Hyman, Jeffrey De'Haven; Hagberg, Aric Arild; Mohd-Yusof, Jamaludin; ...
2017-07-10
Here, we present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We also derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. We obtain accurate estimates of first passage times with an order of magnitude reduction of CPU time and mesh size using the proposed method.
Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.
2012-01-01
An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
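The operator-overloading idea can be illustrated outside C++. The following Python sketch, an assumption-laden analogue rather than the Trilinos implementation, uses a dual-number class so that an unchanged "residual" calculation also propagates a derivative, which is the kind of additional quantity the templated approach extracts.

```python
class Dual:
    """Value paired with a derivative; arithmetic is overloaded to propagate both."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def residual(x):
    return x * x + 3.0 * x + 1.0     # unchanged "simulation" code

r = residual(Dual(2.0, 1.0))         # seed dx/dx = 1
print(r.val, r.der)                  # value 11.0, derivative 2*x + 3 = 7.0
```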
Mining chemical reactions using neighborhood behavior and condensed graphs of reactions approaches.
de Luca, Aurélie; Horvath, Dragos; Marcou, Gilles; Solov'ev, Vitaly; Varnek, Alexandre
2012-09-24
This work addresses the problem of similarity search and classification of chemical reactions using Neighborhood Behavior (NB) and Condensed Graphs of Reaction (CGR) approaches. The CGR formalism represents chemical reactions as a classical molecular graph with dynamic bonds, enabling descriptor calculations on this graph. Different types of the ISIDA fragment descriptors generated for CGRs in combination with two metrics--Tanimoto and Euclidean--were considered as chemical spaces, to serve for reaction dissimilarity scoring. The NB method has been used to select an optimal combination of descriptors which distinguish different types of chemical reactions in a database containing 8544 reactions of 9 classes. Relevance of NB analysis has been validated in generic (multiclass) similarity search and in clustering with Self-Organizing Maps (SOM). NB-compliant sets of descriptors were shown to display enhanced mapping propensities, allowing the construction of better Self-Organizing Maps and similarity searches (NB and classical similarity search criteria--AUC ROC--correlate at a level of 0.7). The analysis of the SOM clusters proved chemically meaningful CGR substructures representing specific reaction signatures.
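As a small illustration of the similarity scoring step, a Tanimoto coefficient over two fragment-descriptor sets can be computed as below; the descriptor generation itself (e.g., ISIDA fragments on a CGR) is outside the scope of this sketch, and the names and example fragments are illustrative.

```python
def tanimoto(frags_a, frags_b):
    """Tanimoto (Jaccard) similarity between two sets of fragment descriptors."""
    a, b = set(frags_a), set(frags_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Two hypothetical reactions described by their CGR fragment sets
print(tanimoto({"C-C", "C=O", "C-N*"}, {"C-C", "C-N*", "C-O"}))  # 0.5
```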
Building Specialized Multilingual Lexical Graphs Using Community Resources
NASA Astrophysics Data System (ADS)
Daoud, Mohammad; Boitet, Christian; Kageura, Kyo; Kitamoto, Asanobu; Mangeot, Mathieu; Daoud, Daoud
We are describing methods for compiling domain-dedicated multilingual terminological data from various resources. We focus on collecting data from online community users as a main source; therefore, our approach depends on acquiring contributions from volunteers (explicit approach), and it depends on analyzing users' behaviors to extract interesting patterns and facts (implicit approach). As a generic repository that can handle the collected multilingual terminological data, we are describing the concept of dedicated Multilingual Preterminological Graphs (MPGs), and some automatic approaches for constructing them by analyzing the behavior of online community users. A Multilingual Preterminological Graph is a special lexical resource that contains a massive amount of terms related to a special domain. We call it preterminological because it is a raw material that can be used to build a standardized terminological repository. Building such a graph is difficult using traditional approaches, as it needs huge efforts by domain specialists and terminologists. In our approach, we build such a graph by analyzing the access log files of the website of the community, finding the important terms that have been used to search in that website, and their associations with each other. We aim at making this graph a seed repository to which multilingual volunteers can contribute. We are experimenting with this approach on the Digital Silk Road Project. We have used its access log files since its beginning in 2003, and obtained an initial graph of around 116,000 terms. As an application, we used this graph to obtain a preterminological multilingual database that is serving a CLIR system for the DSR project.
Black holes as quantum gravity condensates
NASA Astrophysics Data System (ADS)
Oriti, Daniele; Pranzetti, Daniele; Sindoni, Lorenzo
2018-03-01
We model spherically symmetric black holes within the group field theory formalism for quantum gravity via generalized condensate states, involving sums over arbitrarily refined graphs (dual to three-dimensional triangulations). The construction relies heavily on both the combinatorial tools of random tensor models and the quantum geometric data of loop quantum gravity, both part of the group field theory formalism. Armed with the detailed microscopic structure, we compute the entropy associated with the black hole horizon, which turns out to be equivalently the Boltzmann entropy of its microscopic degrees of freedom and the entanglement entropy between the inside and outside regions. We recover the area law under very general conditions, as well as the Bekenstein-Hawking formula. The result is also shown to be generically independent of any specific value of the Immirzi parameter.
Bao, Ning; Nezami, Sepehr; Ooguri, Hirosi; ...
2015-09-21
We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.
Superpixel Cut for Figure-Ground Image Segmentation
NASA Astrophysics Data System (ADS)
Yang, Michael Ying; Rosenhahn, Bodo
2016-06-01
Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
Tang, Y; Stephenson, J L; Othmer, H G
1996-01-01
We study the models for calcium (Ca) dynamics developed in earlier studies, in each of which the key component is the kinetics of intracellular inositol-1,4,5-trisphosphate-sensitive Ca channels. After rapidly equilibrating steps are eliminated, the channel kinetics in these models are represented by a single differential equation that is linear in the state of the channel. In the reduced kinetic model, the graph of the steady-state fraction of conducting channels as a function of log10(Ca) is a bell-shaped curve. Dynamically, a step increase in inositol-1,4,5-trisphosphate induces an incremental increase in the fraction of conducting channels, whereas a step increase in Ca can either potentiate or inhibit channel activation, depending on the Ca level before and after the increase. The relationships among these models are discussed, and experimental tests to distinguish between them are given. Under certain conditions the models for intracellular calcium dynamics are reduced to the singularly perturbed form ε dx/dτ = f(x, y, p), dy/dτ = g(x, y, p). Phase-plane analysis is applied to a generic form of these simplified models to show how different types of Ca response, such as excitability, oscillations, and a sustained elevation of Ca, can arise. The generic model can also be used to study frequency encoding of hormonal stimuli, to determine the conditions for stable traveling Ca waves, and to understand the effect of channel properties on the wave speed.
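A minimal numerical sketch of the singularly perturbed form above is given below, using forward Euler and FitzHugh-Nagumo-like kinetics as a stand-in for f and g; the specific functions, parameters, and step sizes are illustrative assumptions, not the calcium models analyzed in the paper.

```python
import numpy as np

def integrate(f, g, x0, y0, p, eps=0.08, dt=1e-3, steps=20000):
    """Forward-Euler integration of eps*dx/dtau = f(x, y, p), dy/dtau = g(x, y, p)."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        x, y = x + dt / eps * f(x, y, p), y + dt * g(x, y, p)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# FitzHugh-Nagumo-like kinetics as a generic excitable (f, g) pair
f = lambda x, y, p: x - x**3 / 3.0 - y + p   # fast, channel-like variable
g = lambda x, y, p: x + 0.7 - 0.8 * y        # slow, recovery-like variable
xs, ys = integrate(f, g, x0=-1.2, y0=-0.6, p=0.5)
print(xs.min(), xs.max())                    # a large excursion indicates an excitable response
```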
Structural self-assembly and avalanchelike dynamics in locally adaptive networks
NASA Astrophysics Data System (ADS)
Gräwer, Johannes; Modes, Carl D.; Magnasco, Marcelo O.; Katifori, Eleni
2015-07-01
Transport networks play a key role across four realms of eukaryotic life: slime molds, fungi, plants, and animals. In addition to the developmental algorithms that build them, many also employ adaptive strategies to respond to stimuli, damage, and other environmental changes. We model these adapting network architectures using a generic dynamical system on weighted graphs and find in simulation that these networks ultimately develop a hierarchical organization of the final weighted architecture accompanied by the formation of a system-spanning backbone. In addition, we find that the long term equilibration dynamics exhibit behavior reminiscent of glassy systems characterized by long periods of slow changes punctuated by bursts of reorganization events.
Xander: employing a novel method for efficient gene-targeted metagenomic assembly
Wang, Qiong; Fish, Jordan A.; Gilman, Mariah; ...
2015-08-05
Here, metagenomics can provide important insight into microbial communities. However, assembling metagenomic datasets has proven to be computationally challenging. Current methods often assemble only fragmented partial genes. We present a novel method for targeting assembly of specific protein-coding genes. This method combines a de Bruijn graph, as used in standard assembly approaches, and a protein profile hidden Markov model (HMM) for the gene of interest, as used in standard annotation approaches. These are used to create a novel combined weighted assembly graph. Xander performs both assembly and annotation concomitantly using information incorporated in this graph. We demonstrate the utility of this approach by assembling contigs for one phylogenetic marker gene and for two functional marker genes, first on Human Microbiome Project (HMP)-defined community Illumina data and then on 21 rhizosphere soil metagenomic datasets from three different crops totaling over 800 Gbp of unassembled data. We compared our method to a recently published bulk metagenome assembly method and a recently published gene-targeted assembler and found our method produced more, longer, and higher quality gene sequences. In conclusion, Xander combines gene assignment with the rapid assembly of full-length or near full-length functional genes from metagenomic data without requiring bulk assembly or post-processing to find genes of interest. HMMs used for assembly can be tailored to the targeted genes, allowing flexibility to improve annotation over generic annotation pipelines.
Fisher metric, geometric entanglement, and spin networks
NASA Astrophysics Data System (ADS)
Chirco, Goffredo; Mele, Fabio M.; Oriti, Daniele; Vitale, Patrizia
2018-02-01
Starting from recent results on the geometric formulation of quantum mechanics, we propose a new information geometric characterization of entanglement for spin network states in the context of quantum gravity. For the simple case of a single-link fixed graph (Wilson line), we detail the construction of a Riemannian Fisher metric tensor and a symplectic structure on the graph Hilbert space, showing how these encode the whole information about separability and entanglement. In particular, the Fisher metric defines an entanglement monotone which provides a notion of distance among states in the Hilbert space. In the maximally entangled gauge-invariant case, the entanglement monotone is proportional to a power of the area of the surface dual to the link, thus supporting a connection between entanglement and the (simplicial) geometric properties of spin network states. We further extend such analysis to the study of nonlocal correlations between two nonadjacent regions of a generic spin network graph characterized by the bipartite unfolding of an intertwiner state. Our analysis confirms the interpretation of spin network bonds as carriers of entanglement and supports regarding the spin network graph itself as an information graph whose connectivity encodes, both at the local and nonlocal level, the quantum correlations among its parts. This gives a further connection between entanglement and geometry.
A novel framework for command and control of networked sensor systems
NASA Astrophysics Data System (ADS)
Chen, Genshe; Tian, Zhi; Shen, Dan; Blasch, Erik; Pham, Khanh
2007-04-01
In this paper, we have proposed a highly innovative advanced command and control framework for sensor networks used for future Integrated Fire Control (IFC). The primary goal is to enable and enhance target detection, validation, and mitigation for future military operations by graphical game theory and advanced knowledge information fusion infrastructures. The problem is approached by representing distributed sensor and weapon systems as generic warfare resources which must be optimized in order to achieve the operational benefits afforded by enabling a system of systems. This paper addresses the importance of achieving a Network Centric Warfare (NCW) foundation of information superiority-shared, accurate, and timely situational awareness upon which advanced automated management aids for IFC can be built. The approach uses the Data Fusion Information Group (DFIG) Fusion hierarchy of Level 0 through Level 4 to fuse the input data into assessments for the enemy target system threats in a battlespace to which military force is being applied. Compact graph models are employed across all levels of the fusion hierarchy to accomplish integrative data fusion and information flow control, as well as cross-layer sensor management. The functional block at each fusion level will have a set of innovative algorithms that not only exploit the corresponding graph model in a computationally efficient manner, but also permit combined functional experiments across levels by virtue of the unifying graphical model approach.
NAS Grid Benchmarks: A Tool for Grid Space Exploration
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
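A data-flow-graph benchmark of this kind can be sketched as a DAG of tasks executed in dependency order. The node names and task bodies below are placeholders, not the actual NGB task graph or its Java reference implementation.

```python
from collections import deque

def run_dataflow(graph, tasks):
    """Execute a DAG of tasks in dependency order (Kahn's topological algorithm).
    `graph` maps node -> list of successors; `tasks` maps node -> callable(inputs)."""
    indeg = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    ready = deque(n for n, d in indeg.items() if d == 0)
    results = {}
    while ready:
        n = ready.popleft()
        inputs = [results[p] for p, succs in graph.items() if n in succs]
        results[n] = tasks[n](inputs)            # node consumes its predecessors' outputs
        for s in graph[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return results

graph = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}          # hypothetical task graph
tasks = {n: (lambda inputs, n=n: n + "(" + ",".join(sorted(inputs)) + ")") for n in graph}
print(run_dataflow(graph, tasks)["D"])   # D(C(A(),B()))
```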
Effective centrality and explosive synchronization in complex networks
NASA Astrophysics Data System (ADS)
Navas, A.; Villacorta-Atienza, J. A.; Leyva, I.; Almendral, J. A.; Sendiña-Nadal, I.; Boccaletti, S.
2015-12-01
Synchronization of networked oscillators is known to depend fundamentally on the interplay between the dynamics of the graph's units and the microscopic arrangement of the network's structure. We here propose an effective network whose topological properties reflect the interplay between the topology and dynamics of the original network. On that basis, we are able to introduce the effective centrality, a measure that quantifies the role and importance of each network's node in the synchronization process. In particular, in the context of explosive synchronization, we use such a measure to assess the propensity of a graph to sustain an irreversible transition to synchronization. We furthermore discuss a strategy to induce the explosive behavior in a generic network, by acting only upon a fraction of its nodes.
Cycle/Cocycle Oblique Projections on Oriented Graphs
NASA Astrophysics Data System (ADS)
Polettini, Matteo
2015-01-01
It is well known that the edge vector space of an oriented graph can be decomposed in terms of cycles and cocycles (alias cuts, or bonds), and that a basis for the cycle and the cocycle spaces can be generated by adding and removing edges to an arbitrarily chosen spanning tree. In this paper, we show that the edge vector space can also be decomposed in terms of cycles and the generating edges of cocycles (called cochords), or of cocycles and the generating edges of cycles (called chords). From this observation follows a construction in terms of oblique complementary projection operators. We employ this algebraic construction to prove several properties of unweighted Kirchhoff-Symanzik matrices, encoding the mutual superposition between cycles and cocycles. In particular, we prove that dual matrices of planar graphs have the same spectrum (up to multiplicities). We briefly comment on how this construction provides a refined formalization of Kirchhoff's mesh analysis of electrical circuits, which has lately been applied to generic thermodynamic networks.
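The chord construction mentioned above is easy to make concrete: pick a spanning tree, and every non-tree edge (chord) closes exactly one fundamental cycle. The sketch below is a generic Python illustration for undirected, unweighted graphs, with names chosen for exposition.

```python
def fundamental_cycles(n, edges):
    """Cycle-space basis: the tree path between the endpoints of each chord."""
    adj = {v: [] for v in range(n)}
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    parent, depth, tree = {}, {}, set()
    for root in range(n):                      # spanning forest by DFS
        if root in parent:
            continue
        parent[root], depth[root] = root, 0
        stack = [root]
        while stack:
            u = stack.pop()
            for v, i in adj[u]:
                if v not in parent:
                    parent[v], depth[v] = u, depth[u] + 1
                    tree.add(i)
                    stack.append(v)

    def tree_path(u, v):                       # path u .. v inside the spanning tree
        left, right = [u], [v]
        while left[-1] != right[-1]:
            if depth[left[-1]] >= depth[right[-1]]:
                left.append(parent[left[-1]])
            else:
                right.append(parent[right[-1]])
        return left + right[-2::-1]

    return [tree_path(u, v) for i, (u, v) in enumerate(edges) if i not in tree]

print(fundamental_cycles(4, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]))  # two chords, two cycles
```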
Magic bases, metric ansaetze and generalized graph theories in the Virasoro master equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halpern, M.B.; Obers, N.A.
1991-11-15
The authors define a class of magic Lie group bases in which the Virasoro master equation admits a class of simple metric ansaetze (g_metric), whose structure is visible in the high-level expansion. When a magic basis is real on compact g, the corresponding g_metric is a large system of unitary, generically irrational conformal field theories. Examples in this class include the graph-theory ansatz SO(n)_diag in the Cartesian basis of SO(n) and the ansatz SU(n)_metric in the Pauli-like basis of SU(n). A new phenomenon is observed in the high-level comparison of SU(n)_metric: due to the trigonometric structure constants of the Pauli-like basis, irrational central charge is clearly visible at finite order of the expansion. They also define the sine-area graphs of SU(n), which label the conformal field theories of SU(n)_metric, and note that, in a similar fashion, each magic basis of g defines a generalized graph theory on g which labels the conformal field theories of g_metric.
NASA Astrophysics Data System (ADS)
Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin
2011-03-01
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution functions (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
Visibility graphs of random scalar fields and spatial data
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Iacovacci, Jacopo
2017-07-01
We extend the family of visibility algorithms to map scalar fields of arbitrary dimension into graphs, enabling the analysis of spatially extended data structures as networks. We introduce several possible extensions and provide analytical results on the topological properties of the graphs associated to different types of real-valued matrices, which can be understood as the high and low disorder limits of real-valued scalar fields. In particular, we find a closed expression for the degree distribution of these graphs associated to uncorrelated random fields of generic dimension. This result holds independently of the field's marginal distribution and it directly yields a statistical randomness test, applicable in any dimension. We showcase its usefulness by discriminating spatial snapshots of two-dimensional white noise from snapshots of a two-dimensional lattice of diffusively coupled chaotic maps, a system that generates high dimensional spatiotemporal chaos. The range of potential applications of this combinatorial framework includes image processing in engineering, the description of surface growth in material science, soft matter or medicine, and the characterization of potential energy surfaces in chemistry, disordered systems, and high energy physics. An illustration on the applicability of this method for the classification of the different stages involved in carcinogenesis is briefly discussed.
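For the one-dimensional special case, the natural visibility criterion reduces to a simple geometric test. The brute-force sketch below (an illustration, not the authors' implementation or their higher-dimensional extensions) maps a scalar series to a graph and reads off node degrees.

```python
import random

def natural_visibility_graph(series):
    """Link samples i and j if the straight line between (i, y_i) and (j, y_j)
    lies strictly above every intermediate sample."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            yi, yj = series[i], series[j]
            if all(series[k] < yj + (yi - yj) * (j - k) / (j - i) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

series = [random.random() for _ in range(200)]          # stand-in for an uncorrelated field
degrees = [0] * len(series)
for i, j in natural_visibility_graph(series):
    degrees[i] += 1
    degrees[j] += 1
print(max(degrees))   # degree statistics of this kind feed the distribution analysed in the paper
```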
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers can not be expected to have an oracle or to bear the expense of n-versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also, simpler models which use shorter input sequences without sacrificing accuracy are of interest. In fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
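The perturbation step can be sketched as follows. This is a simplified illustration that noises the dK-2 statistics (joint degree counts) with the Laplace mechanism using a caller-supplied sensitivity; the paper instead calibrates noise via smooth sensitivity, and the function and parameter names here are assumptions.

```python
import numpy as np
from collections import Counter

def noisy_joint_degree_counts(edges, degrees, epsilon, sensitivity):
    """Perturb joint degree counts before handing them to a dK-graph generator."""
    counts = Counter()
    for u, v in edges:
        counts[tuple(sorted((degrees[u], degrees[v])))] += 1
    scale = sensitivity / epsilon                 # Laplace scale for the chosen calibration
    return {pair: c + np.random.laplace(0.0, scale) for pair, c in counts.items()}

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
degrees = {0: 2, 1: 2, 2: 3, 3: 1}
print(noisy_joint_degree_counts(edges, degrees, epsilon=1.0, sensitivity=4.0))
```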
Xander: employing a novel method for efficient gene-targeted metagenomic assembly.
Wang, Qiong; Fish, Jordan A; Gilman, Mariah; Sun, Yanni; Brown, C Titus; Tiedje, James M; Cole, James R
2015-01-01
Metagenomics can provide important insight into microbial communities. However, assembling metagenomic datasets has proven to be computationally challenging. Current methods often assemble only fragmented partial genes. We present a novel method for targeting assembly of specific protein-coding genes. This method combines a de Bruijn graph, as used in standard assembly approaches, and a protein profile hidden Markov model (HMM) for the gene of interest, as used in standard annotation approaches. These are used to create a novel combined weighted assembly graph. Xander performs both assembly and annotation concomitantly using information incorporated in this graph. We demonstrate the utility of this approach by assembling contigs for one phylogenetic marker gene and for two functional marker genes, first on Human Microbiome Project (HMP)-defined community Illumina data and then on 21 rhizosphere soil metagenomic datasets from three different crops totaling over 800 Gbp of unassembled data. We compared our method to a recently published bulk metagenome assembly method and a recently published gene-targeted assembler and found our method produced more, longer, and higher quality gene sequences. Xander combines gene assignment with the rapid assembly of full-length or near full-length functional genes from metagenomic data without requiring bulk assembly or post-processing to find genes of interest. HMMs used for assembly can be tailored to the targeted genes, allowing flexibility to improve annotation over generic annotation pipelines. This method is implemented as open source software and is available at https://github.com/rdpstaff/Xander_assembler.
Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng
2015-12-01
Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that existing methods cannot adequately discover the topics hidden in graph-structured data, which can be beneficial for both the unsupervised learning and supervised learning of the graphs. Although topic models have proved to be very successful in discovering latent topics, the standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms the latent Dirichlet allocation on classification by using the unveiled topics of these two models to represent graphs.
Graph Theory Roots of Spatial Operators for Kinematics and Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan
2011-01-01
Spatial operators have been used to analyze the dynamics of robotic multibody systems and to develop novel computational dynamics algorithms. Mass matrix factorization, inversion, diagonalization, and linearization are among several new insights obtained using such operators. While initially developed for serial rigid body manipulators, the spatial operators and the related mathematical analysis have been shown to extend very broadly, including to tree and closed topology systems and to systems with flexible joints, links, etc. This work uses concepts from graph theory to explore the mathematical foundations of spatial operators. The goal is to study and characterize the properties of the spatial operators at an abstract level so that they can be applied to a broader range of dynamics problems. The rich mathematical properties of the kinematics and dynamics of robotic multibody systems have been an area of strong research interest for several decades. These properties are important to understand the inherent physical behavior of systems, for stability and control analysis, for the development of computational algorithms, and for the development of faithful models. Recurring patterns in spatial operators lead one to ask the more abstract question about the properties and characteristics of spatial operators that make them so broadly applicable. The idea is to step back from the specific application systems, and understand more deeply the generic requirements and properties of spatial operators, so that the insights and techniques are readily available across different kinematics and dynamics problems. In this work, techniques from graph theory were used to explore the abstract basis for the spatial operators. The close relationship between the mathematical properties of adjacency matrices for graphs and those of spatial operators and their kernels was established. The connections hold across very basic requirements on the system topology, the nature of the component bodies, the indexing schemes, etc. The relationship of the underlying structure is intimately connected with efficient, recursive computational algorithms. The results provide the foundational groundwork for a much broader look at the key problems in kinematics and dynamics. The properties of general graphs and trees of nodes and edges were examined, as well as the properties of adjacency matrices that are used to describe graph connectivity. The nilpotency property of such matrices for directed trees was reviewed, and the adjacency matrices were generalized to the notion of block weighted adjacency matrices that support block matrix elements. This leads us to the development of the notion of Spatial Kernel Operator (SKO) kernels. These kernels provide the basis for the development of SKO resolvent operators.
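The nilpotency property noted above is easy to verify numerically: for a directed tree the adjacency matrix A satisfies A^k = 0 once k exceeds the tree depth, so the resolvent (I - A)^{-1} is a finite sum of powers of A, which is the structural fact behind the recursive algorithms mentioned in the abstract. A small numpy sketch on an assumed example tree:

```python
import numpy as np

edges = [(0, 1), (0, 2), (2, 3), (2, 4)]     # hypothetical tree, edges parent -> child
n = 5
A = np.zeros((n, n))
for parent, child in edges:
    A[child, parent] = 1.0                   # each child "collects" from its parent

k, P = 1, A.copy()
while P.any():                               # powers of A vanish beyond the tree depth
    P = P @ A
    k += 1
print("A is nilpotent: A^%d = 0" % k)

resolvent = np.linalg.inv(np.eye(n) - A)
finite_sum = sum(np.linalg.matrix_power(A, i) for i in range(k))
print(np.allclose(resolvent, finite_sum))    # (I - A)^{-1} = I + A + ... + A^{k-1}
```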
Stochastic cycle selection in active flow networks.
Woodhouse, Francis G; Forrow, Aden; Fawcett, Joanna B; Dunkel, Jörn
2016-07-19
Active biological flow networks pervade nature and span a wide range of scales, from arterial blood vessels and bronchial mucus transport in humans to bacterial flow through porous media or plasmodial shuttle streaming in slime molds. Despite their ubiquity, little is known about the self-organization principles that govern flow statistics in such nonequilibrium networks. Here we connect concepts from lattice field theory, graph theory, and transition rate theory to understand how topology controls dynamics in a generic model for actively driven flow on a network. Our combined theoretical and numerical analysis identifies symmetry-based rules that make it possible to classify and predict the selection statistics of complex flow cycles from the network topology. The conceptual framework developed here is applicable to a broad class of biological and nonbiological far-from-equilibrium networks, including actively controlled information flows, and establishes a correspondence between active flow networks and generalized ice-type models.
Stochastic cycle selection in active flow networks
NASA Astrophysics Data System (ADS)
Woodhouse, Francis; Forrow, Aden; Fawcett, Joanna; Dunkel, Jorn
2016-11-01
Active biological flow networks pervade nature and span a wide range of scales, from arterial blood vessels and bronchial mucus transport in humans to bacterial flow through porous media or plasmodial shuttle streaming in slime molds. Despite their ubiquity, little is known about the self-organization principles that govern flow statistics in such non-equilibrium networks. By connecting concepts from lattice field theory, graph theory and transition rate theory, we show how topology controls dynamics in a generic model for actively driven flow on a network. Through theoretical and numerical analysis we identify symmetry-based rules to classify and predict the selection statistics of complex flow cycles from the network topology. Our conceptual framework is applicable to a broad class of biological and non-biological far-from-equilibrium networks, including actively controlled information flows, and establishes a new correspondence between active flow networks and generalized ice-type models.
Experimental results for a hypersonic nozzle/afterbody flow field
NASA Technical Reports Server (NTRS)
Spaid, Frank W.; Keener, Earl R.; Hui, Frank C. L.
1995-01-01
This study was conducted to experimentally characterize the flow field created by the interaction of a single-expansion ramp-nozzle (SERN) flow with a hypersonic external stream. Data were obtained from a generic nozzle/afterbody model in the 3.5 Foot Hypersonic Wind Tunnel at the NASA Ames Research Center, in a cooperative experimental program involving Ames and McDonnell Douglas Aerospace. The model design and test planning were performed in close cooperation with members of the Ames computational fluid dynamics (CFD) team for the National Aerospace Plane (NASP) program. This paper presents experimental results consisting of oil-flow and shadowgraph flow-visualization photographs, afterbody surface-pressure distributions, rake boundary-layer measurements, Preston-tube skin-friction measurements, and flow field surveys with five-hole and thermocouple probes. The probe data consist of impact pressure, flow direction, and total temperature profiles in the interaction flow field.
Stochastic cycle selection in active flow networks
Woodhouse, Francis G.; Forrow, Aden; Fawcett, Joanna B.; Dunkel, Jörn
2016-01-01
Active biological flow networks pervade nature and span a wide range of scales, from arterial blood vessels and bronchial mucus transport in humans to bacterial flow through porous media or plasmodial shuttle streaming in slime molds. Despite their ubiquity, little is known about the self-organization principles that govern flow statistics in such nonequilibrium networks. Here we connect concepts from lattice field theory, graph theory, and transition rate theory to understand how topology controls dynamics in a generic model for actively driven flow on a network. Our combined theoretical and numerical analysis identifies symmetry-based rules that make it possible to classify and predict the selection statistics of complex flow cycles from the network topology. The conceptual framework developed here is applicable to a broad class of biological and nonbiological far-from-equilibrium networks, including actively controlled information flows, and establishes a correspondence between active flow networks and generalized ice-type models. PMID:27382186
Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs
2014-06-01
comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph-theoretical measurements such as vertex and edge counts and average vertex degree.
Information visualisation based on graph models
NASA Astrophysics Data System (ADS)
Kasyanov, V. N.; Kasyanova, E. V.
2013-05-01
Information visualisation is a key component of support tools for many applications in science and engineering. A graph is an abstract structure that is widely used to model information for its visualisation. In this paper, we consider a practical and general graph formalism called hierarchical graphs and present the Higres and Visual Graph systems, aimed at supporting information visualisation on the basis of hierarchical graph models.
A generalized approach to complex networks
NASA Astrophysics Data System (ADS)
Costa, L. Da F.; da Rocha, L. E. C.
2006-03-01
This work describes how the formalization of complex network concepts in terms of discrete mathematics, especially mathematical morphology, allows a series of generalizations and important results ranging from new measurements of the network topology to new network growth models. First, the concepts of node degree and clustering coefficient are extended in order to characterize not only specific nodes, but any generic subnetwork. Second, the consideration of distance transform and rings are used to further extend those concepts in order to obtain a signature, instead of a single scalar measurement, ranging from the single node to whole graph scales. The enhanced discriminative potential of such extended measurements is illustrated with respect to the identification of correspondence between nodes in two complex networks, namely a protein-protein interaction network and a perturbed version of it.
Graph fibrations and symmetries of network dynamics
NASA Astrophysics Data System (ADS)
Nijholt, Eddie; Rink, Bob; Sanders, Jan
2016-11-01
Dynamical systems with a network structure can display remarkable phenomena such as synchronisation and anomalous synchrony breaking. A methodology for classifying patterns of synchrony in networks was developed by Golubitsky and Stewart. They showed that the robustly synchronous dynamics of a network is determined by its quotient networks. This result was recently reformulated by DeVille and Lerman, who pointed out that the reduction from a network to a quotient is an example of a graph fibration. The current paper exploits this observation and demonstrates the importance of self-fibrations of network graphs. Self-fibrations give rise to symmetries in the dynamics of a network. We show that every network admits a lift with a semigroup or semigroupoid of self-fibrations. The resulting symmetries impact the global dynamics of the network and can therefore be used to explain and predict generic scenarios for synchrony breaking. Also, when the network has a trivial symmetry groupoid, then every robust synchrony in the lift is determined by symmetry. We finish this paper with a discussion of networks with interior symmetries and nonhomogeneous networks.
Bioequivalence of generic and branded amoxicillin capsules in healthy human volunteers
Pathak, Priyanka; Pandit, Vijaya A.; Dhande, Priti P.
2017-01-01
CONTEXT: The Medical Council of India urges doctors to prescribe generic drugs as far as possible. The Indian Medical Association had responded earlier saying that it requires guarantees on the quality of generic forms of drugs. Although no published scientific reports are available on the issue of therapeutic inequivalence, unconfirmed clinician accounts and newspaper reports of therapeutic inequivalence exist. AIM: This study was planned to ascertain whether bioequivalence of branded and generic amoxicillin capsule is comparable. SETTINGS AND DESIGN: An open-label, randomized, single-dose, two-treatment, two-sequence, two-period crossover oral bioequivalence study was conducted in 12 healthy, adult human subjects under fasting condition. MATERIALS AND METHODS: Serum samples, collected at 8 time points, were analyzed by a validated ultraviolet spectrophotometer method. Pharmacokinetic (PK) parameters such as area under the curve (AUC)0–t, AUC0–∞, Cmax, and Tmax were determined along with time above minimum inhibitory concentration (MIC). STATISTICAL ANALYSIS USED: The log-transformed PK parameters (Cmax, AUC0–t, AUC0–∞) were analyzed using a Two One-Sided Test ANOVA in SAS for each parameter. Tmax and MIC were analyzed by Wilcoxon rank-sum test in GraphPad Prism. RESULTS: Geometric mean ratio of Cmax fell within bioequivalence criteria. The upper and lower confidence limits of both AUC0–t and AUC0–∞ geometric mean ratio fell below bioequivalence criteria. Time above MIC of generic preparation was significantly lower than that of branded version. CONCLUSIONS: The generic capsule was not bioequivalent to the branded amoxicillin capsule. PMID:28706331
Albà, Xènia; Figueras I Ventura, Rosa M; Lekadir, Karim; Tobon-Gomez, Catalina; Hoogendoorn, Corné; Frangi, Alejandro F
2014-12-01
Magnetic resonance imaging (MRI), specifically late-enhanced MRI, is the standard clinical imaging protocol to assess cardiac viability. Segmentation of myocardial walls is a prerequisite for this assessment. Automatic and robust multisequence segmentation is required to support processing massive quantities of data. A generic rule-based framework to automatically segment the left ventricle myocardium is presented here. We use intensity information, and include shape and interslice smoothness constraints, providing robustness to subject- and study-specific changes. Our automatic initialization considers the geometrical and appearance properties of the left ventricle, as well as interslice information. The segmentation algorithm uses a decoupled, modified graph cut approach with control points, providing a good balance between flexibility and robustness. The method was evaluated on late-enhanced MRI images from a 20-patient in-house database, and on cine-MRI images from a 15-patient open access database, both using as reference manually delineated contours. Segmentation agreement, measured using the Dice coefficient, was 0.81±0.05 and 0.92±0.04 for late-enhanced MRI and cine-MRI, respectively. The method was also compared favorably to a three-dimensional Active Shape Model approach. The experimental validation with two magnetic resonance sequences demonstrates increased accuracy and versatility. © 2013 Wiley Periodicals, Inc.
Critical Behavior of the Annealed Ising Model on Random Regular Graphs
NASA Astrophysics Data System (ADS)
Can, Van Hao
2017-11-01
In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors have defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of random regular graphs.
Math anxiety: Brain cortical network changes in anticipation of doing mathematics.
Klados, Manousos A; Pandria, Niki; Micheloyannis, Sifis; Margulies, Daniel; Bamidis, Panagiotis D
2017-12-01
Following our previous work regarding the involvement of math anxiety (MA) in math-oriented tasks, this study tries to explore the differences in the cerebral networks' topology between self-reported low math-anxious (LMA) and high math-anxious (HMA) individuals, during the anticipation phase prior to a mathematical related experiment. For this reason, multichannel EEG recordings were adopted, while the solution of the inverse problem was applied in a generic head model, in order to obtain the cortical signals. The cortical networks have been computed for each band separately, using the magnitude square coherence metric. The main graph theoretical parameters, showed differences in segregation and integration in almost all EEG bands of the HMAs in comparison to LMAs, indicative of a great influence of the anticipatory anxiety prior to mathematical performance. Copyright © 2017 Elsevier B.V. All rights reserved.
A vision-based approach for tramway rail extraction
NASA Astrophysics Data System (ADS)
Zwemer, Matthijs H.; van de Wouw, Dennis W. J. M.; Jaspers, Egbert; Zinger, Sveta; de With, Peter H. N.
2015-03-01
The growing traffic density in cities fuels the desire for collision assessment systems on public transportation. For this application, video analysis is broadly accepted as a cornerstone. For trams, the localization of tramway tracks is an essential ingredient of such a system, in order to estimate a safety margin for crossing traffic participants. Tramway-track detection is a challenging task due to the urban environment with clutter, sharp curves and occlusions of the track. In this paper, we present a novel and generic system to detect the tramway track in advance of the tram position. The system incorporates an inverse perspective mapping and a-priori geometry knowledge of the rails to find possible track segments. The contribution of this paper involves the creation of a new track reconstruction algorithm which is based on graph theory. To this end, we define track segments as vertices in a graph, in which edges represent feasible connections. This graph is then converted to a max-cost arborescence graph, and the best path is selected according to its location and additional temporal information based on a maximum a-posteriori estimate. The proposed system clearly outperforms a railway-track detector. Furthermore, the system performance is validated on 3,600 manually annotated frames. The obtained results are promising, where straight tracks are found in more than 90% of the images and complete curves are still detected in 35% of the cases.
The Geoinformatica free and open source software stack
NASA Astrophysics Data System (ADS)
Jolma, A.
2012-04-01
The Geoinformatica free and open source software (FOSS) stack is based mainly on three established FOSS components, namely GDAL, GTK+, and Perl. GDAL provides access to a very large selection of geospatial data formats and data sources, a generic geospatial data model, and a large collection of geospatial analytical and processing functionality. GTK+ and the Cairo graphics library provide generic graphics and graphical user interface capabilities. Perl is a programming language, for which there is a very large set of FOSS modules for a wide range of purposes and which can be used as an integrative tool for building applications. In the Geoinformatica stack, data storages such as the FOSS RDBMS PostgreSQL with its geospatial extension PostGIS can be used below the three above-mentioned components. The top layer of Geoinformatica consists of a C library and several Perl modules. The C library comprises a general purpose raster algebra library, hydrological terrain analysis functions, and visualization code. The Perl modules define a generic visualized geospatial data layer and subclasses for raster and vector data and graphs. The hydrological terrain functions are already rather old and they suffer, for example, from the requirement of in-memory rasters. Newer research conducted using the platform includes basic geospatial simulation modeling, visualization of ecological data, linking with a Bayesian network engine for spatial risk assessment in coastal areas, and developing standards-based distributed water resources information systems on the Internet. The Geoinformatica stack constitutes a platform for geospatial research, which is targeted towards custom analytical tools, prototyping and linking with external libraries. Writing custom analytical tools is supported by the Perl language and the large collection of tools that are available especially in GDAL and Perl modules. Prototyping is supported by the GTK+ library, the GUI tools, and the support for object-oriented programming in Perl. New feature types, geospatial layer classes, and tools as extensions with specific features can be defined, used, and studied. Linking with external libraries is possible using the Perl foreign function interface tools or with generic tools such as Swig. We are interested in implementing and testing links between Geoinformatica and existing or new, more specific hydrological FOSS.
An internet graph model based on trade-off optimization
NASA Astrophysics Data System (ADS)
Alvarez-Hamelin, J. I.; Schabanel, N.
2004-03-01
This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
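The underlying trade-off construction of Fabrikant, Koutsoupias and Papadimitriou is simple to simulate. The sketch below grows that original tree (not the generalized graph model proposed in the paper): each new node attaches to the existing node minimizing a weighted sum of Euclidean distance and hop distance to the root; the parameter names and values are illustrative.

```python
import math
import random

def fkp_tree(n, alpha):
    """Heuristic trade-off tree: node i attaches to argmin_j alpha*dist(i, j) + hops(j)."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    hops = [0]                                   # hop distance of the root (node 0)
    edges = []
    for i in range(1, n):
        best, best_cost = None, float("inf")
        for j in range(i):
            cost = alpha * math.dist(pts[i], pts[j]) + hops[j]
            if cost < best_cost:
                best, best_cost = j, cost
        edges.append((i, best))
        hops.append(hops[best] + 1)
    return edges

edges = fkp_tree(1000, alpha=10.0)
degree = {}
for i, j in edges:
    degree[j] = degree.get(j, 0) + 1             # heavy-tailed in-degrees emerge for moderate alpha
print(sorted(degree.values(), reverse=True)[:5])
```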
NASA Astrophysics Data System (ADS)
Christian, Wolfgang; Belloni, Mario
2013-04-01
We have recently developed a Graphs and Tracks model based on an earlier program by David Trowbridge, as shown in Fig. 1. Our model can show position, velocity, acceleration, and energy graphs and can be used for motion-to-graphs exercises. Users set the heights of the track segments, and the model displays the motion of the ball on the track together with position, velocity, and acceleration graphs. This ready-to-run model is available in the ComPADRE OSP Collection at www.compadre.org/osp/items/detail.cfm?ID=12023.
DiversePathsJ: diverse shortest paths for bioimage analysis.
Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael
2018-02-01
We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not linked to a specific biological problem and can be applied to a large variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software allows obtaining a collection of diverse shortest paths under some user-defined constraints through a convenient and user-friendly interface. It can be used alone or be integrated into larger image analysis pipelines. http://bigwww.epfl.ch/algorithms/diversepathsj. michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
The unassigned distance geometry problem
Duxbury, P. M.; Granlund, L.; Gujarathi, S. R.; ...
2015-11-19
Studies of distance geometry problems (DGP) have focused on cases where the vertices at the ends of all or most of the given distances are known or assigned, which we call assigned distance geometry problems (aDGPs). In this contribution we consider the unassigned distance geometry problem (uDGP) where the vertices associated with a given distance are unknown, so the graph structure has to be discovered. uDGPs arise when attempting to find the atomic structure of molecules and nanoparticles using X-ray or neutron diffraction data from non-crystalline materials. Rigidity theory provides a useful foundation for both aDGPs and uDGPs, though it is restricted to generic realizations of graphs, and key results are summarized. Conditions for unique realization are discussed for aDGP and uDGP cases, build-up algorithms for both cases are described, and experimental results for uDGP are presented.
Hegarty, Peter; Lemieux, Anthony F; McQueen, Grant
2010-03-01
Graphs seem to connote facts more than words or tables do. Consequently, they seem unlikely places to spot implicit sexism at work. Yet, in 6 studies (N = 741), women and men constructed (Study 1) and recalled (Study 2) gender difference graphs with men's data first, and graphed powerful groups (Study 3) and individuals (Study 4) ahead of weaker ones. Participants who interpreted graph order as evidence of author "bias" inferred that the author graphed his or her own gender group first (Study 5). Women's, but not men's, preferences to graph men first were mitigated when participants graphed a difference between themselves and an opposite-sex friend prior to graphing gender differences (Study 6). Graph production and comprehension are affected by beliefs and suppositions about the groups represented in graphs to a greater degree than cognitive models of graph comprehension or realist models of scientific thinking have yet acknowledged.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
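A hedged sketch of the three sampling families named above (node sampling, edge sampling, and traversal-based sampling) is given below; the function names, the sample sizes, and the test graph are illustrative and are not taken from the study.

```python
import random
import networkx as nx

def node_sample(G, k, seed=0):
    rng = random.Random(seed)
    nodes = rng.sample(list(G.nodes()), k)
    return G.subgraph(nodes).copy()

def edge_sample(G, k, seed=0):
    rng = random.Random(seed)
    edges = rng.sample(list(G.edges()), k)
    return G.edge_subgraph(edges).copy()

def random_walk_sample(G, k, seed=0):
    rng = random.Random(seed)
    current = rng.choice(list(G.nodes()))
    visited = {current}
    while len(visited) < k:
        nbrs = list(G.neighbors(current))
        if not nbrs:                       # restart if the walk gets stuck
            current = rng.choice(list(G.nodes()))
            continue
        current = rng.choice(nbrs)
        visited.add(current)
    return G.subgraph(visited).copy()

# Compare a simple statistic across samplers on a scale-free test graph
G = nx.barabasi_albert_graph(1000, 3)
for sampler in (node_sample, edge_sample, random_walk_sample):
    S = sampler(G, 200)
    print(sampler.__name__, S.number_of_nodes(), round(nx.density(S), 4))
```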
System analysis through bond graph modeling
NASA Astrophysics Data System (ADS)
McBride, Robert Thomas
2005-07-01
Modeling and simulation form an integral role in the engineering design process. An accurate mathematical description of a system provides the design engineer the flexibility to perform trade studies quickly and accurately to expedite the design process. Most often, the mathematical model of the system contains components of different engineering disciplines. A modeling methodology that can handle these types of systems might be used in an indirect fashion to extract added information from the model. This research examines the ability of a modeling methodology to provide added insight into system analysis and design. The modeling methodology used is bond graph modeling. An investigation into the creation of a bond graph model using the Lagrangian of the system is provided. Upon creation of the bond graph, system analysis is performed. To aid in the system analysis, an object-oriented approach to bond graph modeling is introduced. A framework is provided to simulate the bond graph directly. Through object-oriented simulation of a bond graph, the information contained within the bond graph can be exploited to create a measurement of system efficiency. A definition of system efficiency is given. This measurement of efficiency is used in the design of different controllers of varying architectures. Optimal control of a missile autopilot is discussed within the framework of the calculated system efficiency.
Paving the Way Towards Reactive Planar Spanner Construction in Wireless Networks
NASA Astrophysics Data System (ADS)
Frey, Hannes; Rührup, Stefan
A spanner is a subgraph of a given graph that supports the original graph's shortest path lengths up to a constant factor. Planar spanners and their distributed construction are of particular interest for geographic routing, which is an efficient localized routing scheme for wireless ad hoc and sensor networks. Planarity of the network graph is a key criterion for guaranteed delivery, while the spanner property supports efficiency in terms of path length. We consider the problem of reactive local spanner construction, where a node's local topology is determined on demand. Known message-efficient reactive planarization algorithms do not preserve the spanner property, while reactive spanner constructions with a low message overhead have not been described so far. We introduce the concept of direct planarization which may be an enabler of efficient reactive spanner construction. Given an edge, nodes check for all incident intersecting edges a certain geometric criterion and withdraw the edge if this criterion is not satisfied. We use this concept to derive a generic reactive topology control mechanism and consider two geometric criteria. Simulation results show that direct planarization increases the performance of localized geographic routing by providing shorter paths than existing reactive approaches.
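As a hedged illustration of what a localized, per-edge geometric planarization test looks like, the snippet below implements the classical Gabriel-graph criterion: an edge is kept only if no other node lies inside the circle whose diameter is the edge. This is one well-known example of such a criterion; the specific criteria evaluated in the paper may differ, and the function name is ours.

```python
import math

def keep_edge_gabriel(p_u, p_v, other_positions):
    """Keep edge (u, v) iff no other node lies strictly inside the circle
    whose diameter is the segment u-v (the Gabriel condition)."""
    mid = ((p_u[0] + p_v[0]) / 2.0, (p_u[1] + p_v[1]) / 2.0)
    radius = math.dist(p_u, p_v) / 2.0
    for w in other_positions:
        if math.dist(w, mid) < radius:
            return False          # a witness node blocks the edge
    return True

# A node applies the test on demand, using only the positions it knows locally:
print(keep_edge_gabriel((0, 0), (2, 0), [(1.0, 0.1), (5, 5)]))  # False: (1.0, 0.1) is a witness
```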
Local dependence in random graph models: characterization, properties and statistical inference
Schweinberger, Michael; Handcock, Mark S.
2015-01-01
Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142
An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant
NASA Astrophysics Data System (ADS)
Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter
2017-09-01
One key for successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends primarily on the product structure, we inform the agent through a generic approach based on product models. The product model is then transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate “the problem of loosening a connection and the distribution of the work” as a search problem. The created detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs to carry out the assistance. The aim of this research is to equip robot systems with knowledge and skills that allow them to perform their assistance autonomously and so improve the ergonomics of disassembly workstations.
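A minimal sketch of the "product graph to coarse plan" step follows, assuming the convention that a directed edge (a, b) means part a must be removed before part b; the part names and the use of a plain topological ordering are illustrative assumptions, not the paper's product models or planner.

```python
import networkx as nx

# Hypothetical product structure: edge (a, b) = "a must be removed before b"
product = nx.DiGraph()
product.add_edges_from([
    ("screws", "cover"),
    ("cover", "battery"),
    ("cover", "circuit_board"),
    ("circuit_board", "display"),
])

# A coarse disassembly sequence is any topological ordering of the graph
coarse_plan = list(nx.topological_sort(product))
print(coarse_plan)   # e.g. ['screws', 'cover', 'battery', 'circuit_board', 'display']
```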
An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle
NASA Astrophysics Data System (ADS)
Gidden, Matthew J.
Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent, isotopic-quality based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.
HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler
NASA Technical Reports Server (NTRS)
Hua, Hook; Mrozinski, Joseph J.; Elfes, Alberto; Adumitroaie, Virgil; Shelton, Kacie E.; Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.
2012-01-01
HURON solves the problem of how to optimize a plan and schedule for assigning multiple agents to a temporal sequence of actions (e.g., science tasks). Developed as a generic planning and scheduling tool, HURON has been used to optimize space mission surface operations. The tool has also been used to analyze lunar architectures for a variety of surface operational scenarios in order to maximize return on investment and productivity. These scenarios include numerous science activities performed by a diverse set of agents: humans, teleoperated rovers, and autonomous rovers. Once given a set of agents, activities, resources, resource constraints, temporal constraints, and dependencies, HURON computes an optimal schedule that meets a specified goal (e.g., maximum productivity or minimum time), subject to the constraints. HURON performs planning and scheduling optimization as a graph search in state-space with forward progression. Each node in the graph contains a state instance. Starting with the initial node, a graph is automatically constructed with new successive nodes of each new state to explore. The optimization uses a set of pre-conditions and post-conditions to create the children states. The Python language was adopted to not only enable more agile development, but to also allow the domain experts to easily define their optimization models. A graphical user interface was also developed to facilitate real-time search information feedback and interaction by the operator in the search optimization process. The HURON package has many potential uses in the fields of Operations Research and Management Science where this technology applies to many commercial domains requiring optimization to reduce costs. For example, optimizing a fleet of transportation truck routes, aircraft flight scheduling, and other route-planning scenarios involving multiple agent task optimization would all benefit by using HURON.
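The sketch below shows the general shape of forward state-space search with pre- and post-conditions, in the spirit described above. It is not HURON code; the action definitions, cost model, and goal test are illustrative assumptions.

```python
import heapq

def plan(initial_state, actions, goal_test):
    """actions: list of (name, precondition(state) -> bool,
                         effect(state) -> new_state, cost)."""
    frontier = [(0.0, 0, initial_state, [])]   # (cost, tiebreak, state, steps)
    seen = set()
    counter = 1
    while frontier:
        cost, _, state, steps = heapq.heappop(frontier)
        if goal_test(state):
            return steps, cost
        if state in seen:
            continue
        seen.add(state)
        for name, pre, eff, c in actions:
            if pre(state):                      # pre-condition gates expansion
                heapq.heappush(frontier, (cost + c, counter, eff(state), steps + [name]))
                counter += 1
    return None, float("inf")

# Toy example: two agents must each complete one task
initial = frozenset()
actions = [
    ("rover_scan",  lambda s: "scan" not in s,  lambda s: s | {"scan"},  2.0),
    ("human_drill", lambda s: "drill" not in s, lambda s: s | {"drill"}, 1.0),
]
print(plan(initial, actions, lambda s: {"scan", "drill"} <= s))
```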
Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang
2013-01-01
Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL) to address the limitations in the identification of the brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using the state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, stickiness-index-based model (STICKY) and scale-free gene duplication model (SF-GD), that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, STICKY and SF-GD models have better performances in characterizing the structural human brain network.
NASA Astrophysics Data System (ADS)
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
NASA Technical Reports Server (NTRS)
Mcardle, Jack G.; Esker, Barbara S.
1993-01-01
A one-third-scale model of a generic tailpipe offtake system for an advanced short takeoff, vertical landing (ASTOVL) aircraft was tested at the NASA Lewis Research Center Powered Lift Facility. The basic model consisted of a tailpipe with a center body to form an annulus simulating turbine outflow with no swirl; twin offtake ducts with elbows at the ends to turn the flow to a downward direction; flow control nozzles at the ends of the elbows; and a blind flange at the end of the tailpipe to simulate a closed cruise nozzle. The offtake duct-to-tailpipe diameter ratio was 0.74. Modifications of a generic nature were then made to this basic configuration to measure the effects of flow-path changes on the flow and pressure-loss characteristics. The modifications included adding rounded entrances at the forward edges of the offtake openings, blocking the tailpipe just aft of the openings instead of at the cruise nozzle, changing the location of the openings along the tailpipe, removing the center body, and varying the Mach number (flow rate) over a wide range in the tailpipe ahead of the openings by changing the size of the flow control nozzles. The tests were made with unheated air at tailpipe-to-ambient pressure ratios from 1.4 to 5. Results are presented and compared with performance graphs, total-pressure contour plots, paint streak flow visualization photographs, and a flow-angle probe traverse at the offtake entrance.
From the physics of interacting polymers to optimizing routes on the London Underground
Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael
2013-01-01
Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198
From the physics of interacting polymers to optimizing routes on the London Underground.
Yeung, Chi Ho; Saad, David; Wong, K Y Michael
2013-08-20
Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise.
Netgram: Visualizing Communities in Evolving Networks
Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.
2015-01-01
Real-world complex networks are dynamic in nature and change over time. The change is usually observed in the interactions within the network over time. Complex networks exhibit community like structures. A key feature of the dynamics of complex networks is the evolution of communities over time. Several methods have been proposed to detect and track the evolution of these groups over time. However, there is no generic tool which visualizes all the aspects of group evolution in dynamic networks including birth, death, splitting, merging, expansion, shrinkage and continuation of groups. In this paper, we propose Netgram: a tool for visualizing evolution of communities in time-evolving graphs. Netgram maintains evolution of communities over two consecutive time stamps in tables which are used to create a query database using the SQL outer-join operation. It uses a line-based visualization technique which adheres to certain design principles and aesthetic guidelines. Netgram uses a greedy solution to order the initial community information provided by the evolutionary clustering technique such that we have fewer line cross-overs in the visualization. This makes it easier to track the progress of individual communities in time evolving graphs. Netgram is a generic toolkit which can be used with any evolutionary community detection algorithm as illustrated in our experiments. We use Netgram for visualization of topic evolution in the NIPS conference over a period of 11 years and observe the emergence and merging of several disciplines in the field of information processing systems. PMID:26356538
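A hedged sketch of the outer-join idea follows: community memberships at two consecutive time stamps are outer-joined on the node id, so births, deaths and continuations show up as missing values on one side or the other. The column names and data are illustrative, not Netgram's schema.

```python
import pandas as pd

t1 = pd.DataFrame({"node": ["a", "b", "c", "d"], "community_t1": [1, 1, 2, 2]})
t2 = pd.DataFrame({"node": ["b", "c", "d", "e"], "community_t2": [1, 3, 3, 3]})

joined = t1.merge(t2, on="node", how="outer")   # the SQL-style outer join
print(joined)
# A NaN in community_t2 marks a node that disappeared, a NaN in community_t1
# marks a newly appearing node, and cross-tabulating the two columns reveals
# splits and merges of communities between the two time stamps.
```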
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph C.; McLendon, William Clarence,
2013-01-01
Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
Convergence of the Graph Allen-Cahn Scheme
NASA Astrophysics Data System (ADS)
Luo, Xiyang; Bertozzi, Andrea L.
2017-05-01
The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
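For orientation, a hedged sketch of the standard graph Allen-Cahn setup follows, in the convexity-splitting form popularized by Bertozzi and Flenner; the exact scheme, constants, and the graph-independent stepsize condition analyzed in the paper may differ from this sketch.

```latex
% L: graph Laplacian; W(u) = (1/4)(u^2 - 1)^2 a double-well potential;
% c > 0: a convexity-splitting constant (an assumption of this sketch).
\[
\begin{aligned}
E_\varepsilon(u) &= \frac{\varepsilon}{2}\, u^{\top} L\, u
  + \frac{1}{\varepsilon}\sum_i W(u_i), \\
\frac{u^{n+1} - u^{n}}{\Delta t}
  &= -\,\varepsilon L u^{n+1} - c\,\bigl(u^{n+1} - u^{n}\bigr)
     - \frac{1}{\varepsilon} W'(u^{n}).
\end{aligned}
\]
```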
Hierarchical graphs for rule-based modeling of biochemical systems
2011-01-01
Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for specifying rule-based models, such as the BioNetGen language (BNGL). Thus, the proposed use of hierarchical graphs should promote clarity and better understanding of rule-based models. PMID:21288338
FUSE: a profit maximization approach for functional summarization of biological networks.
Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes; Yu, Hanry
2012-03-21
The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein interaction network (PPI) using graph theoretic analysis. Despite the recent progress, systems level analysis of PPIs remains a daunting task as it is challenging to make sense out of the deluge of high-dimensional interaction data. Specifically, techniques that automatically abstract and summarize PPIs at multiple resolutions to provide high level views of their functional landscape are still lacking. We present a novel data-driven and generic algorithm called FUSE (Functional Summary Generator) that generates functional maps of a PPI at different levels of organization, from broad process-process level interactions to in-depth complex-complex level interactions, through a profit maximization approach that exploits the Minimum Description Length (MDL) principle to maximize information gain of the summary graph while satisfying the level of detail constraint. We evaluate the performance of FUSE on several real-world PPIs. We also compare FUSE to state-of-the-art graph clustering methods with GO term enrichment by constructing the biological process landscape of the PPIs. Using the AD network as our case study, we further demonstrate the ability of FUSE to quickly summarize the network and identify many different processes and complexes that regulate it. Finally, we study the higher-order connectivity of the human PPI. By simultaneously evaluating interaction and annotation data, FUSE abstracts higher-order interaction maps by reducing the details of the underlying PPI to form a functional summary graph of interconnected functional clusters. Our results demonstrate its effectiveness and superiority over state-of-the-art graph clustering methods with GO term enrichment.
Molecular graph convolutions: moving beyond fingerprints.
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Graph wavelet alignment kernels for drug virtual screening.
Smalter, Aaron; Huan, Jun; Lushington, Gerald
2009-06-01
In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing graph local topology. We design a novel graph kernel function to utilize the topology features to build predictive models for chemicals via a Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding, those of existing state-of-the-art chemical classification approaches. In addition, our results also show that the use of wavelet functions significantly decreases the computational costs for graph kernel computation, with a more than ten-fold speedup.
Emerging hierarchies in dynamically adapting webs
NASA Astrophysics Data System (ADS)
Katifori, Eleni; Graewer, Johannes; Magnasco, Marcelo; Modes, Carl
Transport networks play a key role across four realms of eukaryotic life: slime molds, fungi, plants, and animals. In addition to the developmental algorithms that build them, many also employ adaptive strategies to respond to stimuli, damage, and other environmental changes. We model these adapting network architectures using a generic dynamical system on weighted graphs and find in simulation that these networks ultimately develop a hierarchical organization of the final weighted architecture accompanied by the formation of a system-spanning backbone. We quantify the hierarchical organization of the networks by developing an algorithm that decomposes the architecture to multiple scales and analyzes how the organization in each scale relates to that of the scale above and below it. The methodologies developed in this work are applicable to a wide range of systems including the slime mold Physarum polycephalum, human microvasculature, and force chains in granular media.
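As a hedged illustration of what "a generic dynamical system on weighted graphs" can look like, the sketch below uses a well-known Physarum-style adaptation rule (after Tero et al.): edge conductances grow with the flow they carry and decay otherwise. This is our stand-in example, not necessarily the dynamics used in this work; the parameters and the toy network are assumptions.

```python
import numpy as np

def adapt_network(A, source, sink, gamma=1.2, dt=0.1, steps=200):
    """A: symmetric conductance matrix (n x n); returns adapted conductances."""
    n = A.shape[0]
    w = A.astype(float).copy()
    b = np.zeros(n)
    b[source], b[sink] = 1.0, -1.0                       # unit current in and out
    for _ in range(steps):
        L = np.diag(w.sum(axis=1)) - w                   # weighted graph Laplacian
        p = np.linalg.pinv(L) @ b                        # node potentials (Kirchhoff)
        Q = w * (p[:, None] - p[None, :])                # edge flows
        w = w + dt * (np.abs(Q) ** gamma - w)            # grow with flow, decay otherwise
        w[A == 0] = 0.0                                  # keep non-edges at zero
    return w

# Small ring-with-chord example: the heavily used route tends to survive as a backbone
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]:
    A[i, j] = A[j, i] = 1.0
print(np.round(adapt_network(A, source=0, sink=2), 2))
```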
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Supriya; Shankar, Mallikarjun
Critical infrastructure systems (CIs) such as energy, water, transportation and communication are highly interconnected and mutually dependent in complex ways. Robust modeling of CI interconnections is crucial to identify vulnerabilities in the CIs. We present here a national-scale Infrastructure Vulnerability Analysis System (IVAS) vision leveraging Semantic Big Data (SBD) tools, Big Data, and Geographical Information Systems (GIS) tools. We survey existing approaches on vulnerability analysis of critical infrastructures and discuss relevant systems and tools aligned with our vision. Next, we present a generic system architecture and discuss challenges including: (1) Constructing and managing a CI network-of-networks graph, (2) Performing analytic operations at scale, and (3) Interactive visualization of analytic output to generate meaningful insights. We argue that this architecture acts as a baseline to realize a national-scale network-based vulnerability analysis system.
Mining and Indexing Graph Databases
ERIC Educational Resources Information Center
Yuan, Dayu
2013-01-01
Graphs are widely used to model structures and relationships of objects in various scientific and commercial fields. Chemical molecules, proteins, malware system-call dependencies and three-dimensional mechanical parts are all modeled as graphs. In this dissertation, we propose to mine and index those graph data to enable fast and scalable search.…
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
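For context, the sketch below shows the naive O(n²) way of sampling an inhomogeneous random graph with a Chung-Lu style product kernel; every vertex pair is tested independently. The paper's contribution is precisely an algorithm that avoids this quadratic loop and runs in roughly O(n(log n)²); that faster algorithm is not reproduced here, and the weight sequence is an illustrative assumption.

```python
import random

def naive_kernel_graph(weights, seed=0):
    """O(n^2) baseline: edge (i, j) appears independently with probability
    min(1, w_i * w_j / sum(w)) -- a Chung-Lu style product kernel."""
    rng = random.Random(seed)
    n = len(weights)
    total = sum(weights)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / total):
                edges.append((i, j))
    return edges

# Power-law-ish expected degrees for a small example
w = [(k + 1) ** -0.5 * 30 for k in range(500)]
print(len(naive_kernel_graph(w)))
```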
Bipartite graphs as models of population structures in evolutionary multiplayer games.
Peña, Jorge; Rochat, Yannick
2012-01-01
By combining evolutionary game theory and graph theory, "games on graphs" study the evolutionary dynamics of frequency-dependent selection in population structures modeled as geographical or social networks. Networks are usually represented by means of unipartite graphs, and social interactions by two-person games such as the famous prisoner's dilemma. Unipartite graphs have also been used for modeling interactions going beyond pairwise interactions. In this paper, we argue that bipartite graphs are a better alternative to unipartite graphs for describing population structures in evolutionary multiplayer games. To illustrate this point, we make use of bipartite graphs to investigate, by means of computer simulations, the evolution of cooperation under the conventional and the distributed N-person prisoner's dilemma. We show that several implicit assumptions arising from the standard approach based on unipartite graphs (such as the definition of replacement neighborhoods, the intertwining of individual and group diversity, and the large overlap of interaction neighborhoods) can have a large impact on the resulting evolutionary dynamics. Our work provides a clear example of the importance of construction procedures in games on graphs, of the suitability of bigraphs and hypergraphs for computational modeling, and of the importance of concepts from social network analysis such as centrality, centralization and bipartite clustering for the understanding of dynamical processes occurring on networked population structures.
Intelligent model-based diagnostics for vehicle health management
NASA Astrophysics Data System (ADS)
Luo, Jianhui; Tu, Fang; Azam, Mohammad S.; Pattipati, Krishna R.; Willett, Peter K.; Qiao, Liu; Kawamoto, Masayuki
2003-08-01
The recent advances in sensor technology, remote communication and computational capabilities, and standardized hardware/software interfaces are creating a dramatic shift in the way the health of vehicles is monitored and managed. These advances facilitate remote monitoring, diagnosis and condition-based maintenance of automotive systems. With the increased sophistication of electronic control systems in vehicles, there is a concomitant increased difficulty in the identification of the malfunction phenomena. Consequently, the current rule-based diagnostic systems are difficult to develop, validate and maintain. New intelligent model-based diagnostic methodologies that exploit the advances in sensor, telecommunications, computing and software technologies are needed. In this paper, we will investigate hybrid model-based techniques that seamlessly employ quantitative (analytical) models and graph-based dependency models for intelligent diagnosis. Automotive engineers have found quantitative simulation (e.g. MATLAB/SIMULINK) to be a vital tool in the development of advanced control systems. The hybrid method exploits this capability to improve the diagnostic system's accuracy and consistency, utilizes existing validated knowledge on rule-based methods, enables remote diagnosis, and responds to the challenges of increased system complexity. The solution is generic and has the potential for application in a wide range of systems.
On Edge Exchangeable Random Graphs
NASA Astrophysics Data System (ADS)
Janson, Svante
2017-06-01
We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).
Statistically significant relational data mining :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Dynamic graph of an oxy-fuel combustion system using autocatalytic set model
NASA Astrophysics Data System (ADS)
Harish, Noor Ainy; Bakar, Sumarni Abu
2017-08-01
Evaporation is one of the main processes, besides combustion, in an oxy-combustion boiler system. An Autocatalytic Set (ACS) model has been successfully applied to develop a graphical representation of the chemical reactions that occur in the evaporation process of the system. The seventeen variables identified in the process are represented as nodes and the catalytic relationships are represented as edges in the graph. In this paper, the graph dynamics of the ACS are investigated further. Using the Dynamic Autocatalytic Set Graph Algorithm (DAGA), the adjacency matrix of each graph and its relation to the Perron-Frobenius theorem are investigated. The resulting dynamic graph is then examined further, and its connection to a Type-1 fuzzy graph is established.
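As a hedged illustration of the adjacency-matrix viewpoint mentioned above (in the spirit of autocatalytic-set analyses such as Jain and Krishna's, which may differ from DAGA in detail), the sketch below computes the largest eigenvalue of a small directed catalytic graph: a Perron-Frobenius eigenvalue of at least 1 signals the presence of an autocatalytic set, and the support of the corresponding eigenvector picks out its members. The example graph is purely illustrative.

```python
import numpy as np

# Directed catalytic graph on 4 species: A[i, j] = 1 if species j catalyzes species i.
# Species 0, 1, 2 form a catalytic cycle; species 3 catalyzes 0 but is not itself catalyzed.
A = np.array([[0, 1, 0, 1],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
perron_value = eigvals.real[k]
perron_vector = np.abs(eigvecs[:, k].real)

print("largest eigenvalue:", round(perron_value, 3))        # 1.0 here: an ACS exists
print("ACS members:", np.where(perron_vector > 1e-8)[0])    # [0 1 2]: the catalytic cycle
```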
The graph neural network model.
Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele
2009-01-01
Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G,n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
Graph theoretical model of a sensorimotor connectome in zebrafish.
Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan
2012-01-01
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
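The snippet below is a hedged sketch of the kind of graph-theoretical comparison described above: clustering coefficient and characteristic path length of a network compared against size-matched random graphs. The stochastic zebrafish connectome itself is not reproduced; a small-world placeholder graph stands in, and the graph sizes are assumptions.

```python
import networkx as nx

connectome = nx.watts_strogatz_graph(500, 10, 0.05, seed=1)   # placeholder graph only

def summarize(G, name):
    giant = G.subgraph(max(nx.connected_components(G), key=len))   # guard against disconnects
    print(name,
          "C =", round(nx.average_clustering(G), 3),
          "L =", round(nx.average_shortest_path_length(giant), 3))

summarize(connectome, "model connectome")
summarize(nx.gnm_random_graph(500, connectome.number_of_edges(), seed=1), "random (same size)")
summarize(nx.watts_strogatz_graph(500, 10, 1.0, seed=1), "fully rewired")
# A small-world structure shows high clustering C with a short path length L
# relative to the size-matched random graphs.
```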
Topology of molecular interaction networks.
Winterbach, Wynand; Van Mieghem, Piet; Reinders, Marcel; Wang, Huijuan; de Ridder, Dick
2013-09-16
Molecular interactions are often represented as network models which have become the common language of many areas of biology. Graphs serve as convenient mathematical representations of network models and have themselves become objects of study. Their topology has been intensively researched over the last decade after evidence was found that they share underlying design principles with many other types of networks. Initial studies suggested that molecular interaction network topology is related to biological function and evolution. However, further whole-network analyses did not lead to a unified view on what this relation may look like, with conclusions highly dependent on the type of molecular interactions considered and the metrics used to study them. It is unclear whether global network topology drives function, as suggested by some researchers, or whether it is simply a byproduct of evolution or even an artefact of representing complex molecular interaction networks as graphs. Nevertheless, network biology has progressed significantly over the last years. We review the literature, focusing on two major developments. First, realizing that molecular interaction networks can be naturally decomposed into subsystems (such as modules and pathways), topology is increasingly studied locally rather than globally. Second, there is a move from a descriptive approach to a predictive one: rather than correlating biological network topology to generic properties such as robustness, it is used to predict specific functions or phenotypes. Taken together, this change in focus from globally descriptive to locally predictive points to new avenues of research. In particular, multi-scale approaches are developments promising to drive the study of molecular interaction networks further.
Topology of molecular interaction networks
2013-01-01
Molecular interactions are often represented as network models which have become the common language of many areas of biology. Graphs serve as convenient mathematical representations of network models and have themselves become objects of study. Their topology has been intensively researched over the last decade after evidence was found that they share underlying design principles with many other types of networks. Initial studies suggested that molecular interaction network topology is related to biological function and evolution. However, further whole-network analyses did not lead to a unified view on what this relation may look like, with conclusions highly dependent on the type of molecular interactions considered and the metrics used to study them. It is unclear whether global network topology drives function, as suggested by some researchers, or whether it is simply a byproduct of evolution or even an artefact of representing complex molecular interaction networks as graphs. Nevertheless, network biology has progressed significantly over the last years. We review the literature, focusing on two major developments. First, realizing that molecular interaction networks can be naturally decomposed into subsystems (such as modules and pathways), topology is increasingly studied locally rather than globally. Second, there is a move from a descriptive approach to a predictive one: rather than correlating biological network topology to generic properties such as robustness, it is used to predict specific functions or phenotypes. Taken together, this change in focus from globally descriptive to locally predictive points to new avenues of research. In particular, multi-scale approaches are developments promising to drive the study of molecular interaction networks further. PMID:24041013
Multi-Level Anomaly Detection on Time-Varying Graph Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A; Collins, John P; Ferragut, Erik M
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
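A hedged sketch of the aggregation idea follows: edge-level surprise scores under a probability model are summed into node-, community-, and graph-level scores, so an anomaly can be localized at several resolutions. The independent-edge probability model used here is a stand-in for the generalized BTER model of the paper, and all names are illustrative.

```python
import math

def edge_surprise(observed, p):
    """Surprise of one edge outcome under probability p (assumed in (0, 1))."""
    return -math.log(p if observed else 1.0 - p)

def multilevel_scores(edges_observed, edge_probs, communities):
    """edges_observed: {(u, v): bool}; edge_probs: {(u, v): p};
    communities: {node: community id}."""
    node_score, comm_score, graph_score = {}, {}, 0.0
    for (u, v), obs in edges_observed.items():
        s = edge_surprise(obs, edge_probs[(u, v)])
        node_score[u] = node_score.get(u, 0.0) + s
        node_score[v] = node_score.get(v, 0.0) + s
        comm_score[communities[u]] = comm_score.get(communities[u], 0.0) + s
        graph_score += s
    return node_score, comm_score, graph_score

# Toy usage: an unexpected cross-community edge dominates the scores
obs = {("a", "b"): True, ("a", "c"): True}
probs = {("a", "b"): 0.9, ("a", "c"): 0.01}
print(multilevel_scores(obs, probs, {"a": 1, "b": 1, "c": 2}))
```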
Process and representation in graphical displays
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne
1993-01-01
Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph?; and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use?; and what are the specific processing skills required?
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
GraQL: A Query Language for High-Performance Attributed Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Castellana, Vito G.; Morari, Alessandro
Graph databases have gained increasing interest in the last few years due to the emergence of data sources which are not easily analyzable in traditional relational models or for which a graph data model is the natural representation. In order to understand the design and implementation choices for an attributed graph database backend and query language, we have started to design our infrastructure for attributed graph databases. In this paper, we describe the design considerations of our in-memory attributed graph database system with a particular focus on the data definition and query language components.
The investigation of social networks based on multi-component random graphs
NASA Astrophysics Data System (ADS)
Zadorozhnyi, V. N.; Yudin, E. B.
2018-01-01
Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with the nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models and computer experiments with these models would help developers (owners) of the networks to predict their development correctly and to choose effective strategies for controlling network projects.
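The sketch below illustrates growth under a nonlinear preferential-attachment rule, where a new vertex attaches to an existing vertex with probability proportional to degree**alpha. It is a hedged illustration of the underlying growth model only; the calibration procedure that matches observed degree distributions is not reproduced, and the function name and parameters are ours.

```python
import random

def nonlinear_pa_graph(n, alpha, seed=0):
    """Grow a graph: each new vertex attaches to an existing vertex chosen
    with probability proportional to degree ** alpha."""
    rng = random.Random(seed)
    degree = [1, 1]                    # start from a single edge 0-1
    edges = [(0, 1)]
    for v in range(2, n):
        weights = [d ** alpha for d in degree]
        r, acc, target = rng.random() * sum(weights), 0.0, 0
        for u, wu in enumerate(weights):
            acc += wu
            if acc >= r:
                target = u
                break
        edges.append((v, target))
        degree[target] += 1
        degree.append(1)
    return edges

edges = nonlinear_pa_graph(5000, alpha=1.0)   # alpha = 1 recovers linear preferential attachment
```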
GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration
Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng
2015-01-01
The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
Two-year-olds use the generic/non-generic distinction to guide their inferences about novel kinds
Graham, Susan A.; Nayer, Samantha L.; Gelman, Susan A.
2011-01-01
These studies investigated 24- and 30-month-olds’ sensitivity to generic versus nongeneric language when acquiring knowledge about novel kinds. Toddlers were administered an inductive inference task, during which they heard a generic noun-phrase (e.g., “Blicks drink milk”) or a non-generic noun-phrase (e.g., “This blick drinks milk”) paired with an action (e.g., drinking) modeled on an object. They were then provided with the model and a non-model exemplar and asked to imitate the action. After hearing non-generic phrases, 30-month-olds, but not 24-month-olds, imitated more often with the model than with the non-model exemplar. In contrast, after hearing generic phrases, 30-month-olds imitated equally often with both exemplars. These results suggest that 30-month-olds use the generic/non-generic distinction to guide their inferences about novel kinds. PMID:21410928
ERIC Educational Resources Information Center
Tyner, Bryan C.; Fienup, Daniel M.
2015-01-01
Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance.…
F-RAG: Generating Atomic Coordinates from RNA Graphs by Fragment Assembly.
Jain, Swati; Schlick, Tamar
2017-11-24
Coarse-grained models represent attractive approaches to analyze and simulate ribonucleic acid (RNA) molecules, for example, for structure prediction and design, as they simplify the RNA structure to reduce the conformational search space. Our structure prediction protocol RAGTOP (RNA-As-Graphs Topology Prediction) represents RNA structures as tree graphs and samples graph topologies to produce candidate graphs. However, for a more detailed study and analysis, construction of atomic from coarse-grained models is required. Here we present our graph-based fragment assembly algorithm (F-RAG) to convert candidate three-dimensional (3D) tree graph models, produced by RAGTOP, into atomic structures. We use our related RAG-3D utilities to partition graphs into subgraphs and search for structurally similar atomic fragments in a data set of RNA 3D structures. The fragments are edited and superimposed using common residues, full atomic models are scored using RAGTOP's knowledge-based potential, and the geometries of the top-scoring models are optimized. To evaluate our models, we assess all-atom RMSDs and Interaction Network Fidelity (a measure of residue interactions) with respect to experimentally solved structures and compare our results to other fragment assembly programs. For a set of 50 RNA structures, we obtain atomic models with reasonable geometries and interactions, particularly good for RNAs containing junctions. Additional improvements to our protocol and databases are outlined. These results provide a good foundation for further work on RNA structure prediction and design applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Molecular graph convolutions: moving beyond fingerprints
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-01-01
Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
The 1/N Expansion of Tensor Models with Two Symmetric Tensors
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2018-06-01
It is well known that tensor models for a tensor with no symmetry admit a 1/N expansion dominated by melonic graphs. This result relies crucially on identifying jackets, which are globally defined ribbon graphs embedded in the tensor graph. In contrast, no result of this kind has so far been established for symmetric tensors because global jackets do not exist. In this paper we introduce a new approach to the 1/N expansion in tensor models adapted to symmetric tensors. In particular we do not use any global structure like the jackets. We prove that, for any rank D, a tensor model with two symmetric tensors and interactions given by the complete graph K_{D+1} admits a 1/N expansion dominated by melonic graphs.
A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization
Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.; ...
2016-01-01
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
Large-scale DCMs for resting-state fMRI.
Razi, Adeel; Seghier, Mohamed L; Zhou, Yuan; McColgan, Peter; Zeidman, Peter; Park, Hae-Jeong; Sporns, Olaf; Rees, Geraint; Friston, Karl J
2017-01-01
This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that discovering the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM, with functional connectivity priors, is ideally suited for directed graph theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1995-01-01
Intelligent systems require software incorporating probabilistic reasoning and, often, learning. Networks provide a framework and methodology for creating this kind of software. This paper introduces network models based on chain graphs with deterministic nodes. Chain graphs are defined as a hierarchical combination of Bayesian and Markov networks. To model learning, plates on chain graphs are introduced to represent independent samples. The paper concludes by discussing various operations that can be performed on chain graphs with plates as a simplification process or to generate learning algorithms.
A sediment graph model based on SCS-CN method
NASA Astrophysics Data System (ADS)
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., Nash model based instantaneous unit sediment graph (IUSG), soil conservation service curve number (SCS-CN) method, and power law. These models vary in their complexity and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
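To make the SCS-CN component of these sediment graph models concrete, the following is a minimal Python sketch of the curve-number runoff relation, Q = (P - Ia)^2 / (P - Ia + S) with S = 25400/CN - 254 in mm; the storm depth and curve number in the example are illustrative assumptions, not parameters fitted to the Nagwan watershed.

```python
# Minimal sketch of the SCS-CN direct runoff relation (a building block of the
# sediment graph models above); all numeric inputs below are illustrative.

def scs_cn_runoff(p_mm: float, cn: float, lambda_ia: float = 0.2) -> float:
    """Direct runoff Q (mm) for rainfall depth P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0       # potential maximum retention S (mm)
    ia = lambda_ia * s             # initial abstraction Ia = lambda * S
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

if __name__ == "__main__":
    # Hypothetical 75 mm storm on a watershed with CN = 78.
    print(round(scs_cn_runoff(p_mm=75.0, cn=78.0), 2), "mm of direct runoff")
```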
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.
2017-12-12
smoothG is a collection of parallel C++ classes/functions that algebraically construct reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software also finds applications in subsurface flow and power grid simulations, in which graph Laplacians arise.
Bipartite Graphs as Models of Population Structures in Evolutionary Multiplayer Games
Peña, Jorge; Rochat, Yannick
2012-01-01
By combining evolutionary game theory and graph theory, “games on graphs” study the evolutionary dynamics of frequency-dependent selection in population structures modeled as geographical or social networks. Networks are usually represented by means of unipartite graphs, and social interactions by two-person games such as the famous prisoner’s dilemma. Unipartite graphs have also been used for modeling interactions going beyond pairwise interactions. In this paper, we argue that bipartite graphs are a better alternative to unipartite graphs for describing population structures in evolutionary multiplayer games. To illustrate this point, we make use of bipartite graphs to investigate, by means of computer simulations, the evolution of cooperation under the conventional and the distributed N-person prisoner’s dilemma. We show that several implicit assumptions arising from the standard approach based on unipartite graphs (such as the definition of replacement neighborhoods, the intertwining of individual and group diversity, and the large overlap of interaction neighborhoods) can have a large impact on the resulting evolutionary dynamics. Our work provides a clear example of the importance of construction procedures in games on graphs, of the suitability of bigraphs and hypergraphs for computational modeling, and of the importance of concepts from social network analysis such as centrality, centralization and bipartite clustering for the understanding of dynamical processes occurring on networked population structures. PMID:22970237
Applications of graph theory in protein structure identification
2011-01-01
There is a growing interest in the identification of proteins on the proteome wide scale. Among different kinds of protein structure identification methods, graph-theoretic methods are very sharp ones. Due to their lower costs, higher effectiveness and many other advantages, they have drawn more and more researchers’ attention nowadays. Specifically, graph-theoretic methods have been widely used in homology identification, side-chain cluster identification, peptide sequencing and so on. This paper reviews several methods in solving protein structure identification problems using graph theory. We mainly introduce classical methods and mathematical models including homology modeling based on clique finding, identification of side-chain clusters in protein structures upon graph spectrum, and de novo peptide sequencing via tandem mass spectrometry using the spectrum graph model. In addition, concluding remarks and future priorities of each method are given. PMID:22165974
Automatic determination of fault effects on aircraft functionality
NASA Technical Reports Server (NTRS)
Feyock, Stefan
1989-01-01
The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. Researchers show how this is done, as well as presenting a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is the fact that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories, and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities. Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.
Mathematical modeling of the malignancy of cancer using graph evolution.
Gunduz-Demir, Cigdem
2007-10-01
We report a novel computational method based on graph evolution process to model the malignancy of brain cancer called glioma. In this work, we analyze the phases that a graph passes through during its evolution and demonstrate strong relation between the malignancy of cancer and the phase of its graph. From the photomicrographs of tissues, which are diagnosed as normal, low-grade cancerous and high-grade cancerous, we construct cell-graphs based on the locations of cells; we probabilistically generate an edge between every pair of cells depending on the Euclidean distance between them. For a cell-graph, we extract connectivity information including the properties of its connected components in order to analyze the phase of the cell-graph. Working with brain tissue samples surgically removed from 12 patients, we demonstrate that cell-graphs generated for different tissue types evolve differently and that they exhibit different phase properties, which distinguish a tissue type from another.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
2012-01-01
Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights and significance of vertices. Centrality analysis typically applies a method based on a single property of the graph vertices. In graph theory, centrality is commonly analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, PageRank, status, Katz centrality and eigenvector centrality. We propose a new multi-parametric centrality method that combines several basic properties of a network member, and we develop its mathematical model. The results of the presented method are compared with those of the standard centrality methods. To evaluate the multi-parametric centrality method, a graph model with hundreds of vertices is analyzed. The comparative analysis demonstrates the accuracy of the presented method, which simultaneously accounts for a number of basic vertex properties.
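A rough sense of how a multi-parametric score can be assembled from standard single-property centralities is given by the sketch below; the choice of measures, the per-measure normalization and the equal weights are assumptions for illustration, not the authors' calibration.

```python
# Sketch of a combined, multi-parametric centrality score built from several
# standard single-property measures; measure set, normalization and equal
# weights are illustrative assumptions.
import networkx as nx

def multi_parametric_centrality(g: nx.Graph, weights=None) -> dict:
    measures = {
        "degree": nx.degree_centrality(g),
        "closeness": nx.closeness_centrality(g),
        "betweenness": nx.betweenness_centrality(g),
        "eigenvector": nx.eigenvector_centrality(g, max_iter=1000),
    }
    weights = weights or {name: 1.0 / len(measures) for name in measures}
    scores = {}
    for node in g.nodes:
        total = 0.0
        for name, values in measures.items():
            vmax = max(values.values()) or 1.0   # scale each measure to [0, 1]
            total += weights[name] * values[node] / vmax
        scores[node] = total
    return scores

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(100, 3, seed=1)   # stand-in for a graph with hundreds of vertices
    ranked = sorted(multi_parametric_centrality(g).items(), key=lambda kv: -kv[1])
    print(ranked[:5])
```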
Graph theory as a proxy for spatially explicit population models in conservation planning.
Minor, Emily S; Urban, Dean L
2007-09-01
Spatially explicit population models (SEPMs) are often considered the best way to predict and manage species distributions in spatially heterogeneous landscapes. However, they are computationally intensive and require extensive knowledge of species' biology and behavior, limiting their application in many cases. An alternative to SEPMs is graph theory, which has minimal data requirements and efficient algorithms. Although only recently introduced to landscape ecology, graph theory is well suited to ecological applications concerned with connectivity or movement. This paper compares the performance of graph theory to a SEPM in selecting important habitat patches for Wood Thrush (Hylocichla mustelina) conservation. We use both models to identify habitat patches that act as population sources and persistent patches and also use graph theory to identify patches that act as stepping stones for dispersal. Correlations of patch rankings were very high between the two models. In addition, graph theory offers the ability to identify patches that are very important to habitat connectivity and thus long-term population persistence across the landscape. We show that graph theory makes very similar predictions in most cases and in other cases offers insight not available from the SEPM, and we conclude that graph theory is a suitable and possibly preferable alternative to SEPMs for species conservation in heterogeneous landscapes.
Metric learning with spectral graph convolutions on brain connectivity networks.
Ktena, Sofia Ira; Parisot, Sarah; Ferrante, Enzo; Rajchl, Martin; Lee, Matthew; Glocker, Ben; Rueckert, Daniel
2018-04-01
Graph representations are often used to model structured data at an individual or population level and have numerous applications in pattern recognition problems. In the field of neuroscience, where such representations are commonly used to model structural or functional connectivity between a set of brain regions, graphs have proven to be of great importance. This is mainly due to the capability of revealing patterns related to brain development and disease, which were previously unknown. Evaluating similarity between these brain connectivity networks in a manner that accounts for the graph structure and is tailored for a particular application is, however, non-trivial. Most existing methods fail to accommodate the graph structure, discarding information that could be beneficial for further classification or regression analyses based on these similarities. We propose to learn a graph similarity metric using a siamese graph convolutional neural network (s-GCN) in a supervised setting. The proposed framework takes into consideration the graph structure for the evaluation of similarity between a pair of graphs, by employing spectral graph convolutions that allow the generalisation of traditional convolutions to irregular graphs and operates in the graph spectral domain. We apply the proposed model on two datasets: the challenging ABIDE database, which comprises functional MRI data of 403 patients with autism spectrum disorder (ASD) and 468 healthy controls aggregated from multiple acquisition sites, and a set of 2500 subjects from UK Biobank. We demonstrate the performance of the method for the tasks of classification between matching and non-matching graphs, as well as individual subject classification and manifold learning, showing that it leads to significantly improved results compared to traditional methods. Copyright © 2017 Elsevier Inc. All rights reserved.
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large-scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to a poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large-scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
Simple graph models of information spread in finite populations
Voorhees, Burton; Ryder, Bergerud
2015-01-01
We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call ‘single-link’ graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian and another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components and these are compared to the partial bipartite graphs. PMID:26064661
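Two of the structure parameters mentioned above, Laplacian eigenvalues and expected hitting times, can be computed for a small biased cycle with a few lines of NumPy; the cycle size and bias probability below are illustrative, and the random-walk Laplacian I - P is used as a stand-in for whichever Laplacian normalization the authors adopt.

```python
# Laplacian spectrum and expected hitting times for a small biased cycle, two of
# the structure parameters discussed above; n and p are illustrative choices.
import numpy as np

def biased_cycle(n: int, p: float) -> np.ndarray:
    """Random-walk transition matrix of a cycle stepping clockwise with probability p."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i + 1) % n] = p
        P[i, (i - 1) % n] = 1.0 - p
    return P

def hitting_times(P: np.ndarray, target: int) -> np.ndarray:
    """Expected number of steps for the walk to first reach `target` from each state."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]                  # walk restricted to non-target states
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[others] = h
    return out

if __name__ == "__main__":
    P = biased_cycle(8, p=0.7)
    L = np.eye(8) - P                              # random-walk Laplacian I - P
    print("eigenvalues (complex for a directed walk):", np.round(np.linalg.eigvals(L), 3))
    print("hitting times to node 0:", np.round(hitting_times(P, 0), 2))
```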
Automated Modeling and Simulation Using the Bond Graph Method for the Aerospace Industry
NASA Technical Reports Server (NTRS)
Granda, Jose J.; Montgomery, Raymond C.
2003-01-01
Bond graph modeling was originally developed in the late 1950s by the late Prof. Henry M. Paynter of M.I.T. Prof. Paynter acted well before his time, as the main advantages of his creation, other than the modeling insight that it provides and the ability to deal effectively with Mechatronics, came to fruition only with the recent advent of modern computer technology and the tools derived as a result of it, including symbolic manipulation, MATLAB, and SIMULINK and the Computer Aided Modeling Program (CAMPG). Thus, only recently have these tools been available allowing one to fully utilize the advantages that the bond graph method has to offer. The purpose of this paper is to help fill the knowledge void concerning the use of bond graphs in the aerospace industry. The paper first presents simple examples to serve as a tutorial on bond graphs for those not familiar with the technique. The reader is given the basic understanding needed to appreciate the applications that follow. After that, several aerospace applications are developed such as modeling of an arresting system for aircraft carrier landings, suspension models used for landing gears and multibody dynamics. The paper also presents an update on NASA's progress in modeling the International Space Station (ISS) using bond graph techniques, and an advanced actuation system utilizing shape memory alloys. The latter covers the Mechatronics advantages of the bond graph method, applications that simultaneously involve mechanical, hydraulic, thermal, and electrical subsystem modeling.
An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional
NASA Astrophysics Data System (ADS)
van Gennip, Yves
2018-06-01
We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ -converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.
Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin
2018-01-01
We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
Harnessing the Bethe free energy
Bapst, Victor
2016-01-01
A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzkala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178
Tyner, Bryan C; Fienup, Daniel M
2015-09-01
Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed. © Society for the Experimental Analysis of Behavior.
Collaborative mining and transfer learning for relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Eslami, Mohammed
2015-06-01
Many real-world problems, including human knowledge, communication, biological, and cyber network analysis, deal with data entities for which the essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, with attributes on both objects and relations encoding and differentiating their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot be easily adapted to deal with graph data. This gave rise to the relational data mining field of research, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative, distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss the benefits and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments and learn graph patterns that are both closer to ground truth and provide higher detection rates than a centralized mining algorithm.
Space Generic Open Avionics Architecture (SGOAA) reference model technical guide
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Graph Coloring Used to Model Traffic Lights.
ERIC Educational Resources Information Center
Williams, John
1992-01-01
Two scheduling problems, one involving setting up an examination schedule and the other describing traffic light problems, are modeled as colorings of graphs consisting of a set of vertices and edges. The chromatic number, the least number of colors necessary for coloring a graph, is employed in the solutions. (MDH)
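The examination-scheduling interpretation can be reproduced with a greedy coloring in a few lines; the conflict list below is a hypothetical example, and greedy coloring only upper-bounds the chromatic number discussed in the abstract.

```python
# Greedy coloring of a hypothetical exam-conflict graph: vertices are exams,
# edges join exams sharing a student, and colors are time slots. The greedy
# result is an upper bound on the chromatic number.
import networkx as nx

conflicts = [("math", "physics"), ("math", "chemistry"),
             ("physics", "biology"), ("chemistry", "biology"),
             ("history", "math")]

g = nx.Graph(conflicts)
coloring = nx.coloring.greedy_color(g, strategy="largest_first")
slots = max(coloring.values()) + 1
print(f"{slots} time slots suffice:", coloring)
```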
2013-01-01
Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
Algebraic approach to small-world network models
NASA Astrophysics Data System (ADS)
Rudolph-Lilith, Michelle; Muller, Lyle E.
2014-01-01
We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
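As a numerical companion to the analytic treatment, the sketch below builds a small-world graph, derives a directed variant, and measures clustering plus a crude asymmetry score from the adjacency matrix; the rewiring probability, the fraction of dropped arc directions and the asymmetry formula are assumptions chosen for illustration rather than the paper's exact definitions.

```python
# Numerical companion to the analytic model: build a Watts-Strogatz graph, make
# it directed by dropping some reverse arcs, and compute clustering plus a crude
# adjacency-based asymmetry score (illustrative, not the paper's definition).
import networkx as nx
import numpy as np

n, k, p = 200, 4, 0.1
g = nx.watts_strogatz_graph(n, k, p, seed=42)
dg = g.to_directed()

rng = np.random.default_rng(0)
for u, v in list(g.edges()):
    if rng.random() < 0.3:                    # drop one direction of ~30% of edges
        dg.remove_edge(v, u)

A = nx.to_numpy_array(dg)                     # adjacency matrix of the digraph
asymmetry = np.abs(A - A.T).sum() / A.sum()   # 0 for a symmetric digraph, >0 otherwise
print("average clustering (undirected):", round(nx.average_clustering(g), 3))
print("illustrative asymmetry score:", round(asymmetry, 3))
```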
Semantic graphs and associative memories
NASA Astrophysics Data System (ADS)
Pomi, Andrés; Mizraji, Eduardo
2004-12-01
Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.
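The correspondence between a correlation-matrix memory and the adjacency matrix of the stored semantic graph can be seen in a toy example: with standard-basis concept vectors, summing the outer products of associated pairs reproduces the adjacency matrix, and presenting a concept retrieves the superposition of its associates. The three-concept net below is fabricated for illustration.

```python
# Toy correlation-matrix (distributed associative) memory: with standard-basis
# concept vectors, the sum of outer products of associated pairs equals the
# adjacency matrix of the semantic graph described above.
import numpy as np

concepts = ["dog", "animal", "barks"]
e = np.eye(len(concepts))                               # standard-basis concept vectors
associations = [("dog", "animal"), ("dog", "barks")]    # edges of the semantic graph

M = np.zeros((len(concepts), len(concepts)))
for src, dst in associations:
    M += np.outer(e[concepts.index(dst)], e[concepts.index(src)])

recall = M @ e[concepts.index("dog")]                   # presenting "dog" retrieves its associates
print("memory matrix (= adjacency matrix):")
print(M)
print("recall for 'dog':", dict(zip(concepts, recall)))
```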
Adjusting protein graphs based on graph entropy.
Peng, Sheng-Lung; Tsay, Yu-Wei
2014-01-01
Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling instead of looking for a minimum superimposition of pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified. Therefore, a criterion is needed to determine if a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that, when applied, graph entropy helps assess conformations in protein graph modeling. Furthermore, it indirectly contributes to protein structural comparison if a protein graph is solid.
Adjusting protein graphs based on graph entropy
2014-01-01
Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling instead of looking for a minimum superimposition of pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified. Therefore, a criterion is needed to determine if a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that, when applied, graph entropy helps assess conformations in protein graph modeling. Furthermore, it indirectly contributes to protein structural comparison if a protein graph is solid. PMID:25474347
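As a rough illustration of entropy-based graph assessment, the sketch below computes the Shannon entropy of a graph's normalized degree sequence; the specific graph entropy used in the papers above may be defined differently, so this should be read as one common instantiation rather than their measure.

```python
# One common instantiation of graph entropy: the Shannon entropy of the
# normalized degree sequence (illustrative only; the papers above may define
# graph entropy differently).
import math
import networkx as nx

def degree_entropy(g: nx.Graph) -> float:
    degrees = [d for _, d in g.degree() if d > 0]
    total = sum(degrees)
    return -sum((d / total) * math.log2(d / total) for d in degrees)

if __name__ == "__main__":
    star = nx.star_graph(9)       # highly centralized structure -> lower entropy
    ring = nx.cycle_graph(10)     # perfectly regular structure -> maximal entropy
    print("star:", round(degree_entropy(star), 3))
    print("ring:", round(degree_entropy(ring), 3))
```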
Overlapping community detection based on link graph using distance dynamics
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Jing; Cai, Li-Jun
2018-01-01
The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
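The first phase of L-Attractor, transforming the original graph into a link (edge) graph, corresponds to the standard line-graph construction and can be sketched as follows; the community-detection phase on the link graph and the recovery step are omitted, and the karate-club example is just a convenient test graph.

```python
# Phase one of the approach described above: map the original graph to its link
# (line) graph so that disjoint communities of links translate back into
# overlapping communities of nodes. The distance-dynamics clustering is omitted.
import networkx as nx

g = nx.karate_club_graph()
lg = nx.line_graph(g)             # each node of lg is an edge of g

print(g.number_of_nodes(), "nodes ->", lg.number_of_nodes(), "link-graph nodes")

# After communities are found on lg, every original node inherits the community
# of each of its incident links, which yields the overlapping cover.
incident_links = {v: [e for e in lg.nodes if v in e] for v in g.nodes}
print("node 0 participates in", len(incident_links[0]), "links")
```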
High-Performance Data Analytics Beyond the Relational and Graph Data Models with GEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Minutoli, Marco; Bhatt, Shreyansh
Graphs represent an increasingly popular data model for data analytics, since they can naturally represent relationships and interactions between entities. Relational databases and their pure table-based data model are not well suited to storing and processing sparse data. Consequently, graph databases have gained interest in the last few years, and the Resource Description Framework (RDF) became the standard data model for graph data. Nevertheless, while RDF is well suited to analyzing the relationships between entities, it is not efficient in representing their attributes and properties. In this work we propose the adoption of a new hybrid data model, based on attributed graphs, that aims at overcoming the limitations of the pure relational and graph data models. We present how we have re-designed the GEMS data-analytics framework to fully take advantage of the proposed hybrid data model. To improve analysts' productivity, in addition to a C++ API for application development, we adopt GraQL as the input query language. We validate our approach by implementing a set of queries on net-flow data, and we compare our framework's performance against Neo4j. Experimental results show significant performance improvement over Neo4j, up to several orders of magnitude when increasing the size of the input data.
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1994-01-01
This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low level detailed implementation standards to those interfaces points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
A new intrusion prevention model using planning knowledge graph
NASA Astrophysics Data System (ADS)
Cai, Zengyu; Feng, Yuan; Liu, Shuru; Gan, Yong
2013-03-01
Intelligent planning is an important research area in artificial intelligence that has been applied to network security. This paper proposes a new intrusion prevention model based on a planning knowledge graph and discusses the system architecture and characteristics of this model. Intrusion prevention is accomplished through plan recognition over the planning knowledge graph, and the intrusion response strategies and actions are generated by a hierarchical task network (HTN) planner. The resulting intrusion prevention system thus gains the advantages of intelligent planning, including knowledge sharing, focused response, learning autonomy and protective ability.
Primal-dual techniques for online algorithms and mechanisms
NASA Astrophysics Data System (ADS)
Liaghat, Vahid
An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks come up short in an online setting since an online algorithm should make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature, such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach to their analysis. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization on graphs.
GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil
2015-11-15
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.
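The Gather-Apply-Scatter programming model that GraphReduce builds on can be illustrated with a toy, single-machine PageRank loop; the out-of-core partitioning, GPU streams and host-device data movement that the framework actually provides are not modeled, and the damping factor and random graph are illustrative choices.

```python
# Toy single-machine illustration of the Gather-Apply-Scatter (GAS) model using
# PageRank; GraphReduce's out-of-core partitioning and GPU streaming are
# deliberately not modeled here.
import networkx as nx

def gas_pagerank(g: nx.DiGraph, d: float = 0.85, iters: int = 20) -> dict:
    n = g.number_of_nodes()
    rank = {v: 1.0 / n for v in g}
    for _ in range(iters):
        # Gather: each vertex sums the contributions of its in-neighbors.
        gathered = {v: sum(rank[u] / g.out_degree(u) for u in g.predecessors(v))
                    for v in g}
        # Apply: update each vertex value from the gathered sum.
        rank = {v: (1 - d) / n + d * gathered[v] for v in g}
        # Scatter: a distributed engine would now push the new values along
        # out-edges; with shared state the step is implicit.
    return rank

if __name__ == "__main__":
    g = nx.gnp_random_graph(50, 0.1, directed=True, seed=3)
    g.add_edges_from((v, (v + 1) % 50) for v in range(50))   # guarantee out-degree >= 1
    top_node, top_rank = max(gas_pagerank(g).items(), key=lambda kv: kv[1])
    print("highest-ranked vertex:", top_node, round(top_rank, 4))
```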
QSPR modeling: graph connectivity indices versus line graph connectivity indices
Basak; Nikolic; Trinajstic; Amic; Beslo
2000-07-01
Five QSPR models of alkanes were reinvestigated. Properties considered were molecular surface-dependent properties (boiling points and gas chromatographic retention indices) and molecular volume-dependent properties (molar volumes and molar refractions). The vertex- and edge-connectivity indices were used as structural parameters. In each studied case we computed connectivity indices of alkane trees and alkane line graphs and searched for the optimum exponent. Models based on indices with an optimum exponent and on the standard value of the exponent were compared. Thus, for each property we generated six QSPR models (four for alkane trees and two for the corresponding line graphs). In all studied cases QSPR models based on connectivity indices with optimum exponents have better statistical characteristics than the models based on connectivity indices with the standard value of the exponent. The comparison between models based on vertex- and edge-connectivity indices gave in two cases (molar volumes and molar refractions) better models based on edge-connectivity indices and in three cases (boiling points for octanes and nonanes and gas chromatographic retention indices) better models based on vertex-connectivity indices. Thus, it appears that the edge-connectivity index is more appropriate to be used in the structure-molecular volume properties modeling and the vertex-connectivity index in the structure-molecular surface properties modeling. The use of line graphs did not improve the predictive power of the connectivity indices. Only in one case (boiling points of nonanes) a better model was obtained with the use of line graphs.
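The vertex- and edge-connectivity indices compared in this study can be sketched as follows, with the exponent left as a parameter so that both the standard value (-0.5) and an optimized value can be evaluated; the 2-methylbutane example tree is included only to show the calling convention.

```python
# Vertex- and edge-connectivity indices with the exponent left as a parameter;
# exponent -0.5 gives the classical Randic-type index, and other values can be
# scanned when searching for an optimum.
import networkx as nx

def vertex_connectivity_index(g: nx.Graph, exponent: float = -0.5) -> float:
    return sum((g.degree(u) * g.degree(v)) ** exponent for u, v in g.edges())

def edge_connectivity_index(g: nx.Graph, exponent: float = -0.5) -> float:
    # The edge-connectivity index is the vertex index computed on the line graph.
    return vertex_connectivity_index(nx.line_graph(g), exponent)

if __name__ == "__main__":
    # 2-methylbutane as a hydrogen-suppressed alkane tree (carbons 0-4).
    g = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])
    print("vertex-connectivity index:", round(vertex_connectivity_index(g), 3))
    print("edge-connectivity index:  ", round(edge_connectivity_index(g), 3))
```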
Interpreting Unfamiliar Graphs: A Generative, Activity Theoretic Model
ERIC Educational Resources Information Center
Roth, Wolff-Michael; Lee, Yew Jin
2004-01-01
Research on graphing presents its results as if knowing and understanding were something stored in peoples' minds independent of the situation that they find themselves in. Thus, there are no models that situate interview responses to graphing tasks. How, then, we question, are the interview texts produced? How do respondents begin and end…
Generic domain models in software engineering
NASA Technical Reports Server (NTRS)
Maiden, Neil
1992-01-01
This paper outlines three research directions related to domain-specific software development: (1) reuse of generic models for domain-specific software development; (2) empirical evidence to determine these generic models, namely elicitation of mental knowledge schema possessed by expert software developers; and (3) exploitation of generic domain models to assist modelling of specific applications. It focuses on knowledge acquisition for domain-specific software development, with emphasis on tool support for the most important phases of software development.
Advanced Cyber Attack Modeling Analysis and Visualization
2010-03-01
Figure 8: TVA attack graphs.
Survey of Approaches to Generate Realistic Synthetic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Powers, Sarah S
A graph is a flexible data structure that can represent relationships between entities. As with other data analysis tasks, the use of realistic graphs is critical to obtaining valid research results. Unfortunately, using the actual ("real-world") graphs for research and new algorithm development is difficult due to the presence of sensitive information in the data or due to the scale of data. This results in practitioners developing algorithms and systems that employ synthetic graphs instead of real-world graphs. Generating realistic synthetic graphs that provide reliable statistical confidence to algorithmic analysis and system evaluation involves addressing technical hurdles in a broad set of areas. This report surveys the state of the art in approaches to generate realistic graphs that are derived from fitted graph models on real-world graphs.
Bakal, Gokhan; Talari, Preetham; Kakani, Elijah V; Kavuluru, Ramakanth
2018-06-01
Identifying new potential treatment options for medical conditions that cause human disease burden is a central task of biomedical research. Since all candidate drugs cannot be tested with animal and clinical trials, in vitro approaches are first attempted to identify promising candidates. Likewise, identifying different causal relations between biomedical entities is also critical to understand biomedical processes. Generally, natural language processing (NLP) and machine learning are used to predict specific relations between any given pair of entities using the distant supervision approach. Our objective was to build high accuracy supervised predictive models to predict previously unknown treatment and causative relations between biomedical entities based only on semantic graph pattern features extracted from biomedical knowledge graphs. We used 7000 treats and 2918 causes hand-curated relations from the UMLS Metathesaurus to train and test our models. Our graph pattern features are extracted from simple paths connecting biomedical entities in the SemMedDB graph (based on the well-known SemMedDB database made available by the U.S. National Library of Medicine). Using these graph patterns connecting biomedical entities as features of logistic regression and decision tree models, we computed mean performance measures (precision, recall, F-score) over 100 distinct 80-20% train-test splits of the datasets. For all experiments, we used a positive:negative class imbalance of 1:10 in the test set to model relatively more realistic scenarios. Our models predict treats and causes relations with high F-scores of 99% and 90% respectively. Logistic regression model coefficients also help us identify highly discriminative patterns that have an intuitive interpretation. Through collaborations with two physician co-authors, we are also able to identify some new plausible relations among the false positives that our models scored highly. Finally, our decision tree models are able to retrieve over 50% of treatment relations from a recently created external dataset. We employed semantic graph patterns connecting pairs of candidate biomedical entities in a knowledge graph as features to predict treatment/causative relations between them. We provide what we believe is the first evidence of direct prediction of biomedical relations based on graph features. Our work complements lexical pattern based approaches in that the graph patterns can be used as additional features for weakly supervised relation prediction. Copyright © 2018 Elsevier Inc. All rights reserved.
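A minimal sketch of the general approach, under the assumption of a toy predication graph (the entities, predicates, and labels below are invented and do not come from SemMedDB or the UMLS): simple paths between an entity pair are turned into predicate-sequence features and fed to a logistic regression classifier.

```python
import networkx as nx
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy predication graph; the entities and predicates are invented.
G = nx.DiGraph()
G.add_edge("aspirin", "inflammation", predicate="INHIBITS")
G.add_edge("inflammation", "arthritis", predicate="CAUSES")
G.add_edge("aspirin", "arthritis", predicate="ASSOCIATED_WITH")
G.add_edge("smoking", "arthritis", predicate="PREDISPOSES")

def path_features(graph, source, target, cutoff=3):
    """Bag of predicate-sequence patterns over simple paths from source to target."""
    feats = {}
    for path in nx.all_simple_paths(graph, source, target, cutoff=cutoff):
        pattern = "-".join(graph[u][v]["predicate"] for u, v in zip(path, path[1:]))
        feats[pattern] = feats.get(pattern, 0) + 1
    return feats

pairs = [("aspirin", "arthritis"), ("smoking", "arthritis")]
labels = [1, 0]                                  # toy labels: 1 = "treats"
vec = DictVectorizer()
X = vec.fit_transform([path_features(G, s, t) for s, t in pairs])
clf = LogisticRegression().fit(X, labels)
print(dict(zip(vec.feature_names_, clf.coef_[0])))   # pattern weights
```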
GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen
2015-09-30
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
Resource utilization model for the algorithm to architecture mapping model
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Patel, Rakesh R.
1993-01-01
The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.
Safaei, Soroush; Blanco, Pablo J; Müller, Lucas O; Hellevik, Leif R; Hunter, Peter J
2018-01-01
We propose a detailed CellML model of the human cerebral circulation that runs faster than real time on a desktop computer and is designed for use in clinical settings when the speed of response is important. A lumped parameter mathematical model, which is based on a one-dimensional formulation of the flow of an incompressible fluid in distensible vessels, is constructed using a bond graph formulation to ensure mass conservation and energy conservation. The model includes arterial vessels with geometric and anatomical data based on the ADAN circulation model. The peripheral beds are represented by lumped parameter compartments. We compare the hemodynamics predicted by the bond graph formulation of the cerebral circulation with that given by a classical one-dimensional Navier-Stokes model working on top of the whole-body ADAN model. Outputs from the bond graph model, including the pressure and flow signatures and blood volumes, are compared with physiological data.
Shrank, William H; Cadarette, Suzanne M; Cox, Emily; Fischer, Michael A; Mehta, Jyotsna; Brookhart, Alan M; Avorn, Jerry; Choudhry, Niteesh K
2009-03-01
Insurers and policymakers strive to stimulate more cost-effective prescribing and, increasingly, are educating beneficiaries about generics. To evaluate the relationship between patient beliefs and communication about generic drugs and actual drug use. We performed a national mailed survey of a random sample of 2500 commercially-insured adults. Patient responses were linked to pharmacy claims data to assess actual generic medication use. We used factor analysis to develop 5 multi-item scales from patient survey responses that measured: (1) general preferences for generics, (2) generic safety/effectiveness, (3) generic cost/value, (4) comfort with generic substitution, and (5) communication with providers about generics. The relationship between each scale and the proportion of prescriptions filled for generics was assessed using linear regression, controlling for demographic, health, and insurance characteristics. Separate models were created for each scale and then all 5 scales were included simultaneously in a fully-adjusted model. The usable response rate was 48%. When evaluated independently, a 1 SD increase in each of the 5 scales was associated with a 3.1% to 6.3% increase in generic drug use (P < 0.05 for each). In the fully adjusted model, only 2 scales were significantly associated with generic drug use: comfort with generic substitution (P = 0.021) and communication with providers about generic drugs (P = 0.012). Generic drug use is most closely associated with the 2 actionable items we evaluated: communication with providers about generics and comfort with generic substitution. Educational campaigns that focus on these 2 domains may be most effective at influencing generic drug use.
Bond graph modelling of multibody dynamics and its symbolic scheme
NASA Astrophysics Data System (ADS)
Kawase, Takehiko; Yoshimura, Hiroaki
A bond graph method of modeling multibody dynamics is demonstrated. Specifically, a symbolic generation scheme which fully utilizes the bond graph information is presented. It is also demonstrated that structural understanding and representation in bond graph theory is quite powerful for the modeling of such large scale systems, and that the nonenergic multiport of junction structure, which is a multiport expression of the system structure, plays an important role, as first suggested by Paynter. The principal part of the proposed symbolic scheme, that is, the elimination of excess variables, is done through tearing and interconnection in the sense of Kron using newly defined causal and causal coefficient arrays.
Groupies in multitype random graphs.
Shang, Yilun
2016-01-01
A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.
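A quick numerical illustration of the definition (not the paper's proof) on an Erdős-Rényi random graph; the graph size and edge probability are arbitrary.

```python
import networkx as nx

def groupie_fraction(G):
    """Fraction of vertices whose degree is at least the mean degree of their neighbors."""
    count = 0
    for v in G:
        nbrs = list(G[v])
        if nbrs and G.degree(v) >= sum(G.degree(u) for u in nbrs) / len(nbrs):
            count += 1
    return count / G.number_of_nodes()

G = nx.erdos_renyi_graph(n=5000, p=0.01, seed=42)
print(round(groupie_fraction(G), 3))     # expected to be close to 1/2
```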
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
Centrifuge Rotor Models: A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach
NASA Technical Reports Server (NTRS)
Granda, Jose J.; Ramakrishnan, Jayant; Nguyen, Louis H.
2006-01-01
A viewgraph presentation on centrifuge rotor models with a comparison using Euler-Lagrange and bond graph methods is shown. The topics include: 1) Objectives; 2) Modeling Approach Comparisons; 3) Model Structures; and 4) Application.
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
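The linear-algebra view mentioned above can be illustrated with a single-machine sketch: one BFS level expansion is a sparse matrix-vector product followed by a mask. This toy example uses SciPy rather than an actual GraphBLAS or GPU backend, and the graph is made up for illustration.

```python
import numpy as np
import scipy.sparse as sp

# A toy undirected graph stored as a CSR adjacency matrix.
edges = [(0, 1), (1, 2), (2, 3), (0, 4)]
n = 5
rows, cols = zip(*edges)
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
A = A + A.T

# BFS from vertex 0: each level expansion is one sparse matrix-vector product plus a mask.
levels = np.full(n, -1)
levels[0] = 0
frontier = np.zeros(n)
frontier[0] = 1.0
depth = 0
while frontier.any():
    depth += 1
    reached = np.asarray(A @ frontier).ravel() > 0
    new = reached & (levels < 0)        # newly discovered vertices only
    levels[new] = depth
    frontier = new.astype(float)
print(levels)                           # BFS level of each vertex, measured from the source
```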
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagberg, Aric; Swart, Pieter; S Chult, Daniel
NetworkX is a Python language package for exploration and analysis of networks and network algorithms. The core package provides data structures for representing many types of networks, or graphs, including simple graphs, directed graphs, and graphs with parallel edges and self loops. The nodes in NetworkX graphs can be any (hashable) Python object and edges can contain arbitrary data; this flexibility makes NetworkX ideal for representing networks found in many different scientific fields. In addition to the basic data structures many graph algorithms are implemented for calculating network properties and structure measures: shortest paths, betweenness centrality, clustering, degree distribution, and many more. NetworkX can read and write various graph formats for easy exchange with existing data, and provides generators for many classic graphs and popular graph models, such as the Erdős-Rényi, Small World, and Barabási-Albert models. The ease-of-use and flexibility of the Python programming language together with connection to the SciPy tools make NetworkX a powerful tool for scientific computations. We discuss some of our recent work studying synchronization of coupled oscillators to demonstrate how NetworkX enables research in the field of computational networks.
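The calls below are illustrative NetworkX usage corresponding to the features listed in the description; the graph sizes and parameters are arbitrary.

```python
import networkx as nx

# Classic generators mentioned in the description.
er = nx.erdos_renyi_graph(n=100, p=0.05, seed=1)
ws = nx.watts_strogatz_graph(n=100, k=4, p=0.1, seed=1)
ba = nx.barabasi_albert_graph(n=100, m=2, seed=1)

# Nodes can be arbitrary hashable objects and edges can carry arbitrary data.
G = nx.Graph()
G.add_edge("geneA", "geneB", weight=0.8, source="experiment")

# A few of the network properties and structure measures listed above.
if nx.has_path(er, 0, 10):
    print(nx.shortest_path(er, 0, 10))
print(sorted(nx.betweenness_centrality(er).items(), key=lambda kv: -kv[1])[:3])
print(round(nx.average_clustering(ws), 3))
print(nx.degree_histogram(ba)[:5])
```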
Towards syntactic characterizations of approximation schemes via predicate and graph decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, H.B. III; Stearns, R.E.; Jacob, R.
1998-12-01
The authors present a simple extensible theoretical framework for devising polynomial time approximation schemes for problems represented using natural syntactic (algebraic) specifications endowed with natural graph theoretic restrictions on input instances. Direct application of the technique yields polynomial time approximation schemes for all the problems studied in [LT80, NC88, KM96, Ba83, DTS93, HM+94a, HM+94] as well as the first known approximation schemes for a number of additional combinatorial problems. One notable aspect of the work is that it provides insights into the structure of the syntactic specifications and the corresponding algorithms considered in [KM96, HM+94]. The understanding allows them to extend the class of syntactic specifications for which generic approximation schemes can be developed. The results can be shown to be tight in many cases, i.e. natural extensions of the specifications can be shown to yield non-approximable problems. The results provide a non-trivial characterization of a class of problems having a PTAS and extend the earlier work on this topic by [KM96, HM+94].
Corona graphs as a model of small-world networks
NASA Astrophysics Data System (ADS)
Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi
2015-11-01
We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as Kirchhoff index. Furthermore, we study the spectra for the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all the quantities of the recursive corona graphs, which are similar to those observed in real-life networks.
Graph modeling systems and methods
Neergaard, Mike
2015-10-13
An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
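A small sketch in the spirit of the description, not the patented method itself: model a network as a graph and flag vertices whose removal disconnects it. The node names below are invented.

```python
import networkx as nx

# Invented example network; node names are hypothetical.
grid = nx.Graph([
    ("substation_A", "substation_B"),
    ("substation_B", "substation_C"),
    ("substation_B", "relay_1"),        # relay_1 plays the role of a non-nodal vulnerability
    ("relay_1", "substation_D"),
])

critical = set(nx.articulation_points(grid))
for node in grid:
    status = "critical point of failure" if node in critical else "redundant"
    print(f"{node}: {status}")
```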
Measuring Graph Comprehension, Critique, and Construction in Science
ERIC Educational Resources Information Center
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-01-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed…
Distributed Sensing and Processing: A Graphical Model Approach
2005-11-30
Findings show that Ramanujan graph topologies maximize the convergence rate of distributed detection consensus algorithms, improving by over three orders of magnitude compared with small-world type network designs, and that Ramanujan graphs, for which there are explicit algebraic constructions, have large eigenratios, converging much faster than structured graphs. Subject terms: Ramanujan graphs, sensor network topology.
A componential model of human interaction with graphs: 1. Linear regression modeling
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert
1994-01-01
Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.
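A minimal sketch of the model's core prediction, namely that response time grows linearly with the number of processing steps. The data below are synthetic placeholders generated for illustration only; they are not the experiments' measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
steps = np.array([2, 3, 3, 4, 5, 6, 6, 7], dtype=float)   # processing steps per trial (toy)
rt = 450 + 320 * steps + rng.normal(0, 150, steps.size)   # synthetic response times (ms)

# One-parameter version of the model: one weight per processing step plus an intercept.
X = np.column_stack([np.ones_like(steps), steps])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((rt - pred) ** 2) / np.sum((rt - rt.mean()) ** 2)
print(f"intercept = {coef[0]:.0f} ms, slope = {coef[1]:.0f} ms/step, R^2 = {r2:.2f}")
```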
Learning a Health Knowledge Graph from Electronic Medical Records.
Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David
2017-07-20
Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
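The noisy-OR gate at the heart of the best-performing model can be written in a few lines. The diseases, symptom, and probabilities below are invented for illustration and are not parameters learned in the study.

```python
def noisy_or(leak, activation_probs):
    """P(symptom present | set of present parent diseases) under a noisy-OR gate."""
    p_absent = 1.0 - leak
    for p in activation_probs:
        p_absent *= (1.0 - p)
    return 1.0 - p_absent

# P(fever | flu and pneumonia present), with a small leak term for unmodeled causes.
print(round(noisy_or(leak=0.01, activation_probs=[0.6, 0.8]), 3))   # 0.921
```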
Collaborative mining of graph patterns from multiple sources
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Colonna-Romano, John
2016-05-01
Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in the intelligence community, because the text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use-cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and availability of accurate data segmentation. In this paper we present a model to learn the graph patterns from multiple relational data sources, when each source might have only a fragment (or subgraph) of the knowledge that needs to be discovered, and segmentation of data into training or testing instances is not available. Our model is based on distributed collaborative graph learning, and is effective in situations when the data is kept locally and cannot be moved to a centralized location. Our experiments show that the proposed collaborative learning achieves learning quality better than aggregated centralized graph learning, and has learning time comparable to traditional distributed learning in which knowledge of data segmentation is needed.
A distributed query execution engine of big attributed graphs.
Batarfi, Omar; Elshawi, Radwa; Fayoumi, Ayman; Barnawi, Ahmed; Sakr, Sherif
2016-01-01
A graph is a popular data model that has become pervasively used for modeling structural relationships between objects. In practice, in many real-world graphs, the graph vertices and edges need to be associated with descriptive attributes. Such graphs are referred to as attributed graphs. G-SPARQL has been proposed as an expressive language, with a centralized execution engine, for querying attributed graphs. G-SPARQL supports various types of graph querying operations including reachability, pattern matching and shortest path where any G-SPARQL query may include value-based predicates on the descriptive information (attributes) of the graph edges/vertices in addition to the structural predicates. In general, a main limitation of centralized systems is that their vertical scalability is always restricted by the physical limits of computer systems. This article describes the design and implementation, as well as the performance evaluation, of DG-SPARQL, a distributed, hybrid and adaptive parallel execution engine of G-SPARQL queries. In this engine, the topology of the graph is distributed over the main memory of the underlying nodes while the graph data are maintained in a relational store which is replicated on the disk of each of the underlying nodes. DG-SPARQL evaluates parts of the query plan via SQL queries which are pushed to the underlying relational stores while other parts of the query plan, as necessary, are evaluated via indexless memory-based graph traversal algorithms. Our experimental evaluation shows the efficiency and the scalability of DG-SPARQL on querying massive attributed graph datasets in addition to its ability to outperform the performance of Apache Giraph, a popular distributed graph processing system, by orders of magnitude.
Retina verification system based on biometric graph matching.
Lajevardi, Seyed Mehdi; Arakala, Arathi; Davis, Stephen A; Horadam, Kathy J
2013-09-01
This paper presents an automatic retina verification framework based on the biometric graph matching (BGM) algorithm. The retinal vasculature is extracted using a family of matched filters in the frequency domain and morphological operators. Then, retinal templates are defined as formal spatial graphs derived from the retinal vasculature. The BGM algorithm, a noisy graph matching algorithm, robust to translation, non-linear distortion, and small rotations, is used to compare retinal templates. The BGM algorithm uses graph topology to define three distance measures between a pair of graphs, two of which are new. A support vector machine (SVM) classifier is used to distinguish between genuine and imposter comparisons. Using single as well as multiple graph measures, the classifier achieves complete separation on a training set of images from the VARIA database (60% of the data), equaling the state-of-the-art for retina verification. Because the available data set is small, kernel density estimation (KDE) of the genuine and imposter score distributions of the training set is used to measure performance of the BGM algorithm. In the one dimensional case, the KDE model is validated with the testing set. An equal error rate (EER) of zero on testing shows that the KDE model is a good fit for the empirical distribution. For the multiple graph measures, a novel combination of the SVM boundary and the KDE model is used to obtain a fair comparison with the KDE model for the single measure. A clear benefit in using multiple graph measures over a single measure to distinguish genuine and imposter comparisons is demonstrated by a drop in theoretical error of between 60% and more than two orders of magnitude.
A model of language inflection graphs
NASA Astrophysics Data System (ADS)
Fukś, Henryk; Farzad, Babak; Cao, Yi
2014-01-01
Inflection graphs are highly complex networks representing relationships between inflectional forms of words in human languages. For so-called synthetic languages, such as Latin or Polish, they have particularly interesting structure due to the abundance of inflectional forms. We construct the simplest form of inflection graphs, namely a bipartite graph in which one group of vertices corresponds to dictionary headwords and the other group to inflected forms encountered in a given text. We then study the projection of this graph onto the set of headwords. The projection decomposes into a large number of connected components, to be called word groups. The distribution of word group sizes exhibits some remarkable properties, resembling cluster distribution in a lattice percolation near the critical point. We propose a simple model which produces graphs of this type, reproducing the desired component distribution and other topological features.
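A tiny illustration of the construction described above, with invented Latin-like headwords and forms: a bipartite headword-form graph is projected onto the headwords, and word-group sizes are read off the connected components.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Invented Latin-like headwords and inflected forms.
B = nx.Graph()
headwords = ["amo", "amor", "rosa"]
B.add_nodes_from(headwords, bipartite=0)
B.add_edges_from([("amo", "amat"), ("amo", "amamus"),
                  ("amor", "amat"),                    # a shared form links two headwords
                  ("rosa", "rosam"), ("rosa", "rosae")])

P = bipartite.projected_graph(B, headwords)            # headword-headword projection
group_sizes = sorted((len(c) for c in nx.connected_components(P)), reverse=True)
print(group_sizes)                                     # word-group sizes, e.g. [2, 1]
```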
Model-based morphological segmentation and labeling of coronary angiograms.
Haris, K; Efstratiadis, S N; Maglaveras, N; Pappas, C; Gourassas, J; Louridas, G
1999-10-01
A method for extraction and labeling of the coronary arterial tree (CAT) using minimal user supervision in single-view angiograms is proposed. The CAT structural description (skeleton and borders) is produced, along with quantitative information for the artery dimensions and assignment of coded labels, based on a given coronary artery model represented by a graph. The stages of the method are: 1) CAT tracking and detection; 2) artery skeleton and border estimation; 3) feature graph creation; and 4) artery labeling by graph matching. The approximate CAT centerline and borders are extracted by recursive tracking based on circular template analysis. The accurate skeleton and borders of each CAT segment are computed, based on morphological homotopy modification and watershed transform. The approximate centerline and borders are used for constructing the artery segment enclosing area (ASEA), where the defined skeleton and border curves are considered as markers. Using the marked ASEA, an artery gradient image is constructed where all the ASEA pixels (except the skeleton ones) are assigned the gradient magnitude of the original image. The artery gradient image markers are imposed as its unique regional minima by the homotopy modification method, the watershed transform is used for extracting the artery segment borders, and the feature graph is updated. Finally, given the created feature graph and the known model graph, a graph matching algorithm assigns the appropriate labels to the extracted CAT using weighted maximal cliques on the association graph corresponding to the two given graphs. Experimental results using clinical digitized coronary angiograms are presented.
Jongeneelen, Frans J; Berge, Wil F Ten
2011-10-01
Physiologically based toxicokinetic (PBTK) models are computational tools that simulate the absorption, distribution, metabolism, and excretion of chemicals. The purpose of this study was to develop a physiologically based pharmacokinetic (PBPK) model with a high level of transparency. The model should be able to predict blood and urine concentrations of environmental chemicals and metabolites, given a certain environmental or occupational exposure scenario. The model refers to a reference human of 70 kg. The partition coefficients of the parent compound and its metabolites (blood:air and tissue:blood partition coefficients of 11 organs) are estimated by means of quantitative structure-property relationship, in which five easily available physicochemical properties of the compound are the independent parameters. The model gives a prediction of the fate of the compound, based on easily available chemical properties; therefore, it can be applied as a generic model to multiple compounds. Three routes of uptake are considered (inhalation, dermal, and/or oral) as well as two built-in exercise levels (at rest and at light work). Dermal uptake is estimated by the use of a dermal diffusion-based module that considers dermal deposition rate and duration of deposition. Moreover, evaporation during skin contact is fully accounted for and related to the volatility of the substance. Saturable metabolism according to Michaelis-Menten kinetics can be modelled in any of 11 organs/tissues or in the liver only. Renal tubular resorption is based on a built-in algorithm, dependent on the (log) octanol:water partition coefficient. Enterohepatic circulation is optional at a user-defined rate. The generic PBTK model is available as a spreadsheet application in MS Excel. The differential equations of the model are programmed in Visual Basic. Output is presented as a numerical listing over time in tabular form and in graphs. The MS Excel application of the PBTK model is available as freeware. The accuracy of the model prediction is illustrated by simulating experimental observations. Published experimental inhalation and dermal exposure studies on a series of different chemicals (pyrene, N-methyl-pyrrolidone, methyl-tert-butylether, heptane, 2-butoxyethanol, and ethanol) were selected to compare the observed data with the model-simulated data. The examples show that the model-predicted concentrations in blood and/or urine after inhalation and/or transdermal uptake have an accuracy of within an order of magnitude. It is advocated that this PBTK model, called IndusChemFate, is suitable for 'first tier assessments' and for early explorations of the fate of chemicals and/or metabolites in the human body. The availability of a simple model with a minimum burden of input information on the parent compound and its metabolites might encourage applying PBTK modelling more often in the field of biomonitoring and exposure science.
Transactions in domain-specific information systems
NASA Astrophysics Data System (ADS)
Zacek, Jaroslav
2017-07-01
A substantial number of current information system (IS) implementations are based on a transaction approach. In addition, most of the implementations are domain-specific (e.g. accounting IS, resource planning IS). Therefore, we need a generic transaction model to build and verify domain-specific IS. The paper proposes a new transaction model for domain-specific ontologies. This model is based on a value-oriented business process modelling technique. The transaction model is formalized by the Petri Net theory. The first part of the paper presents common business processes and analyses related to business process modeling. The second part defines the transactional model delimited by the REA enterprise ontology paradigm and introduces the states of the generic transaction model. The generic model proposal is defined and visualized by the Petri Net modelling tool. The third part shows an application of the generic transaction model. The last part of the paper concludes with the results and discusses the practical usability of the generic transaction model.
Temporal Representation in Semantic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levandoski, J J; Abdulla, G M
2007-08-07
A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning that the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
Efficient and Scalable Graph Similarity Joins in MapReduce
Chen, Yifan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135
Efficient and scalable graph similarity joins in MapReduce.
Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.
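A single-machine sketch of the filtering-verification idea (not the MGSJoin MapReduce implementation, and using a simple size-based bound rather than signature counting): a cheap lower bound prunes candidate pairs before the exact, expensive graph edit distance check. The candidate graphs below are toy examples.

```python
import itertools
import networkx as nx

graphs = {"g1": nx.path_graph(4), "g2": nx.cycle_graph(4), "g3": nx.star_graph(3)}
tau = 2                                        # edit distance threshold

def size_lower_bound(a, b):
    """|difference in node counts| + |difference in edge counts| lower-bounds the GED."""
    return (abs(a.number_of_nodes() - b.number_of_nodes())
            + abs(a.number_of_edges() - b.number_of_edges()))

for (na, a), (nb, b) in itertools.combinations(graphs.items(), 2):
    if size_lower_bound(a, b) > tau:           # filtering phase: cheap pruning
        continue
    ged = nx.graph_edit_distance(a, b)         # verification phase: exact, expensive
    if ged <= tau:
        print(f"({na}, {nb}) joined with GED = {ged:g}")
```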
Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.
Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard
2014-01-01
To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate since graphs representing binding sites on a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed coarse graph model is used, building upon so-called pseudocenters. Consequently, a loss of structural data is caused since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information. These steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model solely based on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach, but leads to graphs containing considerably more information assigned to the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which lead to a gain of information and allow for much faster but still very accurate comparisons between different structures.
Shrank, William H.; Cadarette, Suzanne M.; Cox, Emily; Fischer, Michael A.; Mehta, Jyotsna; Brookhart, Alan M.; Avorn, Jerry; Choudhry, Niteesh K.
2009-01-01
Background Insurers and policymakers strive to stimulate more cost-effective prescribing and, increasingly, are educating beneficiaries about generics. Objectives To evaluate the relationship between patient beliefs and communication about generic drugs and actual drug use. Research Design and Subjects We performed a national mailed survey of a random sample of 2500 commercially-insured adults. Patient responses were linked to pharmacy claims data to assess actual generic medication use. Measures We used factor analysis to develop 5 multi-item scales from patient survey responses that measured: (1) general preferences for generics, (2) generic safety/effectiveness, (3) generic cost/value, (4) comfort with generic substitution, and (5) communication with providers about generics. The relationship between each scale and the proportion of prescriptions filled for generics was assessed using linear regression, controlling for demographic, health, and insurance characteristics. Separate models were created for each scale and then all 5 scales were included simultaneously in a fully-adjusted model. Results The usable response rate was 48%. When evaluated independently, a 1 SD increase in each of the 5 scales was associated with a 3.1% to 6.3% increase in generic drug use (P < 0.05 for each). In the fully adjusted model, only 2 scales were significantly associated with generic drug use: comfort with generic substitution (P = 0.021) and communication with providers about generic drugs (P = 0.012). Conclusions Generic drug use is most closely associated with the 2 actionable items we evaluated: communication with providers about generics and comfort with generic substitution. Educational campaigns that focus on these 2 domains may be most effective at influencing generic drug use. PMID:19194329
The relation between prior knowledge and students' collaborative discovery learning processes
NASA Astrophysics Data System (ADS)
Gijlers, Hannie; de Jong, Ton
2005-03-01
In this study we investigate how prior knowledge influences knowledge development during collaborative discovery learning. Fifteen dyads of students (pre-university education, 15-16 years old) worked on a discovery learning task in the physics field of kinematics. The (face-to-face) communication between students was recorded and the interaction with the environment was logged. Based on students' individual judgments of the truth-value and testability of a series of domain-specific propositions, a detailed description of the knowledge configuration for each dyad was created before they entered the learning environment. Qualitative analyses of two dialogues illustrated that prior knowledge influences the discovery learning processes, and knowledge development in a pair of students. Assessments of student and dyad definitional (domain-specific) knowledge, generic (mathematical and graph) knowledge, and generic (discovery) skills were related to the students' dialogue in different discovery learning processes. Results show that a high level of definitional prior knowledge is positively related to the proportion of communication regarding the interpretation of results. Heterogeneity with respect to generic prior knowledge was positively related to the number of utterances made in the discovery process categories hypotheses generation and experimentation. Results of the qualitative analyses indicated that collaboration between extremely heterogeneous dyads is difficult when the high achiever is not willing to scaffold information and work in the low achiever's zone of proximal development.
Pogliani, Lionello
2010-01-30
Twelve properties of a highly heterogeneous class of organic solvents have been modeled with a graph-theoretical molecular connectivity modified (MC) method, which allows the core electrons and the hydrogen atoms to be encoded. The graph-theoretical method uses the concepts of simple, general, and complete graphs, where these last types of graphs are used to encode the core electrons. The hydrogen atoms have been encoded by the aid of a graph-theoretical perturbation parameter, which contributes to the definition of the valence delta, delta(v), a key parameter in molecular connectivity studies. The model of the twelve properties done with a stepwise search algorithm is always satisfactory, and it allows checking the influence of the hydrogen content of the solvent molecules on the choice of the type of descriptor. A similar argument holds for the influence of the halogen atoms on the type of core electron representation. In some cases the molar mass and, to a lesser extent, special "ad hoc" parameters have been used to improve the model. A very good model of the surface tension could be obtained with the aid of five experimental parameters. A mixed model method based on experimental parameters plus molecular connectivity indices, instead, consistently improved the model quality of five properties. Of note is the importance of the boiling point temperatures as descriptors in these last two model methodologies. Copyright 2009 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Nguyen, Louis H.; Ramakrishnan, Jayant; Granda, Jose J.
2006-01-01
The assembly and operation of the International Space Station (ISS) require extensive testing and engineering analysis to verify that the Space Station system of systems would work together without any adverse interactions. Since the dynamic behavior of an entire Space Station cannot be tested on earth, math models of the Space Station structures and mechanical systems have to be built and integrated in computer simulations and analysis tools to analyze and predict what will happen in space. The ISS Centrifuge Rotor (CR) is one of many mechanical systems that need to be modeled and analyzed to verify the ISS integrated system performance on-orbit. This study investigates using Bond Graph modeling techniques as quick and simplified ways to generate models of the ISS Centrifuge Rotor. This paper outlines the steps used to generate simple and more complex models of the CR using Bond Graph Computer Aided Modeling Program with Graphical Input (CAMP-G). Comparisons of the Bond Graph CR models with those derived from Euler-Lagrange equations in MATLAB and those developed using multibody dynamic simulation at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are presented to demonstrate the usefulness of the Bond Graph modeling approach for aeronautics and space applications.
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
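A toy version of the three ingredients listed above, for a two-variable model (binary recharge R and observed flow F, both invented for illustration): factors are plain dictionaries, and the posterior over R is obtained by multiplying factors and normalizing, i.e. brute-force inference rather than the message-passing algorithms needed for large graphs.

```python
# Factors as plain dictionaries over binary variables; brute-force inference.
p_R = {0: 0.7, 1: 0.3}                      # prior factor on recharge R
p_F_given_R = {(0, 0): 0.8, (0, 1): 0.2,    # likelihood factor P(F | R)
               (1, 0): 0.3, (1, 1): 0.7}
observed_F = 1                              # assimilate an observation of flow F

joint = {r: p_R[r] * p_F_given_R[(r, observed_F)] for r in (0, 1)}
Z = sum(joint.values())
posterior = {r: round(joint[r] / Z, 3) for r in joint}
print(posterior)                            # P(R | F=1) = {0: 0.4, 1: 0.6}
```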
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeure, I.M.
The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM-Parallel, Distributed computation Graph Model-a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG-Distributed Computing Precedence Graph-model, and the PAM-Process Architecture Model-model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA-VISual Assistant, a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations which are procedures that can be executed by VISA.
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
Spectral partitioning in equitable graphs
NASA Astrophysics Data System (ADS)
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
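A standard spectral-bisection sketch on a planted two-community stochastic block model: the sign of the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue) recovers the partition. This illustrates the generic technique only, not the paper's equitable-graph analysis; the block sizes and probabilities are arbitrary.

```python
import numpy as np
import networkx as nx

sizes = [50, 50]
probs = [[0.20, 0.02], [0.02, 0.20]]
G = nx.stochastic_block_model(sizes, probs, seed=7)     # planted two-community graph

L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                 # eigenvector of the second-smallest eigenvalue
labels = (fiedler > 0).astype(int)

truth = np.array([0] * 50 + [1] * 50)
accuracy = max((labels == truth).mean(), ((1 - labels) == truth).mean())
print(f"recovered {accuracy:.0%} of the planted partition")
```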
Two classes of bipartite networks: nested biological and social systems.
Burgos, Enrique; Ceva, Horacio; Hernández, Laura; Perazzo, R P J; Devoto, Mariano; Medan, Diego
2008-10-01
Bipartite graphs have received some attention in the study of social networks and of biological mutualistic systems. A generalization of a previous model is presented, that evolves the topology of the graph in order to optimally account for a given contact preference rule between the two guilds of the network. As a result, social and biological graphs are classified as belonging to two clearly different classes. Projected graphs, linking the agents of only one guild, are obtained from the original bipartite graph. The corresponding evolution of its statistical properties is also studied. An example of a biological mutualistic network is analyzed in detail, and it is found that the model provides a very good fitting of all the main statistical features. The model also provides a proper qualitative description of the same features observed in social webs, suggesting the possible reasons underlying the difference in the organization of these two kinds of bipartite networks.
A Wave Chaotic Study of Quantum Graphs with Microwave Networks
NASA Astrophysics Data System (ADS)
Fu, Ziyuan
Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave chaotic properties. I study quantum graphs with a wave-chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped due to multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I carry out some investigations of the Random Coupling Model that are useful for further research.
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldbert, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
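A small sketch of using graph reachability for precedence checking, in the spirit of the arrival-scheduling formulation mentioned above; the aircraft identifiers and constraints are invented.

```python
import networkx as nx

# Directed edge u -> v encodes "u must reach the merge point before v".
precedence = nx.DiGraph([("AC1", "AC3"), ("AC2", "AC3"), ("AC3", "AC4")])

print(sorted(nx.descendants(precedence, "AC1")))      # aircraft constrained to follow AC1
print(nx.is_directed_acyclic_graph(precedence))       # a feasible ordering exists iff acyclic
print(list(nx.topological_sort(precedence)))          # one feasible arrival order
```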
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Granda, Jose J.
2003-01-01
Conceptually, modeling of flexible, multi-body systems involves a formulation as a set of time-dependent partial differential equations. However, for practical, engineering purposes, this modeling is usually done using the method of Finite Elements, which approximates the set of partial differential equations, thus generalizing the approach to all continuous media. This research investigates the links between the Bond Graph method and the classical methods used to develop system models and advocates the Bond Graph Methodology and current bond graph tools as alternate approaches that will lead to a quick and precise understanding of a flexible multi-body system under automatic control. For long-endurance, complex spacecraft, the model of the physical system may change frequently because of articulation and mission evolution. A method of automatic generation and regeneration of system models that does not lead to implicit equations, as does the Lagrange equation approach, is therefore desirable. The bond graph method has been shown to be amenable to automatic generation of equations with appropriate consideration of causality. Indeed, human-interactive software now exists that automatically generates both symbolic and numeric system models and evaluates causality as the user develops the model, e.g. the CAMP-G software package. In this paper the CAMP-G package is used to generate a bond graph model of the International Space Station (ISS) at an early stage in its assembly, Zvezda. The ISS is an ideal example because it is a collection of bodies that are articulated, many of which are highly flexible. In addition, many reaction jets are used to control translation and attitude, and many electric motors are used to articulate appendages, which consist of photovoltaic arrays and composite assemblies. The Zvezda bond graph model is compared to an existing model, which was generated by the NASA Johnson Space Center during the Verification and Analysis Cycle of Zvezda.
Graphing trillions of triangles.
Burkhardt, Paul
2017-07-01
The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed.
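As a rough illustration of the triangle-counting task discussed above (not the author's algorithm), the Python sketch below counts triangles two ways with networkx/NumPy: the standard per-node routine and the algebraic identity trace(A^3)/6.

    import networkx as nx
    import numpy as np

    G = nx.erdos_renyi_graph(1000, 0.01, seed=1)

    # Per-node triangle counts; each triangle is counted once per corner.
    per_node = nx.triangles(G)
    print(sum(per_node.values()) // 3)

    # Algebraic count: trace(A^3) counts each triangle 6 times.
    A = nx.to_numpy_array(G)
    print(int(np.trace(np.linalg.matrix_power(A, 3)) // 6))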
GraphCrunch 2: Software tool for network modeling, alignment and clustering.
Kuchaiev, Oleksii; Stevanović, Aleksandar; Hayes, Wayne; Pržulj, Nataša
2011-01-19
Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarity far larger than those found by any other existing tool. Finally, GraphCrunch 2 implements an algorithm for clustering nodes within a network based solely on their topological similarities. Using GraphCrunch 2, we demonstrate that eukaryotic and viral PPI networks may belong to different graph model families and show that topology-based clustering can reveal important functional similarities between proteins within yeast and human PPI networks. GraphCrunch 2 is a software tool that implements the latest research on biological network analysis. It parallelizes computationally intensive tasks to fully utilize the potential of modern multi-core CPUs. It is open-source and freely available for research use. It runs under the Windows and Linux platforms.
Random graph models of social networks.
Newman, M E J; Watts, D J; Strogatz, S H
2002-02-19
We describe some new exactly solvable models of the structure of social networks, based on random graphs with arbitrary degree distributions. We give models both for simple unipartite networks, such as acquaintance networks, and bipartite networks, such as affiliation networks. We compare the predictions of our models to data for a number of real-world social networks and find that in some cases, the models are in remarkable agreement with the data, whereas in others the agreement is poorer, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
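A small Python sketch (networkx) of the generic ingredient behind such models: a random graph realizing an arbitrary, user-specified degree sequence via the configuration model; the power-law exponent and cut-off below are arbitrary choices, not taken from the paper.

    import networkx as nx
    import random

    random.seed(0)
    # Draw a degree sequence from a (truncated) power law, then realize a
    # random graph with exactly that degree sequence.
    degrees = [min(int(random.paretovariate(2.5)), 50) for _ in range(2000)]
    if sum(degrees) % 2:          # the degree sum must be even
        degrees[0] += 1

    M = nx.configuration_model(degrees, seed=0)
    G = nx.Graph(M)                            # collapse parallel edges
    G.remove_edges_from(nx.selfloop_edges(G))  # drop self-loops
    print(nx.number_of_nodes(G), nx.number_of_edges(G))
    print(nx.average_clustering(G))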
One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.
Biswas, Sujoy Kumar; Milanfar, Peyman
2016-03-01
One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components that preserve global structure of feature space, we actually seek a linear approximation to the Laplacian eigenmap that permits us a locality preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of Fourier transform combined with integral image to achieve superior runtime efficiency that allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. Besides, low computation cost of the proposed (codebook-free) object detector facilitates rather straightforward query detection in large data sets including movie videos.
Bond Graph Model of Cerebral Circulation: Toward Clinically Feasible Systemic Blood Flow Simulations
Safaei, Soroush; Blanco, Pablo J.; Müller, Lucas O.; Hellevik, Leif R.; Hunter, Peter J.
2018-01-01
We propose a detailed CellML model of the human cerebral circulation that runs faster than real time on a desktop computer and is designed for use in clinical settings when the speed of response is important. A lumped parameter mathematical model, which is based on a one-dimensional formulation of the flow of an incompressible fluid in distensible vessels, is constructed using a bond graph formulation to ensure mass conservation and energy conservation. The model includes arterial vessels with geometric and anatomical data based on the ADAN circulation model. The peripheral beds are represented by lumped parameter compartments. We compare the hemodynamics predicted by the bond graph formulation of the cerebral circulation with that given by a classical one-dimensional Navier-Stokes model working on top of the whole-body ADAN model. Outputs from the bond graph model, including the pressure and flow signatures and blood volumes, are compared with physiological data. PMID:29551979
Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids.
José, Marco V; Morgado, Eberto R; Guimarães, Romeu Cardoso; Zamudio, Gabriel S; de Farías, Sávio Torres; Bobadilla, Juan R; Sosa, Daniela
2014-08-11
Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state.
Flexible data integration and curation using a graph-based approach.
Croset, Samuel; Rupp, Joachim; Romacker, Martin
2016-03-15
The increasing diversity of data available to the biomedical scientist holds promise for better understanding of diseases and discovery of new treatments for patients. In order to provide a complete picture of a biomedical question, data from many different origins needs to be combined into a unified representation. During this data integration process, inevitable errors and ambiguities present in the initial sources compromise the quality of the resulting data warehouse, and greatly diminish the scientific value of the content. Expensive and time-consuming manual curation is then required to improve the quality of the information. However, it becomes increasingly difficult to dedicate and optimize the resources for data integration projects as available repositories are growing both in size and in number everyday. We present a new generic methodology to identify problematic records, causing what we describe as 'data hairball' structures. The approach is graph-based and relies on two metrics traditionally used in social sciences: the graph density and the betweenness centrality. We evaluate and discuss these measures and show their relevance for flexible, optimized and automated data curation and linkage. The methodology focuses on information coherence and correctness to improve the scientific meaningfulness of data integration endeavors, such as knowledge bases and large data warehouses. samuel.croset@roche.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
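The following minimal Python sketch (networkx) illustrates the two metrics the paper relies on, computed on a toy integration graph; node names and the flagging threshold are hypothetical.

    import networkx as nx

    # Toy data-integration graph: records from different sources linked by cross-references.
    G = nx.Graph()
    G.add_edges_from([("drugA", "targetX"), ("drugB", "targetX"),
                      ("drugC", "targetX"), ("targetX", "diseaseY"),
                      ("drugD", "targetZ"), ("targetZ", "diseaseY")])

    # Graph density of the warehouse (or of a neighbourhood of interest).
    print(nx.density(G))

    # Betweenness centrality: records that funnel many shortest paths are
    # candidates for 'data hairball' hubs worth manual curation.
    bc = nx.betweenness_centrality(G)
    suspicious = [n for n, c in bc.items() if c > 0.5]   # hypothetical threshold
    print(sorted(bc, key=bc.get, reverse=True)[:3], suspicious)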
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend for discovering the intrinsic anatomical difference across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct those inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included into the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness, when compared with the state-of-the-art pair-wise correspondence detection methods as well as a similar method using a static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.
Graph processing platforms at scale: practices and experiences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C
2015-01-01
Graph analysis unveils hidden associations of data in many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, a wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and implementation of algorithms to discover desired knowledge from the target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform shows different strengths, depending on the type of graph operation. While Urika performs the best in non-iterative operations like degree distribution, GraphX outperforms the others in iterative operations like connected components and PageRank. In addition, we discuss challenges to optimize the performance of each platform over large-scale real-world graphs.
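For reference, the Python sketch below shows single-machine networkx equivalents of the three benchmarked operations; the study itself benchmarks distributed platforms, so this only illustrates the semantics of the operations, not their scalable implementations.

    import networkx as nx

    G = nx.barabasi_albert_graph(10000, 3, seed=0)

    # Degree distribution: histogram of node degrees.
    hist = nx.degree_histogram(G)
    print(hist[:10])

    # Connected components: component sizes, largest first.
    sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
    print(sizes[:5])

    # PageRank: iterative importance scores over the graph.
    pr = nx.pagerank(G, alpha=0.85)
    print(max(pr, key=pr.get))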
Bond Graph Modeling and Validation of an Energy Regenerative System for Emulsion Pump Tests
Li, Yilei; Zhu, Zhencai; Chen, Guoan
2014-01-01
The test system for emulsion pumps currently faces serious challenges due to its huge energy consumption and waste. To settle this energy issue, a novel energy regenerative system (ERS) for emulsion pump tests is first briefly introduced. Modeling such an ERS of multienergy domains needs a unified and systematic approach. Bond graph modeling is well suited for this task. The bond graph model of this ERS is developed by first considering the separate components before assembling them together, and so is the state-space equation. Both numerical simulation and experiments are carried out to validate the bond graph model of this ERS. Moreover, the simulation and experimental results show that this ERS not only satisfies the test requirements, but also could save at least 25% of energy consumption as compared to the original test system, demonstrating that it is a promising method of energy regeneration for emulsion pump tests. PMID:24967428
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
The Genealogy of Samples in Models with Selection
Neuhauser, C.; Krone, S. M.
1997-01-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604
Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines
Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram
2014-01-01
When the amount of labeled data is limited, semi-supervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002
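The prototype construction is specific to the paper; the Python sketch below only shows the standard graph-based semi-supervised baseline (scikit-learn's LabelSpreading on a small synthetic data set) that such prototype methods aim to scale up.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.semi_supervised import LabelSpreading

    X, y = make_moons(n_samples=300, noise=0.1, random_state=0)

    # Keep only a handful of labels; -1 marks unlabeled points.
    y_train = np.full_like(y, -1)
    labeled = np.random.RandomState(0).choice(len(y), size=10, replace=False)
    y_train[labeled] = y[labeled]

    # Graph-based label propagation over an RBF affinity graph.
    model = LabelSpreading(kernel="rbf", gamma=20)
    model.fit(X, y_train)
    print((model.transduction_ == y).mean())   # transductive accuracy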
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distribution and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
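As an illustration only, the Python sketch below (networkx/NumPy) generates one random geometric graph and computes its adjacency spectrum and nearest-neighbour level spacings; normalizing by the mean spacing is a crude stand-in for proper spectral unfolding, and the parameters are arbitrary.

    import networkx as nx
    import numpy as np

    # One realization of a random geometric graph in the unit square.
    G = nx.random_geometric_graph(500, radius=0.1, seed=0)
    A = nx.to_numpy_array(G)

    # Adjacency spectrum and nearest-neighbour level spacings.
    eigs = np.sort(np.linalg.eigvalsh(A))
    spacings = np.diff(eigs)
    s = spacings / spacings.mean()      # crude unfolding by the mean spacing

    # Histogram of spacings, to be compared against Poisson and Wigner shapes.
    hist, edges = np.histogram(s, bins=30, range=(0, 4), density=True)
    print(hist[:5])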
Ivanciuc, O; Ivanciuc, T; Klein, D J; Seitz, W A; Balaban, A T
2001-02-01
Quantitative structure-retention relationships (QSRR) represent statistical models that quantify the connection between the molecular structure and the chromatographic retention indices of organic compounds, allowing the prediction of retention indices of novel, not yet synthesized compounds, solely from their structural descriptors. Using multiple linear regression, QSRR models for the gas chromatographic Kováts retention indices of 129 alkylbenzenes are generated using molecular graph descriptors. The correlational ability of structural descriptors computed from 10 molecular matrices is investigated, showing that the novel reciprocal matrices give numerical indices with improved correlational ability. A QSRR equation with 5 graph descriptors gives the best calibration and prediction results, demonstrating the usefulness of the molecular graph descriptors in modeling chromatographic retention parameters. The sequential orthogonalization of descriptors suggests simpler QSRR models by eliminating redundant structural information.
Detecting labor using graph theory on connectivity matrices of uterine EMG.
Al-Omar, S; Diab, A; Nader, N; Khalil, M; Karlsson, B; Marque, C
2015-08-01
Premature labor is one of the most serious health problems in the developed world. One of the main reasons for this is that no good way exists to distinguish true labor from normal pregnancy contractions. The aim of this paper is to investigate if the application of graph theory techniques to multi-electrode uterine EMG signals can improve the discrimination between pregnancy contractions and labor. To test our methods, we first applied them to synthetic graphs, where we detected differences in the parameter values and changes in the graph model from pregnancy-like graphs to labor-like graphs. Then, we applied the same methods to real signals. The same parameters gave the best differentiation between pregnancy and labor. Major improvements in differentiating between pregnancy and labor were obtained using a low-pass windowing preprocessing step. Results show that real graphs generally became more organized when moving from pregnancy, where the graph showed random characteristics, to labor, where the graph became more small-world like.
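A generic Python sketch (networkx/NumPy) of turning a multi-channel connectivity matrix into a graph and reading off graph parameters; the synthetic signals, threshold, and chosen parameters are illustrative and do not reproduce the paper's pipeline.

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    common = rng.standard_normal(2000)
    signals = rng.standard_normal((16, 2000)) + 0.3 * common   # 16 hypothetical channels
    C = np.corrcoef(signals)                                   # connectivity matrix

    # Threshold the correlation matrix to obtain an undirected graph.
    A = (np.abs(C) > 0.05).astype(int)
    np.fill_diagonal(A, 0)
    G = nx.from_numpy_array(A)

    # Graph parameters that could discriminate contraction types.
    print(nx.density(G), nx.average_clustering(G))
    if nx.is_connected(G):
        print(nx.average_shortest_path_length(G))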
Benchmarking Measures of Network Controllability on Canonical Graph Models
NASA Astrophysics Data System (ADS)
Wu-Yan, Elena; Betzel, Richard F.; Tang, Evelyn; Gu, Shi; Pasqualetti, Fabio; Bassett, Danielle S.
2018-03-01
The control of networked dynamical systems opens the possibility for new discoveries and therapies in systems biology and neuroscience. Recent theoretical advances provide candidate mechanisms by which a system can be driven from one pre-specified state to another, and computational approaches provide tools to test those mechanisms in real-world systems. Despite already having been applied to study network systems in biology and neuroscience, the practical performance of these tools and associated measures on simple networks with pre-specified structure has yet to be assessed. Here, we study the behavior of four control metrics (global, average, modal, and boundary controllability) on eight canonical graphs (including Erdős-Rényi, regular, small-world, random geometric, Barábasi-Albert preferential attachment, and several modular networks) with different edge weighting schemes (Gaussian, power-law, and two nonparametric distributions from brain networks, as examples of real-world systems). We observe that differences in global controllability across graph models are more salient when edge weight distributions are heavy-tailed as opposed to normal. In contrast, differences in average, modal, and boundary controllability across graph models (as well as across nodes in the graph) are more salient when edge weight distributions are less heavy-tailed. Across graph models and edge weighting schemes, average and modal controllability are negatively correlated with one another across nodes; yet, across graph instances, the relation between average and modal controllability can be positive, negative, or nonsignificant. Collectively, these findings demonstrate that controllability statistics (and their relations) differ across graphs with different topologies and that these differences can be muted or accentuated by differences in the edge weight distributions. More generally, our numerical studies motivate future analytical efforts to better understand the mathematical underpinnings of the relationship between graph topology and control, as well as efforts to design networks with specific control profiles.
2014-01-01
Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher order understanding. Multidimensional and shape-based observational data of regenerative biology presents a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph-edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion The work presented here, including our algorithm for converting cell-based models into graphs for comparison with data stored in an external data repository, has made feasible the automated development, training, and validation of computational models using morphology-based data. This work is part of an ongoing project to automate the search process, which will greatly expand our ability to identify, consider, and test biological mechanisms in the field of regenerative biology. PMID:24917489
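As a hedged illustration of the fitness idea, the Python sketch below uses networkx's exact graph edit distance between two toy morphology graphs standing in for a simulated worm and a PlanformDB target.

    import networkx as nx

    # Toy morphology graphs: nodes are anatomical regions, edges are adjacencies.
    target = nx.Graph([("head", "trunk"), ("trunk", "tail"), ("trunk", "pharynx")])
    simulated = nx.Graph([("head", "trunk"), ("trunk", "tail")])

    # Graph edit distance: number of node/edge insertions, deletions and
    # substitutions needed to turn one graph into the other (exact, so only
    # practical for small graphs).
    d = nx.graph_edit_distance(simulated, target)

    # A simple fitness for an evolutionary search: smaller distance, higher fitness.
    fitness = 1.0 / (1.0 + d)
    print(d, fitness)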
Bond Graph Modeling of Chemiosmotic Biomolecular Energy Transduction.
Gawthrop, Peter J
2017-04-01
Engineering systems modeling and analysis based on the bond graph approach has been applied to biomolecular systems. In this context, the notion of a Faraday-equivalent chemical potential is introduced which allows chemical potential to be expressed in an analogous manner to electrical volts, thus allowing engineering intuition to be applied to biomolecular systems. Redox reactions, and their representation by half-reactions, are key components of biological systems which involve both electrical and chemical domains. A bond graph interpretation of redox reactions is given which combines bond graphs with the Faraday-equivalent chemical potential. This approach is particularly relevant when the biomolecular system implements chemoelectrical transduction - for example chemiosmosis within the key metabolic pathway of mitochondria: oxidative phosphorylation. An alternative way of implementing computational modularity using bond graphs is introduced and used to give a physically based model of the mitochondrial electron transport chain. To illustrate the overall approach, this model is analyzed using the Faraday-equivalent chemical potential approach and engineering intuition is used to guide affinity equalisation: an energy-based analysis of the mitochondrial electron transport chain.
Bim-Gis Integrated Geospatial Information Model Using Semantic Web and Rdf Graphs
NASA Astrophysics Data System (ADS)
Hor, A.-H.; Jadidi, A.; Sohn, G.
2016-06-01
In recent years, 3D virtual indoor/outdoor urban modelling has become a key spatial information framework for many civil and engineering applications such as evacuation planning, emergency and facility management. To accomplish such sophisticated decision tasks, there is a large demand for building multi-scale and multi-sourced 3D urban models. Currently, Building Information Model (BIM) and Geographical Information Systems (GIS) are broadly used as the modelling sources. However, data sharing and exchanging information between the two modelling domains is still a huge challenge, as syntactic or semantic approaches alone do not fully support the exchange of rich semantic and geometric information from BIM into GIS or vice versa. This paper proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graphs. The novelty of the proposed solution comes from the benefits of integrating BIM and GIS technologies into one unified model, the so-called Integrated Geospatial Information Model (IGIM). The proposed approach consists of three main modules: construction of the BIM-RDF and GIS-RDF graphs, integration of the two RDF graphs, and querying of information through the IGIM-RDF graph using SPARQL. The IGIM generates queries from both the BIM and GIS RDF graphs, resulting in a semantically integrated model with entities representing both BIM classes and GIS feature objects with respect to the target-client application. The linkage between BIM-RDF and GIS-RDF is achieved through SPARQL endpoints and defined by a query using a set of datasets and entity classes with complementary properties, relationships and geometries. To validate the proposed approach and its performance, a case study was also carried out using the IGIM system design.
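A minimal Python sketch (rdflib) of the underlying mechanics: loading a BIM-derived and a GIS-derived RDF graph into one store and querying it with SPARQL; the file names, namespaces, and predicates are hypothetical placeholders, not the paper's schema.

    from rdflib import Graph

    # Load the two RDF graphs (hypothetical Turtle exports) into one merged graph.
    igim = Graph()
    igim.parse("bim_export.ttl", format="turtle")
    igim.parse("gis_export.ttl", format="turtle")

    # Hypothetical query linking a BIM building element to its GIS footprint.
    query = """
    PREFIX bim: <http://example.org/bim#>
    PREFIX gis: <http://example.org/gis#>
    SELECT ?element ?footprint WHERE {
        ?element  a bim:Wall ;
                  bim:locatedIn ?space .
        ?space    gis:hasFootprint ?footprint .
    }
    """
    for row in igim.query(query):
        print(row.element, row.footprint)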
The Vertex Version of Weighted Wiener Number for Bicyclic Molecular Structures
Gao, Wei
2015-01-01
Graphs are used to model chemical compounds and drugs. In the graphs, each vertex represents an atom of the molecule and edges between the corresponding vertices are used to represent covalent bonds between atoms. We call such a graph, which is derived from a chemical compound, a molecular graph. Evidence shows that the vertex-weighted Wiener number, which is defined over this molecular graph, is strongly correlated to both the melting point and boiling point of the compounds. In this paper, we report the extremal vertex-weighted Wiener number of bicyclic molecular graphs in terms of molecular structural analysis and graph transformations. The promising prospects of applications in chemical and pharmaceutical engineering are illustrated by the theoretical results achieved in this paper. PMID:26640513
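As an illustration, the Python sketch below (networkx) computes one common form of the vertex-weighted Wiener number, the sum over vertex pairs of the two vertex weights times their distance, on a toy bicyclic graph; the weights are illustrative and the paper's exact definition may differ.

    import networkx as nx
    from itertools import combinations

    # Toy bicyclic molecular graph (vertices = atoms, edges = covalent bonds).
    G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (2, 4), (4, 5), (5, 3)])
    weight = {v: 1.0 for v in G}        # illustrative vertex weights

    dist = dict(nx.all_pairs_shortest_path_length(G))

    # One common vertex-weighted variant: sum over pairs of w(u) * w(v) * d(u, v).
    W = sum(weight[u] * weight[v] * dist[u][v] for u, v in combinations(G, 2))
    print(W, nx.wiener_index(G))        # with unit weights the two coincide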
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
Time-dependence of graph theory metrics in functional connectivity analysis
Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J.; Haneef, Zulfi; Stern, John M.
2016-01-01
Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. PMID:26518632
Time-dependence of graph theory metrics in functional connectivity analysis.
Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J; Haneef, Zulfi; Stern, John M
2016-01-15
Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. Copyright © 2015 Elsevier Inc. All rights reserved.
The many faces of graph dynamics
NASA Astrophysics Data System (ADS)
Pignolet, Yvonne Anne; Roy, Matthieu; Schmid, Stefan; Tredan, Gilles
2017-06-01
The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and reoccurring characteristics of different kinds of networks. However, much less is known today about the network dynamics: indeed, complex networks in reality are not static, but rather dynamically evolve over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a ‘one fits it all’ model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests to build upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different or, in case of dynamic graphs, have changed over time, between two graphs. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null-model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks, in terms of scale and evolution speed.
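A minimal Python sketch (networkx) of the centrality-distance idea described above: compute a chosen centrality on two snapshots of an evolving graph and sum the node-wise differences; the graphs and the choice of centrality are illustrative.

    import networkx as nx

    def centrality_distance(G1, G2, centrality=nx.betweenness_centrality):
        """Sum of node-wise differences of a centrality over the shared node set."""
        c1, c2 = centrality(G1), centrality(G2)
        nodes = set(G1) | set(G2)
        return sum(abs(c1.get(n, 0.0) - c2.get(n, 0.0)) for n in nodes)

    # Two consecutive snapshots of an evolving graph (toy example).
    G_t0 = nx.erdos_renyi_graph(50, 0.1, seed=1)
    G_t1 = G_t0.copy()
    G_t1.add_edges_from([(0, 10), (0, 20), (0, 30)])   # node 0 gains importance

    print(centrality_distance(G_t0, G_t1))
    print(centrality_distance(G_t0, G_t1, centrality=nx.closeness_centrality))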
Dynamical modeling and analysis of large cellular regulatory networks
NASA Astrophysics Data System (ADS)
Bérenguier, D.; Chaouiya, C.; Monteiro, P. T.; Naldi, A.; Remy, E.; Thieffry, D.; Tichit, L.
2013-06-01
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties, (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
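As a toy illustration of these notions, the Python sketch below (networkx) builds the asynchronous state transition graph of a two-component Boolean model and extracts its attractors as the terminal strongly connected components of that graph; the regulatory rules are invented for illustration.

    import networkx as nx
    from itertools import product

    # Toy Boolean model: x is inhibited by y, y is activated by x (rules invented).
    rules = {
        "x": lambda s: 1 - s["y"],
        "y": lambda s: s["x"],
    }

    # Asynchronous state transition graph: from each state, update one variable at a time.
    stg = nx.DiGraph()
    for values in product([0, 1], repeat=len(rules)):
        state = dict(zip(rules, values))
        for var, f in rules.items():
            new = dict(state)
            new[var] = f(state)
            if new != state:
                stg.add_edge(tuple(state.values()), tuple(new.values()))
            else:
                stg.add_node(tuple(state.values()))

    # Attractors = terminal strongly connected components of the transition graph.
    cond = nx.condensation(stg)
    attractors = [cond.nodes[n]["members"] for n in cond if cond.out_degree(n) == 0]
    print(attractors)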
Graphing trillions of triangles
Burkhardt, Paul
2016-01-01
The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed. PMID:28690426
An In-Depth Analysis of the Chung-Lu Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winlaw, M.; DeSterck, H.; Sanders, G.
2015-10-28
In the classic Erdős-Rényi random graph model [5] each edge is chosen with uniform probability and the degree distribution is binomial, limiting the number of graphs that can be modeled using the Erdős-Rényi framework [10]. The Chung-Lu model [1, 2, 3] is an extension of the Erdős-Rényi model that allows for more general degree distributions. The probability of each edge is no longer uniform and is a function of a user-supplied degree sequence, which by design is the expected degree sequence of the model. This property makes it an easy model to work with theoretically and since the Chung-Lu model is a special case of a random graph model with a given degree sequence, many of its properties are well known and have been studied extensively [2, 3, 13, 8, 9]. It is also an attractive null model for many real-world networks, particularly those with power-law degree distributions and it is sometimes used as a benchmark for comparison with other graph generators despite some of its limitations [12, 11]. We know for example, that the average clustering coefficient is too low relative to most real-world networks. As well, measures of affinity are also too low relative to most real-world networks of interest. However, despite these limitations or perhaps because of them, the Chung-Lu model provides a basis for comparing new graph models.
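For reference, networkx ships a Chung-Lu-style generator, expected_degree_graph, in which each edge (u, v) appears independently with probability proportional to w_u * w_v; the expected-degree sequence below is illustrative.

    import networkx as nx

    # Target expected degrees (illustrative); the realized degrees will fluctuate
    # around these values, as in the Chung-Lu model.
    w = [10] * 50 + [3] * 450

    G = nx.expected_degree_graph(w, selfloops=False, seed=0)

    realized = [d for _, d in G.degree()]
    print(sum(realized) / len(realized), sum(w) / len(w))   # compare mean degrees
    print(nx.average_clustering(G))    # typically low, as the text notes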
Artificial Neural Networks for Processing Graphs with Application to Image Understanding: A Survey
NASA Astrophysics Data System (ADS)
Bianchini, Monica; Scarselli, Franco
In graphical pattern recognition, each data is represented as an arrangement of elements, that encodes both the properties of each element and the relations among them. Hence, patterns are modelled as labelled graphs where, in general, labels can be attached to both nodes and edges. Artificial neural networks able to process graphs are a powerful tool for addressing a great variety of real-world problems, where the information is naturally organized in entities and relationships among entities and, in fact, they have been widely used in computer vision, f.i. in logo recognition, in similarity retrieval, and for object detection. In this chapter, we propose a survey of neural network models able to process structured information, with a particular focus on those architectures tailored to address image understanding applications. Starting from the original recursive model (RNNs), we subsequently present different ways to represent images - by trees, forests of trees, multiresolution trees, directed acyclic graphs with labelled edges, general graphs - and, correspondingly, neural network architectures appropriate to process such structures.
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Regarding the adjacency matrices of n-vertex graphs and the related graph Laplacian, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
Representation of activity in images using geospatial temporal graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph; McLendon, III, William C.; Parekh, Ojas D.
Various technologies pertaining to modeling patterns of activity observed in remote sensing images using geospatial-temporal graphs are described herein. Graphs are constructed by representing objects in remote sensing images as nodes, and connecting nodes with undirected edges representing either distance or adjacency relationships between objects and directed edges representing changes in time. Activity patterns may be discerned from the graphs by coding nodes representing persistent objects like buildings differently from nodes representing ephemeral objects like vehicles, and examining the geospatial-temporal relationships of ephemeral nodes within the graph.
Analysis Tools for Interconnected Boolean Networks With Biological Applications.
Chaves, Madalena; Tournier, Laurent
2018-01-01
Boolean networks with asynchronous updates are a class of logical models particularly well adapted to describe the dynamics of biological networks with uncertain measures. The state space of these models can be described by an asynchronous state transition graph, which represents all the possible exits from every single state, and gives a global image of all the possible trajectories of the system. In addition, the asynchronous state transition graph can be associated with an absorbing Markov chain, further providing a semi-quantitative framework where it becomes possible to compute probabilities for the different trajectories. For large networks, however, such direct analyses become computationally intractable, given the exponential dimension of the graph. Exploiting the general modularity of biological systems, we have introduced the novel concept of the asymptotic graph, computed as an interconnection of several asynchronous transition graphs and recovering all asymptotic behaviors of a large interconnected system from the behavior of its smaller modules. From a modeling point of view, the interconnection of networks is very useful to address, for instance, the interplay between known biological modules and to test different hypotheses on the nature of their mutual regulatory links. This paper develops two new features of this general methodology: a quantitative dimension is added to the asymptotic graph through the computation of relative probabilities for each final attractor, and a companion cross-graph is introduced to complement the method from a theoretical point of view.
Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids
José, Marco V.; Morgado, Eberto R.; Guimarães, Romeu Cardoso; Zamudio, Gabriel S.; de Farías, Sávio Torres; Bobadilla, Juan R.; Sosa, Daniela
2014-01-01
Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state. PMID:25370377
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on its prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
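A hedged NumPy sketch of Bayesian Personalized Ranking over a latent-feature triple scorer; the DistMult-style bilinear score, dimensions, and triples are illustrative stand-ins rather than the paper's model.

    import numpy as np

    rng = np.random.default_rng(0)
    n_entities, n_relations, dim, lr = 100, 5, 16, 0.05

    E = rng.normal(scale=0.1, size=(n_entities, dim))   # entity embeddings
    R = rng.normal(scale=0.1, size=(n_relations, dim))  # relation embeddings

    def score(h, r, t):
        # DistMult-style bilinear score, used here as an illustrative latent model.
        return np.sum(E[h] * R[r] * E[t])

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Observed (head, relation, tail) triples; negatives are corrupted tails.
    triples = [(0, 1, 2), (3, 1, 4), (5, 2, 6)]
    for epoch in range(100):
        for h, r, t in triples:
            t_neg = rng.integers(n_entities)
            x = score(h, r, t) - score(h, r, t_neg)   # positive minus negative score
            g = 1.0 - sigmoid(x)                      # BPR gradient weight
            # Gradient ascent on log sigmoid(x) for the embeddings involved.
            E_h, R_r, E_t, E_n = E[h].copy(), R[r].copy(), E[t].copy(), E[t_neg].copy()
            E[h]     += lr * g * R_r * (E_t - E_n)
            R[r]     += lr * g * E_h * (E_t - E_n)
            E[t]     += lr * g * E_h * R_r
            E[t_neg] -= lr * g * E_h * R_r

    print(score(0, 1, 2), score(0, 1, 7))   # the observed link should score higher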
Phase transitions in Ising models on directed networks
NASA Astrophysics Data System (ADS)
Lipowski, Adam; Ferreira, António Luis; Lipowska, Dorota; Gontarek, Krzysztof
2015-11-01
We examine Ising models with heat-bath dynamics on directed networks. Our simulations show that Ising models on directed triangular and simple cubic lattices undergo a phase transition that most likely belongs to the Ising universality class. On the directed square lattice the model remains paramagnetic at any positive temperature as already reported in some previous studies. We also examine random directed graphs and show that contrary to undirected ones, percolation of directed bonds does not guarantee ferromagnetic ordering. Only above a certain threshold can a random directed graph support finite-temperature ferromagnetic ordering. Such behavior is found also for out-homogeneous random graphs, but in this case the analysis of magnetic and percolative properties can be done exactly. Directed random graphs also differ from undirected ones with respect to zero-temperature freezing. Only at low connectivity do they remain trapped in a disordered configuration. Above a certain threshold, however, the zero-temperature dynamics quickly drives the model toward a broken symmetry (magnetized) state. Only above this threshold, which is almost twice as large as the percolation threshold, do we expect the Ising model to have a positive critical temperature. With a very good accuracy, the behavior on directed random graphs is reproduced within a certain approximate scheme.
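As an illustration only, the Python sketch below (NumPy/networkx) runs heat-bath dynamics for an Ising model on a directed Erdős-Rényi graph, with each spin's local field taken from its in-neighbours; the parameters are arbitrary.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n, k, beta, sweeps = 500, 4, 1.0, 200

    # Directed random graph with average in-degree k; spin i feels its in-neighbours.
    G = nx.gnp_random_graph(n, k / n, seed=0, directed=True)
    in_nbrs = [list(G.predecessors(i)) for i in range(n)]

    spins = rng.choice([-1, 1], size=n)
    for _ in range(sweeps):
        for i in rng.permutation(n):
            h = sum(spins[j] for j in in_nbrs[i])          # local field
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # heat-bath probability
            spins[i] = 1 if rng.random() < p_up else -1

    print(abs(spins.mean()))    # magnetization; nonzero signals ferromagnetic ordering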
Modeling flow and transport in fracture networks using graphs
NASA Astrophysics Data System (ADS)
Karra, S.; O'Malley, D.; Hyman, J. D.; Viswanathan, H. S.; Srinivasan, G.
2018-03-01
Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. The good accuracy and the low computational cost, with O(10^4) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
Modeling flow and transport in fracture networks using graphs.
Karra, S; O'Malley, D; Hyman, J D; Viswanathan, H S; Srinivasan, G
2018-03-01
Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. The good accuracy and the low computational cost, with O(10^{4}) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
Modeling flow and transport in fracture networks using graphs
Karra, S.; O'Malley, D.; Hyman, J. D.; ...
2018-03-09
Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. In conclusion, the good accuracy and the low computational cost, with O(10^4) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
Modeling flow and transport in fracture networks using graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karra, S.; O'Malley, D.; Hyman, J. D.
Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. In conclusion, the good accuracy and the low computational cost, with O(10^4) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
Building generic anatomical models using virtual model cutting and iterative registration.
Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W
2010-02-08
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting a sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is flexible and easy to use, so that anyone can use image stacks to create models and retrieve sub-regions from them with ease. The Java-based implementation allows our method to be used on various visualization systems including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately.
Mesoscopic description of random walks on combs
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
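The renormalisation idea, that time spent in a branch acts as an effective waiting time for backbone motion, can be seen directly in a brute-force simulation. A minimal Python sketch (numpy assumed; the 50/50 split between backbone and tooth moves and the finite tooth length are simplifying assumptions, not the paper's general setting):

import numpy as np

rng = np.random.default_rng(1)

def final_x(steps, branch_len):
    # One walker on a comb: x runs along the backbone, y along a tooth.
    # The walker only advances in x when y == 0, so excursions into the tooth
    # act as waiting times for the backbone motion.
    x, y = 0, 0
    for _ in range(steps):
        if y == 0 and rng.random() < 0.5:
            x += rng.choice([-1, 1])                                 # backbone step
        else:
            y = min(max(y + rng.choice([-1, 1]), 0), branch_len)     # tooth step
    return x

steps, walkers = 5_000, 200
msd = np.mean([final_x(steps, branch_len=100) ** 2 for _ in range(walkers)])
print("backbone MSD after", steps, "steps:", msd)    # grows slower than steps: subdiffusion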
Proximity Networks and Epidemics
NASA Astrophysics Data System (ADS)
Guclu, Hasan; Toroczkai, Zoltán
2007-03-01
We presented the basis of a framework to account for the dynamics of contacts in epidemic processes, through the notion of dynamic proximity graphs. By varying the integration time-parameter T, which is the period of infectivity, one can give a simple account of some of the differences in the observed contact networks for different diseases, such as smallpox or AIDS. Our simplistic model also seems to shed some light on the shape of the degree distribution of the measured people-people contact network from the EPISIM data. We certainly do not claim that the simplistic graph integration model above is a good model for dynamic contact graphs. It only contains the essential ingredients for such processes to produce a qualitative agreement with some observations. We expect that further refinements and extensions to this picture, in particular deriving the link probabilities in the dynamic proximity graph from more realistic contact dynamics, should improve the agreement between models and data.
An Interactive Teaching System for Bond Graph Modeling and Simulation in Bioengineering
ERIC Educational Resources Information Center
Roman, Monica; Popescu, Dorin; Selisteanu, Dan
2013-01-01
The objective of the present work was to implement a teaching system useful in modeling and simulation of biotechnological processes. The interactive system is based on applications developed using 20-sim modeling and simulation software environment. A procedure for the simulation of bioprocesses modeled by bond graphs is proposed and simulators…
Supplantation of Mental Operations on Graphs
ERIC Educational Resources Information Center
Vogel, Markus; Girwidz, Raimund; Engel, Joachim
2007-01-01
Research findings show the difficulties younger students have in working with graphs. Higher mental operations are necessary for a skilled interpretation of abstract representations. We suggest connecting a concrete representation of the modeled problem with the related graph. The idea is to illustrate essential mental operations externally. This…
Hierarchical graphs for better annotations of rule-based models of biochemical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Bin; Hlavacek, William
2009-01-01
In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.
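A hedged sketch of the part-of hierarchy idea in Python (networkx assumed; the component names below are illustrative and simplified, not the authors' actual Lck or TCR/CD3 annotation):

import networkx as nx

# 'Part-of' edges organise components and subcomponents of a molecule.
h = nx.DiGraph()
h.add_edges_from([
    ("Lck", "SH2"), ("Lck", "SH3"), ("Lck", "kinase_domain"),
    ("kinase_domain", "activation_loop_Tyr"), ("kinase_domain", "ATP_site"),
])

def subcomponents(hierarchy, component):
    # Everything below `component` in the part-of hierarchy.
    return sorted(nx.descendants(hierarchy, component))

print(subcomponents(h, "Lck"))
print(subcomponents(h, "kinase_domain"))

In the proposed BNGL extension such a hierarchy sits alongside the flat molecule graph purely as annotation, so the model's semantics are unchanged.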
Criticality in conserved dynamical systems: experimental observation vs. exact properties.
Marković, Dimitrije; Gros, Claudius; Schuelein, André
2013-03-01
Conserved dynamical systems are generally considered to be critical. We study a class of critical routing models, equivalent to random maps, which can be solved rigorously in the thermodynamic limit. The information flow is conserved for these routing models and governed by cyclic attractors. We consider two classes of information flow, Markovian routing without memory and vertex routing involving a one-step routing memory. Investigating the respective cycle length distributions for complete graphs, we find log corrections to power-law scaling for the mean cycle length, as a function of the number of vertices, and a sub-polynomial growth for the overall number of cycles. When observing experimentally a real-world dynamical system one normally samples stochastically its phase space. The number and the length of the attractors are then weighted by the size of their respective basins of attraction. This situation is equivalent, for theory studies, to "on the fly" generation of the dynamical transition probabilities. For the case of vertex routing models, we find in this case power law scaling for the weighted average length of attractors, for both conserved routing models. These results show that the critical dynamical systems are generically not scale-invariant but may show power-law scaling when sampled stochastically. It is hence important to distinguish between intrinsic properties of a critical dynamical system and its behavior that one would observe when randomly probing its phase space.
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy model result visualization. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Oftentimes a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global samples statistics table (e.g. sum of squares error, sum of absolute differences etc.), a top-ten simulations table and graphs, graphs of an individual simulation using time step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger) and 2D error surface graphs of the parameter space. IHM is suitable for anything from the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time-saving and money-saving alternative to spending time producing graphs, conducting analysis that may not be informative, or being forced to purchase or use expensive and proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
Dynamics of Nearest-Neighbour Competitions on Graphs
NASA Astrophysics Data System (ADS)
Rador, Tonguç
2017-10-01
Considering a collection of agents representing the vertices of a graph endowed with integer points, we study the asymptotic dynamics of the rate of increase of their points according to a very simple rule: we randomly pick an edge from the graph, which unambiguously defines two agents, and give a point to the agent with the larger score with probability p and to the lagging agent with probability q, such that p+q=1. The model we present is the most general version of the nearest-neighbour competition model introduced by Ben-Naim, Vazquez and Redner. We show that the model combines aspects of hyperbolic partial differential equations (such as a conservation law), graph colouring and hyperplane arrangements. We discuss the properties of the model for general graphs but confine the in-depth study to d-dimensional tori. We present a detailed study for the ring graph, including a chemical-potential approximation that calculates all of its statistics with rather good accuracy. The two-dimensional torus, not studied in as much depth as the ring, is shown to possess critical behaviour in that the asymptotic speeds arrange themselves in two-coloured islands separated by borders of three other colours, and the sizes of the islands obey a power-law distribution. We also show that in the large-d limit the d-dimensional torus exhibits an inverse-sine law for the distribution of asymptotic speeds.
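The competition rule is easy to state in code. A minimal Python sketch (numpy and networkx assumed; tie-breaking is done at random here, which is our own assumption, and the parameters are arbitrary):

import numpy as np
import networkx as nx

def compete(G, p, rounds, rng):
    pts = {v: 0 for v in G.nodes}
    edges = list(G.edges)
    for _ in range(rounds):
        u, v = edges[rng.integers(len(edges))]            # pick a random edge
        if pts[u] == pts[v]:
            leader, lagger = (u, v) if rng.random() < 0.5 else (v, u)
        else:
            leader, lagger = (u, v) if pts[u] > pts[v] else (v, u)
        winner = leader if rng.random() < p else lagger   # leader wins w.p. p, lagger w.p. q = 1 - p
        pts[winner] += 1
    return pts

rng = np.random.default_rng(0)
ring = nx.cycle_graph(50)                                 # the ring graph studied in the paper
rounds = 200_000
pts = compete(ring, p=0.8, rounds=rounds, rng=rng)
speeds = np.array(sorted(pts.values())) / rounds          # empirical asymptotic speeds
print(speeds[:5], speeds[-5:])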
Customization of a generic 3D model of the distal femur using diagnostic radiographs.
Schmutz, B; Reynolds, K J; Slavotinek, J P
2008-01-01
A method for the customization of a generic 3D model of the distal femur is presented. The customization method involves two steps: acquisition of calibrated orthogonal planar radiographs; and linear scaling of the generic model based on the width of a subject's femoral condyles as measured on the planar radiographs. Planar radiographs of seven intact lower cadaver limbs were obtained. The customized generic models were validated by comparing their surface geometry with that of CT-reconstructed reference models. The overall mean error was 1.2 mm. The results demonstrate that uniform scaling as a first step in the customization process produced a base model of accuracy comparable to other models reported in the literature.
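The scaling step itself amounts to a single multiplicative factor. A minimal sketch in Python (numpy assumed; the vertex coordinates and widths below are invented, and a real application would use the full generic mesh and radiograph-derived measurements):

import numpy as np

def scale_generic_model(vertices, generic_width, measured_width):
    # Uniform (linear) scaling so the generic model's condylar width matches the
    # width measured on the calibrated planar radiographs.
    return vertices * (measured_width / generic_width)

generic_vertices = np.array([[0.0, 0.0, 0.0], [80.0, 0.0, 0.0], [40.0, 30.0, 10.0]])
customised = scale_generic_model(generic_vertices, generic_width=80.0, measured_width=86.5)
print(customised)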
Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
Global dynamics for switching systems and their extensions by linear differential equations.
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-15
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
NLO predictions for the production of a spin-two particle at the LHC
Das, Goutam; Degrande, Céline; Hirschi, Valentin; ...
2017-05-08
We obtain predictions accurate at the next-to-leading order in QCD for the production of a generic spin-two particle in the most relevant channels at the LHC: production in association with coloured particles (inclusive, one jet, two jets and $t\bar{t}$), with vector bosons ($Z$, $W^\pm$, $\gamma$) and with the Higgs boson. Here, we present total and differential cross sections as well as branching ratios as a function of the mass and the collision energy also considering the case of non-universal couplings to standard model particles. We find that the next-to-leading order corrections give rise to sizeable $K$ factors for many channels, in some cases exposing the unitarity-violating behaviour of non-universal couplings scenarios, and in general greatly reduce the theoretical uncertainties. Our predictions are publicly available in the MadGraph5_aMC@NLO framework and can, therefore, be directly used in experimental simulations of spin-two particle production for arbitrary values of the mass and couplings.
NLO predictions for the production of a spin-two particle at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Goutam; Degrande, Céline; Hirschi, Valentin
We obtain predictions accurate at the next-to-leading order in QCD for the production of a generic spin-two particle in the most relevant channels at the LHC: production in association with coloured particles (inclusive, one jet, two jets and $t\bar{t}$), with vector bosons ($Z$, $W^\pm$, $\gamma$) and with the Higgs boson. Here, we present total and differential cross sections as well as branching ratios as a function of the mass and the collision energy also considering the case of non-universal couplings to standard model particles. We find that the next-to-leading order corrections give rise to sizeable $K$ factors for many channels, in some cases exposing the unitarity-violating behaviour of non-universal couplings scenarios, and in general greatly reduce the theoretical uncertainties. Our predictions are publicly available in the MadGraph5_aMC@NLO framework and can, therefore, be directly used in experimental simulations of spin-two particle production for arbitrary values of the mass and couplings.
A nonlinear q-voter model with deadlocks on the Watts-Strogatz graph
NASA Astrophysics Data System (ADS)
Sznajd-Weron, Katarzyna; Michal Suszczynski, Karol
2014-07-01
We study the nonlinear $q$-voter model with deadlocks on a Watts-Strogatz graph. Using Monte Carlo simulations, we obtain the so-called exit probability and exit time. We determine how network properties, such as randomness or the density of links, influence the exit properties of the model.
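A rough Monte Carlo sketch of the exit-probability measurement in Python (numpy and networkx assumed). The 'deadlock' reading used here, that an agent only changes opinion when a randomly chosen q-panel of its neighbours is unanimous, is our interpretation of the abstract, and all parameters are illustrative:

import numpy as np
import networkx as nx

def exit_probability(n, k, beta, q, init_up, runs, max_sweeps=500, seed=0):
    # Fraction of runs ending in all-'+1' consensus; runs that do not reach
    # consensus within max_sweeps are simply counted as non-exits.
    rng = np.random.default_rng(seed)
    hits = 0
    for r in range(runs):
        G = nx.watts_strogatz_graph(n, k, beta, seed=seed + r)
        s = np.where(rng.random(n) < init_up, 1, -1)
        for _ in range(max_sweeps * n):
            i = rng.integers(n)
            nbrs = list(G.neighbors(i))
            panel = rng.choice(nbrs, size=min(q, len(nbrs)), replace=False)
            if len(set(s[panel])) == 1:               # unanimous panel -> adopt its opinion
                s[i] = s[panel[0]]
            if abs(s.sum()) == n:                     # consensus reached
                break
        hits += int(s.sum() == n)
    return hits / runs

print(exit_probability(n=100, k=4, beta=0.1, q=3, init_up=0.6, runs=20))

Sweeping init_up gives the exit-probability curve; varying beta (rewiring) and k probes how randomness and link density shift it.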
Anderson localization for radial tree-like random quantum graphs
NASA Astrophysics Data System (ADS)
Hislop, Peter D.; Post, Olaf
We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of a specific avionics hardware/software system. This standard defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Connections between the Sznajd model with general confidence rules and graph theory
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2012-10-01
The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
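The two graph-theory notions the abstract points to are both directly computable. A small Python sketch (networkx assumed; the toy confidence-rule graph below and its encoding, an arc a -> b meaning that holders of opinion a can be convinced toward opinion b, are our own illustration, not the authors' exact convention):

import networkx as nx

rule = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 3)])

print("strongly connected:", nx.is_strongly_connected(rule))
print("strongly connected components:", list(nx.strongly_connected_components(rule)))

# Maximal independent sets (of the undirected support, self-loops removed) are the
# objects the paper relates to the stability of mean-field fixed points.
support = rule.to_undirected()
support.remove_edges_from(list(nx.selfloop_edges(support)))
print("one maximal independent set:", nx.maximal_independent_set(support, seed=0))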
Computing Information Value from RDF Graph Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Heileman, Gregory
2010-11-08
Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters, the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second parameter is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed, a target graph representing the semantics of the target body of information and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is in the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information-consumer.
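A deliberately simplified Python sketch of the two ingredients, impact and trust (networkx assumed; representing RDF triples as multigraph edges keyed by predicate, and the particular impact/overlap measures and weighting, are our own simplifications rather than the authors' definitions):

import networkx as nx

def information_value(context, target, alpha=0.5):
    merged = nx.compose(context, target)
    impact = merged.number_of_edges() - context.number_of_edges()                 # new triples learned
    overlap = len(set(context.edges(keys=True)) & set(target.edges(keys=True)))   # shared triples
    return alpha * impact + (1 - alpha) * overlap

ctx = nx.MultiDiGraph()
ctx.add_edge("alice", "bob", key="knows")
ctx.add_edge("bob", "acme", key="worksAt")

tgt = nx.MultiDiGraph()
tgt.add_edge("bob", "acme", key="worksAt")
tgt.add_edge("acme", "paris", key="locatedIn")

print(information_value(ctx, tgt))

The paper's actual impact measure is a graph edit distance between the context graph before and after learning; counting newly added triples is only the crudest stand-in for it.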
Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.
ERIC Educational Resources Information Center
Alston, Richard M.; Chi, Wan Fu
1989-01-01
Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…
Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xuanhua; Luo, Xuan; Liang, Junling
GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate the iterative convergence. Unfortunately, the consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of the large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule) about coloring algorithms as we observed through masses of real-world graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated with a very high degree of parallelism without violating the sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (Cusha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications and datasets, Frog is able to significantly outperform existing GPU-based graph processing systems except Gunrock and MapGraph. MapGraph gets better performance than Frog when running BFS on RoadNet-CA. The comparison between Gunrock and Frog is inconclusive. Frog can outperform Gunrock by more than 1.04X when running PageRank and SSSP, while the advantage of Frog is not obvious when running BFS and CC on some datasets, especially RoadNet-CA.
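The colour-then-partition idea on the CPU side can be sketched in a few lines of Python (networkx assumed; the random graph and the cut-off of three large colour classes are arbitrary, and none of the GPU scheduling that Frog actually contributes is reproduced here):

import networkx as nx

G = nx.gnp_random_graph(1000, 0.01, seed=0)
coloring = nx.greedy_color(G, strategy="largest_first")

classes = {}
for v, c in coloring.items():
    classes.setdefault(c, []).append(v)

# Vertices sharing a colour have no edges between them, so one colour class can be
# updated in parallel without locks; Frog's hybrid scheme keeps the few large classes
# and lumps the long tail of tiny classes into a final, conflict-tolerant pass.
sizes = sorted((len(vs) for vs in classes.values()), reverse=True)
covered = 100 * sum(sizes[:3]) / G.number_of_nodes()
print(f"{len(sizes)} colours; the 3 largest classes cover {covered:.1f}% of vertices")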
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
A strand graph semantics for DNA-based computation
Petersen, Rasmus L.; Lakin, Matthew R.; Phillips, Andrew
2015-01-01
DNA nanotechnology is a promising approach for engineering computation at the nanoscale, with potential applications in biofabrication and intelligent nanomedicine. DNA strand displacement is a general strategy for implementing a broad range of nanoscale computations, including any computation that can be expressed as a chemical reaction network. Modelling and analysis of DNA strand displacement systems is an important part of the design process, prior to experimental realisation. As experimental techniques improve, it is important for modelling languages to keep pace with the complexity of structures that can be realised experimentally. In this paper we present a process calculus for modelling DNA strand displacement computations involving rich secondary structures, including DNA branches and loops. We prove that our calculus is also sufficiently expressive to model previous work on non-branching structures, and propose a mapping from our calculus to a canonical strand graph representation, in which vertices represent DNA strands, ordered sites represent domains, and edges between sites represent bonds between domains. We define interactions between strands by means of strand graph rewriting, and prove the correspondence between the process calculus and strand graph behaviours. Finally, we propose a mapping from strand graphs to an efficient implementation, which we use to perform modelling and simulation of DNA strand displacement systems with rich secondary structure. PMID:27293306
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
An approach to multiscale modelling with graph grammars
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-01-01
Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929
Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.
Leotta, Matthew J; Mundy, Joseph L
2011-07-01
In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.
The Full Ward-Takahashi Identity for Colored Tensor Models
NASA Astrophysics Data System (ADS)
Pérez-Sánchez, Carlos I.
2018-03-01
Colored tensor models (CTM) is a random geometrical approach to quantum gravity. We scrutinize the structure of the connected correlation functions of general CTM-interactions and organize them by boundaries of Feynman graphs. For rank-D interactions including, but not restricted to, all melonic φ^4-vertices (to wit, solely those quartic vertices that can lead to dominant spherical contributions in the large-N expansion), the aforementioned boundary graphs are shown to be precisely all (possibly disconnected) vertex-bipartite regularly edge-D-colored graphs. The concept of CTM-compatible boundary-graph automorphism is introduced and an auxiliary graph calculus is developed. With the aid of these constructs, a certain U(∞)-invariance of the path integral measure is fully exploited in order to derive a strong Ward-Takahashi Identity for CTMs with a symmetry-breaking kinetic term. For the rank-3 φ^4-theory, we get the exact integral-like equation for the 2-point function. Similarly, exact equations for higher multipoint functions can be readily obtained departing from this full Ward-Takahashi identity. Our results hold for some Group Field Theories as well. Altogether, our non-perturbative approach trades some graph theoretical methods for analytical ones. We believe that these tools can be extended to tensorial SYK-models.
Huang, Xiaoke; Zhao, Ye; Yang, Jing; Zhang, Chong; Ma, Chao; Ye, Xinyue
2016-01-01
We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which have smaller size than the original street-level graph. Graph centralities, including Pagerank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.
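The centrality computations at the heart of TrajGraph can be sketched with networkx (Python; the tiny street graph and trip counts below are invented, and betweenness is computed here on the unweighted graph because trip counts would first have to be converted into travel costs):

import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("s1", "s2", 120), ("s2", "s3", 80), ("s3", "s1", 60),
    ("s2", "s4", 40), ("s4", "s3", 30),
])

pagerank = nx.pagerank(G, weight="weight")
betweenness = nx.betweenness_centrality(G)
for street in G.nodes:
    print(street, round(pagerank[street], 3), round(betweenness[street], 3))

In the full system these scores are computed per time window on partitioned, region-level graphs and then fed to the coordinated views.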
NASA Astrophysics Data System (ADS)
Utama, D. N.; Ani, N.; Iqbal, M. M.
2018-03-01
Optimization is a process for finding the parameter or parameters that deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been studied by numerous researchers. A generic model is a model that can be operated to solve many varieties of optimization problem. Using an object-oriented method, we constructed such a generic model for optimization. Two types of optimization method, simulated annealing and hill climbing, were used in constructing the model and then compared. The results show that both methods delivered the same value of the objective function, and that the hill-climbing based model consumed the shortest running time.
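A minimal Python comparison in the spirit of the abstract (the one-dimensional objective, step sizes and cooling schedule are all our own illustrative choices, since the paper's actual problem instance is not given here):

import math
import random

def objective(x):
    return (x - 3.0) ** 2 + math.sin(5 * x)       # hypothetical function to minimise

def hill_climbing(x, step=0.1, iters=5000):
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if objective(cand) < objective(x):        # only accept improvements
            x = cand
    return x

def simulated_annealing(x, step=0.5, iters=5000, t0=1.0):
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9           # linear cooling
        cand = x + random.uniform(-step, step)
        d = objective(cand) - objective(x)
        if d < 0 or random.random() < math.exp(-d / t):   # sometimes accept worse moves
            x = cand
    return x

random.seed(0)
for name, algo in [("hill climbing", hill_climbing), ("simulated annealing", simulated_annealing)]:
    x = algo(0.0)
    print(name, round(x, 3), round(objective(x), 3))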
NASA Astrophysics Data System (ADS)
Viana, Ilisio; Orteu, Jean-José; Cornille, Nicolas; Bugarin, Florian
2015-11-01
We focus on quality control of mechanical parts in aeronautical context using a single pan-tilt-zoom (PTZ) camera and a computer-aided design (CAD) model of the mechanical part. We use the CAD model to create a theoretical image of the element to be checked, which is further matched with the sensed image of the element to be inspected, using a graph theory-based approach. The matching is carried out in two stages. First, the two images are used to create two attributed graphs representing the primitives (ellipses and line segments) in the images. In the second stage, the graphs are matched using a similarity function built from the primitive parameters. The similarity scores of the matching are injected in the edges of a bipartite graph. A best-match-search procedure in the bipartite graph guarantees the uniqueness of the match solution. The method achieves promising performance in tests with synthetic data including missing elements, displaced elements, size changes, and combinations of these cases. The results open good prospects for using the method with realistic data.
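The final best-match search on the bipartite graph is essentially an assignment problem. A hedged Python sketch (numpy and scipy assumed; the similarity matrix is invented, whereas in the real system it is built from ellipse and line-segment parameters):

import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: primitives rendered from the CAD model; columns: primitives in the sensed image.
similarity = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.1, 0.3, 0.7],
    [0.2, 0.2, 0.1],   # a theoretical primitive with no good counterpart
])

rows, cols = linear_sum_assignment(-similarity)        # maximise total similarity, one-to-one
for r, c in zip(rows, cols):
    status = "matched" if similarity[r, c] > 0.5 else "weak match: displaced or changed?"
    print(f"CAD primitive {r} -> sensed primitive {c} ({status})")

unmatched = sorted(set(range(similarity.shape[0])) - set(rows.tolist()))
print("CAD primitives left unmatched (candidate missing elements):", unmatched)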
Graph Structure in Three National Academic Webs: Power Laws with Anomalies.
ERIC Educational Resources Information Center
Thelwall, Mike; Wilkinson, David
2003-01-01
Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)
GraphStore: A Distributed Graph Storage System for Big Data Networks
ERIC Educational Resources Information Center
Martha, VenkataSwamy
2013-01-01
Networks, such as social networks, are a universal solution for modeling complex problems in real time, especially in the Big Data community. While previous studies have attempted to enhance network processing algorithms, none have paved a path for the development of a persistent storage system. The proposed solution, GraphStore, provides an…
Graph rigidity, cyclic belief propagation, and point pattern matching.
McAuley, Julian J; Caetano, Tibério S; Barbosa, Marconi S
2008-11-01
A recent paper [1] proposed a provably optimal polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph that is also globally rigid but has an advantage over the graph proposed in [1]: Its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus, standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that in [1] when there is noise in the point patterns.
Phase transitions in distributed control systems with multiplicative noise
NASA Astrophysics Data System (ADS)
Allegra, Nicolas; Bamieh, Bassam; Mitra, Partha; Sire, Clément
2018-01-01
Contemporary technological challenges often involve many degrees of freedom in a distributed or networked setting. Three aspects are notable: the variables are usually associated with the nodes of a graph with limited communication resources, hindering centralized control; the communication is subject to noise; and the number of variables can be very large. These three aspects make tools and techniques from statistical physics particularly suitable for the performance analysis of such networked systems in the limit of many variables (analogous to the thermodynamic limit in statistical physics). Perhaps not surprisingly, phase-transition like phenomena appear in these systems, where a sharp change in performance can be observed with a smooth parameter variation, with the change becoming discontinuous or singular in the limit of infinite system size. In this paper, we analyze the so called network consensus problem, prototypical of the above considerations, that has previously been analyzed mostly in the context of additive noise. We show that qualitatively new phase-transition like phenomena appear for this problem in the presence of multiplicative noise. Depending on dimensions, and on the presence or absence of a conservation law, the system performance shows a discontinuous change at a threshold value of the multiplicative noise strength. In the absence of the conservation law, and for graph spectral dimension less than two, the multiplicative noise threshold (the stability margin of the control problem) is zero. This is reminiscent of the absence of robust controllers for certain classes of centralized control problems. Although our study involves a ‘toy’ model, we believe that the qualitative features are generic, with implications for the robust stability of distributed control systems, as well as the effect of roundoff errors and communication noise on distributed algorithms.
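A toy numerical illustration of consensus with multiplicative link noise (Python, numpy and networkx assumed; the ring topology, gain and noise strength are arbitrary, and this discrete-time sketch is only loosely related to the models analysed in the paper):

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, steps, eps, sigma = 200, 5000, 0.05, 0.4      # sigma = multiplicative noise strength
A = nx.to_numpy_array(nx.cycle_graph(n))
x = rng.standard_normal(n)

for _ in range(steps):
    noisy_A = A * (1.0 + sigma * rng.standard_normal(A.shape))   # each link gain fluctuates
    x = x + eps * (noisy_A @ x - noisy_A.sum(axis=1) * x)        # noisy consensus update

print("final dispersion around consensus:", np.var(x))

Increasing sigma past a threshold makes the dispersion grow instead of decay, which is the kind of sharp performance change the abstract describes.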
SpectralNET – an application for spectral graph analysis and visualization
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-01-01
Background Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from . Source code is available upon request. PMID:16236170
SpectralNET--an application for spectral graph analysis and visualization.
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-10-19
Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.
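The Laplacian-eigenvector view that SpectralNET offers can be reproduced in a few lines of Python (numpy and networkx assumed; the karate-club graph is just a convenient stand-in for an uploaded network):

import numpy as np
import networkx as nx

G = nx.karate_club_graph()
L = nx.normalized_laplacian_matrix(G).toarray()
eigvals, eigvecs = np.linalg.eigh(L)

# Skip the trivial first eigenvector; the next two give 2-D coordinates that expose
# global graph structure, the non-linear reduction mentioned above.
coords = eigvecs[:, 1:3]
for node, (x, y) in list(zip(G.nodes, coords))[:5]:
    print(node, round(float(x), 3), round(float(y), 3))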
From brain topography to brain topology: relevance of graph theory to functional neuroscience.
Minati, Ludovico; Varotto, Giulia; D'Incerti, Ludovico; Panzica, Ferruccio; Chan, Dennis
2013-07-10
Although several brain regions show significant specialization, higher functions such as cross-modal information integration, abstract reasoning and conscious awareness are viewed as emerging from interactions across distributed functional networks. Analytical approaches capable of capturing the properties of such networks can therefore enhance our ability to make inferences from functional MRI, electroencephalography and magnetoencephalography data. Graph theory is a branch of mathematics that focuses on the formal modelling of networks and offers a wide range of theoretical tools to quantify specific features of network architecture (topology) that can provide information complementing the anatomical localization of areas responding to given stimuli or tasks (topography). Explicit modelling of the architecture of axonal connections and interactions among areas can furthermore reveal peculiar topological properties that are conserved across diverse biological networks, and highly sensitive to disease states. The field is evolving rapidly, partly fuelled by computational developments that enable the study of connectivity at fine anatomical detail and the simultaneous interactions among multiple regions. Recent publications in this area have shown that graph-based modelling can enhance our ability to draw causal inferences from functional MRI experiments, and support the early detection of disconnection and the modelling of pathology spread in neurodegenerative disease, particularly Alzheimer's disease. Furthermore, neurophysiological studies have shown that network topology has a profound link to epileptogenesis and that connectivity indices derived from graph models aid in modelling the onset and spread of seizures. Graph-based analyses may therefore significantly help understand the bases of a range of neurological conditions. This review is designed to provide an overview of graph-based analyses of brain connectivity and their relevance to disease aimed principally at general neuroscientists and clinicians.
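As a concrete illustration of the kind of topological summary involved, a short Python sketch (numpy and networkx assumed; the 'connectivity matrix' is random noise here, where a real study would use fMRI, EEG or MEG connectivity estimates, and the threshold is arbitrary):

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 90                                    # e.g. number of atlas regions
C = rng.random((n, n)); C = (C + C.T) / 2; np.fill_diagonal(C, 0)
A = (C > 0.8).astype(int)                 # keep only the strongest connections

G = nx.from_numpy_array(A)
print("average clustering:", round(nx.average_clustering(G), 3))
if nx.is_connected(G):
    print("characteristic path length:", round(nx.average_shortest_path_length(G), 3))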
Demongeot, Jacques; Ben Amor, Hedi; Elena, Adrien; Gillois, Pierre; Noual, Mathilde; Sené, Sylvain
2009-01-01
Regulatory interaction networks are often studied on their dynamical side (existence of attractors, study of their stability). Here we also focus on their robustness, that is, their ability to maintain the same spatiotemporal patterns and to resist external perturbations such as losses of nodes or edges in the network's interaction architecture, changes in their environmental boundary conditions, as well as changes in the update schedule (or updating mode) of the states of their elements (e.g., if these elements are genes, their synchronous co-expression mode versus their sequential expression). We define the generic notions of boundary, core, and critical vertex or edge of the underlying interaction graph of the regulatory network, whose disappearance causes dramatic changes in the number and nature of attractors (e.g., passage from a bistable behaviour to a unique periodic regime) or in the range of their basins of stability. The dynamic transition of states is presented in the framework of threshold Boolean automata rules. A panorama of applications at different levels is given: brain and plant morphogenesis, bulbar cardio-respiratory regulation, glycolytic/oxidative metabolic coupling, and finally genetic control of the cell cycle and feather morphogenesis. PMID:20057955
Graph cuts for curvature based image denoising.
Bae, Egil; Shi, Juan; Tai, Xue-Cheng
2011-05-01
Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher-order derivatives have been proposed. Euler's elastica model is one such higher-order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher-order models are complicated and computationally complex. In this paper, we present an efficient minimization algorithm based upon graph cuts for minimizing the energy in Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph-representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
Topological Characterization of Carbon Graphite and Crystal Cubic Carbon Structures.
Gao, Wei; Siddiqui, Muhammad Kamran; Naeem, Muhammad; Rehman, Najma Abdul
2017-09-07
Graph theory is used for modeling, designing, analysis and understanding chemical structures or chemical networks and their properties. The molecular graph is a graph consisting of atoms called vertices and the chemical bond between atoms called edges. In this article, we study the chemical graphs of carbon graphite and crystal structure of cubic carbon. Moreover, we compute and give closed formulas of degree based additive topological indices, namely hyper-Zagreb index, first multiple and second multiple Zagreb indices, and first and second Zagreb polynomials.
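These indices have simple closed-form definitions over the vertex and edge sets, e.g. M1(G) = Σ_v d(v)^2, M2(G) = Σ_{uv∈E} d(u)d(v), and HM(G) = Σ_{uv∈E} (d(u)+d(v))^2. The sketch below computes them for an arbitrary networkx graph; the multiplicative ("multiple") Zagreb indices are included under one common convention, which may differ from the one used in the article, and the carbon graphite / cubic carbon graphs themselves are not constructed here.

    import networkx as nx

    def zagreb_indices(G):
        """Degree-based topological indices of a molecular graph G."""
        d = dict(G.degree())
        m1 = sum(d[v] ** 2 for v in G)                       # first Zagreb index
        m2 = sum(d[u] * d[v] for u, v in G.edges())          # second Zagreb index
        hm = sum((d[u] + d[v]) ** 2 for u, v in G.edges())   # hyper-Zagreb index
        pm1, pm2 = 1, 1
        for u, v in G.edges():                               # multiplicative Zagreb indices (one convention)
            pm1 *= d[u] + d[v]
            pm2 *= d[u] * d[v]
        return {"M1": m1, "M2": m2, "HM": hm, "PM1": pm1, "PM2": pm2}

    # example: a 6-cycle (benzene-like carbon skeleton, hydrogens omitted)
    print(zagreb_indices(nx.cycle_graph(6)))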
An Xdata Architecture for Federated Graph Models and Multi-tier Asymmetric Computing
2014-01-01
Wikipedia, a scale-free random graph (kron), Akamai trace route data, Bitcoin transaction data, and a Twitter follower network. We present results for ... 3x (SSSP on a random graph) and nearly 300x (Akamai and Bitcoin) over the CPU performance of a well-known and widely deployed CPU-based graph ... provided better throughput for smaller frontiers such as roadmaps or the Bitcoin data set. In our work, we have focused on two-phase kernels, but it
Cheng, Ning; Banerjee, Tannista; Qian, Jingjing; Hansen, Richard A
Prior research suggests that authorized generic drugs increase competition and decrease prices, but little empirical evidence supports this conclusion. This study evaluated the impact of authorized generic marketing on brand and generic prices. Longitudinal analysis of the household component of the Medical Expenditure Panel Survey. Interview panels over 12 years, with a new panel each year. For each panel, 5 rounds of household interviews were conducted over 30 months. Nationally representative sample of the U.S. civilian noninstitutionalized population, focusing on people using 1 of 5 antidepressant drugs that became generically available between 2000 and 2011. Drugs and dose/formulations with versus without an authorized generic drug marketed. Multiple linear regression models with lagged variables evaluated the effect of an authorized generic on average inflation-adjusted brand and generic price, adjusting for payment sources, generic entry time, competitor price, and year. During 2000-2011, annual brand antidepressant utilization decreased from 51.47 to 7.52 million prescriptions, and generic antidepressant utilization increased from 0 to 88.83 million prescriptions. Over time, payment per prescription for brand prescriptions increased 25% overall, and generic payments decreased 70% for all payer types. With unadjusted data, after generic entry the average brand price decreased $0.59 per year with and $3.62 per year without an authorized generic in the market. Average generic prices decreased $10.30 per year with and $8.47 per year without an authorized generic in the market. In multiple regression models with lagged variables adjusted for heteroscedasticity, payer source, time since generic entry, competitor price, and year, authorized generics significantly reduced average payment for generic (-$3.03) and brand (-$60.64) prescriptions, and over time this price change slowly diminished. Availability of an authorized generic was associated with reduced average generic and brand price in the antidepressant market, supporting prior evidence. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Nagoor Gani, A; Latha, S R
2016-01-01
A Hamiltonian cycle in a graph is a cycle that visits each node/vertex exactly once. A graph containing a Hamiltonian cycle is called a Hamiltonian graph. There has been considerable research on finding the number of Hamiltonian cycles of a Hamiltonian graph. As the number of vertices and edges grows, it becomes very difficult to keep track of all the different ways through which the vertices are connected. Hence, the analysis of large graphs can be done efficiently with the assistance of a computer system that interprets graphs as matrices. And, of course, a good and well-written algorithm will expedite the analysis even further. The most convenient way to quickly test whether there is an edge between two vertices is to represent graphs using adjacency matrices. In this paper, a new algorithm is proposed to find a fuzzy Hamiltonian cycle using the adjacency matrix and the degree of the vertices of a fuzzy graph. A fuzzy graph structure is also modeled to illustrate the proposed algorithm with a selected air network of IndiGo airlines.
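The adjacency-matrix edge test mentioned above pairs naturally with a backtracking search; a crisp (non-fuzzy) sketch is shown below. It illustrates only the classical adjacency-matrix part; the membership degrees used by the fuzzy algorithm in the paper are not modeled.

    def hamiltonian_cycle(adj):
        """Backtracking search for a Hamiltonian cycle.
        adj is an n x n 0/1 adjacency matrix (list of lists)."""
        n = len(adj)
        path = [0]
        visited = [False] * n
        visited[0] = True

        def extend():
            if len(path) == n:
                return adj[path[-1]][path[0]] == 1   # close the cycle back to the start
            for v in range(1, n):
                if not visited[v] and adj[path[-1]][v] == 1:
                    visited[v] = True
                    path.append(v)
                    if extend():
                        return True
                    path.pop()
                    visited[v] = False
            return False

        return path + [0] if extend() else None

    # 5-cycle: 0-1-2-3-4-0
    A = [[0, 1, 0, 0, 1],
         [1, 0, 1, 0, 0],
         [0, 1, 0, 1, 0],
         [0, 0, 1, 0, 1],
         [1, 0, 0, 1, 0]]
    print(hamiltonian_cycle(A))   # e.g. [0, 1, 2, 3, 4, 0]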
Measuring Graph Comprehension, Critique, and Construction in Science
NASA Astrophysics Data System (ADS)
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-08-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed items to measure graph comprehension, critique, and construction and developed scoring rubrics based on the knowledge integration (KI) framework. We administered the items to over 460 middle school students. We found that the items formed a coherent scale and had good reliability using both item response theory and classical test theory. The KI scoring rubric showed that most students had difficulty linking graphs features to science concepts, especially when asked to critique or construct graphs. In addition, students with limited access to computers as well as those who speak a language other than English at home have less integrated understanding than others. These findings point to the need to increase the integration of graphing into science instruction. The results suggest directions for further research leading to comprehensive assessments of graph understanding.
On a programming language for graph algorithms
NASA Technical Reports Server (NTRS)
Rheinboldt, W. C.; Basili, V. R.; Mesztenyi, C. K.
1971-01-01
An algorithmic language, GRAAL, is presented for describing and implementing graph algorithms of the type primarily arising in applications. The language is based on a set algebraic model of graph theory which defines the graph structure in terms of morphisms between certain set algebraic structures over the node set and arc set. GRAAL is modular in the sense that the user specifies which of these mappings are available with any graph. This allows flexibility in the selection of the storage representation for different graph structures. In line with its set theoretic foundation, the language introduces sets as a basic data type and provides for the efficient execution of all set and graph operators. At present, GRAAL is defined as an extension of ALGOL 60 (revised) and its formal description is given as a supplement to the syntactic and semantic definition of ALGOL. Several typical graph algorithms are written in GRAAL to illustrate various features of the language and to show its applicability.
NASA Astrophysics Data System (ADS)
Kase, Sue E.; Vanni, Michelle; Knight, Joanne A.; Su, Yu; Yan, Xifeng
2016-05-01
Within operational environments decisions must be made quickly based on the information available. Identifying an appropriate knowledge base and accurately formulating a search query are critical tasks for decision-making effectiveness in dynamic situations. The spreading of graph data management tools to access large graph databases is a rapidly emerging research area of potential benefit to the intelligence community. A graph representation provides a natural way of modeling data in a wide variety of domains. Graph structures use nodes, edges, and properties to represent and store data. This research investigates the advantages of information search by graph query initiated by the analyst and interactively refined within the contextual dimensions of the answer space toward a solution. The paper introduces SLQ, a user-friendly graph querying system enabling the visual formulation of schemaless and structureless graph queries. SLQ is demonstrated with an intelligence analyst information search scenario focused on identifying individuals responsible for manufacturing a mosquito-hosted deadly virus. The scenario highlights the interactive construction of graph queries without prior training in complex query languages or graph databases, intuitive navigation through the problem space, and visualization of results in graphical format.
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
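As a concrete illustration of the GTG construction described above, the sketch below samples node positions and weights and connects u and v when (w_u + w_v)·r^(-alpha) >= theta; the exponential weights, the r^(-alpha) threshold function, and the parameter values are illustrative assumptions rather than the paper's exact setup. Recent networkx releases also ship a geographical_threshold_graph generator that can replace the explicit double loop.

    import numpy as np
    import networkx as nx

    def geographical_threshold_graph(n, theta, alpha=2.0, dim=2, seed=0):
        """Sample a GTG: random positions in [0, 1]^dim, exponential node weights,
        edge (u, v) iff (w_u + w_v) * dist(u, v)**(-alpha) >= theta."""
        rng = np.random.default_rng(seed)
        pos = rng.random((n, dim))
        w = rng.exponential(1.0, n)          # randomly chosen node weights
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for u in range(n):
            for v in range(u + 1, n):
                r = np.linalg.norm(pos[u] - pos[v])
                if r > 0 and (w[u] + w[v]) * r ** (-alpha) >= theta:
                    G.add_edge(u, v)
        return G, pos, w

    G, pos, w = geographical_threshold_graph(300, theta=60.0)
    print(G.number_of_nodes(), G.number_of_edges(), nx.number_connected_components(G))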
Approximate ground states of the random-field Potts model from graph cuts
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay
2018-05-01
While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
Modelling disease outbreaks in realistic urban social networks
NASA Astrophysics Data System (ADS)
Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan
2004-05-01
Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
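The bipartite people-locations representation used here is easy to set up directly; the sketch below builds a toy static version and projects it onto the people side to obtain the contact network, whereas the paper's graphs are dynamic and generated from large-scale mobility simulations.

    import networkx as nx
    from networkx.algorithms import bipartite

    # bipartite graph: people on one side, locations on the other, an edge = a visit
    visits = [("alice", "office"), ("bob", "office"), ("bob", "gym"),
              ("carol", "gym"), ("carol", "school"), ("dave", "school")]
    B = nx.Graph()
    B.add_nodes_from({p for p, _ in visits}, bipartite="people")
    B.add_nodes_from({l for _, l in visits}, bipartite="locations")
    B.add_edges_from(visits)

    people = {n for n, d in B.nodes(data=True) if d["bipartite"] == "people"}
    contacts = bipartite.projected_graph(B, people)            # people who shared a location
    locations = bipartite.projected_graph(B, set(B) - people)  # locations linked by shared visitors
    print(sorted(contacts.edges()))
    print(sorted(locations.edges()))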
Flows in a tube structure: Equation on the graph
NASA Astrophysics Data System (ADS)
Panasenko, Grigory; Pileckas, Konstantin
2014-08-01
The steady-state Navier-Stokes equations in thin structures lead to an elliptic second-order equation for the macroscopic pressure on a graph. At the nodes of the graph the pressure satisfies Kirchhoff-type junction conditions. In the non-steady case the problem for the macroscopic pressure on the graph becomes nonlocal in time. In the paper we study the existence and uniqueness of a solution to such a one-dimensional model on the graph for a pipe-wise network. We also prove the exponential decay of the solution with respect to the time variable in the case when the data decay exponentially with respect to time.
Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs.
Shang, Yilun
2015-01-01
Large-scale time-evolving networks have been generated by many natural and technological applications, posing challenges for computation and modeling. Thus, it is of theoretical and practical significance to probe mathematical tools tailored for evolving networks. In this paper, on top of the dynamic Estrada index, we study the dynamic Laplacian Estrada index and the dynamic normalized Laplacian Estrada index of evolving graphs. Using linear algebra techniques, we established general upper and lower bounds for these graph-spectrum-based invariants through a couple of intuitive graph-theoretic measures, including the number of vertices or edges. Synthetic random evolving small-world networks are employed to show the relevance of the proposed dynamic Estrada indices. It is found that neither the static snapshot graphs nor the aggregated graph can approximate the evolving graph itself, indicating the fundamental difference between the static and dynamic Estrada indices.
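For a single snapshot graph, the quantities involved reduce to spectral sums: EE(G) = Σ_i e^{λ_i} over adjacency eigenvalues, LEE(G) = Σ_i e^{μ_i} over Laplacian eigenvalues, and analogously for the normalized Laplacian. The static sketch below computes these for two graphs of equal density; the dynamic indices studied in the paper aggregate such quantities over a sequence of snapshots, which is not reproduced here.

    import numpy as np
    import networkx as nx

    def estrada_indices(G):
        """Estrada-type indices from the spectra of A, L, and the normalized Laplacian."""
        A = nx.to_numpy_array(G)
        L = nx.laplacian_matrix(G).toarray().astype(float)
        NL = nx.normalized_laplacian_matrix(G).toarray().astype(float)
        ee = np.exp(np.linalg.eigvalsh(A)).sum()     # Estrada index
        lee = np.exp(np.linalg.eigvalsh(L)).sum()    # Laplacian Estrada index
        nee = np.exp(np.linalg.eigvalsh(NL)).sum()   # normalized Laplacian Estrada index
        return ee, lee, nee

    # compare a small-world snapshot with a random regular graph of equal density
    ws = nx.watts_strogatz_graph(100, 6, 0.1, seed=3)
    rr = nx.random_regular_graph(6, 100, seed=3)
    print(estrada_indices(ws))
    print(estrada_indices(rr))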
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Generic NICA-Donnan model parameters for metal-ion binding by humic substances.
Milne, Christopher J; Kinniburgh, David G; van Riemsdijk, Willem H; Tipping, Edward
2003-03-01
A total of 171 datasets of literature and experimental data for metal-ion binding by fulvic and humic acids have been digitized and re-analyzed using the NICA-Donnan model. Generic parameter values have been derived that can be used for modeling in the absence of specific metal-ion binding measurements. These values complement the previously derived generic descriptions of proton binding. For ions where the ranges of pH, concentration, and ionic strength conditions are well covered by the available data, the generic parameters successfully describe the metal-ion binding behavior across a very wide range of conditions and for different humic and fulvic acids. Where published data for other metal ions are too sparse to constrain the model well, generic parameters have been estimated by interpolating trends observable in the parameter values of the well-defined data. Recommended generic NICA-Donnan model parameters are provided for 23 metal ions (Al, Am, Ba, Ca, Cd, Cm, Co, Cr(III), Cu, Dy, Eu, Fe(II), Fe(III), Hg, Mg, Mn, Ni, Pb, Sr, Th(IV), U(VI)O2, V(III)O, and Zn) for both fulvic and humic acids. These parameters probably represent the best NICA-Donnan description of metal-ion binding that can be achieved using existing data.
Zhang, Pin; Liang, Yanmei; Chang, Shengjiang; Fan, Hailun
2013-08-01
Accurate segmentation of renal tissues in abdominal computed tomography (CT) image sequences is an indispensable step for computer-aided diagnosis and pathology detection in clinical applications. In this study, the goal is to develop a radiology tool to extract renal tissues in CT sequences for the management of renal diagnosis and treatments. In this paper, the authors propose a new graph-cuts-based active contours model with an adaptive width of narrow band for kidney extraction in CT image sequences. Based on graph cuts and contextual continuity, the segmentation is carried out slice-by-slice. In the first stage, the middle two adjacent slices in a CT sequence are segmented interactively based on the graph cuts approach. Subsequently, the deformable contour evolves toward the renal boundaries by the proposed model for the kidney extraction of the remaining slices. In this model, the energy function combining boundary with regional information is optimized in the constructed graph and the adaptive search range is determined by contextual continuity and the object size. In addition, in order to reduce the complexity of the min-cut computation, the nodes in the graph only have n-links for fewer edges. A total of 30 CT image sequences with normal and pathological renal tissues are used to evaluate the accuracy and effectiveness of our method. The experimental results reveal that the average Dice similarity coefficient of these image sequences is from 92.37% to 95.71% and the corresponding standard deviation for each dataset is from 2.18% to 3.87%. In addition, the average automatic segmentation time for one kidney in each slice is about 0.36 s. Integrating the graph-cuts-based active contours model with contextual continuity, the algorithm takes advantage of energy minimization and the characteristics of image sequences. The proposed method achieves effective results for kidney segmentation in CT sequences.
Graph mining for next generation sequencing: leveraging the assembly graph for biological insights.
Warnke-Sommer, Julia; Ali, Hesham
2016-05-06
The assembly of Next Generation Sequencing (NGS) reads remains a challenging task. This is especially true for the assembly of metagenomics data that originate from environmental samples potentially containing hundreds to thousands of unique species. The principal objective of current assembly tools is to assemble NGS reads into contiguous stretches of sequence called contigs while maximizing for both accuracy and contig length. The end goal of this process is to produce longer contigs with the major focus being on assembly only. Sequence read assembly is an aggregative process, during which read overlap relationship information is lost as reads are merged into longer sequences or contigs. The assembly graph is information-rich and capable of capturing the genomic architecture of an input read data set. We have developed a novel hybrid graph in which nodes represent sequence regions at different levels of granularity. This model, utilized in the assembly and analysis pipeline Focus, presents a concise yet feature-rich view of a given input data set, allowing for the extraction of biologically relevant graph structures for graph mining purposes. Focus was used to create hybrid graphs to model metagenomics data sets obtained from the gut microbiomes of five individuals with Crohn's disease and eight healthy individuals. Repetitive and mobile genetic elements are found to be associated with hybrid graph structure. Using graph mining techniques, a comparative study of the Crohn's disease and healthy data sets was conducted with a focus on antibiotic resistance genes associated with transposase genes. Results demonstrated significant differences in the phylogenetic distribution of categories of antibiotic resistance genes in the healthy and diseased patients. Focus was also evaluated as a pure assembly tool and produced excellent results when compared against the Meta-velvet, Omega, and UD-IDBA assemblers. Mining the hybrid graph can reveal biological phenomena captured by its structure. We demonstrate the advantages of considering assembly graphs as data-mining support in addition to their role as frameworks for assembly.
NASA Astrophysics Data System (ADS)
Holme, Petter; Saramäki, Jari
2012-10-01
A great variety of systems in nature, society and technology-from the web of sexual contacts to the Internet, from the nervous system to power grids-can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via e-mail, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names-temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology-rather, we want to make papers readable across disciplines.
Agudelo, M; Rodriguez, C A; Pelaez, C A; Vesga, O
2014-01-01
Several studies with animal models have demonstrated that bioequivalence of generic products of antibiotics like vancomycin, as currently defined, does not guarantee therapeutic equivalence. However, the amounts and characteristics of impurities and degradation products in these formulations do not violate the requirements of the U.S. Pharmacopeia (USP). Here, we provide experimental data with three generic products of meropenem that help in understanding how these apparently insignificant chemical differences affect the in vivo efficacy. Meropenem generics were compared with the innovator in vitro by microbiological assay, susceptibility testing, and liquid chromatography/mass spectrometry (LC/MS) analysis and in vivo with the neutropenic guinea pig soleus infection model (Pseudomonas aeruginosa) and the neutropenic mouse thigh (P. aeruginosa), brain (P. aeruginosa), and lung (Klebsiella pneumoniae) infection models, adding the dehydropeptidase I (DHP-I) inhibitor cilastatin in different proportions to the carbapenem. We found that the concentration and potency of the active pharmaceutical ingredient, in vitro susceptibility testing, and mouse pharmacokinetics were identical for all products; however, two generics differed significantly from the innovator in the guinea pig and mouse models, while the third generic was therapeutically equivalent under all conditions. Trisodium adducts in a bioequivalent generic made it more susceptible to DHP-I hydrolysis and less stable at room temperature, explaining its therapeutic nonequivalence. We conclude that the therapeutic nonequivalence of generic products of meropenem is due to greater susceptibility to DHP-I hydrolysis. These failing generics are compliant with USP requirements and would remain undetectable under current regulations.
Competitive epidemic spreading over arbitrary multilayer networks.
Darabi Sahneh, Faryad; Scoglio, Caterina
2014-06-01
This study extends the Susceptible-Infected-Susceptible (SIS) epidemic model for single-virus propagation over an arbitrary graph to a Susceptible-Infected by virus 1-Susceptible-Infected by virus 2-Susceptible (SI1SI2S) epidemic model of two exclusive, competitive viruses over a two-layer network with generic structure, where network layers represent the distinct transmission routes of the viruses. We find analytical expressions determining extinction, coexistence, and absolute dominance of the viruses after we introduce the concepts of survival threshold and absolute-dominance threshold. The main outcome of our analysis is the discovery and proof of a region for long-term coexistence of competitive viruses in nontrivial multilayer networks. We show coexistence is impossible if network layers are identical yet possible if network layers are distinct. Not only do we rigorously prove a region of coexistence, but we also quantify it via the interrelation of central nodes across the network layers. Little to no overlapping of the layers' central nodes is the key determinant of coexistence. For example, we show both analytically and numerically that positive correlation of network layers makes it difficult for a virus to survive, while in a network with negatively correlated layers, survival is easier, but total removal of the other virus is more difficult.
Leveraging percolation theory to single out influential spreaders in networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Castellano, Claudio
2016-06-01
Among the consequences of the disordered interaction topology underlying many social, technological, and biological systems, a particularly important one is that some nodes, just because of their position in the network, may have a disproportionate effect on dynamical processes mediated by the complex interaction pattern. For example, the early adoption of a commercial product by an opinion leader in a social network may change its fate or just a few superspreaders may determine the virality of a meme in social media. Despite many recent efforts, the formulation of an accurate method to optimally identify influential nodes in complex network topologies remains an unsolved challenge. Here, we present the exact solution of the problem for the specific, but highly relevant, case of the susceptible-infected-removed (SIR) model for epidemic spreading at criticality. By exploiting the mapping between bond percolation and the static properties of the SIR model, we prove that the recently introduced nonbacktracking centrality is the optimal criterion for the identification of influential spreaders in locally tree-like networks at criticality. By means of simulations on synthetic networks and on a very extensive set of real-world networks, we show that the nonbacktracking centrality is a highly reliable metric to identify top influential spreaders also in generic graphs not embedded in space and for noncritical spreading.
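The nonbacktracking centrality referred to above comes from the leading eigenvector of the Hashimoto (non-backtracking) matrix, indexed by directed edges; a node's score is the sum of the eigenvector entries on its incoming directed edges. The dense sketch below is only suitable for small graphs and uses the karate club graph as an illustrative example.

    import numpy as np
    import networkx as nx

    def nonbacktracking_centrality(G):
        """Centrality from the leading eigenvector of the non-backtracking (Hashimoto) matrix."""
        darts = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
        index = {d: i for i, d in enumerate(darts)}
        m = len(darts)
        B = np.zeros((m, m))
        for (u, v), i in index.items():
            for w in G.neighbors(v):
                if w != u:                       # forbid immediate backtracking v -> u
                    B[i, index[(v, w)]] = 1.0
        vals, vecs = np.linalg.eig(B)
        lead = np.abs(vecs[:, np.argmax(vals.real)].real)   # Perron vector, sign fixed
        score = {v: 0.0 for v in G}
        for (u, v), i in index.items():
            score[v] += lead[i]                  # sum over darts pointing into v
        return score

    G = nx.karate_club_graph()
    top = sorted(nonbacktracking_centrality(G).items(), key=lambda kv: -kv[1])[:5]
    print(top)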
New Graph Models and Algorithms for Detecting Salient Structures from Cluttered Images
2010-02-24
Development of graph models and algorithms to detect boundaries that show certain levels of symmetry, an important geometric property of many...
Leveraging consumer's behaviour to promote generic drugs in Italy.
Zerbini, Cristina; Luceri, Beatrice; Vergura, Donata Tania
2017-04-01
The aim of this study was to address the lack of a grounded exploration of the consumer's decision-making process in the context of generic drugs. To this end, a model for studying consumers' intention to purchase generic drugs was developed within the theoretical framework of the Theory of Planned Behaviour (TPB). An online survey of 2,222 Italian people who had bought drugs in the past was conducted. The proposed model was tested through structural equation modelling (SEM). Almost all the constructs considered in the model, except the perceived behavioural control, contribute to explaining the consumer's purchase intention of generic drugs, after controlling for demographic variables (age, income, education). Specifically, attitude, subjective norm, past behaviour, self-identity and trust in the pharmacist have a positive influence on the intention to buy generic drugs. On the contrary, perceived risk towards products and brand sensitivity act negatively. The results of the present study could be useful to public policy makers in developing effective policies and educational campaigns aimed at promoting generic drugs. Specifically, marketing efforts should be directed to inform consumers about the generic drugs' characteristics to mitigate the perceived risk towards these products and to raise awareness during their decision-making process. Copyright © 2017 Elsevier B.V. All rights reserved.
Volume simplicity constraint in the Engle-Livine-Pereira-Rovelli spin foam model
NASA Astrophysics Data System (ADS)
Bahr, Benjamin; Belov, Vadim
2018-04-01
We propose a quantum version of the quadratic volume simplicity constraint for the Engle-Livine-Pereira-Rovelli spin foam model. It relies on a formula for the volume of 4-dimensional polyhedra, depending on its bivectors and the knotting class of its boundary graph. While this leads to no further condition for the 4-simplex, the constraint becomes nontrivial for more complicated boundary graphs. We show that, in the semiclassical limit of the hypercuboidal graph, the constraint turns into the geometricity condition observed recently by several authors.
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
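The simulation step described above can be sketched as follows: phase oscillators coupled through a structural connectivity matrix are integrated forward, a functional connectivity matrix is built from correlations of the simulated signals, and that matrix is thresholded to a binary graph whose topology can then be compared with the empirical one. All parameter values below (coupling, frequencies, threshold density, and the random stand-in structural matrix) are illustrative assumptions, not those of the study.

    import numpy as np

    def kuramoto_functional_network(C, k=5.0, dt=0.01, steps=20000, density=0.15, seed=0):
        """Integrate Kuramoto oscillators coupled by structural matrix C (Euler scheme)
        and threshold the correlation matrix of sin(theta) into a binary functional graph."""
        rng = np.random.default_rng(seed)
        n = C.shape[0]
        omega = rng.normal(1.0, 0.1, n)        # natural frequencies (illustrative)
        theta = rng.uniform(0, 2 * np.pi, n)
        signal = np.empty((steps, n))
        for t in range(steps):
            coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
            theta = theta + dt * (omega + k * coupling)
            signal[t] = np.sin(theta)
        fc = np.corrcoef(signal.T)             # simulated functional connectivity
        np.fill_diagonal(fc, 0.0)
        thr = np.quantile(np.abs(fc), 1.0 - density)   # keep the strongest links only
        return (np.abs(fc) >= thr).astype(int)

    rng = np.random.default_rng(1)
    C = rng.random((66, 66)); C = (C + C.T) / 2   # stand-in for a structural connectivity matrix
    A = kuramoto_functional_network(C)
    print(A.sum() // 2, "edges in the thresholded functional graph")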
Molecular clock on a neutral network.
Raval, Alpan
2007-09-28
The number of fixed mutations accumulated in an evolving population often displays a variance that is significantly larger than the mean (the overdispersed molecular clock). By examining a generic evolutionary process on a neutral network of high-fitness genotypes, we establish a formalism for computing all cumulants of the full probability distribution of accumulated mutations in terms of graph properties of the neutral network, and use the formalism to prove overdispersion of the molecular clock. We further show that significant overdispersion arises naturally in evolution when the neutral network is highly sparse, exhibits large global fluctuations in neutrality, and small local fluctuations in neutrality. The results are also relevant for elucidating aspects of neutral network topology from empirical measurements of the substitution process.
Molecular Clock on a Neutral Network
NASA Astrophysics Data System (ADS)
Raval, Alpan
2007-09-01
The number of fixed mutations accumulated in an evolving population often displays a variance that is significantly larger than the mean (the overdispersed molecular clock). By examining a generic evolutionary process on a neutral network of high-fitness genotypes, we establish a formalism for computing all cumulants of the full probability distribution of accumulated mutations in terms of graph properties of the neutral network, and use the formalism to prove overdispersion of the molecular clock. We further show that significant overdispersion arises naturally in evolution when the neutral network is highly sparse, exhibits large global fluctuations in neutrality, and small local fluctuations in neutrality. The results are also relevant for elucidating aspects of neutral network topology from empirical measurements of the substitution process.
Computing Fourier integral operators with caustics
NASA Astrophysics Data System (ADS)
Caday, Peter
2016-12-01
Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards and forward and inverse discrete Fourier transforms, which can be computed in O(N^(n+(n-1)/2) log N) time for an n-dimensional FIO. The same preprocessed data also allows computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm’s output, and easily extendible MATLAB/C++ source code is available from the author.
Evaluating approaches to find exon chains based on long reads.
Kuosmanen, Anna; Norri, Tuukka; Mäkinen, Veli
2018-05-01
Transcript prediction can be modeled as a graph problem where exons are modeled as nodes and reads spanning two or more exons are modeled as exon chains. Pacific Biosciences third-generation sequencing technology produces significantly longer reads than earlier second-generation sequencing technologies, which gives valuable information about longer exon chains in a graph. However, with the high error rates of third-generation sequencing, aligning long reads correctly around the splice sites is a challenging task. Incorrect alignments lead to spurious nodes and arcs in the graph, which in turn lead to incorrect transcript predictions. We survey several approaches to find the exon chains corresponding to long reads in a splicing graph, and experimentally study the performance of these methods using simulated data to allow for sensitivity/precision analysis. Our experiments show that short reads from second-generation sequencing can be used to significantly improve exon chain correctness either by error-correcting the long reads before splicing graph creation, or by using them to create a splicing graph on which the long-read alignments are then projected. We also study the memory and time consumption of various modules, and show that accurate exon chains lead to significantly increased transcript prediction accuracy. The simulated data and in-house scripts used for this article are available at http://www.cs.helsinki.fi/group/gsa/exon-chains/exon-chains-bib.tar.bz2.
Dowding, Dawn; Merrill, Jacqueline A; Onorato, Nicole; Barrón, Yolanda; Rosati, Robert J; Russell, David
2018-02-01
To explore home care nurses' numeracy and graph literacy and their relationship to comprehension of visualized data. A multifactorial experimental design using online survey software. Nurses were recruited from 2 Medicare-certified home health agencies. Numeracy and graph literacy were measured using validated scales. Nurses were randomized to 1 of 4 experimental conditions. Each condition displayed data for 1 of 4 quality indicators, in 1 of 4 different visualized formats (bar graph, line graph, spider graph, table). A mixed linear model measured the impact of numeracy, graph literacy, and display format on data understanding. In all, 195 nurses took part in the study. They were slightly more numerate and graph literate than the general population. Overall, nurses understood information presented in bar graphs most easily (88% correct), followed by tables (81% correct), line graphs (77% correct), and spider graphs (41% correct). Individuals with low numeracy and low graph literacy had poorer comprehension of information displayed across all formats. High graph literacy appeared to enhance comprehension of data regardless of numeracy capabilities. Clinical dashboards are increasingly used to provide information to clinicians in visualized format, under the assumption that visual display reduces cognitive workload. Results of this study suggest that nurses' comprehension of visualized information is influenced by their numeracy, graph literacy, and the display format of the data. Individual differences in numeracy and graph literacy skills need to be taken into account when designing dashboard technology. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve
2017-12-01
In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M²IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag, (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification, and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of the M²IL. Experiments and analyses in many practical applications prove the effectiveness of the M²IL.
Computing Strongly Connected Components in the Streaming Model
NASA Astrophysics Data System (ADS)
Laura, Luigi; Santaroni, Federico
In this paper we present the first algorithm to compute the Strongly Connected Components of a graph in the datastream model (W-Stream), where the graph is represented by a stream of edges and we are allowed to produce intermediate output streams. The algorithm is simple, effective, and can be implemented with a few lines of code: it looks at each edge in the stream, and selects the appropriate action with respect to a tree T, representing the graph connectivity seen so far. We analyze the theoretical properties of the algorithm: correctness, memory occupation (O(n log n)), per-item processing time (bounded by the current height of T), and number of passes (bounded by the maximal height of T). We conclude by presenting a brief experimental evaluation of the algorithm against massive synthetic and real graphs that confirms its effectiveness: with graphs with up to 100M nodes and 4G edges, only a few passes are needed, and millions of edges per second are processed.
USDA-ARS?s Scientific Manuscript database
This study introduces a simple generic model, the Generic Pest Forecast System (GPFS), for simulating the relative populations of non-indigenous arthropod pests in space and time. The model was designed to calculate the population index or relative population using hourly weather data as influenced by...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winlaw, Manda; De Sterck, Hans; Sanders, Geoffrey
In very simple terms a network can be defined as a collection of points joined together by lines. Thus, networks can be used to represent connections between entities in a wide variety of fields including engineering, science, medicine, and sociology. Many large real-world networks share a surprising number of properties, leading to a strong interest in model development research, and techniques for building synthetic networks that capture these similarities and replicate real-world graphs have been developed. Modeling these real-world networks serves two purposes. First, building models that mimic the patterns and properties of real networks helps to understand the implications of these patterns and helps determine which patterns are important. If we develop a generative process to synthesize real networks we can also examine which growth processes are plausible and which are not. Secondly, high-quality, large-scale network data is often not available, because of economic, legal, technological, or other obstacles [7]. Thus, there are many instances where the systems of interest cannot be represented by a single exemplar network. As one example, consider the field of cybersecurity, where systems require testing across diverse threat scenarios and validation across diverse network structures. In these cases, where there is no single exemplar network, the systems must instead be modeled as a collection of networks in which the variation among them may be just as important as their common features. By developing processes to build synthetic models, so-called graph generators, we can build synthetic networks that capture both the essential features of a system and realistic variability. Then we can use such synthetic graphs to perform tasks such as simulations, analysis, and decision making. We can also use synthetic graphs to performance test graph analysis algorithms, including clustering algorithms and anomaly detection algorithms.
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to solve these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we suggest a method based on a general and flexible factor graph model, which can capture long-range correlations in Bag-of-Words features by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we explore a parameter learning algorithm based on the gradient descent and Loopy Sum-Product algorithms for the factor graph model. Experimental results on the Graz 02 dataset show that the recognition performance of our method in precision and recall is better than that of a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
The bilinear-biquadratic model on the complete graph
NASA Astrophysics Data System (ADS)
Jakab, Dávid; Szirmai, Gergely; Zimborás, Zoltán
2018-03-01
We study the spin-1 bilinear-biquadratic model on the complete graph of N sites, i.e. when each spin is interacting with every other spin with the same strength. Because of its complete permutation invariance, this Hamiltonian can be rewritten as the linear combination of the quadratic Casimir operators of \
Graph theory applied to noise and vibration control in statistical energy analysis models.
Guasch, Oriol; Cortés, Lluís
2009-06-01
A fundamental aspect of noise and vibration control in statistical energy analysis (SEA) models consists in first identifying and then reducing the energy flow paths between subsystems. In this work, it is proposed to make use of some results from graph theory to address both issues. On the one hand, linear and path algebras applied to adjacency matrices of SEA graphs are used to determine the existence of any order paths between subsystems, counting and labeling them, finding extremal paths, or determining the power flow contributions from groups of paths. On the other hand, a strategy is presented that makes use of graph cut algorithms to reduce the energy flow from a source subsystem to a receiver one, modifying as few internal and coupling loss factors as possible.
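The adjacency-algebra step mentioned above rests on the standard fact that powers of the adjacency matrix count walks: entry (i, j) of A^k is the number of k-step transmission paths (walks) from subsystem i to subsystem j, so nonzero entries certify the existence of paths of that order. A small sketch, with a toy three-subsystem SEA graph as an assumed example:

    import numpy as np

    def walk_counts(A, max_order):
        """Number of length-k walks between every pair of subsystems, for k = 1..max_order.
        A is the adjacency matrix of the SEA graph (1 where two subsystems are coupled)."""
        A = np.asarray(A, dtype=np.int64)
        counts, P = {}, np.eye(A.shape[0], dtype=np.int64)
        for k in range(1, max_order + 1):
            P = P @ A
            counts[k] = P.copy()          # counts[k][i, j] = number of k-step walks i -> j
        return counts

    # three subsystems: source 0 coupled to 1, 1 coupled to receiver 2
    A = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]
    c = walk_counts(A, 4)
    print(c[2][0][2], "second-order path(s) from subsystem 0 to subsystem 2")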
Numerical simulation of electron scattering by nanotube junctions
NASA Astrophysics Data System (ADS)
Brüning, J.; Grikurov, V. E.
2008-03-01
We demonstrate the possibility of computing the intensity of electronic transport through various junctions of three-dimensional metallic nanotubes. In particular, we observe that the magnetic field can be used to control the switching of electrons in Y-type junctions. Keeping in mind the asymptotic modeling of reliable nanostructures by quantum graphs, we conjecture that the scattering matrix of the graph should be the same as the scattering matrix of its nanosize prototype. The numerical computation of the latter gives a method for determining the "gluing" conditions at a graph. Exploring this conjecture, we show that the Kirchhoff conditions (which are commonly used on graphs) cannot be applied to model reliable junctions. This work is a natural extension of the paper [1], but it is written in a self-consistent manner.
Simulator for heterogeneous dataflow architectures
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
1993-01-01
A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri Net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady state time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case for only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data, including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contain the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty in being indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and their neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and for fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and the index structure are scalable to large databases with smaller indexing size, faster indexing construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
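The core idea sketched above, extracting a local feature per node, storing feature counts in a hash table, and defining a kernel from matching features, can be illustrated with a deliberately coarse feature (node label plus the multiset of neighbor labels). The toy molecules and the min-count kernel below are assumptions for illustration; the actual G-hash features and kernel are richer.

    import networkx as nx
    from collections import Counter

    def local_features(G, label="label"):
        """One coarse local feature per node: its label plus the multiset of neighbor labels."""
        feats = Counter()
        for v in G:
            key = (G.nodes[v][label], tuple(sorted(G.nodes[u][label] for u in G.neighbors(v))))
            feats[key] += 1
        return feats

    def hash_kernel(G1, G2):
        """Toy kernel: count matching local features between two labeled graphs."""
        f1, f2 = local_features(G1), local_features(G2)
        return sum(min(f1[k], f2[k]) for k in f1.keys() & f2.keys())

    def make(labels, edges):
        G = nx.Graph(edges)
        nx.set_node_attributes(G, labels, "label")
        return G

    propane = make({0: "C", 1: "C", 2: "C"}, [(0, 1), (1, 2)])
    ethanol = make({0: "C", 1: "C", 2: "O"}, [(0, 1), (1, 2)])
    print(hash_kernel(propane, propane), hash_kernel(propane, ethanol))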
Directed Laplacians For Fuzzy Autocatalytic Set Of Fuzzy Graph Type-3 Of An Incineration Process
NASA Astrophysics Data System (ADS)
Ahmad, Tahir; Baharun, Sabariah; Bakar, Sumarni Abu
2010-11-01
Fuzzy Autocatalytic Set (FACS) of Fuzzy Graph Type-3 was used in the modeling of a clinical waste incineration process in Malacca. FACS provided more accurate explanations of the incineration process than a crisp graph. In this paper we explore FACS further. Directed and combinatorial Laplacians of FACS are developed and their basic properties are presented.
Coloring geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Percus, Allon; Muller, Tobias
We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = (1 + o(1)) ln n / ln ln n. Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.
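The scaling discussed above can be probed empirically by greedy-coloring sampled GTG instances, which gives an upper bound on the chromatic number, and comparing it with ln n / ln ln n. The sketch below uses networkx's built-in GTG generator; the value of theta is an illustrative assumption, not the connectivity-threshold setting analyzed in the paper.

    import math
    import networkx as nx

    def greedy_chromatic_upper_bound(n, theta, trials=5):
        """Average greedy-coloring upper bound on the chromatic number of sampled GTGs."""
        bounds = []
        for t in range(trials):
            # networkx's GTG generator: random positions, random node weights, and a
            # distance-dependent threshold rule (see its docs for the default threshold function)
            G = nx.geographical_threshold_graph(n, theta, seed=t)
            coloring = nx.greedy_color(G, strategy="largest_first")
            bounds.append(1 + max(coloring.values()))
        return sum(bounds) / len(bounds)

    n, theta = 2000, 100.0            # theta is illustrative, not the connectivity threshold
    print("greedy upper bound:", greedy_chromatic_upper_bound(n, theta))
    print("ln n / ln ln n:", round(math.log(n) / math.log(math.log(n)), 2))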
Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.
Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu
2016-11-01
In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
On Bipartite Graphs Trees and Their Partial Vertex Covers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caskurlu, Bugra; Mkrtchyan, Vahan; Parekh, Ojas D.
2015-03-01
Graphs can be used to model risk management in various systems. Particularly, Caskurlu et al. in [7] have considered a system, which has threats, vulnerabilities and assets, and which essentially represents a tripartite graph. The goal in this model is to reduce the risk in the system below a predefined risk threshold level. One can either restrict the permissions of the users or encapsulate the system assets. These two strategies correspond to deleting a minimum number of elements corresponding to vulnerabilities and assets, such that the flow between threats and assets is reduced below the predefined threshold level. It can be shown that the main goal in this risk management system can be formulated as a Partial Vertex Cover problem on bipartite graphs. It is well-known that the Vertex Cover problem is in P on bipartite graphs; however, the computational complexity of the Partial Vertex Cover problem on bipartite graphs has remained open. In this paper, we establish that the Partial Vertex Cover problem is NP-hard on bipartite graphs, which was also recently independently demonstrated [N. Apollonio and B. Simeone, Discrete Appl. Math., 165 (2014), pp. 37–48; G. Joret and A. Vetta, preprint, arXiv:1211.4853v1 [cs.DS], 2012]. We then identify interesting special cases of bipartite graphs, for which the Partial Vertex Cover problem, the closely related Budgeted Maximum Coverage problem, and their weighted extensions can be solved in polynomial time. We also present an 8/9-approximation algorithm for the Budgeted Maximum Coverage problem in the class of bipartite graphs. We show that this matches and resolves the integrality gap of the natural LP relaxation of the problem and improves upon a recent 4/5-approximation.
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and accuracies of predicted breeding values were found. Our models account for correlation of marker effects and make it possible to accommodate general structures as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
Graphing evolutionary pattern and process: a history of techniques in archaeology and paleobiology.
Lyman, R Lee
2009-02-01
Graphs displaying evolutionary patterns are common in paleontology and in United States archaeology. Both disciplines subscribed to a transformational theory of evolution and graphed evolution as a sequence of archetypes in the late nineteenth and early twentieth centuries. U.S. archaeologists in the second decade of the twentieth century, and paleontologists shortly thereafter, developed distinct graphic styles that reflected the Darwinian variational model of evolution. Paleobiologists adopted the view of a species as a set of phenotypically variant individuals and graphed those variations either as central tendencies or as histograms of frequencies of variants. Archaeologists presumed their artifact types reflected cultural norms of prehistoric artisans and the frequency of specimens in each type reflected human choice and type popularity. They graphed cultural evolution as shifts in frequencies of specimens representing each of several artifact types. Confusion of pattern and process is exemplified by a paleobiologist misinterpreting the process illustrated by an archaeological graph, and an archaeologist misinterpreting the process illustrated by a paleobiological graph. Each style of graph displays particular evolutionary patterns and implies particular evolutionary processes. Graphs of a multistratum collection of prehistoric mammal remains and a multistratum collection of artifacts demonstrate that many graph styles can be used for both kinds of collections.
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this area has been the design of efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
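The linear-assignment baseline referred to above can be reproduced in spirit with an off-the-shelf solver; the unary compatibility function used here (negative feature distance) is a toy choice, not the learned compatibility function from the cited paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nodes(feat_g, feat_h):
    """Linear assignment on node compatibilities: minimize total feature distance
    between matched nodes (edge compatibilities are ignored in this baseline)."""
    cost = np.linalg.norm(feat_g[:, None, :] - feat_h[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # Hungarian-type solver
    return list(zip(rows, cols))

rng = np.random.default_rng(0)
feat_g = rng.random((6, 4))                                       # node features of graph G
feat_h = feat_g[rng.permutation(6)] + 0.01 * rng.random((6, 4))   # noisy permuted copy of G
print(match_nodes(feat_g, feat_h))
```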
Multi-Agent Graph Patrolling and Partitioning
NASA Astrophysics Data System (ADS)
Elor, Y.; Bruckstein, A. M.
2012-12-01
We introduce a novel multi-agent patrolling algorithm inspired by the behavior of gas-filled balloons. Very low capability ant-like agents are considered with the task of patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, every agent assuming the responsibility for patrolling its subgraph. Balanced graph partition is an emergent behavior due to the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear with the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it. However, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex using the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. In case of weighted graphs, the patrol quality is at least (1/2) lmin/lmax of the optimal, where lmax (lmin) is the longest (shortest) edge in the graph.
Analyzing locomotion synthesis with feature-based motion graphs.
Mahmudi, Mentar; Kallmann, Marcelo
2013-05-01
We propose feature-based motion graphs for realistic locomotion synthesis among obstacles. Among several advantages, feature-based motion graphs achieve improved results in search queries, eliminate the need of postprocessing for foot skating removal, and reduce the computational requirements in comparison to traditional motion graphs. Our contributions are threefold. First, we show that choosing transitions based on relevant features significantly reduces graph construction time and leads to improved search performances. Second, we employ a fast channel search method that confines the motion graph search to a free channel with guaranteed clearance among obstacles, achieving faster and improved results that avoid expensive collision checking. Lastly, we present a motion deformation model based on Inverse Kinematics applied over the transitions of a solution branch. Each transition is assigned a continuous deformation range that does not exceed the original transition cost threshold specified by the user for the graph construction. The obtained deformation improves the reachability of the feature-based motion graph and in turn also reduces the time spent during search. The results obtained by the proposed methods are evaluated and quantified, and they demonstrate significant improvements in comparison to traditional motion graph techniques.
Information Dynamics in Networks: Models and Algorithms
2016-09-13
We investigated the appropriateness of existing mathematical models for explaining the structure of retweet cascades on Twitter; we investigated how to detect spam accounts on Facebook and other social networks by graph analytics; and finally we investigated how to design ... networks. Related paper: A Note on Modeling Retweet Cascades on Twitter, Workshop on Algorithms and Models for the Web Graph, 09-DEC-15.
A topo-graph model for indistinct target boundary definition from anatomical images.
Cui, Hui; Wang, Xiuying; Zhou, Jianlong; Gong, Guanzhong; Eberl, Stefan; Yin, Yong; Wang, Lisheng; Feng, Dagan; Fulham, Michael
2018-06-01
It can be challenging to delineate the target object in anatomical imaging when the object boundaries are difficult to discern due to the low contrast or overlapping intensity distributions from adjacent tissues. We propose a topo-graph model to address this issue. The first step is to extract a topographic representation that reflects multiple levels of topographic information in an input image. We then define two types of node connections - nesting branches (NBs) and geodesic edges (GEs). NBs connect nodes corresponding to initial topographic regions and GEs link the nodes at a detailed level. The weights for NBs are defined to measure the similarity of regional appearance, and weights for GEs are defined with geodesic and local constraints. NBs contribute to the separation of topographic regions and the GEs assist the delineation of uncertain boundaries. Final segmentation is achieved by calculating the relevance of the unlabeled nodes to the labels by the optimization of a graph-based energy function. We test our model on 47 low contrast CT studies of patients with non-small cell lung cancer (NSCLC), 10 contrast-enhanced CT liver cases and 50 breast and abdominal ultrasound images. The validation criteria are the Dice similarity coefficient and the Hausdorff distance. Student's t-tests showed that our model outperformed the graph models with pixel-only, pixel and regional, neighboring and radial connections (p-values <0.05). Our findings show that the topographic representation and topo-graph model provide improved delineation and separation of objects from adjacent tissues compared to the tested models. Copyright © 2018 Elsevier B.V. All rights reserved.
Using a high-dimensional graph of semantic space to model relationships among words
Jackson, Alice F.; Bolger, Donald J.
2014-01-01
The GOLD model (Graph Of Language Distribution) is a network model constructed based on co-occurrence in a large corpus of natural language that may be used to explore what information may be present in a graph-structured model of language, and what information may be extracted through theoretically-driven algorithms as well as standard graph analysis methods. The present study will employ GOLD to examine two types of relationship between words: semantic similarity and associative relatedness. Semantic similarity refers to the degree of overlap in meaning between words, while associative relatedness refers to the degree to which two words occur in the same schematic context. It is expected that a graph structured model of language constructed based on co-occurrence should easily capture associative relatedness, because this type of relationship is thought to be present directly in lexical co-occurrence. However, it is hypothesized that semantic similarity may be extracted from the intersection of the set of first-order connections, because two words that are semantically similar may occupy similar thematic or syntactic roles across contexts and thus would co-occur lexically with the same set of nodes. Two versions of the GOLD model that differed in terms of the co-occurrence window, bigGOLD at the paragraph level and smallGOLD at the adjacent word level, were directly compared to the performance of a well-established distributional model, Latent Semantic Analysis (LSA). The superior performance of the GOLD models (big and small) suggests that a single acquisition and storage mechanism, namely co-occurrence, can account for associative and conceptual relationships between words and is more psychologically plausible than models using singular value decomposition (SVD). PMID:24860525
Using a high-dimensional graph of semantic space to model relationships among words.
Jackson, Alice F; Bolger, Donald J
2014-01-01
The GOLD model (Graph Of Language Distribution) is a network model constructed based on co-occurrence in a large corpus of natural language that may be used to explore what information may be present in a graph-structured model of language, and what information may be extracted through theoretically-driven algorithms as well as standard graph analysis methods. The present study will employ GOLD to examine two types of relationship between words: semantic similarity and associative relatedness. Semantic similarity refers to the degree of overlap in meaning between words, while associative relatedness refers to the degree to which two words occur in the same schematic context. It is expected that a graph structured model of language constructed based on co-occurrence should easily capture associative relatedness, because this type of relationship is thought to be present directly in lexical co-occurrence. However, it is hypothesized that semantic similarity may be extracted from the intersection of the set of first-order connections, because two words that are semantically similar may occupy similar thematic or syntactic roles across contexts and thus would co-occur lexically with the same set of nodes. Two versions of the GOLD model that differed in terms of the co-occurrence window, bigGOLD at the paragraph level and smallGOLD at the adjacent word level, were directly compared to the performance of a well-established distributional model, Latent Semantic Analysis (LSA). The superior performance of the GOLD models (big and small) suggests that a single acquisition and storage mechanism, namely co-occurrence, can account for associative and conceptual relationships between words and is more psychologically plausible than models using singular value decomposition (SVD).
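A minimal sketch of the window-based co-occurrence counting that underlies graph models of this kind; the tiny corpus, window size, and weighting here are illustrative only and do not reproduce the actual GOLD construction.

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=2):
    """Count co-occurrences of word pairs within a sliding window; the counts
    become edge weights of an undirected word graph."""
    weights = defaultdict(int)
    for tokens in sentences:
        for i, w1 in enumerate(tokens):
            for w2 in tokens[i + 1:i + 1 + window]:
                if w1 != w2:
                    weights[tuple(sorted((w1, w2)))] += 1
    return weights

corpus = [
    "the dog chased the cat".split(),
    "the cat chased the mouse".split(),
]
for (a, b), w in sorted(cooccurrence_graph(corpus).items()):
    print(a, b, w)
```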
Functional network organization of the human brain
Power, Jonathan D; Cohen, Alexander L; Nelson, Steven M; Wig, Gagan S; Barnes, Kelly Anne; Church, Jessica A; Vogel, Alecia C; Laumann, Timothy O; Miezin, Fran M; Schlaggar, Bradley L; Petersen, Steven E
2011-01-01
Summary Real-world complex systems may be mathematically modeled as graphs, revealing properties of the system. Here we study graphs of functional brain organization in healthy adults using resting state functional connectivity MRI. We propose two novel brain-wide graphs, one of 264 putative functional areas, the other a modification of voxelwise networks that eliminates potentially artificial short-distance relationships. These graphs contain many subgraphs in good agreement with known functional brain systems. Other subgraphs lack established functional identities; we suggest possible functional characteristics for these subgraphs. Further, graph measures of the areal network indicate that the default mode subgraph shares network properties with sensory and motor subgraphs: it is internally integrated but isolated from other subgraphs, much like a “processing” system. The modified voxelwise graph also reveals spatial motifs in the patterning of systems across the cortex. PMID:22099467
Ivanciuc, Ovidiu
2013-06-01
Chemical and molecular graphs have fundamental applications in chemoinformatics, quantitative structure-property relationships (QSPR), quantitative structure-activity relationships (QSAR), virtual screening of chemical libraries, and computational drug design. Chemoinformatics applications of graphs include chemical structure representation and coding, database search and retrieval, and physicochemical property prediction. QSPR, QSAR and virtual screening are based on the structure-property principle, which states that the physicochemical and biological properties of chemical compounds can be predicted from their chemical structure. Such structure-property correlations are usually developed from topological indices and fingerprints computed from the molecular graph and from molecular descriptors computed from the three-dimensional chemical structure. We present here a selection of the most important graph descriptors and topological indices, including molecular matrices, graph spectra, spectral moments, graph polynomials, and vertex topological indices. These graph descriptors are used to define several topological indices based on molecular connectivity, graph distance, reciprocal distance, distance-degree, distance-valency, spectra, polynomials, and information theory concepts. The molecular descriptors and topological indices can be developed with a more general approach, based on molecular graph operators, which define a family of graph indices related by a common formula. Graph descriptors and topological indices for molecules containing heteroatoms and multiple bonds are computed with weighting schemes based on atomic properties, such as the atomic number, covalent radius, or electronegativity. The correlation in QSPR and QSAR models can be improved by optimizing some parameters in the formula of topological indices, as demonstrated for structural descriptors based on atomic connectivity and graph distance.
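To make the idea of a distance-based topological index concrete, here is a small sketch computing the Wiener index (the sum of shortest-path distances over all vertex pairs) of a hydrogen-suppressed molecular graph with NetworkX; the molecule (the carbon skeleton of 2-methylbutane) and the choice of index are illustrative, not taken from the cited review.

```python
import networkx as nx

# hydrogen-suppressed graph of 2-methylbutane (carbon skeleton only, assumed example)
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (2, 5)])

# Wiener index: sum of shortest-path distances over all unordered vertex pairs
wiener = sum(d for _, lengths in nx.shortest_path_length(G) for d in lengths.values()) // 2
print("Wiener index:", wiener)                       # 18 for this skeleton
print("matches networkx:", wiener == nx.wiener_index(G))
```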
Montefusco, Alberto; Consonni, Francesco; Beretta, Gian Paolo
2015-04-01
By reformulating the steepest-entropy-ascent (SEA) dynamical model for nonequilibrium thermodynamics in the mathematical language of differential geometry, we compare it with the primitive formulation of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) model and discuss the main technical differences of the two approaches. In both dynamical models the description of dissipation is of the "entropy-gradient" type. SEA focuses only on the dissipative, i.e., entropy generating, component of the time evolution, chooses a sub-Riemannian metric tensor as dissipative structure, and uses the local entropy density field as potential. GENERIC emphasizes the coupling between the dissipative and nondissipative components of the time evolution, chooses two compatible degenerate structures (Poisson and degenerate co-Riemannian), and uses the global energy and entropy functionals as potentials. As an illustration, we rewrite the known GENERIC formulation of the Boltzmann equation in terms of the square root of the distribution function adopted by the SEA formulation. We then provide a formal proof that in more general frameworks, whenever all degeneracies in the GENERIC framework are related to conservation laws, the SEA and GENERIC models of the dissipative component of the dynamics are essentially interchangeable, provided of course they assume the same kinematics. As part of the discussion, we note that equipping the dissipative structure of GENERIC with the Leibniz identity makes it automatically SEA on metric leaves.
Graph-Based Object Class Discovery
NASA Astrophysics Data System (ADS)
Xia, Shengping; Hancock, Edwin R.
We are interested in the problem of discovering the set of object classes present in a database of images using a weakly supervised graph-based framework. Rather than making use of the "Bag-of-Features (BoF)" approach widely used in current work on object recognition, we represent each image by a graph using a group of selected local invariant features. Using local feature matching and iterative Procrustes alignment, we perform graph matching and compute a similarity measure. Borrowing the idea of query expansion, we develop a similarity-propagation-based graph clustering (SPGC) method. Using this method, class-specific clusters of the graphs can be obtained. Such a cluster can be generally represented by using a higher level graph model whose vertices are the clustered graphs, and the edge weights are determined by the pairwise similarity measure. Experiments are performed on a dataset, in which the number of images increases from 1 to 50K and the number of objects increases from 1 to over 500. Some objects have been discovered with total recall and a precision of 1 in a single cluster.
Graph-based structural change detection for rotating machinery monitoring
NASA Astrophysics Data System (ADS)
Lu, Guoliang; Liu, Jie; Yan, Peng
2018-01-01
Detection of structural changes is critically important in operational monitoring of a rotating machine. This paper presents a novel framework for this purpose, where a graph model for data modeling is adopted to represent/capture statistical dynamics in machine operations. Meanwhile, we develop a numerical method for computing temporal anomalies in the constructed graphs. The martingale-test method is employed for the change detection when making decisions on possible structural changes, where excellent performance is demonstrated, outperforming existing results such as the autoregressive integrated moving average (ARIMA) model. Comprehensive experimental results indicate good potential of the proposed algorithm in various engineering applications. This work is an extension of a recent result (Lu et al., 2017).
Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan
2016-01-01
Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research, and led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and diverse environments they operate in. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present a performance comparison for nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications in loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) different sensitivity of each system to query selectivity. We envision that this study will pave the way for: (i) data scientists to select the suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.
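One of the benchmarked operations, subgraph pattern retrieval, can be expressed compactly in NetworkX (one of the six systems listed above); the toy data graph and pattern below are our own illustration and are not part of the extended LUBM suite.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# toy data graph with typed vertices
data = nx.Graph()
data.add_nodes_from([(1, {"type": "author"}), (2, {"type": "paper"}),
                     (3, {"type": "paper"}), (4, {"type": "author"})])
data.add_edges_from([(1, 2), (1, 3), (4, 3)])

# pattern: an author connected to a paper
pattern = nx.Graph()
pattern.add_nodes_from([("a", {"type": "author"}), ("p", {"type": "paper"})])
pattern.add_edge("a", "p")

gm = isomorphism.GraphMatcher(data, pattern,
                              node_match=lambda d, p: d["type"] == p["type"])
print(list(gm.subgraph_isomorphisms_iter()))   # every embedding of the pattern
```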
Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.
NASA Astrophysics Data System (ADS)
Stallard, A. P.; Smith, S. R.; Elya, J. L.
2016-12-01
The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS which are modeled into a time tree and r-tree using Graph Aware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within CYPHER (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via CYPHER statements. Neo4j excels at performing relationship and path-based queries, which challenge relational-SQL databases because they require memory intensive joins due to the limitation of their design. Consider a user who wants to find records over several years, but only for specific months. If a traditional database only stores timestamps, this type of query would be complex and likely prohibitively slow. Using the time tree model, one can specify a path from the root to the data which restricts resolutions to certain timeframes (e.g., months). This query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage to the SQL database alternative.
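A sketch of the kind of month-restricted query described above, issued through the official Neo4j Python driver. The connection details and the time-tree schema (Year, Month, CHILD, OBSERVED_AT labels and relationship types) are assumptions for illustration and are not necessarily those used in the SAMOS graph.

```python
from neo4j import GraphDatabase

# assumed connection details and schema; adjust to the actual deployment
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
MATCH (y:Year)-[:CHILD]->(m:Month)<-[:OBSERVED_AT]-(obs:Observation)
WHERE y.value >= $start_year AND y.value <= $end_year
  AND m.value IN $months
RETURN obs
LIMIT 100
"""

with driver.session() as session:
    result = session.run(CYPHER, start_year=2010, end_year=2015, months=[6, 7, 8])
    for record in result:
        print(record["obs"])
driver.close()
```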
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
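A minimal numerical sketch of a continuous-time quantum walk on a small graph, using the adjacency matrix as the Hamiltonian so that the propagator is U(t) = exp(-iAt); the 4-vertex cycle and the time value are arbitrary examples, not the widget graphs of the thesis, though this particular graph does exhibit the perfect state transfer the abstract mentions.

```python
import numpy as np
from scipy.linalg import expm

# adjacency matrix of a 4-vertex cycle, used as the walk Hamiltonian
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=complex)

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                       # walker starts on vertex 0

t = np.pi / 2
U = expm(-1j * A * t)               # continuous-time quantum walk propagator
psi_t = U @ psi0
# for this graph the walker transfers (up to phase) to the opposite vertex at t = pi/2
print(np.abs(psi_t) ** 2)           # probability of finding the walker on each vertex
```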
An alternative database approach for management of SNOMED CT and improved patient data queries.
Campbell, W Scott; Pedersen, Jay; McClay, James C; Rao, Praveen; Bastola, Dhundy; Campbell, James R
2015-10-01
SNOMED CT is the international lingua franca of terminologies for human health. Based in Description Logics (DL), the terminology enables data queries that incorporate inferences between data elements, as well as those relationships that are explicitly stated. However, the ontologic and polyhierarchical nature of the SNOMED CT concept model makes it difficult to implement in its entirety within electronic health record systems that largely employ object-oriented or relational database architectures. The result is a reduction of data richness, limitations of query capability and increased systems overhead. The hypothesis of this research was that a graph database (graph DB) architecture using SNOMED CT as the basis for the data model and subsequently modeling patient data upon the semantic core of SNOMED CT could exploit the full value of the terminology to enrich and support advanced data querying capability of patient data sets. The hypothesis was tested by instantiating a graph DB with the fully classified SNOMED CT concept model. The graph DB instance was tested for integrity by calculating the transitive closure table for the SNOMED CT hierarchy and comparing the results with transitive closure tables created using current, validated methods. The graph DB was then populated with 461,171 anonymized patient record fragments and over 2.1 million associated SNOMED CT clinical findings. Queries, including concept negation and disjunction, were then run against the graph database and an enterprise Oracle relational database (RDBMS) of the same patient data sets. The graph DB was then populated with laboratory data encoded using LOINC, as well as medication data encoded with RxNorm, and complex queries were performed using LOINC, RxNorm and SNOMED CT to identify uniquely described patient populations. A graph database instance was successfully created for two international releases of SNOMED CT and two US SNOMED CT editions. Transitive closure tables and descriptive statistics generated using the graph database were identical to those using validated methods. Patient queries produced identical patient count results to the Oracle RDBMS with comparable times. Database queries involving defining attributes of SNOMED CT concepts were possible with the graph DB. The same queries could not be directly performed with the Oracle RDBMS representation of the patient data and required the creation and use of external terminology services. Further, queries of undefined depth were successful in identifying unknown relationships between patient cohorts. The results of this study supported the hypothesis that a patient database built upon and around the semantic model of SNOMED CT was possible. The model supported queries that leveraged all aspects of the SNOMED CT logical model to produce clinically relevant query results. Logical disjunction and negation queries were possible using the data model, as well as queries that extended beyond the structural IS_A hierarchy of SNOMED CT to include queries that employed defining attribute-values of SNOMED CT concepts as search parameters. As medical terminologies, such as SNOMED CT, continue to expand, they will become more complex and model consistency will be more difficult to assure. Simultaneously, consumers of data will increasingly demand improvements to query functionality to accommodate additional granularity of clinical concepts without sacrificing speed.
This new line of research provides an alternative approach to instantiating and querying patient data represented using advanced computable clinical terminologies. Copyright © 2015 Elsevier Inc. All rights reserved.
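A small sketch of the transitive-closure computation used above as an integrity check, applied to a toy IS_A hierarchy; the concept names are invented and SNOMED CT itself is of course far larger.

```python
import networkx as nx

# toy IS_A hierarchy: edge (child, parent) means "child IS_A parent"
isa = nx.DiGraph()
isa.add_edges_from([
    ("bacterial pneumonia", "pneumonia"),
    ("pneumonia", "lung disease"),
    ("lung disease", "disease"),
])

closure = nx.transitive_closure(isa)            # adds all implied IS_A pairs
for child, ancestor in sorted(closure.edges()):
    print(f"{child} IS_A {ancestor}")
```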
NASA Technical Reports Server (NTRS)
Hoffler, Keith D.; Fears, Scott P.; Carzoo, Susan W.
1997-01-01
A generic airplane model concept was developed to allow configurations with various agility, performance, handling qualities, and pilot vehicle interface to be generated rapidly for piloted simulation studies. The simple concept allows stick shaping and various stick command types or modes to drive an airplane with both linear and nonlinear components. Output from the stick shaping goes to linear models or a series of linear models that can represent an entire flight envelope. The generic model also has provisions for control power limitations, a nonlinear feature. Therefore, departures from controlled flight are possible. Note that only loss of control is modeled; the generic airplane does not accurately model post-departure phenomena. The model concept is presented herein, along with four example airplanes. Agility was varied across the four example airplanes without altering specific excess energy or significantly altering handling qualities. A new feedback scheme to provide angle-of-attack cueing to the pilot, while using a pitch rate command system, was implemented and tested.
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
Mathematics of Web science: structure, dynamics and incentives.
Chayes, Jennifer
2013-03-28
Dr Chayes' talk described how, to a discrete mathematician, 'all the world's a graph, and all the people and domains merely vertices'. A graph is represented as a set of vertices V and a set of edges E, so that, for instance, in the World Wide Web, V is the set of pages and E the directed hyperlinks; in a social network, V is the people and E the set of relationships; and in the autonomous system Internet, V is the set of autonomous systems (such as AOL, Yahoo! and MSN) and E the set of connections. This means that mathematics can be used to study the Web (and other large graphs in the online world) in the following way: first, we can model online networks as large finite graphs; second, we can sample pieces of these graphs; third, we can understand and then control processes on these graphs; and fourth, we can develop algorithms for these graphs and apply them to improve the online experience.
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
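The growth process described above is simple enough to simulate directly; this sketch follows the verbal description (one new vertex per step, then with probability delta an edge between two uniformly chosen vertices) and reports the largest-component fraction. The graph sizes and delta values are arbitrary choices for illustration.

```python
import random
import networkx as nx

def grow_graph(t, delta, seed=0):
    """Randomly grown graph: at each step add a vertex, then with probability
    delta join two uniformly chosen vertices by an undirected edge."""
    random.seed(seed)
    G = nx.Graph()
    for v in range(t):
        G.add_node(v)
        if random.random() < delta:
            a, b = random.randrange(v + 1), random.randrange(v + 1)
            G.add_edge(a, b)        # the rare case a == b just adds a self-loop
    return G

for delta in (0.1, 0.125, 0.2):
    G = grow_graph(t=20000, delta=delta)
    giant = max(len(c) for c in nx.connected_components(G))
    print(f"delta={delta}: largest component fraction {giant / G.number_of_nodes():.3f}")
```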
DOE Office of Scientific and Technical Information (OSTI.GOV)
Visweswara Sathanur, Arun; Choudhury, Sutanay; Joslyn, Cliff A.
Property graphs can be used to represent heterogeneous networks with attributed vertices and edges. Given one property graph, simulating another graph with the same or greater size with identical statistical properties with respect to the attributes and connectivity is critical for privacy preservation and benchmarking purposes. In this work we tackle the problem of capturing the statistical dependence of the edge connectivity on the vertex labels and using the same distribution to regenerate property graphs of the same or expanded size in a scalable manner. However, accurate simulation becomes a challenge when the attributes do not completely explain the network structure. We propose the Property Graph Model (PGM) approach that uses an attribute (or label) augmentation strategy to mitigate the problem and preserve the graph connectivity as measured via degree distribution, vertex label distributions and edge connectivity. Our proposed algorithm is scalable with a linear complexity in the number of edges in the target graph. We illustrate the efficacy of the PGM approach in regenerating and expanding the datasets by leveraging two distinct illustrations.
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
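The weighted singular-value thresholding step that the abstract says the model reduces to can be written in a few lines; the weights and the test matrix below are arbitrary stand-ins, not the graph-derived weights of the cited method.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular-value thresholding: shrink each singular value by its
    own weight (soft-thresholding), then reconstruct the low-rank estimate."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.random((20, 3)) @ rng.random((3, 20))
noisy = low_rank + 0.1 * rng.standard_normal((20, 20))
weights = np.full(20, 0.5)              # a single threshold for every singular value
estimate = weighted_svt(noisy, weights)
print(np.linalg.matrix_rank(estimate, tol=1e-6))
```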
Highly Asynchronous VisitOr Queue Graph Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearce, R.
2012-10-01
HAVOQGT is a C++ framework that can be used to create highly parallel graph traversal algorithms. The framework stores the graph and algorithmic data structures on external memory that is typically mapped to high-performance locally attached NAND FLASH arrays. The framework supports a vertex-centered visitor programming model. The framework has been used to implement breadth-first search, connected components, and single-source shortest path.
Varni, James W; Limbers, Christine A; Newman, Daniel A; Seid, Michael
2008-11-01
The measurement of health-related quality of life (HRQOL) in pediatric medicine and health services research has grown significantly over the past decade. The paradigm shift toward patient-reported outcomes (PROs) has provided the opportunity to emphasize the value and critical need for pediatric patient self-report. In order for changes in HRQOL/PRO outcomes to be meaningful over time, it is essential to demonstrate longitudinal factorial invariance. This study examined the longitudinal factor structure of the PedsQL 4.0 Generic Core Scales over a one-year period for child self-report ages 5-17 in 2,887 children from a statewide evaluation of the California State Children's Health Insurance Program (SCHIP) utilizing a structural equation modeling framework. Specifying four- and five-factor measurement models, longitudinal structural equation modeling was used to compare factor structures over a one-year interval on the PedsQL 4.0 Generic Core Scales. While the four-factor conceptually-derived measurement model for the PedsQL 4.0 Generic Core Scales produced an acceptable fit, the five-factor empirically-derived measurement model from the initial field test of the PedsQL 4.0 Generic Core Scales produced a marginally superior fit in comparison to the four-factor model. For the five-factor measurement model, the best fitting model, strict factorial invariance of the PedsQL 4.0 Generic Core Scales across the two measurement occasions was supported by the stability of the comparative fit index between the unconstrained and constrained models, and several additional indices of practical fit including the root mean squared error of approximation, the non-normed fit index, and the parsimony normed fit index. The findings support an equivalent factor structure on the PedsQL 4.0 Generic Core Scales over time. Based on these data, it can be concluded that over a one-year period children in our study interpreted items on the PedsQL 4.0 Generic Core Scales in a similar manner.
Boore, David
2016-01-01
This short note contains two contributions related to deriving depth-dependent velocity and density models for use in computing generic crustal amplifications. The first contribution is a method for interpolating two velocity profiles to obtain a third profile with a time-averaged velocity to depth Z that is equal to a specified value (e.g., VS30 for shear-wave velocity VS and Z = 30 m, in which the subscript S indicates that the average is for shear-wave velocities). The second contribution is a procedure for obtaining densities from VS. The first contribution is used to extend and revise the Boore and Joyner (1997) generic rock VS model to a model with the more commonly used value of VS30. This new model is then used with the densities from the second contribution to compute crustal amplifications for a generic site with that VS30.
NASA Astrophysics Data System (ADS)
Zirconia, A.; Supriyanti, F. M. T.; Supriatna, A.
2018-04-01
This study aims to determine the enhancement of students' generic science skills through implementation of the IDEAL problem-solving model in a genetic information course. The method of this research was a mixed method with a pretest-posttest nonequivalent control group design. The subjects of this study were chemistry students enrolled in a biochemistry course, consisting of 22 students in the experimental class and 19 students in the control class. The instrument in this study was an essay test involving 6 indicators of generic science skills, such as indirect observation, causality thinking, logical frame, self-consistent thinking, symbolic language, and developing concepts. The results showed that the genetic information course using the IDEAL problem-solving model enhanced generic science skills in the low category.
Weiser, Armin A; Gross, Stefan; Schielke, Anika; Wigger, Jan-Frederik; Ernert, Andrea; Adolphs, Julian; Fetsch, Alexandra; Müller-Graf, Christine; Käsbohrer, Annemarie; Mosbach-Schulz, Olaf; Appel, Bernd; Greiner, Matthias
2013-03-01
The Shiga toxin-producing Escherichia coli O104:H4 outbreak in Germany in 2011 required the development of appropriate tools in real-time for tracing suspicious foods along the supply chain, namely salad ingredients, sprouts, and seeds. Food commodities consumed at locations identified as the most probable site of infection (outbreak clusters) were traced back in order to identify connections between different disease clusters via the supply chain of the foods. A newly developed relational database with integrated consistency and plausibility checks was used to collate these data for further analysis. Connections between suppliers, distributors, and producers were visualized in network graphs and geographic projections. Finally, this trace-back and trace-forward analysis led to the identification of sprouts produced by a horticultural farm in Lower Saxony as the vehicle for the pathogen, and a specific lot of fenugreek seeds imported from Egypt as the most likely source of contamination. Network graphs have proven to be a powerful tool for summarizing and communicating complex trade relationships to various stakeholders. The present article gives a detailed description of the newly developed tracing tools and recommendations for necessary requirements and improvements for future foodborne outbreak investigations.
Object Classification With Joint Projection and Low-Rank Dictionary Learning.
Foroughi, Homa; Ray, Nilanjan; Hong Zhang
2018-02-01
For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature would fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Besides, most of the existing methods, and especially deep learning-based methods, need large training sets to achieve a satisfactory recognition performance. Although using a network pre-trained on a generic large-scale data set and fine-tuning it to the small-sized target data set is a widely used technique, this would not help when the contents of the base and target data sets are very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes the intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them, and also make them robust to variations and outliers. To preserve the intrinsic structure of data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification of small-sized data sets, which include a considerable amount of different kinds of variation, and may have high-dimensional feature vectors.
Volatility behavior of visibility graph EMD financial time series from Ising interacting system
NASA Astrophysics Data System (ADS)
Zhang, Bo; Wang, Jun; Fang, Wen
2015-08-01
A financial market dynamics model is developed and investigated by a stochastic Ising system, where the Ising model is the most popular ferromagnetic model in statistical physics systems. Applying two graph-based analysis methods and the multiscale entropy method, we investigate and compare the statistical volatility behavior of return time series and the corresponding IMF series derived from the empirical mode decomposition (EMD) method. Real stock market indices are also comparatively studied against the simulation data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulation series has power-law tails, and the assortative network exhibits the mixing pattern property. All these features are in agreement with the real market data; the research confirms that the financial model established by the Ising system is reasonable.
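A compact sketch of the (natural) visibility graph mapping from a time series to a graph, which underlies the degree-distribution analysis above; the synthetic random-walk series stands in for the simulated return data of the cited model.

```python
import numpy as np

def visibility_graph(y):
    """Natural visibility graph: nodes are time points; (a, b) are linked if every
    intermediate point lies strictly below the straight line joining them."""
    n = len(y)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(200))   # toy random-walk series
edges = visibility_graph(series)
degrees = np.bincount(np.array(edges).ravel(), minlength=len(series))
print("mean degree:", degrees.mean())
```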
NASA Technical Reports Server (NTRS)
Campbell, Stefan F.; Kaneshige, John T.
2010-01-01
Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper will first thoroughly present the baseline controller, followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).
Graph Theory and the High School Student.
ERIC Educational Resources Information Center
Chartrand, Gary; Wall, Curtiss E.
1980-01-01
Graph theory is presented as a tool to instruct high school mathematics students. A variety of real world problems can be modeled which help students recognize the importance and difficulty of applying mathematics. (MP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coram, Jamie L.; Morrow, James D.; Perkins, David Nikolaus
2015-09-01
This document describes the PANTHER R&D Application, a proof-of-concept user interface application developed under the PANTHER Grand Challenge LDRD. The purpose of the application is to explore interaction models for graph analytics, drive algorithmic improvements from an end-user point of view, and support demonstration of PANTHER technologies to potential customers. The R&D Application implements a graph-centric interaction model that exposes analysts to the algorithms contained within the GeoGraphy graph analytics library. Users define geospatial-temporal semantic graph queries by constructing search templates based on nodes, edges, and the constraints among them. Users then analyze the results of the queries using both geo-spatial and temporal visualizations. Development of this application has made user experience an explicit driver for project and algorithmic level decisions that will affect how analysts one day make use of PANTHER technologies.
NASA Astrophysics Data System (ADS)
Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke
2017-05-01
Graph network models in dementia have become an important computational technique in neuroscience to study fundamental organizational principles of brain structure and function of neurodegenerative diseases such as dementia. The graph connectivity is reflected in the connectome, the complete set of structural and functional connections of the graph network, which is mostly based on simple Pearson correlation links. In contrast to simple Pearson correlation networks, the partial correlations (PC) only identify direct correlations while indirect associations are eliminated. In addition to this, the state-of-the-art techniques in brain research are based on static graph theory, which is unable to capture the dynamic behavior of the brain connectivity, as it alters with disease evolution. We propose a new research avenue in neuroimaging connectomics based on combining dynamic graph network theory and modeling strategies at different time scales. We present the theoretical framework for area aggregation and time-scale modeling in brain networks as they pertain to disease evolution in dementia. This novel paradigm is extremely powerful, since we can derive both static parameters pertaining to node and area parameters, as well as dynamic parameters, such as the system's eigenvalues. By implementing and analyzing dynamically both disease-driven PC-networks and regular concentration networks, we reveal differences in the structure of these networks that play an important role in the temporal evolution of this disease. The described research is key to advancing biomedical research on novel disease prediction trajectories and dementia therapies.
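The partial-correlation links mentioned above can be obtained from the inverse covariance (precision) matrix; this sketch uses short synthetic signals rather than real imaging data, and the formula shown is the standard one, not a method specific to the cited work.

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from the precision matrix:
    pcorr_ij = -P_ij / sqrt(P_ii * P_jj), with ones on the diagonal."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pcorr = -P / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))          # 500 time points, 6 regions (synthetic)
X[:, 1] += 0.8 * X[:, 0]                   # induce one direct dependence
print(np.round(partial_correlations(X), 2))
```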
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yakovlev, A. A.; Sorokin, V. S.; Mishustina, S. N.; Proidakova, N. V.; Postupaeva, S. G.
2017-01-01
The article describes a new method for the search design of refrigerating systems, whose basis is a graph model of the physical operating principle built on a thermodynamic description of physical processes. The mathematical model of the physical operating principle is substantiated, and the basic abstract theorems concerning the semantic load assigned to the nodes and edges of the graph are presented. The necessity and sufficiency of the physical operating principle for the given model and for the considered device class are demonstrated using the example of a vapour-compression refrigerating plant. An example of obtaining a set of engineering solutions for a vapour-compression refrigerating plant is also considered.
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
Impact of alternative interventions on changes in generic dispensing rates.
O'Malley, A James; Frank, Richard G; Kaddis, Atheer; Rothenberg, Barbara M; McNeil, Barbara J
2006-10-01
To evaluate the effectiveness of four alternative interventions (member mailings, advertising campaigns, free generic drug samples to physicians, and physician financial incentives) used by a major health insurer to encourage its members to switch to generic drugs. Using claim-level data from Blue Cross Blue Shield of Michigan, we evaluated the success of four interventions implemented during 2000-2003 designed to increase the use of generic drugs among its members. Around 13 million claims involving seven important classes of drugs were used to assess the effectiveness of the interventions. For each intervention a control group was developed that most closely resembled the corresponding intervention group. Logistic regression models with interaction effects between the treatment group (intervention versus control) and the status of the intervention (active versus not active) were used to evaluate if the interventions had an effect on the generic dispensing rate (GDR). Because the mail order pharmacy was considered more aggressive at converting prescriptions to generics, separate generic purchasing models were fitted to retail and mail order claims. In secondary analyses separate models were also fitted to claims involving a new condition and claims refilled for preexisting conditions. The interventions did not appear to increase the market penetration of generic drugs for either retail or mail order claims, or for claims involving new or preexisting conditions. In addition, we found that the ratio of copayments for brand name to generic drugs had a large positive effect on the GDR. The interventions did not appear to directly influence the GDR. Financial incentives expressed to consumers through benefit designs have a large influence on their switching to generic drugs and on the less-costly mail-order mode of purchase.
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
Brettin, Thomas S.; Cottingham, Robert W.; Griffith, Shelton D.; Quest, Daniel J.
2015-09-08
A system and method of integrating diverse sources of data and data streams is presented. The method can include selecting a scenario based on a topic, creating a multi-relational directed graph based on the scenario, identifying and converting resources in accordance with the scenario and updating the multi-directed graph based on the resources, identifying data feeds in accordance with the scenario and updating the multi-directed graph based on the data feeds, identifying analytical routines in accordance with the scenario and updating the multi-directed graph using the analytical routines and identifying data outputs in accordance with the scenario and defining queries to produce the data outputs from the multi-directed graph.
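A multi-relational directed graph of the kind described in the claims can be represented with a networkx MultiDiGraph, where parallel edges carry different relation labels. The scenario, entity, and relation names below are invented for illustration only.

    # Minimal sketch of a multi-relational directed graph for a hypothetical scenario.
    import networkx as nx

    g = nx.MultiDiGraph(scenario="outbreak-monitoring")   # hypothetical topic
    g.add_node("feed:news", kind="data_feed")
    g.add_node("doc:123", kind="resource")
    g.add_node("place:Chicago", kind="location")

    # Parallel directed edges with distinct relation types (multi-relational).
    g.add_edge("feed:news", "doc:123", relation="produced")
    g.add_edge("doc:123", "place:Chicago", relation="mentions")
    g.add_edge("doc:123", "place:Chicago", relation="geocoded_to")

    # A "data output" can then be defined as a query over the graph, e.g. all
    # locations reachable from a given feed through any chain of relations.
    reachable = nx.descendants(g, "feed:news")
    print({n for n in reachable if g.nodes[n]["kind"] == "location"})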
NASA Astrophysics Data System (ADS)
Buscema, Massimo; Asadi-Zeydabadi, Masoud; Lodwick, Weldon; Breda, Marco
2016-04-01
Significant applications, such as differentiating Alzheimer's disease from dementia, mining social media data, or extracting the structural composition of drug cartels, are often modeled as graphs. The structural or topological complexity of a graph, or the lack of it, is quite often useful in understanding and, more importantly, resolving the problem. We propose a new index, which we call the H0 function, to measure the structural/topological complexity of a graph. To do this, we introduce the concept of graph pruning and its associated algorithm, which is used in the development of our measure. We illustrate the behavior of the H0 function through different examples found in the appendix. These examples indicate that the H0 function captures useful and important characteristics of a graph. Here, we restrict ourselves to undirected graphs.
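The abstract does not spell out the pruning rule, so the sketch below only illustrates one common notion of graph pruning (iteratively deleting low-degree vertices) and reports how many rounds a graph survives; the paper's H0 function is presumably defined differently, and this is meant only to convey the flavour of pruning-based complexity.

    # Illustration of iterative graph pruning (remove degree<=1 vertices each round).
    # The pruning rule here is an assumption for illustration, not the paper's H0 function.
    import networkx as nx

    def pruning_rounds(g):
        g = g.copy()
        rounds = 0
        while True:
            leaves = [v for v, d in g.degree() if d <= 1]
            if not leaves:
                break
            g.remove_nodes_from(leaves)
            rounds += 1
        return rounds, g.number_of_nodes()   # rounds survived, size of the remaining core

    tree = nx.balanced_tree(2, 6)            # a tree prunes away completely
    cycle = nx.cycle_graph(30)               # a cycle resists this kind of pruning
    print(pruning_rounds(tree), pruning_rounds(cycle))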
A Research Graph dataset for connecting research data repositories using RD-Switchboard.
Aryani, Amir; Poblet, Marta; Unsworth, Kathryn; Wang, Jingbo; Evans, Ben; Devaraju, Anusuriya; Hausstein, Brigitte; Klas, Claus-Peter; Zapilko, Benjamin; Kaplun, Samuele
2018-05-29
This paper describes the open access graph dataset that shows the connections between Dryad, CERN, ANDS and other international data repositories to publications and grants across multiple research data infrastructures. The graph dataset was created using the Research Graph data model and the Research Data Switchboard (RD-Switchboard), a collaborative project by the Research Data Alliance DDRI Working Group (DDRI WG) with the aim to discover and connect the related research datasets based on publication co-authorship or jointly funded grants. The graph dataset allows researchers to trace and follow the paths to understanding a body of work. By mapping the links between research datasets and related resources, the graph dataset improves both their discovery and visibility, while avoiding duplicate efforts in data creation. Ultimately, the linked datasets may spur novel ideas, facilitate reproducibility and re-use in new applications, stimulate combinatorial creativity, and foster collaborations across institutions.
Disease management research using event graphs.
Allore, H G; Schruben, L W
2000-08-01
Event Graphs, conditional representations of stochastic relationships between discrete events, simulate disease dynamics. In this paper, we demonstrate how Event Graphs, at an appropriate abstraction level, also extend and organize scientific knowledge about diseases. They can identify promising treatment strategies and directions for further research and provide enough detail for testing combinations of new medicines and interventions. Event Graphs can be enriched to incorporate and validate data and test new theories to reflect an expanding dynamic scientific knowledge base and establish performance criteria for the economic viability of new treatments. To illustrate, an Event Graph is developed for mastitis, a costly dairy cattle disease, for which extensive scientific literature exists. With only a modest amount of imagination, the methodology presented here can be seen to apply modeling to any disease, human, plant, or animal. The Event Graph simulation presented here is currently being used in research and in a new veterinary epidemiology course. Copyright 2000 Academic Press.
A system for routing arbitrary directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1987-01-01
There are many problems which can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from connecting vertices. A method is given for parallelizing such problems on an SIMD machine model that is bit-serial and uses only nearest neighbor connections for communication. Each vertex of the graph will be assigned to a processor in the machine. Algorithms are given that will be used to implement movement of data along the arcs of the graph. This architecture and algorithms define a system that is relatively simple to build and can do graph processing. All arcs can be traversed in parallel in time O(T), where T is empirically proportional to the diameter of the interconnection network times the average degree of the graph. Modifying or adding a new arc takes the same time as parallel traversal.
Generic Surface-to-Air Missile Model.
1979-10-01
describes the Generic Surface-to-Air Missile Model (GENSAM) which evaluates the outcome of an engagement between a surface-to-air missile system and an... [The remainder of this record is table-of-contents residue listing sections on coordinate systems and transformations, tracking radars, deception jamming, jamming and track radar downlinks, and infrared surveillance systems.]
Exact and approximate graph matching using random walks.
Gori, Marco; Maggini, Marco; Sarti, Lorenzo
2005-07-01
In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, like for classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, like for connectionist and statistic approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than top scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well-suited for dealing with partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation, whose node's labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.
Compound analysis via graph kernels incorporating chirality.
Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya
2010-12-01
High accuracy is paramount when predicting biochemical characteristics using Quantitative Structural-Property Relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A
2017-01-01
Knee joint kinematics derived from multi-body optimisation (MBO) still requires evaluation. The objective of this study was to corroborate model-derived kinematics of osteoarthritic knees obtained using four generic knee joint models used in musculoskeletal modelling - spherical, hinge, degree-of-freedom coupling curves and parallel mechanism - against reference knee kinematics measured by stereo-radiography. Root mean square errors ranged from 0.7° to 23.4° for knee rotations and from 0.6 to 9.0 mm for knee displacements. Model-derived knee kinematics computed from generic knee joint models was inaccurate. Future developments and experiments should improve the reliability of osteoarthritic knee models in MBO and musculoskeletal modelling.
Common Cause Failure Modeling: Aerospace Versus Nuclear
NASA Technical Reports Server (NTRS)
Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer
2010-01-01
Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from whence this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail and strong recommendations for modeling the common-cause failures of aerospace systems are given.
Evolutionary Games of Multiplayer Cooperation on Graphs
Arranz, Jordi; Traulsen, Arne
2016-01-01
There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has been usually tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946
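A minimal simulation of the Moran death-Birth update on a cycle is sketched below for a toy two-strategy multiplayer game. The payoff function, selection intensity, and parameters are placeholders; the paper's analytical structure-coefficient condition is not reproduced here.

    # Hedged sketch of a Moran death-Birth update on a cycle: a random individual dies
    # and its two neighbours compete to fill the vacancy proportionally to fitness.
    import random

    def payoff(i, strat, neighbors, b=3.0, c=1.0):
        # Toy multiplayer game in i's neighbourhood: shared benefit from cooperators,
        # cost c paid by cooperators. Placeholder, not the paper's payoff structure.
        group = [i] + neighbors
        n_coop = sum(strat[j] for j in group)
        return b * n_coop / len(group) - (c if strat[i] else 0.0)

    def death_birth_step(strat, n, delta=0.1, rng=random):
        i = rng.randrange(n)                       # random individual dies
        nbrs = [(i - 1) % n, (i + 1) % n]          # its neighbours on the cycle
        fitness = [1.0 + delta * payoff(j, strat, [(j - 1) % n, (j + 1) % n]) for j in nbrs]
        winner = rng.choices(nbrs, weights=fitness)[0]
        strat[i] = strat[winner]                   # vacated site adopts the winner's strategy

    n = 50
    strat = [1 if k < n // 2 else 0 for k in range(n)]   # 1 = cooperator, 0 = defector
    rng = random.Random(1)
    for _ in range(20000):
        death_birth_step(strat, n, rng=rng)
    print("cooperators remaining:", sum(strat))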
ERIC Educational Resources Information Center
Katz, Irvin R.; Xi, Xiaoming; Kim, Hyun-Joo; Cheng, Peter C. H.
2004-01-01
This research applied a cognitive model to identify item features that lead to irrelevant variance on the Test of Spoken English[TM] (TSE[R]). The TSE is an assessment of English oral proficiency and includes an item that elicits a description of a statistical graph. This item type sometimes appears to tap graph-reading skills--an irrelevant…
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.
2012-06-01
Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: mutual k-nearest neighbor graph, sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
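Two of the graph constructions compared in the paper, the r-proximity graph and the mutual k-nearest-neighbour graph, can be built from pixel spectra as sketched below. This is only the graph-building step on toy data, not the full TAD codensity computation.

    # Sketch of two background-graph constructions over pixel spectra:
    # (1) proximity graph: edge if Euclidean distance < r,
    # (2) mutual kNN graph: edge if each pixel is among the other's k nearest neighbours.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def proximity_edges(X, r):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        i, j = np.where((d < r) & (d > 0))
        return {(a, b) for a, b in zip(i, j) if a < b}

    def mutual_knn_edges(X, k):
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because a point is its own neighbour
        _, idx = nn.kneighbors(X)
        neigh = [set(row[1:]) for row in idx]
        return {(a, b) for a in range(len(X)) for b in neigh[a] if a < b and a in neigh[b]}

    X = np.random.default_rng(0).normal(size=(200, 10))   # 200 pixels, 10 bands (toy data)
    print(len(proximity_edges(X, r=3.5)), len(mutual_knn_edges(X, k=5)))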
NASA Astrophysics Data System (ADS)
Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas
2010-03-01
Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information terms (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
Suzuki, Noriyuki; Murasawa, Kaori; Sakurai, Takeo; Nansai, Keisuke; Matsuhashi, Keisuke; Moriguchi, Yuichi; Tanabe, Kiyoshi; Nakasugi, Osami; Morita, Masatoshi
2004-11-01
A spatially resolved and geo-referenced dynamic multimedia environmental fate model, G-CIEMS (Grid-Catchment Integrated Environmental Modeling System), was developed on a geographical information system (GIS). A case study for Japan, based on air grid cells of 5 x 5 km resolution and catchments with an average area of 9.3 km², corresponding to about 40,000 air grid cells and 38,000 river segments/catchment polygons, was performed for dioxins, benzene, 1,3-butadiene, and di-(2-ethylhexyl)phthalate. The averaged concentrations from the model and the monitoring output agreed within a factor of 2-3 for all media. Outputs from G-CIEMS and the generic model were essentially comparable when identical parameters were employed, whereas the G-CIEMS model gave explicit information on the distribution of chemicals in the environment. Exposure-weighted averaged concentrations (EWAC) in air were calculated to estimate the exposure of the population, based on the results of the generic, G-CIEMS, and monitoring approaches. The G-CIEMS approach showed significantly better agreement with the monitoring-derived EWAC than the generic model approach. Implications of using a geo-referenced modeling approach in the risk assessment scheme are discussed; such a generic-spatial approach can provide more accurate exposure estimation with distribution information, using generally available data sources for a wide range of chemicals.
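The exposure-weighted averaged concentration (EWAC) used to compare the generic, G-CIEMS, and monitoring approaches is essentially a population-weighted mean over grid cells. A minimal sketch with made-up arrays follows.

    # Exposure-weighted averaged concentration over grid cells:
    # EWAC = sum_i(population_i * concentration_i) / sum_i(population_i).
    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.integers(0, 50_000, size=1000)                  # people per grid cell (toy)
    concentration = rng.lognormal(mean=0.0, sigma=0.5, size=1000)    # air concentration per cell (toy)

    ewac = np.average(concentration, weights=population)
    plain_mean = concentration.mean()
    print(f"EWAC = {ewac:.3f}, unweighted mean = {plain_mean:.3f}")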
A graph model for preventing railway accidents based on the maximal information coefficient
NASA Astrophysics Data System (ADS)
Shao, Fubo; Li, Keping
2017-01-01
A number of factors influence railway safety, so it is important to identify the key influencing factors and to establish the relationship between railway accidents and those factors. The maximal information coefficient (MIC) is a good measure of dependence for two-variable relationships that can capture a wide range of associations. Employing MIC, a graph model for preventing railway accidents is proposed that avoids complex mathematical computation. In the graph, nodes denote influencing factors of railway accidents and edges represent the dependence between the two linked factors. As the dependence threshold increases, the graph changes from a globally coupled graph to isolated points. In this way, the most important influencing factors, which are the key quantities to monitor, are identified from among many factors. The relationship between railway accidents and the important influencing factors is then obtained using artificial neural networks. From this relationship, a warning mechanism is built by defining a dangerous zone: if the related factors fall into the dangerous zone during railway operations, the warning level should be raised. The resulting warning mechanism can help prevent railway accidents and promote railway safety.
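Given a matrix of pairwise MIC values between influencing factors (computed with any MIC implementation; that step is abstracted away here), the threshold-dependent graph construction and its collapse from a globally coupled graph to isolated points can be sketched as follows. The "MIC" matrix below is random stand-in data, not real railway-accident factors.

    # Build the factor graph at a given dependence threshold: nodes are influencing
    # factors, and an edge links two factors whose MIC exceeds the threshold.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    m = rng.uniform(0, 1, size=(12, 12))
    mic = np.triu(m, 1) + np.triu(m, 1).T          # symmetric fake "MIC" matrix

    def factor_graph(mic, threshold):
        g = nx.Graph()
        g.add_nodes_from(range(len(mic)))
        for i in range(len(mic)):
            for j in range(i + 1, len(mic)):
                if mic[i, j] >= threshold:
                    g.add_edge(i, j)
        return g

    for t in (0.1, 0.5, 0.9):
        g = factor_graph(mic, t)
        print(f"threshold {t}: {g.number_of_edges()} edges, "
              f"{nx.number_connected_components(g)} components")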
Multiscale weighted colored graphs for protein flexibility and rigidity analysis
NASA Astrophysics Data System (ADS)
Bramer, David; Wei, Guo-Wei
2018-02-01
Protein structural fluctuation, measured by Debye-Waller factors or B-factors, is known to correlate to protein flexibility and function. A variety of methods has been developed for protein Debye-Waller factor prediction and related applications to domain separation, docking pose ranking, entropy calculation, hinge detection, stability analysis, etc. Nevertheless, none of the current methodologies are able to deliver an accuracy of 0.7 in terms of the Pearson correlation coefficients averaged over a large set of proteins. In this work, we introduce a paradigm-shifting geometric graph model, multiscale weighted colored graph (MWCG), to provide a new generation of computational algorithms to significantly change the current status of protein structural fluctuation analysis. Our MWCG model divides a protein graph into multiple subgraphs based on interaction types between graph nodes and represents the protein rigidity by generalized centralities of subgraphs. MWCGs not only predict the B-factors of protein residues but also accurately analyze the flexibility of all atoms in a protein. The MWCG model is validated over a number of protein test sets and compared with many standard methods. An extensive numerical study indicates that the proposed MWCG offers an accuracy of over 0.8 and thus provides perhaps the first reliable method for estimating protein flexibility and B-factors. It also simultaneously predicts all-atom flexibility in a molecule.
A Model of Knowledge Based Information Retrieval with Hierarchical Concept Graph.
ERIC Educational Resources Information Center
Kim, Young Whan; Kim, Jin H.
1990-01-01
Proposes a model of knowledge-based information retrieval (KBIR) that is based on a hierarchical concept graph (HCG) which shows relationships between index terms and constitutes a hierarchical thesaurus as a knowledge base. Conceptual distance between a query and an object is discussed and the use of Boolean operators is described. (25…
Evaluation of Teaching the IS-LM Model through a Simulation Program
ERIC Educational Resources Information Center
Pablo-Romero, Maria del Populo; Pozo-Barajas, Rafael; Gomez-Calero, Maria de la Palma
2012-01-01
The IS-LM model is a basic tool used in the teaching of short-term macroeconomics. Teaching is essentially done through the use of graphs. However, the way these graphs are traditionally taught does not allow the learner to easily visualise changes in the curves. The IS-LM simulation program overcomes difficulties encountered in understanding the…
NASA Astrophysics Data System (ADS)
Catanzaro, Michael J.; Chernyak, Vladimir Y.; Klein, John R.
2016-12-01
Driven Langevin processes have appeared in a variety of fields due to the relevance of natural phenomena having both deterministic and stochastic effects. The stochastic currents and fluxes in these systems provide a convenient set of observables to describe their non-equilibrium steady states. Here we consider stochastic motion of a (k - 1) -dimensional object, which sweeps out a k-dimensional trajectory, and gives rise to a higher k-dimensional current. By employing the low-temperature (low-noise) limit, we reduce the problem to a discrete Markov chain model on a CW complex, a topological construction which generalizes the notion of a graph. This reduction allows the mean fluxes and currents of the process to be expressed in terms of solutions to the discrete Supersymmetric Fokker-Planck (SFP) equation. Taking the adiabatic limit, we show that generic driving leads to rational quantization of the generated higher dimensional current. The latter is achieved by implementing the recently developed tools, coined the higher-dimensional Kirchhoff tree and co-tree theorems. This extends the study of motion of extended objects in the continuous setting performed in the prequel (Catanzaro et al.) to this manuscript.
Modeling heterogeneous processor scheduling for real time systems
NASA Technical Reports Server (NTRS)
Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.
1994-01-01
A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.
Critical space-time networks and geometric phase transitions from frustrated edge antiferromagnetism
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-12-01
Recently I proposed a simple dynamical network model for discrete space-time that self-organizes as a graph with Hausdorff dimension dH=4 . The model has a geometric quantum phase transition with disorder parameter (dH-ds) , where ds is the spectral dimension of the dynamical graph. Self-organization in this network model is based on a competition between a ferromagnetic Ising model for vertices and an antiferromagnetic Ising model for edges. In this paper I solve a toy version of this model defined on a bipartite graph in the mean-field approximation. I show that the geometric phase transition corresponds exactly to the antiferromagnetic transition for edges, the dimensional disorder parameter of the former being mapped to the staggered magnetization order parameter of the latter. The model has a critical point with long-range correlations between edges, where a continuum random geometry can be defined, exactly as in Kazakov's famed 2D random lattice Ising model but now in any number of dimensions.
NASA Astrophysics Data System (ADS)
Szyjka, Sebastian P.
The purpose of this study was to determine the extent to which six cognitive and attitudinal variables predicted pre-service elementary teachers' performance on line graphing. Predictors included Illinois teacher education basic skills sub-component scores in reading comprehension and mathematics, logical thinking performance scores, as well as measures of attitudes toward science, mathematics and graphing. This study also determined the strength of the relationship between each prospective predictor variable and the line graphing performance variable, as well as the extent to which measures of attitude towards science, mathematics and graphing mediated relationships between scores on mathematics, reading, logical thinking and line graphing. Ninety-four pre-service elementary education teachers enrolled in two different elementary science methods courses during the spring 2009 semester at Southern Illinois University Carbondale participated in this study. Each subject completed five different instruments designed to assess science, mathematics and graphing attitudes as well as logical thinking and graphing ability. Sixty subjects provided copies of primary basic skills score reports that listed subset scores for both reading comprehension and mathematics. The remaining scores were supplied by a faculty member who had access to a database from which the scores were drawn. Seven subjects, whose scores could not be found, were eliminated from final data analysis. Confirmatory factor analysis (CFA) was conducted in order to establish validity and reliability of the Questionnaire of Attitude Toward Line Graphs in Science (QALGS) instrument. CFA tested the statistical hypothesis that the five main factor structures within the Questionnaire of Attitude Toward Statistical Graphs (QASG) would be maintained in the revised QALGS. Stepwise Regression Analysis with backward elimination was conducted in order to generate a parsimonious and precise predictive model. This procedure allowed the researcher to explore the relationships among the affective and cognitive variables that were included in the regression analysis. The results for CFA indicated that the revised QALGS measure was sound in its psychometric properties when tested against the QASG. Reliability statistics indicated that the overall reliability for the 32 items in the QALGS was .90. The learning preferences construct had the lowest reliability (.67), while enjoyment (.89), confidence (.86) and usefulness (.77) constructs had moderate to high reliabilities. The first four measurement models fit the data well as indicated by the appropriate descriptive and statistical indices. However, the fifth measurement model did not fit the data well statistically, and only fit well with two descriptive indices. The results addressing the research question indicated that mathematical and logical thinking ability were significant predictors of line graph performance among the remaining group of variables. These predictors accounted for 41% of the total variability on the line graph performance variable. Partial correlation coefficients indicated that mathematics ability accounted for 20.5% of the variance on the line graphing performance variable when removing the effect of logical thinking. The logical thinking variable accounted for 4.7% of the variance on the line graphing performance variable when removing the effect of mathematics ability.
Yangian symmetry for bi-scalar loop amplitudes
NASA Astrophysics Data System (ADS)
Chicherin, Dmitry; Kazakov, Vladimir; Loebbert, Florian; Müller, Dennis; Zhong, De-liang
2018-05-01
We establish an all-loop conformal Yangian symmetry for the full set of planar amplitudes in the recently proposed integrable bi-scalar field theory in four dimensions. This chiral theory is a particular double scaling limit of γ-twisted weakly coupled N=4 SYM theory. Each amplitude with a certain order of scalar particles is given by a single fishnet Feynman graph of disc topology cut out of a regular square lattice. The Yangian can be realized by the action of a product of Lax operators with a specific sequence of inhomogeneity parameters on the boundary of the disc. Based on this observation, the Yangian generators of level one for generic bi-scalar amplitudes are explicitly constructed. Finally, we comment on the relation to the dual conformal symmetry of these scattering amplitudes.
On equivalent resistance of electrical circuits
NASA Astrophysics Data System (ADS)
Kagan, Mikhail
2015-01-01
While the standard (introductory physics) way of computing the equivalent resistance of nontrivial electrical circuits is based on Kirchhoff's rules, there is a mathematically and conceptually simpler approach, called the method of nodal potentials, whose basic variables are the values of the electric potential at the circuit's nodes. In this paper, we review the method of nodal potentials and illustrate it using the Wheatstone bridge as an example. We then derive a closed-form expression for the equivalent resistance of a generic circuit, which we apply to a few sample circuits. The result unveils a curious interplay between electrical circuits, matrix algebra, and graph theory and its applications to computer science. The paper is written at a level accessible by undergraduate students who are familiar with matrix arithmetic. Additional proofs and technical details are provided in appendices.
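The kind of closed-form expression alluded to can be illustrated with the standard graph-Laplacian identity for two-terminal resistance, R_ab = (e_a - e_b)^T L^+ (e_a - e_b), where L is the weighted Laplacian with edge weights equal to conductances. This identity is equivalent to the nodal-potential method, though it may not be the exact formula derived in the paper; the Wheatstone-bridge-like circuit and resistor values below are made up.

    # Equivalent resistance between two nodes from the weighted graph Laplacian:
    # R_ab = (e_a - e_b)^T L^+ (e_a - e_b), with conductance = 1/resistance per edge.
    import numpy as np

    def equivalent_resistance(n, edges, a, b):
        L = np.zeros((n, n))
        for i, j, r in edges:                    # edge (i, j) with resistance r
            g = 1.0 / r
            L[i, i] += g
            L[j, j] += g
            L[i, j] -= g
            L[j, i] -= g
        Lp = np.linalg.pinv(L)                   # Moore-Penrose pseudoinverse of the Laplacian
        e = np.zeros(n)
        e[a], e[b] = 1.0, -1.0
        return float(e @ Lp @ e)

    # Wheatstone-bridge-like example (values arbitrary): terminals 0 and 3,
    # bridge resistor between nodes 1 and 2.
    edges = [(0, 1, 100.0), (0, 2, 200.0), (1, 3, 200.0), (2, 3, 100.0), (1, 2, 50.0)]
    print(equivalent_resistance(4, edges, 0, 3))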
Results on Vertex Degree and K-Connectivity in Uniform S-Intersection Graphs
2014-01-01
distribution. A uniform s-intersection graph models the topology of a secure wireless sensor network employing the widely used s-composite key predistribution scheme. Our theoretical findings are also confirmed by numerical results.
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
Learning molecular energies using localized graph kernels
NASA Astrophysics Data System (ADS)
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
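A bare-bones random-walk graph kernel can be written as below. This is the standard geometric-series kernel on the direct-product graph, shown only to illustrate the idea of comparing local atomic environments via their adjacency matrices; it is not necessarily the exact localized kernel used in GRAPE, and the toy "environments" are made up.

    # Geometric random-walk kernel between two graphs given as adjacency matrices:
    # k(A1, A2) = sum of all entries of (I - lam * kron(A1, A2))^{-1},
    # which counts common walks of all lengths with decay factor lam.
    import numpy as np

    def random_walk_kernel(A1, A2, lam=0.05):
        Ax = np.kron(A1, A2)                      # adjacency of the direct-product graph
        n = Ax.shape[0]
        M = np.linalg.inv(np.eye(n) - lam * Ax)   # requires lam < 1 / spectral_radius(Ax)
        return float(M.sum())

    # Two toy "local environments": a triangle and a path on three atoms.
    triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
    path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    print(random_walk_kernel(triangle, triangle), random_walk_kernel(triangle, path))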
Bootstrapping Security Policies for Wearable Apps Using Attributed Structural Graphs.
González-Tablas, Ana I; Tapiador, Juan E
2016-05-11
We address the problem of bootstrapping security and privacy policies for newly-deployed apps in wireless body area networks (WBAN) composed of smartphones, sensors and other wearable devices. We introduce a framework to model such a WBAN as an undirected graph whose vertices correspond to devices, apps and app resources, while edges model structural relationships among them. This graph is then augmented with attributes capturing the features of each entity together with user-defined tags. We then adapt available graph-based similarity metrics to find the closest app to a new one to be deployed, with the aim of reusing, and possibly adapting, its security policy. We illustrate our approach through a detailed smartphone ecosystem case study. Our results suggest that the scheme can provide users with a reasonably good policy that is consistent with the user's security preferences implicitly captured by policies already in place.
Bootstrapping Security Policies for Wearable Apps Using Attributed Structural Graphs
González-Tablas, Ana I.; Tapiador, Juan E.
2016-01-01
We address the problem of bootstrapping security and privacy policies for newly-deployed apps in wireless body area networks (WBAN) composed of smartphones, sensors and other wearable devices. We introduce a framework to model such a WBAN as an undirected graph whose vertices correspond to devices, apps and app resources, while edges model structural relationships among them. This graph is then augmented with attributes capturing the features of each entity together with user-defined tags. We then adapt available graph-based similarity metrics to find the closest app to a new one to be deployed, with the aim of reusing, and possibly adapting, its security policy. We illustrate our approach through a detailed smartphone ecosystem case study. Our results suggest that the scheme can provide users with a reasonably good policy that is consistent with the user’s security preferences implicitly captured by policies already in place. PMID:27187385
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived, an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavare (1996), computational estimation of expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus depending on a mutation-selection-recombination balance.
Tattevin, P.; Saleh-Mghir, A.; Davido, B.; Ghout, I.; Massias, L.; Garcia de la Maria, C.; Miró, J. M.; Perronne, C.; Laurent, F.
2013-01-01
Concerns have recently emerged about the potency and the quality of generic vancomycin (VAN) products approved for use in humans, based on experiments in a neutropenic mouse thigh infection model. However, other animal models may be more appropriate to decipher the bactericidal activities of VAN generics in vivo and to predict their efficacy in humans. We aimed to compare the bactericidal activities of six generic VAN products currently used in France (Mylan and Sandoz), Spain (Hospira), Switzerland (Teva), and the United States (Akorn-Strides and American Pharmaceutical Products [APP]) in a rabbit model of aortic valve endocarditis induced by 8 × 10^7 CFU of methicillin-resistant Staphylococcus aureus (MRSA) strain COL (VAN MIC, 1.5 μg/ml). In vitro, there were no significant differences in the time-kill curve studies performed with the six generic VAN products. Ten rabbits in each group were treated with intravenous (i.v.) VAN, 60 mg/kg of body weight twice a day (b.i.d.) for 4 days. Mean peak serum VAN levels, measured 45 min after the last injection, ranged from 35.5 (APP) to 45.9 μg/ml (Teva). Mean trough serum VAN levels, measured 12 h after the last injection, ranged from 2.3 (Hospira) to 9.2 (APP) μg/ml. All generic VAN products were superior to controls (no treatment) in terms of residual organisms in vegetations (P < 0.02 for each comparison) and in the spleen (P < 0.005 for each comparison). Pairwise comparisons of generic VAN products found no significant differences. In conclusion, a stringent MRSA endocarditis model found no significant differences in the bactericidal activities of six generic VAN products currently used in Europe and America. PMID:23254435
NASA Astrophysics Data System (ADS)
Lee, Graham C. B.; Van Hoe, Bram; Yan, Zhijun; Maskery, Oliver; Sugden, Kate; Webb, David; Van Steenberge, Geert
2012-03-01
We present a compact, portable and low cost generic interrogation strain sensor system using a fibre Bragg grating configured in transmission mode with a vertical-cavity surface-emitting laser (VCSEL) light source and a GaAs photodetector embedded in a polymer skin. The photocurrent value is read and stored by a microcontroller. In addition, the photocurrent data is sent via Bluetooth to a computer or tablet device that can present the live data in a real time graph. With a matched grating and VCSEL, the system is able to automatically scan and lock the VCSEL to the most sensitive edge of the grating. Commercially available VCSEL and photodetector chips are thinned down to 20 μm and integrated in an ultra-thin flexible optical foil using several thin film deposition steps. A dedicated micro mirror plug is fabricated to couple the driving optoelectronics to the fibre sensors. The resulting optoelectronic package can be embedded in a thin, planar sensing sheet and the host material for this sheet is a flexible and stretchable polymer. The result is a fully embedded fibre sensing system - a photonic skin. Further investigations are currently being carried out to determine the stability and robustness of the embedded optoelectronic components.
Properties of heuristic search strategies
NASA Technical Reports Server (NTRS)
Vanderbrug, G. J.
1973-01-01
A directed graph is used to model the search space of a state space representation with single input operators, an AND/OR graph is used for problem reduction representations, and a theorem proving graph is used for state space representations with multiple input operators. These three graph models and heuristic strategies for searching them are surveyed. The completeness, admissibility, and optimality properties of search strategies which use the evaluation function f = (1 - ω)g + ωh are presented and interpreted using a representation of the search process in the plane. The use of multiple output operators to imply dependent successors, and thus obtain a formalism which includes all three types of representations, is discussed.
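The evaluation function discussed in the survey, f = (1 - ω)g + ωh, interpolates between uniform-cost search (ω = 0) and greedy best-first search (ω = 1), with ω = 0.5 giving a rescaled A*. A small best-first search over a directed graph using this weighting is sketched below; the graph and heuristic values are toy data.

    # Best-first search on a directed graph with f = (1 - w)*g + w*h.
    import heapq

    def weighted_search(succ, h, start, goal, w=0.5):
        frontier = [((1 - w) * 0.0 + w * h[start], 0.0, start, [start])]
        best_g = {start: 0.0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for nxt, cost in succ.get(node, []):
                g2 = g + cost
                if g2 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g2
                    heapq.heappush(frontier, ((1 - w) * g2 + w * h[nxt], g2, nxt, path + [nxt]))
        return None, float("inf")

    # Toy state-space graph with single-input operators (edge = operator application).
    succ = {"S": [("A", 2), ("B", 5)], "A": [("G", 6)], "B": [("G", 1)]}
    h = {"S": 5, "A": 6, "B": 1, "G": 0}
    print(weighted_search(succ, h, "S", "G", w=0.5))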
Application of dynamic uncertain causality graph in spacecraft fault diagnosis: Logic cycle
NASA Astrophysics Data System (ADS)
Yao, Quanying; Zhang, Qin; Liu, Peng; Yang, Ping; Zhu, Ma; Wang, Xiaochen
2017-04-01
Intelligent diagnosis systems are applied to fault diagnosis in spacecraft. The Dynamic Uncertain Causality Graph (DUCG) is a new probabilistic graphical model with many advantages. In the knowledge representation of spacecraft fault diagnosis, feedback among variables is frequently encountered, which may produce directed cyclic graphs (DCGs). Probabilistic graphical models (PGMs) such as Bayesian networks (BNs) have been widely applied to uncertain causality representation and probabilistic reasoning, but BNs do not allow DCGs. In this paper, DUCG is applied to fault diagnosis in spacecraft, and an inference algorithm for the DUCG that deals with feedback is introduced. So far, DUCG has been tested on 16 typical faults with 100% diagnosis accuracy.
Cui, Meng; Yang, Shuo; Yu, Tong; Yang, Ce; Gao, Yonghong; Zhu, Haiyan
2013-10-01
To design a model to capture information on the state and trends of knowledge creation, at both an individual and an organizational level, in order to enhance knowledge management. We designed a graph-theoretic knowledge model, the expert knowledge map (EKM), based on literature-based annotation. A case study in the domain of Traditional Chinese Medicine research was used to illustrate the usefulness of the model. The EKM successfully captured various aspects of knowledge and enhanced knowledge management within the case-study organization through the provision of knowledge graphs, expert graphs, and expert-knowledge biography. Our model could help to reveal the hot topics, trends, and products of the research done by an organization. It can potentially be used to facilitate knowledge learning, sharing and decision-making among researchers, academicians, students, and administrators of organizations.
Spatial-temporal causal modeling: a data centric approach to climate change attribution (Invited)
NASA Astrophysics Data System (ADS)
Lozano, A. C.
2010-12-01
Attribution of climate change has been predominantly based on simulations using physical climate models. These approaches rely heavily on the employed models and are thus subject to their shortcomings. Given the physical models’ limitations in describing the complex system of climate, we propose an alternative approach to climate change attribution that is data centric in the sense that it relies on actual measurements of climate variables and human and natural forcing factors. We present a novel class of methods to infer causality from spatial-temporal data, as well as a procedure to incorporate extreme value modeling into our methodology in order to address the attribution of extreme climate events. We develop a collection of causal modeling methods using spatio-temporal data that combine graphical modeling techniques with the notion of Granger causality. “Granger causality” is an operational definition of causality from econometrics, which is based on the premise that if a variable causally affects another, then the past values of the former should be helpful in predicting the future values of the latter. In its basic version, our methodology makes use of the spatial relationship between the various data points, but treats each location as being identically distributed and builds a unique causal graph that is common to all locations. A more flexible framework is then proposed that is less restrictive than having a single causal graph common to all locations, while avoiding the brittleness due to data scarcity that might arise if one were to independently learn a different graph for each location. The solution we propose can be viewed as finding a middle ground by partitioning the locations into subsets that share the same causal structures and pooling the observations from all the time series belonging to the same subset in order to learn more robust causal graphs. More precisely, we make use of relationships between locations (e.g. neighboring relationship) by defining a relational graph in which related locations are connected (note that this relational graph, which represents relationships among the different locations, is distinct from the causal graph, which represents causal relationships among the individual variables - e.g. temperature, pressure- within a multivariate time series). We then define a hidden Markov Random Field (hMRF), assigning a hidden state to each node (location), with the state assignment guided by the prior information encoded in the relational graph. Nodes that share the same state in the hMRF model will have the same causal graph. State assignment can thus shed light on unknown relations among locations (e.g. teleconnection). While the model has been described in terms of hard location partitioning to facilitate its exposition, in fact a soft partitioning is maintained throughout learning. This leads to a form of transfer learning, which makes our model applicable even in situations where partitioning the locations might not seem appropriate. We first validate the effectiveness of our methodology on synthetic datasets, and then apply it to actual climate measurement data. The experimental results show that our approach offers a useful alternative to the simulation-based approach for climate modeling and attribution, and has the capability to provide valuable scientific insights from a new perspective.
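The Granger-causality building block of the method can be illustrated with the standard bivariate test available in statsmodels; the paper's graphical-model formulation with spatial pooling over locations is considerably richer than this. The two series below are synthetic, with x constructed to lead y.

    # Bivariate Granger-causality check: do past values of x help predict y?
    # A toy stand-in for the per-variable tests underlying the causal graphs.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.3 * rng.normal()

    # Column order is (effect, candidate cause); small p-values suggest x "Granger-causes" y.
    data = np.column_stack([y, x])
    results = grangercausalitytests(data, maxlag=2, verbose=False)
    for lag, (tests, _) in results.items():
        print(lag, round(tests["ssr_ftest"][1], 4))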
Conclusiveness of natural languages and recognition of images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojcik, Z.M.
1983-01-01
The conclusiveness is investigated using recognition processes and a one-one correspondence between expressions of a natural language and graphs representing events. The graphs, as conceived in psycholinguistics, are obtained as a result of perception processes. It is possible to generate and process the graphs automatically, using computers, and then to convert the resulting graphs into expressions of a natural language. Correctness and conclusiveness of the graphs and sentences are investigated using the fundamental condition for event representation processes. Some consequences of the conclusiveness are discussed, e.g. undecidability of arithmetic, human brain asymmetry, and the correctness of statistical calculations and operations research. It is suggested that group theory should be imposed on mathematical models of any real system. Proof of the fundamental condition is also presented. 14 references.
An asynchronous traversal engine for graph-based rich metadata management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Dong; Carns, Philip; Ross, Robert B.
Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.
An asynchronous traversal engine for graph-based rich metadata management
Dai, Dong; Carns, Philip; Ross, Robert B.; ...
2016-06-23
Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.
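The contrast between level-synchronous BFS and an asynchronous traversal without global barriers can be sketched in miniature with a shared work queue drained by several worker threads. This is a toy illustration of the idea only, not the distributed engine, query language, or optimizations described in the record above.

    # Toy asynchronous traversal: workers pull vertices from a shared queue with no
    # per-level barrier, so a slow vertex only delays its own outgoing edges.
    import queue
    import threading

    def async_traverse(adj, start, n_workers=4):
        visited = {start}
        lock = threading.Lock()
        work = queue.Queue()
        work.put(start)

        def worker():
            while True:
                try:
                    v = work.get(timeout=0.1)   # workers drain the queue independently
                except queue.Empty:
                    return
                for u in adj.get(v, ()):
                    with lock:
                        if u in visited:
                            continue
                        visited.add(u)
                    work.put(u)

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return visited

    adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
    print(sorted(async_traverse(adj, 0)))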
Expanding our understanding of students' use of graphs for learning physics
NASA Astrophysics Data System (ADS)
Laverty, James T.
It is generally agreed that the ability to visualize functional dependencies or physical relationships as graphs is an important step in modeling and learning. However, several studies in Physics Education Research (PER) have shown that many students in fact do not master this form of representation and even have misconceptions about the meaning of graphs that impede learning physics concepts. Working with graphs in classroom settings has been shown to improve student abilities with graphs, particularly when the students can interact with them. We introduce a novel problem type in an online homework system, which requires students to construct the graphs themselves in free form, and requires no hand-grading by instructors. A study of pre/post-test data using the Test of Understanding Graphs in Kinematics (TUG-K) over several semesters indicates that students learn significantly more from these graph construction problems than from the usual graph interpretation problems, and that graph interpretation alone may not have any significant effect. The interpretation of graphs, as well as the representation translation between textual, mathematical, and graphical representations of physics scenarios, are frequently listed among the higher order thinking skills we wish to convey in an undergraduate course. But to what degree do we succeed? Do students indeed employ higher order thinking skills when working through graphing exercises? We investigate students working through a variety of graph problems, and, using a think-aloud protocol, aim to reconstruct the cognitive processes that the students go through. We find that to a certain degree, these problems become commoditized and do not trigger the desired higher order thinking processes; simply translating ``textbook-like'' problems into the graphical realm will not achieve any additional educational goals. Whether the students have to interpret or construct a graph makes very little difference in the methods used by the students. We will also look at the results of using graph problems in an online learning environment. We will show evidence that construction problems lead to a higher degree of difficulty and degree of discrimination than other graph problems and discuss the influence the course has on these variables.
Mathematical Modeling Of Life-Support Systems
NASA Technical Reports Server (NTRS)
Seshan, Panchalam K.; Ganapathi, Balasubramanian; Jan, Darrell L.; Ferrall, Joseph F.; Rohatgi, Naresh K.
1994-01-01
Generic hierarchical model of life-support system developed to facilitate comparisons of options in design of system. Model represents combinations of interdependent subsystems supporting microbes, plants, fish, and land animals (including humans). Generic model enables rapid configuration of variety of specific life support component models for tradeoff studies culminating in single system design. Enables rapid evaluation of effects of substituting alternate technologies and even entire groups of technologies and subsystems. Used to synthesize and analyze life-support systems ranging from relatively simple, nonregenerative units like aquariums to complex closed-loop systems aboard submarines or spacecraft. Model, called Generic Modular Flow Schematic (GMFS), coded in such chemical-process-simulation languages as Aspen Plus and expressed as three-dimensional spreadsheet.
Space Generic Open Avionics Architecture (SGOAA): Overview
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1992-01-01
A space generic open avionics architecture created for NASA is described. It will serve as the basis for entities in spacecraft core avionics, capable of being tailored by NASA for future space program avionics ranging from small vehicles such as Moon ascent/descent vehicles to large ones such as Mars transfer vehicles or orbiting stations. The standard consists of: (1) a system architecture; (2) a generic processing hardware architecture; (3) a six class architecture interface model; (4) a system services functional subsystem architectural model; and (5) an operations control functional subsystem architectural model.
From Many Records to One Graph: Heterogeneity Conflicts in the Linked Data Restructuring Cycle
ERIC Educational Resources Information Center
Tallerås, Kim
2013-01-01
Introduction: During the last couple of years the library community has developed a number of comprehensive metadata standardization projects inspired by the idea of linked data, such as the BIBFRAME model. Linked data is a set of best practice principles of publishing and exposing data on the Web utilizing a graph based data model powered with…
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
Entropy, complexity, and Markov diagrams for random walk cancer models
NASA Astrophysics Data System (ADS)
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
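As a rough illustration of the kind of computation described in the two records above, the following Python sketch derives the steady-state distribution and Shannon entropy of a small Markov transition matrix; the matrix values are invented for illustration and are not taken from the autopsy data.

```python
import numpy as np

# Toy transition matrix over three hypothetical anatomical sites
# (values are illustrative, not taken from the autopsy data).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Steady-state distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Shannon entropy of the steady-state distribution (bits).
entropy = -np.sum(pi * np.log2(pi))
print("steady state:", pi)
print("entropy (bits):", entropy)
```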
Exotic equilibria of Harary graphs and a new minimum degree lower bound for synchronization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canale, Eduardo A., E-mail: ecanale@pol.una.py; Monzón, Pablo, E-mail: monzon@fing.edu.uy
2015-02-15
This work is concerned with the stability of equilibria in the homogeneous (equal frequencies) Kuramoto model of weakly coupled oscillators. In 2012 [R. Taylor, J. Phys. A: Math. Theor. 45, 1–15 (2012)], a sufficient condition for almost global synchronization was found in terms of the minimum degree–order ratio of the graph. In this work, a new lower bound for this ratio is given. The improvement is achieved by a concrete infinite sequence of regular graphs. In addition, nonstandard unstable equilibria of the graphs studied in Wiley et al. [Chaos 16, 015103 (2006)] are shown to exist, as conjectured in that work.
Counting the number of Feynman graphs in QCD
NASA Astrophysics Data System (ADS)
Kaneko, T.
2018-05-01
Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system of perturbative calculations. A method of counting the number of Feynman graphs weighted by symmetry factors was established based on zero-dimensional field theory, and was used in scalar theories and QED. In this article, this method is generalized to more complicated models by direct calculation of generating functions on a computer algebra system. This method is applied to QCD with and without counterterms, where many higher-order contributions are calculated automatically.
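The zero-dimensional-field-theory idea mentioned above can be illustrated with a short sketch for phi^4 vacuum diagrams: the coefficient of g^n in the zero-dimensional partition function gives the sum of symmetry-factor weights of the diagrams at that order. This is only a minimal illustration of the counting principle, not the article's computer-algebra implementation for QCD; the function names are mine.

```python
from fractions import Fraction
from math import factorial

def double_factorial(n):
    """Product n*(n-2)*...*1 for odd n (with (-1)!! = 1)."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def vacuum_weight(order):
    """Sum of symmetry-factor weights of phi^4 vacuum diagrams
    (connected and disconnected) at the given order in the coupling.

    Follows from the zero-dimensional partition function
        Z(g) = sum_n (g/4!)^n / n! * <phi^(4n)>,  with <phi^(2k)> = (2k-1)!!
    """
    n = order
    return Fraction(double_factorial(4 * n - 1),
                    factorial(n) * factorial(4) ** n)

for n in range(1, 5):
    print(n, vacuum_weight(n))   # order 1 gives 1/8, order 2 gives 35/384, ...
```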
Competition in prescription drug markets: the roles of trademarks, advertising, and generic names.
Feldman, Roger; Lobo, Félix
2013-08-01
We take on two subjects of controversy among economists, advertising and trademarks, in the context of the market for generic drugs. We outline a model in which trademarks for drug names reduce search costs but increase product differentiation. In this particular framework, trademarks may not benefit consumers. In contrast, the generic names of drugs or "International Nonproprietary Names" (INN) have unquestionable benefits in both economic theory and empirical studies. We offer a second model where advertising of a brand-name drug creates recognition for the generic name. The monopoly patent-holder advertises less than in the absence of a competitive spillover.
ERIC Educational Resources Information Center
Bacharach, Samuel; Bamberger, Peter
1992-01-01
Survey data from 215 nurses (10 male) and 430 civil engineers (10 female) supported the plausibility of occupation-specific models (positing direct paths between role stressors, antecedents, and consequences) compared to generic models. A weakness of generic models is the tendency to ignore differences in occupational structure and culture. (SK)
DOT National Transportation Integrated Search
1982-03-01
The Systems Analysis Research Unit at the Civil Aeromedical Institute (CAMI) has developed a generic model for Federal Aviation Administration (FAA) Academy training program evaluation. The model will serve as a basis for integrating the total data b...
Self-similarity analysis of eubacteria genome based on weighted graph.
Qi, Zhao-Hui; Li, Ling; Zhang, Zhi-Meng; Qi, Xiao-Qin
2011-07-07
We introduce a weighted graph model to investigate the self-similarity characteristics of eubacteria genomes. The usual approach to genome similarity comparison is to determine the evolutionary distance between different genomes. Few studies focus on the overall statistical characteristics of each gene compared with the other genes in the same genome. In our model, each genome is represented by a weighted graph whose topology describes the similarity relationships among genes in the same genome. Based on weighted graph theory, we extract quantitative statistical variables from the topology and give the distribution of variables derived from the largest social structure in the topology. The 23 eubacteria recently studied by Sorimachi and Okayasu are clearly classified into two different groups by the double-logarithmic point-plots describing the similarity relationships among genes of the largest social structure in the genome. The results show that the proposed model may provide new insights into the structures and evolution patterns determined from complete genomes. Copyright © 2011 Elsevier Ltd. All rights reserved.
Unwinding the hairball graph: Pruning algorithms for weighted complex networks
NASA Astrophysics Data System (ADS)
Dianati, Navid
2016-01-01
Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
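A minimal sketch of a significance filter in the spirit of the marginal likelihood filter described above is given below. The null probability used here (a configuration-style binomial null with p_ij = k_i k_j / (2 T^2)) is an assumption for illustration and may differ in detail from the paper's exact formulation; the function names are mine.

```python
import networkx as nx
from scipy.stats import binom

def marginal_significance(G, weight="weight"):
    """p-value for each edge under a configuration-style binomial null:
    the total weight T is treated as T independent unit events, and a single
    event lands on edge (i, j) with probability k_i * k_j / (2 * T**2), where
    k_i is the strength (weighted degree) of node i.  This null is an
    assumption in the spirit of the marginal likelihood filter and may differ
    from the paper's exact formulation."""
    strength = dict(G.degree(weight=weight))
    T = int(G.size(weight=weight))
    pvals = {}
    for u, v, w in G.edges(data=weight, default=1):
        p = strength[u] * strength[v] / (2.0 * T * T)
        pvals[(u, v)] = binom.sf(w - 1, T, p)   # P(weight >= w) under the null
    return pvals

def prune(G, alpha=0.05, weight="weight"):
    """Keep only edges whose weights are significant at level alpha."""
    pvals = marginal_significance(G, weight)
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    H.add_edges_from((u, v, G[u][v]) for (u, v), p in pvals.items() if p < alpha)
    return H
```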
Model validation of simple-graph representations of metabolism
Holme, Petter
2009-01-01
The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
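The two simple-graph reductions named in the abstract above (substrate-product and substance networks) can be sketched directly from a reaction list; the toy reactions below are invented for illustration.

```python
import itertools
import networkx as nx

# Toy reaction list: (substrates, products); not real metabolic data.
reactions = [
    ({"A", "B"}, {"C"}),
    ({"C"}, {"D", "E"}),
]

# Substrate-product network: directed edges from each substrate of a
# reaction to each of its products.
sp = nx.DiGraph()
for subs, prods in reactions:
    sp.add_edges_from((s, p) for s in subs for p in prods)

# Substance network: undirected edges between all substances that take
# part in the same reaction.
sub = nx.Graph()
for subs, prods in reactions:
    sub.add_edges_from(itertools.combinations(subs | prods, 2))

print(sorted(sp.edges()))
print(sorted(sub.edges()))
```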
Delay correlation analysis and representation for VITAL-compliant VHDL models
Rich, Marvin J.; Misra, Ashutosh
2004-11-09
A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.
Graph configuration model based evaluation of the education-occupation match
2018-01-01
To study education—occupation matchings we developed a bipartite network model of education to work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulted open dataset. We highlighted the hierarchical and clustered structure of the career paths based on the multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers to fine-tune the fragmented program structure of higher education. PMID:29509783
Graph configuration model based evaluation of the education-occupation match.
Gadar, Laszlo; Abonyi, Janos
2018-01-01
To study education-occupation matchings we developed a bipartite network model of education to work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulted open dataset. We highlighted the hierarchical and clustered structure of the career paths based on the multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers to fine-tune the fragmented program structure of higher education.
Graphical Language for Data Processing
NASA Technical Reports Server (NTRS)
Alphonso, Keith
2011-01-01
A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
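A hedged sketch of the "process graph" idea described above follows: processing nodes connected by virtual wires, executed by traversing the internal graph representation in topological order. This is an illustrative Python analogue, not the innovation's actual .NET classes; all class and function names are mine.

```python
import networkx as nx

class Node:
    """A processing step; subclasses implement run(inputs) -> output."""
    def run(self, inputs):
        raise NotImplementedError

class Constant(Node):
    def __init__(self, value):
        self.value = value
    def run(self, inputs):
        return self.value

class Scale(Node):
    def __init__(self, factor):
        self.factor = factor
    def run(self, inputs):
        return [x * self.factor for x in inputs[0]]

def execute(graph):
    """Interpret the graph: visit nodes in topological order and feed each
    node the outputs of its predecessors (the 'virtual wires')."""
    results = {}
    for node in nx.topological_sort(graph):
        inputs = [results[p] for p in graph.predecessors(node)]
        results[node] = node.run(inputs)
    return results

# Wire two nodes together and run the graph.
src, scale = Constant([1.0, 2.0, 3.0]), Scale(10.0)
g = nx.DiGraph()
g.add_edge(src, scale)
print(execute(g)[scale])   # [10.0, 20.0, 30.0]
```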
Modeling of Protection in Dynamic Simulation Using Generic Relay Models and Settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samaan, Nader A.; Dagle, Jeffery E.; Makarov, Yuri V.
This paper shows how generic protection relay models available in planning tools can be augmented with settings that are based on NERC standards or best engineering practice. Selected generic relay models in Siemens PSS®E have been used in dynamic simulations in the proposed approach. Undervoltage, overvoltage, underfrequency, and overfrequency relays have been modeled for each generating unit. Distance-relay protection was modeled for transmission system protection. Two types of load-shedding schemes were modeled: underfrequency (frequency-responsive non-firm load shedding) and underfrequency and undervoltage firm load shedding. Several case studies are given to show the impact of protection devices on dynamic simulations. This is useful for simulating cascading outages.
Unimodular lattice triangulations as small-world and scale-free random graphs
NASA Astrophysics Data System (ADS)
Krüger, B.; Schmidt, E. M.; Mecke, K.
2015-02-01
Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.
Accelerating semantic graph databases on commodity clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Castellana, Vito G.; Haglin, David J.
We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL to C++ compiler, a library of parallel graph methods, and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.
A software tool for dataflow graph scheduling
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1994-01-01
A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.
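A minimal list-scheduling sketch for a dataflow DAG on identical processors is given below as an illustration of the kind of scheduling problem described; it is not the paper's tool, and the names and cost model are mine.

```python
import networkx as nx

def list_schedule(dag, num_procs):
    """Greedy schedule of a dataflow DAG onto identical processors.
    Nodes carry a 'cost' attribute (execution time); edges encode data
    precedence.  Returns {task: (processor, start, finish)}."""
    proc_free = [0.0] * num_procs          # time each processor becomes free
    finish = {}                            # finish time of each scheduled task
    schedule = {}
    for task in nx.topological_sort(dag):
        ready = max((finish[p] for p in dag.predecessors(task)), default=0.0)
        proc = min(range(num_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[proc], ready)
        end = start + dag.nodes[task]["cost"]
        proc_free[proc] = end
        finish[task] = end
        schedule[task] = (proc, start, end)
    return schedule

# Tiny example: tasks a and b feed task c.
dag = nx.DiGraph()
dag.add_nodes_from([("a", {"cost": 2.0}), ("b", {"cost": 1.0}), ("c", {"cost": 3.0})])
dag.add_edges_from([("a", "c"), ("b", "c")])
print(list_schedule(dag, num_procs=2))
```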
2015-09-21
this framework, MIT LL carried out a one-year proof-of-concept study to determine the capabilities and challenges in the detection of anomalies in...extremely large graphs [5]. Under this effort, two real datasets were considered, and algorithms for data modeling and anomaly detection were developed...is required in a well-defined experimental framework for the detection of anomalies in very large graphs. This study is intended to inform future
Evolution of a Modified Binomial Random Graph by Agglomeration
NASA Astrophysics Data System (ADS)
Kang, Mihyun; Pachon, Angelica; Rodríguez, Pablo M.
2018-02-01
In the classical Erdős-Rényi random graph G(n, p) there are n vertices and each of the possible edges is independently present with probability p. The random graph G(n, p) is homogeneous in the sense that all vertices have the same characteristics. On the other hand, numerous real-world networks are inhomogeneous in this respect. Such an inhomogeneity of vertices may influence the connection probability between pairs of vertices. The purpose of this paper is to propose a new inhomogeneous random graph model which is obtained in a constructive way from the Erdős-Rényi random graph G(n, p). Given a configuration of n vertices arranged in N subsets of vertices (we call each subset a super-vertex), we define a random graph with N super-vertices by letting two super-vertices be connected if and only if there is at least one edge between them in G(n, p). Our main result concerns the threshold for connectedness. We also analyze the phase transition for the emergence of the giant component and the degree distribution. Even though our model begins with G(n, p), it assumes the existence of some community structure encoded in the configuration. Furthermore, under certain conditions it exhibits a power law degree distribution. Both properties are important for real-world applications.
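The construction described above can be sketched directly: two super-vertices of sizes s_a and s_b are linked with probability 1 - (1 - p)^(s_a * s_b), the chance that G(n, p) contains at least one edge between their vertex sets. The function name and example sizes below are mine.

```python
import random
import networkx as nx

def agglomerated_graph(sizes, p, seed=None):
    """Random graph on super-vertices obtained from G(n, p): super-vertices
    a and b (containing sizes[a] and sizes[b] underlying vertices) are linked
    iff at least one of the sizes[a]*sizes[b] possible edges between their
    vertex sets is present in G(n, p), which happens with probability
    1 - (1 - p) ** (sizes[a] * sizes[b])."""
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_nodes_from(range(len(sizes)))
    for a in range(len(sizes)):
        for b in range(a + 1, len(sizes)):
            if rng.random() < 1.0 - (1.0 - p) ** (sizes[a] * sizes[b]):
                G.add_edge(a, b)
    return G

G = agglomerated_graph(sizes=[1, 2, 5, 10], p=0.05, seed=42)
print(sorted(G.edges()))
```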
ERIC Educational Resources Information Center
Beeken, Paul
2014-01-01
Graphing is an essential skill that forms the foundation of any physical science. Understanding the relationships between measurements ultimately determines which modeling equations are successful in predicting observations. Over the years, science and math teachers have approached teaching this skill with a variety of techniques. For secondary…
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Bizhani, Golnoosh; Grassberger, Peter; Paczuski, Maya
2011-12-01
We study the statistical behavior under random sequential renormalization (RSR) of several network models including Erdös-Rényi (ER) graphs, scale-free networks, and an annealed model related to ER graphs. In RSR the network is locally coarse grained by choosing at each renormalization step a node at random and joining it to all its neighbors. Compared to previous (quasi-)parallel renormalization methods [Song et al., Nature (London) 433, 392 (2005)], RSR allows a more fine-grained analysis of the renormalization group (RG) flow and unravels new features that were not discussed in the previous analyses. In particular, we find that all networks exhibit a second-order transition in their RG flow. This phase transition is associated with the emergence of a giant hub and can be viewed as a new variant of percolation, called agglomerative percolation. We claim that this transition exists also in previous graph renormalization schemes and explains some of the scaling behavior seen there. For critical trees it happens as N/N(0) → 0 in the limit of large systems (where N(0) is the initial size of the graph and N its size at a given RSR step). In contrast, it happens at finite N/N(0) in sparse ER graphs and in the annealed model, while it happens for N/N(0) → 1 on scale-free networks. Critical exponents seem to depend on the type of the graph but not on the average degree and obey usual scaling relations for percolation phenomena. For the annealed model they agree with the exponents obtained from a mean-field theory. At late times, the networks exhibit a starlike structure in agreement with the results of Radicchi et al. [Phys. Rev. Lett. 101, 148701 (2008)]. While degree distributions are of main interest when regarding the scheme as network renormalization, mass distributions (which are more relevant when considering "supernodes" as clusters) are much easier to study using the fast Newman-Ziff algorithm for percolation, allowing us to obtain very high statistics.
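One random sequential renormalization step, as described above, amounts to picking a node uniformly at random and absorbing all of its current neighbors into it. The sketch below is an illustrative implementation using networkx contraction, not the authors' code.

```python
import random
import networkx as nx

def rsr_step(G, rng=random):
    """One random sequential renormalization step: pick a node uniformly at
    random and absorb all of its current neighbors into it (a supernode).
    Returns the coarse-grained graph."""
    node = rng.choice(list(G.nodes()))
    for nbr in list(G.neighbors(node)):
        G = nx.contracted_nodes(G, node, nbr, self_loops=False)
    return G

# Renormalize an Erdos-Renyi graph until ~10% of the original nodes remain.
G = nx.erdos_renyi_graph(200, 0.02, seed=1)
n0 = G.number_of_nodes()
while G.number_of_nodes() > 0.1 * n0 and G.number_of_edges() > 0:
    G = rsr_step(G)
print(G.number_of_nodes(), "supernodes remain")
```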
Meyer-Bäse, Anke; Roberts, Rodney G.; Illan, Ignacio A.; Meyer-Bäse, Uwe; Lobbes, Marc; Stadlbauer, Andreas; Pinker-Domenig, Katja
2017-01-01
Neuroimaging in combination with graph theory has been successful in analyzing the functional connectome. However, almost all analyses are performed based on static graph theory. The derived quantitative graph measures can only describe a snapshot of the disease over time. Neurodegenerative disease evolution is poorly understood and treatment strategies are consequently only of limited efficiency. Fusing modern dynamic graph network theory techniques and modeling strategies at different time scales with pinning observability of complex brain networks will lay the foundation for a transformational paradigm in neurodegenerative diseases research regarding disease evolution at the patient level, treatment response evaluation and revealing some central mechanism in a network that drives alterations in these diseases. We model and analyze brain networks as two-time scale sparse dynamic graph networks with hubs (clusters) representing the fast sub-system and the interconnections between hubs the slow sub-system. Alterations in brain function as seen in dementia can be dynamically modeled by determining the clusters in which disturbance inputs have entered and the impact they have on the large-scale dementia dynamic system. Observing a small fraction of specific nodes in dementia networks such that the others can be recovered is accomplished by the novel concept of pinning observability. In addition, how to control this complex network seems to be crucial in understanding the progressive abnormal neural circuits in many neurodegenerative diseases. Detecting the controlling regions in the networks, which serve as key nodes to control the aberrant dynamics of the networks to a desired state and thus influence the progressive abnormal behavior, will have a huge impact in understanding and developing therapeutic solutions and also will provide useful information about the trajectory of the disease. In this paper, we present the theoretical framework and derive the necessary conditions for (1) area aggregation and time-scale modeling in brain networks and for (2) pinning observability of nodes in dynamic graph networks. Simulation examples are given to illustrate the theoretical concepts. PMID:29051730
Meyer-Bäse, Anke; Roberts, Rodney G; Illan, Ignacio A; Meyer-Bäse, Uwe; Lobbes, Marc; Stadlbauer, Andreas; Pinker-Domenig, Katja
2017-01-01
Neuroimaging in combination with graph theory has been successful in analyzing the functional connectome. However, almost all analyses are performed based on static graph theory. The derived quantitative graph measures can only describe a snapshot of the disease over time. Neurodegenerative disease evolution is poorly understood and treatment strategies are consequently only of limited efficiency. Fusing modern dynamic graph network theory techniques and modeling strategies at different time scales with pinning observability of complex brain networks will lay the foundation for a transformational paradigm in neurodegenerative diseases research regarding disease evolution at the patient level, treatment response evaluation and revealing some central mechanism in a network that drives alterations in these diseases. We model and analyze brain networks as two-time scale sparse dynamic graph networks with hubs (clusters) representing the fast sub-system and the interconnections between hubs the slow sub-system. Alterations in brain function as seen in dementia can be dynamically modeled by determining the clusters in which disturbance inputs have entered and the impact they have on the large-scale dementia dynamic system. Observing a small fraction of specific nodes in dementia networks such that the others can be recovered is accomplished by the novel concept of pinning observability. In addition, how to control this complex network seems to be crucial in understanding the progressive abnormal neural circuits in many neurodegenerative diseases. Detecting the controlling regions in the networks, which serve as key nodes to control the aberrant dynamics of the networks to a desired state and thus influence the progressive abnormal behavior, will have a huge impact in understanding and developing therapeutic solutions and also will provide useful information about the trajectory of the disease. In this paper, we present the theoretical framework and derive the necessary conditions for (1) area aggregation and time-scale modeling in brain networks and for (2) pinning observability of nodes in dynamic graph networks. Simulation examples are given to illustrate the theoretical concepts.
Prototype Vector Machine for Large Scale Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
Reactome graph database: Efficient access to complex pathway data
Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Homer; Ashok Varikuti; Xinming Ou
Various tools exist to analyze enterprise network systems and to produce attack graphs detailing how attackers might penetrate into the system. These attack graphs, however, are often complex and difficult to comprehend fully, and a human user may find it problematic to reach appropriate configuration decisions. This paper presents methodologies that can 1) automatically identify portions of an attack graph that do not help a user to understand the core security problems and so can be trimmed, and 2) automatically group similar attack steps as virtual nodes in a model of the network topology, to immediately increase the understandability of the data. We believe both methods are important steps toward improving visualization of attack graphs to make them more useful in configuration management for large enterprise networks. We implemented our methods using one of the existing attack-graph toolkits. Initial experimentation shows that the proposed approaches can 1) significantly reduce the complexity of attack graphs by trimming a large portion of the graph that is not needed for a user to understand the security problem, and 2) significantly increase the accessibility and understandability of the data presented in the attack graph by clearly showing, within a generated visualization of the network topology, the number and type of potential attacks to which each host is exposed.
Reactome graph database: Efficient access to complex pathway data.
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
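As a hedged illustration of querying such a graph database programmatically, the sketch below uses the official neo4j Python driver against a locally installed Reactome graph database. The connection URI, credentials, and the Cypher node label and properties (Pathway, displayName, speciesName) are assumptions based on the data model described above; consult the Reactome documentation for the actual schema.

```python
from neo4j import GraphDatabase

# Connection details are placeholders for a locally installed Reactome
# graph database; adjust the URI and credentials for your installation.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j"))

# Hypothetical Cypher query: the 'Pathway' label and the 'displayName' /
# 'speciesName' properties follow the general shape of the Reactome data
# model but should be checked against the current schema.
QUERY = """
MATCH (p:Pathway {speciesName: $species})
RETURN p.displayName AS name
LIMIT 10
"""

with driver.session() as session:
    for record in session.run(QUERY, species="Homo sapiens"):
        print(record["name"])

driver.close()
```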
Supervoxels for graph cuts-based deformable image registration using guided image filtering
NASA Astrophysics Data System (ADS)
Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.
2017-11-01
We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state of the art methods in continuous and discrete image registration, achieving target registration error of 1.16 mm on average per landmark.
Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering.
Szmul, Adam; Papież, Bartłomiej W; Hallack, Andre; Grau, Vicente; Schnabel, Julia A
2017-10-04
In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration, results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art in continuous and discrete image registration methods achieving Target Registration Error of 1.16mm on average per landmark.
Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering
Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.
2017-01-01
In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model ‘sliding motion’. Applying this method to lung image registration, results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art in continuous and discrete image registration methods achieving Target Registration Error of 1.16mm on average per landmark. PMID:29225433
Experiment of Enzyme Kinetics Using Guided Inquiry Model for Enhancing Generic Science Skills
NASA Astrophysics Data System (ADS)
Amida, N.; Supriyanti, F. M. T.; Liliasari
2017-02-01
This study aims to enhance students' generic science skills through enzyme kinetics experiments using a guided inquiry model. The study used a quasi-experimental method with a pretest-posttest nonequivalent control group design. Subjects were chemistry students enrolled in a biochemistry lab course, consisting of 18 students in the experimental class and 19 students in the control class. The instruments were an essay test covering 5 indicators of generic science skills (i.e., direct observation, causality, symbolic language, mathematical modeling, and concept formation) and student worksheets. The results showed that the enzyme kinetics experiments using the guided inquiry model enhanced generic science skills in the high category with a value of
NASA Technical Reports Server (NTRS)
Stovall, John R.; Wray, Richard B.
1994-01-01
This paper presents a description of a model for a space vehicle operational scenario and the commands for avionics. This model will be used in developing a dynamic architecture simulation model using the Statemate CASE tool for validation of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA has been proposed as an avionics architecture standard to NASA through its Strategic Avionics Technology Working Group (SATWG) and has been accepted by the Society of Automotive Engineers (SAE) for conversion into an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division (FDSD) of the NASA Johnson Space Center (JSC) by the Lockheed Engineering and Sciences Company (LESC), Houston, Texas. This SGOAA includes a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, and a nine class model of interfaces. The SGOAA is both scalable and recursive and can be applied to any hierarchical level of hardware/software processing systems.
Quantum gravity as an information network self-organization of a 4D universe
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-10-01
I propose a quantum gravity model in which the fundamental degrees of freedom are information bits for both discrete space-time points and links connecting them. The Hamiltonian is a very simple network model consisting of a ferromagnetic Ising model for space-time vertices and an antiferromagnetic Ising model for the links. As a result of the frustration between these two terms, the ground state self-organizes as a new type of low-clustering graph with finite Hausdorff dimension 4. The spectral dimension is lower than the Hausdorff dimension: it coincides with the Hausdorff dimension 4 at a first quantum phase transition corresponding to an IR fixed point, while at a second quantum phase transition describing small scales space-time dissolves into disordered information bits. The large-scale dimension 4 of the universe is related to the upper critical dimension 4 of the Ising model. At finite temperatures the universe graph emerges without a big bang and without singularities from a ferromagnetic phase transition in which space-time itself forms out of a hot soup of information bits. When the temperature is lowered the universe graph unfolds and expands by lowering its connectivity, a mechanism I have called topological expansion. The model admits topological black hole excitations corresponding to graphs containing holes with no space-time inside and with "Schwarzschild-like" horizons with a lower spectral dimension.
Lee, Hansang; Hong, Helen; Kim, Junmo
2014-12-01
We propose a graph-cut-based segmentation method for the anterior cruciate ligament (ACL) in knee MRI with a novel shape prior and label refinement. As the initial seeds for graph cuts, candidates for the ACL and the background are extracted from knee MRI roughly by means of adaptive thresholding with Gaussian mixture model fitting. The extracted ACL candidate is segmented iteratively by graph cuts with patient-specific shape constraints. Two shape constraints termed fence and neighbor costs are suggested such that the graph cuts prevent any leakage into adjacent regions with similar intensity. The segmented ACL label is refined by means of superpixel classification. Superpixel classification makes the segmented label propagate into missing inhomogeneous regions inside the ACL. In the experiments, the proposed method segmented the ACL with Dice similarity coefficient of 66.47±7.97%, average surface distance of 2.247±0.869, and root mean squared error of 3.538±1.633, which increased the accuracy by 14.8%, 40.3%, and 37.6% from the Boykov model, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
Entropy of spatial network ensembles
NASA Astrophysics Data System (ADS)
Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis
2018-04-01
We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
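A minimal sketch of a soft random geometric graph is shown below: nodes placed uniformly at random in the unit square, pairs linked independently with a probability that decays with distance. The Gaussian-type pair connection function and parameters are illustrative assumptions, not the paper's specific choices.

```python
import math
import random
import networkx as nx

def soft_rgg(n, r0, seed=None):
    """Soft random geometric graph: n nodes placed uniformly in the unit
    square; a pair at distance d is linked independently with probability
    exp(-(d / r0) ** 2), a common 'soft' pair connection function."""
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(n)}
    G = nx.Graph()
    G.add_nodes_from(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if rng.random() < math.exp(-(d / r0) ** 2):
                G.add_edge(i, j)
    return G, pos

G, pos = soft_rgg(200, r0=0.1, seed=0)
print(G.number_of_edges(), "edges")
```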
Teaching Physics with Basketball
NASA Astrophysics Data System (ADS)
Chanpichai, N.; Wattanakasiwich, P.
2010-07-01
Recently, technologies and computers have taken important roles in learning and teaching, including physics. Advances in technology can help us better relate physics taught in the classroom to the real world. In this study, we developed a module on teaching projectile motion through shooting a basketball. Students learned about the physics of projectile motion, and then they took videos of their classmates shooting a basketball using a high-speed camera. They then analyzed the videos using Tracker, a video analysis and modeling tool. While working with Tracker, students learned about the relationships between the three kinematics graphs. Moreover, they learned about real projectile motion (with air resistance) through modeling tools. Students' abilities to interpret kinematics graphs were investigated before and after the instruction by using the Test of Understanding Graphs in Kinematics (TUG-K). The maximum normalized gain or
Graph theory findings in the pathophysiology of temporal lobe epilepsy
Chiang, Sharon; Haneef, Zulfi
2014-01-01
Temporal lobe epilepsy (TLE) is the most common form of adult epilepsy. Accumulating evidence has shown that TLE is a disorder of abnormal epileptogenic networks, rather than focal sources. Graph theory allows for a network-based representation of TLE brain networks, and has potential to illuminate characteristics of brain topology conducive to TLE pathophysiology, including seizure initiation and spread. We review basic concepts which we believe will prove helpful in interpreting results rapidly emerging from graph theory research in TLE. In addition, we summarize the current state of graph theory findings in TLE as they pertain to its pathophysiology. Several common findings have emerged from the many modalities which have been used to study TLE using graph theory, including structural MRI, diffusion tensor imaging, surface EEG, intracranial EEG, magnetoencephalography, functional MRI, cell cultures, simulated models, and mouse models, involving increased regularity of the interictal network configuration, altered local segregation and global integration of the TLE network, and network reorganization of temporal lobe and limbic structures. As different modalities provide different views of the same phenomenon, future studies integrating data from multiple modalities are needed to clarify findings and contribute to the formation of a coherent theory on the pathophysiology of TLE. PMID:24831083
AGM: A DSL for mobile cloud computing based on directed graph
NASA Astrophysics Data System (ADS)
Tanković, Nikola; Grbac, Tihana Galinac
2016-06-01
This paper summarizes a novel approach for consuming a domain specific language (DSL) by transforming it to a directed graph representation persisted by a graph database. Using such a specialized database enables advanced navigation through the stored model, exposing only relevant subsets of metadata to the different services and components involved. We applied this approach in a mobile cloud computing system and used it to model several mobile applications in the retail, supply chain management, and merchandising domains. These applications are distributed in a Software-as-a-Service (SaaS) fashion and used by thousands of customers in Croatia. We report on lessons learned and propose further research on this topic.
Nelson, Carl A; Miller, David J; Oleynikov, Dmitry
2008-01-01
As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
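A sketch of the general idea, tools as graph nodes, switch costs as edge weights, and a travelling-salesman tour as the queue order, is given below using networkx's TSP approximation. The tool names, frequencies, and the reciprocal-frequency cost model are illustrative assumptions; the paper's exact weighting may differ.

```python
import networkx as nx
from networkx.algorithms.approximation import traveling_salesman_problem

# Hypothetical tool-change frequencies observed in recorded procedures:
# higher counts mean the pair is exchanged more often.
switch_counts = {("grasper", "scissors"): 12,
                 ("grasper", "needle_driver"): 7,
                 ("scissors", "needle_driver"): 3,
                 ("grasper", "hook"): 5,
                 ("scissors", "hook"): 2,
                 ("needle_driver", "hook"): 1}

# Build a complete graph whose edge weights are switch costs; here the
# reciprocal of the frequency is used so that frequently exchanged tools
# end up adjacent in the tour (an assumed cost model, not the paper's).
G = nx.Graph()
for (a, b), count in switch_counts.items():
    G.add_edge(a, b, weight=1.0 / count)

tour = traveling_salesman_problem(G, weight="weight", cycle=True)
print("suggested tool ordering:", tour)
```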
ERIC Educational Resources Information Center
Pennington, Robert; Koehler, Mallory
2017-01-01
There is limited research on teaching narrative writing skills to students with moderate to severe intellectual disability. In the current study, we used a multiple probe across participants single case design to evaluate the effects of an intervention package comprised of modeling, story templates, and self-graphing, on the inclusion of story…
Das, Manisha; Choudhury, Supriyo; Maity, Somnath; Hazra, Avijit; Pradhan, Tirthankar; Pal, Aishee; Roy, Ranendra Kumar
2017-01-01
Background: The concept of generic prescription is widely accepted in various parts of the world. Nevertheless, it has failed to gain popularity in India due to factors such as nonavailability and distrust of product quality. However, since 2012, the Government of West Bengal, India, has initiated exclusive generic drug outlets called “fair price medicine shop” (FPMS) inside the government hospital premises in a “public-private-partnership” model. This study was undertaken to evaluate the experience and attitude of patients who were consuming generic drugs purchased from these FPMS. Materials and Methods: It was a questionnaire-based cross-sectional study where we interviewed 100 patients each consuming generic and branded drugs, respectively. The perceived effectiveness, reported safety, medication adherence, cost of therapy, and availability of drugs were compared between the two groups. Medication adherence was estimated using the Drug Attitude Inventory-10. Results: 93% of generic and 87% of branded drug users believed that their drugs were effective (P = 0.238) in controlling their ailments. No significant difference (9% generic, 10% branded drug users, P = 1.000) was observed in reported adverse effects between generic and branded drug users. 82% and 77% of patients were adherent to generic and branded drugs, respectively (P = 0.289). As expected, a significantly lower cost of generic drugs was observed compared to their branded counterparts. Conclusion: The policy of FPMS implemented by the Government of West Bengal, India appeared to be promising in terms of perceived effectiveness, safety, and adherence of generic drugs from FPMS compared to drugs purchased from open market retailers. Therefore, this study might act as an impetus for the policy-makers to initiate similar models across the country. PMID:28250671
Comparison of generic-to-brand switchback patterns for generic and authorized generic drugs
Hansen, Richard A.; Qian, Jingjing; Berg, Richard; Linneman, James; Seoane-Vazquez, Enrique; Dutcher, Sarah K.; Raofi, Saeid; Page, C. David; Peissig, Peggy
2018-01-01
Background While generic drugs are therapeutically equivalent to brand drugs, some patients and healthcare providers remain uncertain about whether they produce identical outcomes. Authorized generics, which are identical in formulation to corresponding brand drugs but marketed as a generic, provide a unique post-marketing opportunity to study whether utilization patterns are influenced by perceptions of generic drugs. Objectives To compare generic-to-brand switchback rates between generics and authorized generics. Methods A retrospective cohort study was conducted using claims and electronic health records data from a regional U.S. healthcare system. Ten drugs with authorized generics and generics marketed between 1999 and 2014 were evaluated. Eligible adult patients received a brand drug during the 6 months preceding generic entry, and then switched to a generic or authorized generic. Patients in this cohort were followed for up to 30 months from the index switch date to evaluate occurrence of generic-to-brand switchbacks. Switchback rates were compared between patients on authorized generics versus generics using Kaplan-Meier curves and Cox proportional hazards models, controlling for individual drug effects, age, sex, Charlson comorbidity score, pre-index drug use characteristics, and pre-index healthcare utilization. Results Among 5,542 unique patients that switched from brand-to-generic or brand-to-authorized generic, 264 (4.8%) switched back to the brand drug. Overall switchback rates were similar for authorized generics compared with generics (HR=0.86; 95% CI 0.65-1.15). The likelihood of switchback was higher for alendronate (HR=1.64; 95% CI 1.20-2.23) and simvastatin (HR=1.81; 95% CI 1.30-2.54) and lower for amlodipine (HR=0.27; 95% CI 0.17-0.42) compared with other drugs in the cohort. Conclusions Overall switchback rates were similar between authorized generic and generic drug users, indirectly supporting similar efficacy and tolerability profiles for brand and generic drugs. Reasons for differences in switchback rates among specific products need to be further explored. PMID:28152215
A manifold learning approach to target detection in high-resolution hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
Imagery collected from airborne platforms and satellites provide an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led towards a focus on target detection, and in the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
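As a baseline illustration of the manifold learning step described above, the sketch below applies standard locally linear embedding from scikit-learn to synthetic data. The thesis develops an adaptive, target-guided variant; this off-the-shelf version and the synthetic spectra are stand-ins only.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic stand-in for hyperspectral pixels: 500 spectra with 50 bands
# that actually live near a low-dimensional manifold.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))          # intrinsic coordinates
mixing = rng.normal(size=(3, 50))
spectra = np.tanh(latent @ mixing)          # nonlinear embedding in 50-D

# Standard locally linear embedding (the thesis develops an adaptive,
# target-guided variant; this is only the off-the-shelf baseline).
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
coords = lle.fit_transform(spectra)
print(coords.shape)   # (500, 3)
```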
Rodriguez, Carlos A.; Agudelo, Maria; Aguilar, Yudy A.; Zuluaga, Andres F.
2016-01-01
Previous studies have demonstrated that pharmaceutical equivalence and pharmacokinetic equivalence of generic antibiotics are necessary but not sufficient conditions to guarantee therapeutic equivalence (better called pharmacodynamic equivalence). In addition, there is scientific evidence suggesting a direct link between pharmacodynamic nonequivalence of generic vancomycin and promotion of resistance in Staphylococcus aureus. To find out if even subtle deviations from the expected pharmacodynamic behavior with respect to the innovator could favor resistance, we studied a generic product of piperacillin-tazobactam characterized by pharmaceutical and pharmacokinetic equivalence but a faulty fit of Hill’s Emax sigmoid model that could be interpreted as pharmacodynamic nonequivalence. We determined the impact in vivo of this generic product on the resistance of a mixed Escherichia coli population composed of ∼99% susceptible cells (ATCC 35218 strain) and a ∼1% isogenic resistant subpopulation that overproduces TEM-1 β-lactamase. After only 24 hours of treatment in the neutropenic murine thigh infection model, the generic amplified the resistant subpopulation up to 20 times compared with the innovator, following an inverted-U dose-response relationship. These findings highlight the critical role of therapeutic nonequivalence of generic antibiotics as a key factor contributing to the global problem of bacterial resistance. PMID:27191163
Canada's New Generic Pricing Policy: A Reasoned Approach to a Challenging Problem.
Hollis, Aidan; Grootendorst, Paul
2015-08-01
Alberta, quickly followed by other Canadian provinces, has introduced a new pricing model for generic drugs, in which prices are inversely related to the number of generic manufacturers of the drug. This paper examines the rationale for the new policy. Copyright © 2015 Longwoods Publishing.
Graphs to estimate an individualized risk of breast cancer.
Benichou, J; Gail, M H; Mulvihill, J J
1996-01-01
Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a woman's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and the proper use of them.
Bounded-Degree Approximations of Stochastic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
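The sketch below is only a loose illustration of the bounded in-degree idea from this abstract: it greedily keeps, for each process, the k most strongly associated parents, using a pairwise correlation proxy in place of the directed information / Kullback-Leibler criterion the authors actually optimize.

```python
# Illustrative sketch only: approximate a dependence graph with bounded in-degree
# by keeping, for each process, the k parents with the strongest pairwise
# dependence (a correlation proxy; the paper uses directed information / KL).
import numpy as np

def bounded_indegree_graph(series, k):
    """series: (n_processes, T) array; returns dict node -> chosen parent list."""
    n = series.shape[0]
    corr = np.abs(np.corrcoef(series))
    np.fill_diagonal(corr, 0.0)
    return {i: list(np.argsort(corr[i])[::-1][:k]) for i in range(n)}

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 500))          # synthetic processes
print(bounded_indegree_graph(x, k=2))
```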
Architecture Aware Partitioning Algorithms
2006-01-19
follows: Given a graph G = (V, E), where V is the set of vertices, n = |V| is the number of vertices, and E is the set of edges in the graph, partition the... communication link l(p_i, p_j) is associated with a graph edge weight e*(p_i, p_j) that represents the communication cost per unit of communication between... one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e*(p_i, p_j
Synchronizability of random rectangular graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of the random geometric graphs in which the nodes are embedded into hyperrectangles instead of on hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network is more elongated, the network becomes harder to synchronize. The synchronization processing behavior of a RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
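A minimal sketch of the eigenratio computation the abstract refers to, using a random geometric graph from networkx as a stand-in (networkx does not ship the rectangular generalization); the node count and radius are arbitrary.

```python
# Sketch: estimate the Laplacian eigenratio lambda_N / lambda_2, a standard
# synchronizability measure, for a random geometric graph. RRGs generalize this
# by embedding the nodes in rectangles rather than in the unit square.
import numpy as np
import networkx as nx

G = nx.random_geometric_graph(200, radius=0.15, seed=42)
L = nx.laplacian_matrix(G).toarray().astype(float)
eig = np.sort(np.linalg.eigvalsh(L))
lambda_2, lambda_N = eig[1], eig[-1]
if lambda_2 > 1e-9:                      # graph must be connected
    print("eigenratio lambda_N/lambda_2 =", lambda_N / lambda_2)
else:
    print("graph is disconnected; eigenratio undefined")
```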
Analysis of quantum error correction with symmetric hypergraph states
NASA Astrophysics Data System (ADS)
Wagner, T.; Kampermann, H.; Bruß, D.
2018-03-01
Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.
GraphMeta: Managing HPC Rich Metadata in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Dong; Chen, Yong; Carns, Philip
High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This ‘rich’ metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
Bounds for percolation thresholds on directed and undirected graphs
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen; Pryadko, Leonid
2015-03-01
Percolation theory is an efficient approach to problems with strong disorder, e.g., in quantum or classical transport, composite materials, and diluted magnets. Recently, the growing role of big data in scientific and industrial applications has led to a renewed interest in graph theory as a tool for describing complex connections in various kinds of networks: social, biological, technological, etc. In particular, percolation on graphs has been used to describe internet stability, spread of contagious diseases and computer viruses; related models describe market crashes and viral spread in social networks. We consider site-dependent percolation on directed and undirected graphs, and present several exact bounds for location of the percolation transition in terms of the eigenvalues of matrices associated with graphs, including the adjacency matrix and the Hashimoto matrix used to enumerate non-backtracking walks. These bounds correspond to a mean field approximation and become asymptotically exact for graphs with no short cycles. We illustrate this convergence numerically by simulating percolation on several families of graphs with different cycle lengths. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.
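A small sketch of the kind of spectral bound mentioned here, under the common mean-field form p_c >= 1/lambda_max(A) with A the adjacency matrix; the graph and its parameters are arbitrary, and the exact bounds in the paper differ in detail.

```python
# Sketch of a mean-field-style spectral bound for percolation on a graph:
# the transition satisfies p_c >= 1 / lambda_max(A), where lambda_max(A) is
# the largest eigenvalue of the adjacency matrix.
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(500, 0.01, seed=7)
A = nx.to_numpy_array(G)
lam_max = np.max(np.linalg.eigvalsh(A))
print("spectral bound on percolation threshold: p_c >=", 1.0 / lam_max)
```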
The Replicator Equation on Graphs
Ohtsuki, Hisashi; Nowak, Martin A.
2008-01-01
We study evolutionary games on graphs. Each player is represented by a vertex of the graph. The edges denote who meets whom. A player can use any one of n strategies. Players obtain a payoff from interaction with all their immediate neighbors. We consider three different update rules, called ‘birth-death’, ‘death-birth’ and ‘imitation’. A fourth update rule, ‘pairwise comparison’, is shown to be equivalent to birth-death updating in our model. We use pair-approximation to describe the evolutionary game dynamics on regular graphs of degree k. In the limit of weak selection, we can derive a differential equation which describes how the average frequency of each strategy on the graph changes over time. Remarkably, this equation is a replicator equation with a transformed payoff matrix. Therefore, moving a game from a well-mixed population (the complete graph) onto a regular graph simply results in a transformation of the payoff matrix. The new payoff matrix is the sum of the original payoff matrix plus another matrix, which describes the local competition of strategies. We discuss the application of our theory to four particular examples, the Prisoner’s Dilemma, the Snow-Drift game, a coordination game and the Rock-Scissors-Paper game. PMID:16860343
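The following sketch integrates a standard replicator equation with a transformed payoff A + B, as the abstract describes for regular graphs; B is left as a placeholder matrix here because its exact form depends on the update rule and the degree k given in the paper.

```python
# Sketch of the transformed replicator dynamics described in the abstract:
# on a regular graph the dynamics reduce to a replicator equation with payoff
# A + B, where B encodes local competition (its form depends on the update rule
# and degree k; here B is only a placeholder for illustration).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[3.0, 0.0],     # toy Prisoner's Dilemma-style payoffs
              [5.0, 1.0]])
B = np.zeros_like(A)          # placeholder local-competition correction

def replicator(t, x, M):
    fitness = M @ x
    avg = x @ fitness
    return x * (fitness - avg)

sol = solve_ivp(replicator, (0.0, 20.0), [0.5, 0.5], args=(A + B,), max_step=0.1)
print("final strategy frequencies:", sol.y[:, -1])
```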
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
Automated visualization of rule-based models
Tapia, Jose-Juan; Faeder, James R.
2017-01-01
Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816
Christensen, Jette; Vallières, André
2016-01-01
"Freedom from animal disease" is an ambiguous concept that may have a different meaning in trade and science. For trade alone, there are different levels of freedom from OIE listed diseases. A country can: be recognized by OIE to be "officially free"; self-declare freedom, with no official recognition by the OIE; or report animal disease as absent (no occurrence) in six-monthly reports. In science, we apply scenario tree models to calculate the probability of a population being free from disease at a given prevalence to provide evidence of freedom from animal disease. Here, we link science with application by describing how a scenario tree model may contribute to a country's claim of freedom from animal disease. We combine the idea of a standardized presentation of scenario tree models for disease freedom and having a similar model for two different animal diseases to suggest that a simple generic model may help veterinary authorities to build and evaluate scenario tree models for disease freedom. Here, we aim to develop a generic scenario tree model for disease freedom that is: animal species specific, population specific, and has a simple structure. The specific objectives were: to explore the levels of freedom described in the OIE Terrestrial Animal Health Code; to describe how scenario tree models may contribute to a country's claim of freedom from animal disease; and to present a generic swine scenario tree model for disease freedom in Canada's domestic (commercial) swine applied to Aujeszky's disease (AD). In particular, to explore how historical survey data, and data mining may affect the probability of freedom and to explore different sampling strategies. Finally, to frame the generic scenario tree model in the context of Canada's claim of freedom from AD. We found that scenario tree models are useful to support a country's claim of freedom either as "recognized officially free" or as part of a self-declaration but the models should not stand alone in a claim. The generic AD scenario tree model demonstrated the benefit of combining three sources of surveillance data and helped to design the surveillance for the next year. The generic AD scenario model is one piece in Canada's self-declaration of freedom from AD. The model is strongly supported by the fact that AD has never been detected in Canada. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Topology polymorphism graph for lung tumor segmentation in PET-CT images.
Cui, Hui; Wang, Xiuying; Zhou, Jianlong; Eberl, Stefan; Yin, Yong; Feng, Dagan; Fulham, Michael
2015-06-21
Accurate lung tumor segmentation is problematic when the tumor boundary or edge, which reflects the advancing edge of the tumor, is difficult to discern on chest CT or PET. We propose a 'topo-poly' graph model to improve identification of the tumor extent. Our model incorporates an intensity graph and a topology graph. The intensity graph provides the joint PET-CT foreground similarity to differentiate the tumor from surrounding tissues. The topology graph is defined on the basis of contour tree to reflect the inclusion and exclusion relationship of regions. By taking into account different topology relations, the edges in our model exhibit topological polymorphism. These polymorphic edges in turn affect the energy cost when crossing different topology regions under a random walk framework, and hence contribute to appropriate tumor delineation. We validated our method on 40 patients with non-small cell lung cancer where the tumors were manually delineated by a clinical expert. The studies were separated into an 'isolated' group (n = 20) where the lung tumor was located in the lung parenchyma and away from associated structures / tissues in the thorax and a 'complex' group (n = 20) where the tumor abutted / involved a variety of adjacent structures and had heterogeneous FDG uptake. The methods were validated using Dice's similarity coefficient (DSC) to measure the spatial volume overlap and Hausdorff distance (HD) to compare shape similarity calculated as the maximum surface distance between the segmentation results and the manual delineations. Our method achieved an average DSC of 0.881 ± 0.046 and HD of 5.311 ± 3.022 mm for the isolated cases and DSC of 0.870 ± 0.038 and HD of 9.370 ± 3.169 mm for the complex cases. Student's t-test showed that our model outperformed the other methods (p-values <0.05).
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
2011-01-01
Background Biology is rapidly becoming a data intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allows both automated integration and data driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web make it possible to deal with complicated heterogeneous data in new and interesting ways. Results This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created a RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions We provided an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and resulting multi-relational directed graph define what is possible and then set the scope of the user requirements. Approaches like SDDM will be critical to the future of data intensive, data-driven science because they automate the process of converting massive data streams into usable knowledge. PMID:22165854
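A minimal sketch of building a small multi-relational directed graph as RDF triples with rdflib, in the spirit of the SDDM scenario graph; the namespace, entities and relations below are invented for illustration.

```python
# Toy multi-relational directed graph as RDF triples (all URIs hypothetical).
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/sddm/")
g = Graph()
g.add((EX.NDM1, EX.isCarriedBy, EX.Klebsiella_pneumoniae))
g.add((EX.NDM1, EX.confersResistanceTo, EX.carbapenems))
g.add((EX.NDM1, EX.mentionedIn, Literal("news report, 2010")))

for s, p, o in g:
    print(s, p, o)
```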
Signalling Network Construction for Modelling Plant Defence Response
Miljkovic, Dragana; Stare, Tjaša; Mozetič, Igor; Podpečan, Vid; Petek, Marko; Witek, Kamil; Dermastia, Marina; Lavrač, Nada; Gruden, Kristina
2012-01-01
Plant defence signalling response against various pathogens, including viruses, is a complex phenomenon. In resistant interaction a plant cell perceives the pathogen signal, transduces it within the cell and performs a reprogramming of the cell metabolism leading to the pathogen replication arrest. This work focuses on signalling pathways crucial for the plant defence response, i.e., the salicylic acid, jasmonic acid and ethylene signal transduction pathways, in the Arabidopsis thaliana model plant. The initial signalling network topology was constructed manually by defining the representation formalism, encoding the information from public databases and literature, and composing a pathway diagram. The manually constructed network structure consists of 175 components and 387 reactions. In order to complement the network topology with possibly missing relations, a new approach to automated information extraction from biological literature was developed. This approach, named Bio3graph, allows for automated extraction of biological relations from the literature, resulting in a set of (component1, reaction, component2) triplets and composing a graph structure which can be visualised, compared to the manually constructed topology and examined by the experts. Using a plant defence response vocabulary of components and reaction types, Bio3graph was applied to a set of 9,586 relevant full text articles, resulting in 137 newly detected reactions between the components. Finally, the manually constructed topology and the new reactions were merged to form a network structure consisting of 175 components and 524 reactions. The resulting pathway diagram of plant defence signalling represents a valuable source for further computational modelling and interpretation of omics data. The developed Bio3graph approach, implemented as an executable language processing and graph visualisation workflow, is publicly available at http://ropot.ijs.si/bio3graph/ and can be utilised for modelling other biological systems, given that an adequate vocabulary is provided. PMID:23272172
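A toy illustration (not the Bio3graph pipeline) of extracting (component, reaction, component) triplets from text with a small hand-made vocabulary; the components and reaction verbs are placeholders.

```python
# Toy triplet extraction from sentences using a tiny vocabulary of components
# and reaction verbs (real systems use full NLP pipelines).
import re

components = {"salicylic acid", "PR-1", "NPR1", "ethylene"}
reactions = {"activates", "induces", "inhibits"}

def extract_triplets(sentence):
    triplets = []
    for c1 in components:
        for verb in reactions:
            for c2 in components:
                if c1 != c2 and re.search(
                        rf"{re.escape(c1)}\s+{verb}\s+{re.escape(c2)}",
                        sentence, re.I):
                    triplets.append((c1, verb, c2))
    return triplets

print(extract_triplets("NPR1 activates PR-1 in the salicylic acid pathway."))
```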
Assessment of tautomer distribution using the condensed reaction graph approach
NASA Astrophysics Data System (ADS)
Gimadiev, T. R.; Madzhidov, T. I.; Nugmanov, R. I.; Baskin, I. I.; Antipin, I. S.; Varnek, A.
2018-03-01
We report the first direct QSPR modeling of equilibrium constants of tautomeric transformations (log K_T) in different solvents and at different temperatures, which does not require intermediate assessment of acidity (basicity) constants for all tautomeric forms. The key step of the modeling consisted in merging two tautomers into a single molecular graph ("condensed reaction graph"), which makes it possible to compute molecular descriptors characterizing the entire equilibrium. The support vector regression method was used to build the models. The training set consisted of 785 transformations belonging to 11 types of tautomeric reactions with equilibrium constants measured in different solvents and at different temperatures. The models obtained perform well both in cross-validation (Q2 = 0.81, RMSE = 0.7 log K_T units) and on two external test sets. Benchmarking studies demonstrate that our models outperform results obtained with DFT B3LYP/6-311++G(d,p) and ChemAxon Tautomerizer applicable only in water at room temperature.
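Assuming each tautomeric equilibrium has already been encoded as a single descriptor vector (the condensed reaction graph step, not reproduced here), fitting log K_T with support vector regression could look like the sketch below; the descriptors and target values are synthetic.

```python
# Sketch: support vector regression on CGR-derived descriptors for log K_T.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((200, 50))                                    # hypothetical descriptors
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)   # stand-in log K_T values

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```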
Modeling and optimum time performance for concurrent processing
NASA Technical Reports Server (NTRS)
Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy
1988-01-01
The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.
Localisation in a Growth Model with Interaction
NASA Astrophysics Data System (ADS)
Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.
2018-05-01
This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
Voss, Frank D.; Mastin, Mark C.
2012-01-01
A database was developed to automate model execution and to provide users with Internet access to voluminous data products ranging from summary figures to model output timeseries. Database-enabled Internet tools were developed to allow users to create interactive graphs of output results based on their analysis needs. For example, users were able to create graphs by selecting time intervals, greenhouse gas emission scenarios, general circulation models, and specific hydrologic variables.
Topological structure of dictionary graphs
NASA Astrophysics Data System (ADS)
Fukś, Henryk; Krzemiński, Mark
2009-09-01
We investigate the topological structure of the subgraphs of dictionary graphs constructed from WordNet and Moby thesaurus data. In the process of learning a foreign language, the learner knows only a subset of all words of the language, corresponding to a subgraph of a dictionary graph. When this subgraph grows with time, its topological properties change. We introduce the notion of the pseudocore and argue that the growth of the vocabulary roughly follows decreasing pseudocore numbers—that is, one first learns words with a high pseudocore number followed by smaller pseudocores. We also propose an alternative strategy for vocabulary growth, involving decreasing core numbers as opposed to pseudocore numbers. We find that as the core or pseudocore grows in size, the clustering coefficient first decreases, then reaches a minimum and starts increasing again. The minimum occurs when the vocabulary reaches a size between 10^3 and 10^4. A simple model exhibiting similar behavior is proposed. The model is based on a generalized geometric random graph. Possible implications for language learning are discussed.
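A small sketch of the kind of measurement described here: tracking the average clustering coefficient of subgraphs taken along decreasing core numbers (plain k-cores; the pseudocore is not implemented), with a Barabási-Albert graph standing in for a dictionary graph.

```python
# Sketch: clustering coefficient of nested k-core subgraphs as a proxy for
# vocabulary growth along decreasing core numbers.
import networkx as nx

G = nx.barabasi_albert_graph(2000, 3, seed=0)   # stand-in for a dictionary graph
core = nx.core_number(G)
for k in sorted(set(core.values()), reverse=True):
    sub = G.subgraph([n for n, c in core.items() if c >= k])
    print(f"k={k:2d}  size={sub.number_of_nodes():5d}  "
          f"clustering={nx.average_clustering(sub):.3f}")
```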
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
2017-03-21
We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
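A minimal sketch of a geometric random walk graph kernel between two local-environment adjacency matrices, the basic ingredient named in this abstract; edge weighting and the other GRAPE details are omitted, and the matrices are toy examples.

```python
# Geometric random walk kernel via the direct (Kronecker) product graph.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Sum of the resolvent of the product-graph adjacency; lam must satisfy
    lam < 1 / lambda_max(kron(A1, A2)) for the series to converge."""
    W = np.kron(A1, A2)
    n = W.shape[0]
    resolvent = np.linalg.inv(np.eye(n) - lam * W)
    return resolvent.sum()

A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # toy environments
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(random_walk_kernel(A1, A2))
```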
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Joslyn, Cliff A.; Chappell, Alan R.
As semantic datasets grow to be very large and divergent, there is a need to identify and exploit their inherent semantic structure for discovery and optimization. Towards that end, we present here a novel methodology to identify the semantic structures inherent in an arbitrary semantic graph dataset. We first present the concept of an extant ontology as a statistical description of the semantic relations present amongst the typed entities modeled in the graph. This serves as a model of the underlying semantic structure to aid in discovery and visualization. We then describe a method of ontological scaling in which the ontology is employed as a hierarchical scaling filter to infer different resolution levels at which the graph structures are to be viewed or analyzed. We illustrate these methods on three large and publicly available semantic datasets containing more than one billion edges each. Keywords: Semantic Web; Visualization; Ontology; Multi-resolution Data Mining.
Exploring the evolution of London's street network in the information space: A dual approach
NASA Astrophysics Data System (ADS)
Masucci, A. Paolo; Stanilov, Kiril; Batty, Michael
2014-01-01
We study the growth of London's street network in its dual representation, as the city has evolved over the past 224 years. The dual representation of a planar graph is a content-based network, where each node is a set of edges of the planar graph and represents a transportation unit in the so-called information space, i.e., the space where information is handled in order to navigate through the city. First, we discuss a novel hybrid technique to extract dual graphs from planar graphs, called the hierarchical intersection continuity negotiation principle. Then we show that the growth of the network can be analytically described by logistic laws and that the topological properties of the network are governed by robust log-normal distributions characterizing the network's connectivity and small-world properties that are consistent over time. Moreover, we find that the double-Pareto-like distributions for the connectivity emerge for major roads and can be modeled via a stochastic content-based network model using simple space-filling principles.
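As a simple illustration of the dual (information-space) representation, the sketch below takes the line graph of a toy planar street grid; the paper's hierarchical intersection continuity negotiation first merges segments into named roads, which is not reproduced here.

```python
# The simplest "dual" of a planar street graph is its line graph, where each
# street segment becomes a node and nodes are linked if the segments share
# an intersection.
import networkx as nx

primal = nx.grid_2d_graph(5, 5)          # toy planar street grid
dual = nx.line_graph(primal)
print("primal:", primal.number_of_nodes(), "nodes,", primal.number_of_edges(), "edges")
print("dual:  ", dual.number_of_nodes(), "nodes,", dual.number_of_edges(), "edges")
```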
Treur, Maarten; Heeg, Bart; Möller, Hans-Jürgen; Schmeding, Annette; van Hout, Ben
2009-02-18
As schizophrenia patients are typically suspicious of, or hostile to, change, they may be reluctant to accept generic substitution, possibly affecting compliance. This may counteract drug cost savings due to less symptom control and increased hospitalization risk. Although compliance losses following generic substitution have not been quantified so far, one can estimate the possible health-economic consequences. The current study aims to do so by considering the case of risperidone in Germany. An existing DES model was adapted to compare staying on branded risperidone with generic substitution. Differences include the probability of non-compliance and medication costs. Incremental probability of non-compliance after generic substitution was varied between 2.5% and 10%, while generic medication costs were assumed to be 40% lower. Effect of medication price was assessed as well as the effect of applying compliance losses to all treatment settings. The probability of staying on branded risperidone being cost-effective was calculated for various outcomes of a hypothetical study that would investigate non-compliance following generic substitution of risperidone. If the incremental probability of non-compliance after generic substitution is 2.5%, 5.0%, 7.5% and 10% respectively, incremental effects of staying on branded risperidone are 0.004, 0.007, 0.011 and 0.015 Quality Adjusted Life Years (QALYs). Incremental costs are €757, €343, −€123 and −€554 respectively. Benefits of staying on branded risperidone include improved symptom control and fewer hospitalizations. If generic substitution results in a 5.2% higher probability of non-compliance, the model predicts staying on branded risperidone to be cost-effective (NICE threshold of 30,000 per QALY gained). Compliance losses of more than 6.9% make branded risperidone the dominant alternative. Results are sensitive to the locations at which compliance loss is applied and the price of generic risperidone. The probability that staying on branded risperidone is cost-effective would increase with larger compliance differences and more patients included in the hypothetical study. The model predicts that it is cost-effective to keep a patient with schizophrenia in Germany on branded risperidone instead of switching him/her to generic risperidone (assuming a 40% reduction in medication costs), if the incremental probability of becoming non-compliant after generic substitution exceeds 5.2%.
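A worked check of the incremental cost-effectiveness figures quoted in the abstract, computing ICER = incremental cost / incremental QALYs for each assumed non-compliance increase:

```python
# ICER check using the incremental cost and QALY values quoted in the abstract.
noncompliance = [0.025, 0.050, 0.075, 0.100]
inc_qalys     = [0.004, 0.007, 0.011, 0.015]
inc_costs     = [757.0, 343.0, -123.0, -554.0]   # euros, from the abstract

for nc, q, c in zip(noncompliance, inc_qalys, inc_costs):
    if c <= 0:
        verdict = "branded dominates (cheaper and more effective)"
    else:
        verdict = f"ICER = {c / q:,.0f} EUR/QALY"
    print(f"non-compliance +{nc:.1%}: {verdict}")
```

Consistent with the abstract, the ICER falls below a 30,000 per QALY threshold only once the assumed compliance loss exceeds roughly 5%.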
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
Stability and dynamical properties of material flow systems on random networks
NASA Astrophysics Data System (ADS)
Anand, K.; Galla, T.
2009-04-01
The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, Small-World Networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
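A sketch in the spirit of this abstract: draw random input-output couplings on an Erdős-Rényi graph and read off linear stability from the largest real part of the eigenvalues; the coupling scale and self-regulation term are arbitrary choices.

```python
# Random interaction matrix on an Erdos-Renyi graph; the system is linearly
# stable if all eigenvalues have negative real part (diagonal models local
# relaxation / self-regulation).
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
G = nx.erdos_renyi_graph(100, 0.05, seed=5)
A = nx.to_numpy_array(G)
J = A * rng.normal(0.0, 0.3, size=A.shape)   # random input-output couplings
np.fill_diagonal(J, -1.0)                    # self-regulation
max_re = np.max(np.linalg.eigvals(J).real)
print("stable" if max_re < 0 else "unstable", "(max Re eigenvalue =", max_re, ")")
```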
A preliminary study on atrial epicardial mapping signals based on Graph Theory.
Sun, Liqian; Yang, Cuiwei; Zhang, Lin; Chen, Ying; Wu, Zhong; Shao, Jun
2014-07-01
In order to get a better understanding of atrial fibrillation, we introduced a method based on Graph Theory to interpret the relations of different parts of the atria. Atrial electrograms under sinus rhythm and atrial fibrillation were collected from eight living mongrel dogs with cholinergic AF model. These epicardial signals were acquired from 95 unipolar electrodes attached to the surface of the atria and four pulmonary veins. Then, we analyzed the electrode correlations using Graph Theory. The topology, the connectivity and the parameters of graphs during different rhythms were studied. Our results showed that the connectivity of graphs varied from sinus rhythm to atrial fibrillation and there were parameter gradients in various parts of the atria. The results provide spatial insight into the interaction between different parts of the atria and the method may have its potential for studying atrial fibrillation. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
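An illustrative sketch (not the study's actual processing) of building a graph over electrodes by thresholding pairwise signal correlation and reading off simple graph parameters; the signals and threshold below are synthetic.

```python
# Toy electrode-correlation graph: nodes are electrodes, edges connect pairs
# whose signal correlation exceeds a hypothetical threshold.
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
signals = rng.standard_normal((95, 2000))        # 95 electrodes x samples (synthetic)
corr = np.corrcoef(signals)

G = nx.Graph()
G.add_nodes_from(range(95))
for i in range(95):
    for j in range(i + 1, 95):
        if abs(corr[i, j]) > 0.05:               # hypothetical threshold
            G.add_edge(i, j, weight=abs(corr[i, j]))

print("edges:", G.number_of_edges(),
      " mean degree:", 2 * G.number_of_edges() / 95,
      " clustering:", nx.average_clustering(G))
```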
Projected power iteration for network alignment
NASA Astrophysics Data System (ADS)
Onaran, Efe; Villar, Soledad
2017-08-01
The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges are matched. This problem appears in many scientific problems (like the study of protein-protein interactions) and it is very closely related to the quadratic assignment problem which has graph isomorphism, traveling salesman and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, the recent work by Feizi and collaborators introduces EigenAlign, a fast spectral method with convergence guarantees for Erdős-Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We numerically show it improves the recovery rates of EigenAlign and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
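A heavily simplified sketch of a projected power iteration for graph matching: alternate X <- A1 X A2 with projection onto permutation matrices via the Hungarian algorithm. The published Projected Power Alignment differs in its initialization and similarity construction.

```python
# Simplified projected power iteration for aligning two graphs.
import numpy as np
import networkx as nx
from scipy.optimize import linear_sum_assignment

def projected_power_alignment(A1, A2, iters=20):
    n = A1.shape[0]
    X = np.ones((n, n)) / n
    for _ in range(iters):
        M = A1 @ X @ A2
        row, col = linear_sum_assignment(-M)     # project onto permutation matrices
        X = np.zeros_like(M)
        X[row, col] = 1.0
    return X

G = nx.erdos_renyi_graph(30, 0.2, seed=3)
A1 = nx.to_numpy_array(G)
perm = np.random.default_rng(3).permutation(30)
A2 = A1[np.ix_(perm, perm)]                      # a relabelled copy of the same graph
X = projected_power_alignment(A1, A2)
print("matched edges:", int(np.sum(A1 * (X @ A2 @ X.T))) // 2)
```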
Chasin, Rachel; Rumshisky, Anna; Uzuner, Ozlem; Szolovits, Peter
2014-01-01
Objective To evaluate state-of-the-art unsupervised methods on the word sense disambiguation (WSD) task in the clinical domain. In particular, to compare graph-based approaches relying on a clinical knowledge base with bottom-up topic-modeling-based approaches. We investigate several enhancements to the topic-modeling techniques that use domain-specific knowledge sources. Materials and methods The graph-based methods use variations of PageRank and distance-based similarity metrics, operating over the Unified Medical Language System (UMLS). Topic-modeling methods use unlabeled data from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC II) database to derive models for each ambiguous word. We investigate the impact of using different linguistic features for topic models, including UMLS-based and syntactic features. We use a sense-tagged clinical dataset from the Mayo Clinic for evaluation. Results The topic-modeling methods achieve 66.9% accuracy on a subset of the Mayo Clinic's data, while the graph-based methods only reach the 40–50% range, with a most-frequent-sense baseline of 56.5%. Features derived from the UMLS semantic type and concept hierarchies do not produce a gain over bag-of-words features in the topic models, but identifying phrases from UMLS and using syntax does help. Discussion Although topic models outperform graph-based methods, semantic features derived from the UMLS prove too noisy to improve performance beyond bag-of-words. Conclusions Topic modeling for WSD provides superior results in the clinical domain; however, integration of knowledge remains to be effectively exploited. PMID:24441986
Entropy, complexity, and Markov diagrams for random walk cancer models
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-01-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
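A small sketch of the entropy calculation described here: compute the steady-state distribution of a Markov transition matrix over anatomical sites and its Shannon entropy; the 4-site matrix is invented.

```python
# Steady state and Shannon entropy of a toy Markov transition matrix over sites.
import numpy as np

P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.3, 0.4, 0.2, 0.1],
              [0.1, 0.3, 0.5, 0.1],
              [0.2, 0.2, 0.2, 0.4]])

eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()                                # steady-state distribution
entropy = -np.sum(pi * np.log2(pi))
print("steady state:", np.round(pi, 3), " entropy (bits):", round(float(entropy), 3))
```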
Area law from loop quantum gravity
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Hung, Ling-Yan; Marcianò, Antonino; Zhang, Mingyi
2018-03-01
We explore the constraints following from requiring the area law in the entanglement entropy in the context of loop quantum gravity. We find a unique solution to the single-link wave function in the large j limit, believed to be appropriate in the semiclassical limit. We then generalize our considerations to multilink coherent states, and find that the area law is preserved very generically using our single-link wave function as a building block. Finally, we develop the framework that generates families of multilink states that preserve the area law while avoiding macroscopic entanglement, the space-time analogue of "Schrödinger's cat." We note that these states, defined on a given set of graphs, are the ground states of some local Hamiltonian that can be constructed explicitly. This can potentially shed light on the construction of the appropriate Hamiltonian constraints in the LQG framework.
Agudelo, Maria; Rodriguez, Carlos A; Zuluaga, Andres F; Vesga, Omar
2015-02-01
After demonstrating with diverse intravenous antibacterials that pharmaceutical equivalence (PE) does not predict therapeutic equivalence, we tested a single generic product of piperacillin/tazobactam (TZP) in terms of PE, pharmacokinetics and in vitro/vivo pharmacodynamics against several pathogens in neutropenic mouse thigh, lung and brain infection models. A generic product was compared head-to-head against the innovator. PE was evaluated by microbiological assay. Single-dose serum pharmacokinetics were determined in infected mice, and the MIC/MBC were determined by broth microdilution. In vivo experiments were done in a blind fashion. Reproducibility was tested on different days using different infecting organisms and animal models. Neutropenic MPF mice were infected in the thighs with Staphylococcus aureus GRP-0057 or Pseudomonas aeruginosa PA01 and in the lungs or brain with Klebsiella pneumoniae ATCC 10031. Treatment started 2 h (thigh and brain) or 14 h (lung) after infection and was administered every 3 h over 24 h (thigh and lung) or 48 h (brain). Both products exhibited the same MIC/MBC against each strain, yielded overlaid curves in the microbiological assay (P>0.21) and were bioequivalent (IC90 83-117% for AUC test/reference ratio). In vivo, the generic product and innovator were again indistinguishable in all models and against the different bacterial pathogens involved. The relevance of these neutropenic murine models of infection was established by demonstrating their accuracy to predict the biological response following simultaneous treatment with a generic product or the innovator of TZP. Therapeutic equivalence of the generic product was proved in every model and against different pathogens. Copyright © 2014 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.
Nonequilibrium thermodynamics of the shear-transformation-zone model
NASA Astrophysics Data System (ADS)
Luo, Alan M.; Öttinger, Hans Christian
2014-02-01
The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.
Di Mascolo, Maria; Gouin, Alexia
2013-03-01
The work presented here aims to improve the performance of sterilization services in hospitals. We carried out a survey in a large number of health establishments in the Rhône-Alpes region in France. Based on the results of this survey and a detailed study of a specific service, we have built a generic model. The generic nature of the model relies on a common structure with a high level of detail. This model can be used to improve the performance of a specific sterilization service and/or to dimension its resources. It can also serve for quantitative comparison of performance indicators of various sterilization services.
GeoSciGraph: An Ontological Framework for EarthCube Semantic Infrastructure
NASA Astrophysics Data System (ADS)
Gupta, A.; Schachne, A.; Condit, C.; Valentine, D.; Richard, S.; Zaslavsky, I.
2015-12-01
The CINERGI (Community Inventory of EarthCube Resources for Geosciences Interoperability) project compiles an inventory of a wide variety of earth science resources including documents, catalogs, vocabularies, data models, data services, process models, information repositories, domain-specific ontologies etc. developed by research groups and data practitioners. We have developed a multidisciplinary semantic framework called GeoSciGraph for the semantic integration of earth science resources. An integrated ontology is constructed with Basic Formal Ontology (BFO) as its upper ontology and currently ingests multiple component ontologies including the SWEET ontology, GeoSciML's lithology ontology, Tematres controlled vocabulary server, GeoNames, GCMD vocabularies on equipment, platforms and institutions, software ontology, CUAHSI hydrology vocabulary, the environmental ontology (ENVO) and several more. These ontologies are connected through bridging axioms; GeoSciGraph identifies lexically close terms and creates equivalence class or subclass relationships between them after human verification. GeoSciGraph allows a community to create community-specific customizations of the integrated ontology. GeoSciGraph uses Neo4j, a graph database that can hold several billion concepts and relationships. GeoSciGraph provides a number of REST services that can be called by other software modules like the CINERGI information augmentation pipeline. 1) Vocabulary services are used to find exact and approximate terms, term categories (community-provided clusters of terms e.g., measurement-related terms or environmental material related terms), synonyms, term definitions and annotations. 2) Lexical services are used for text parsing to find entities, which can then be included into the ontology by a domain expert. 3) Graph services provide the ability to perform traversal centric operations e.g., finding paths and neighborhoods which can be used to perform ontological operations like computing transitive closure (e.g., finding all subclasses of rocks). 4) Annotation services are used to adorn an arbitrary block of text (e.g., from a NOAA catalog record) with ontology terms. The system has been used to ontologically integrate diverse sources like Science-base, NOAA records, PETDB.
Characterizing Learning Environments Capable of Nurturing Generic Capabilities in Higher Education
ERIC Educational Resources Information Center
Kember, David; Leung, Doris Y. P.; Ma, Rosa S. F.
2007-01-01
There has been wide recognition that today's graduates need the type of generic capabilities necessary for lifelong learning. However, the mechanism by which universities can develop these generic skills is not clearly established. This study aimed to investigate the mechanism for their development. Structural equation modeling (SEM) was used to…
Forecasting the (un)productivity of the 2014 M 6.0 South Napa aftershock sequence
Llenos, Andrea L.; Michael, Andrew J.
2017-01-01
The 24 August 2014 Mw 6.0 South Napa mainshock produced fewer aftershocks than expected for a California earthquake of its magnitude. In the first 4.5 days, only 59 M≥1.8 aftershocks occurred, the largest of which was an M 3.9 that happened a little over two days after the mainshock. We investigate the aftershock productivity of the South Napa sequence and compare it with other M≥5.5 California strike‐slip mainshock–aftershock sequences. While the productivity of the South Napa sequence is among the lowest, northern California mainshocks generally have fewer aftershocks than mainshocks further south, although the productivities vary widely in both regions. An epidemic‐type aftershock sequence (ETAS) model (Ogata, 1988) fit to Napa seismicity from 1980 to 23 August 2014 fits the sequence well and suggests that low‐productivity sequences are typical of this area. Utilizing regional variations in productivity could improve operational earthquake forecasting (OEF) by improving the model used immediately after the mainshock. We show this by comparing the daily rate of M≥2 aftershocks to forecasts made with the generic California model (Reasenberg and Jones, 1989; hereafter, RJ89), RJ89 models with productivity updated daily, a generic California ETAS model, an ETAS model based on premainshock seismicity, and ETAS models updated daily following the mainshock. RJ89 models for which only the productivity is updated provide better forecasts than the generic RJ89 California model, and the Napa‐specific ETAS models forecast the aftershock rates more accurately than either generic model. Therefore, forecasts that use localized initial parameters and that rapidly update the productivity may be better for OEF than using a generic model and/or updating all parameters.
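A sketch of a modified-Omori (Reasenberg-Jones style) daily aftershock rate, lambda(t, m) = 10^(a + b(M - m)) / (t + c)^p; the parameter values below are placeholders, not the generic California or Napa-specific values discussed in the abstract.

```python
# Expected daily rate of aftershocks above magnitude m after a mainshock of
# magnitude M, using a modified Omori / Reasenberg-Jones style rate with
# hypothetical parameters (a, b, c, p).
import numpy as np

def rj_rate(t_days, m, mainshock_mag, a=-1.67, b=1.0, c=0.05, p=1.08):
    return 10 ** (a + b * (mainshock_mag - m)) / (t_days + c) ** p

t = np.arange(1, 8)
print("expected M>=2 aftershocks per day, days 1-7:",
      np.round(rj_rate(t, 2.0, 6.0), 2))
```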