Science.gov

Sample records for random graph model

  1. Threshold Graph Limits and Random Threshold Graphs

    PubMed Central

    Diaconis, Persi; Holmes, Susan; Janson, Svante

    2010-01-01

    We study the limit theory of large threshold graphs and apply this to a variety of models for random threshold graphs. The results give a nice set of examples for the emerging theory of graph limits. PMID:20811581

  2. Asymptotic Structure of Constrained Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Zhu, Lingjiong

    2017-03-01

    In this paper, we study exponential random graph models subject to certain constraints. We obtain some general results about the asymptotic structure of the model. We show that there exist non-trivial regions in the phase plane where the asymptotic structure is uniform, and other non-trivial regions where it is non-uniform. We obtain more refined results for the star model, and in particular for the two-star model, for which a sharp transition from uniform to non-uniform structure, a stationary point, and phase transitions are established.

  3. Exponential-family random graph models for valued networks

    PubMed Central

    Krivitsky, Pavel N.

    2013-01-01

    Exponential-family random graph models (ERGMs) provide a principled and flexible way to model and simulate features common in social networks, such as propensities for homophily, mutuality, and friend-of-a-friend triad closure, through choice of model terms (sufficient statistics). However, those ERGMs modeling the more complex features have, to date, been limited to binary data: presence or absence of ties. Thus, analysis of valued networks, such as those where counts, measurements, or ranks are observed, has necessitated dichotomizing them, losing information and introducing biases. In this work, we generalize ERGMs to valued networks. Focusing on modeling counts, we formulate an ERGM for networks whose ties are counts and discuss issues that arise when moving beyond the binary case. We introduce model terms that generalize and model common social network features for such data and apply these methods to a network dataset whose values are counts of interactions. PMID:24678374

  4. Application of statistical physics to random graph models of networks

    NASA Astrophysics Data System (ADS)

    Sreenivasan, Sameet

    This thesis deals with the application of concepts from statistical physics to the understanding of static and dynamical properties of random networks. The classical paradigm for random networks is the Erdős-Rényi (ER) random graph model, denoted G(N, p), in which a network of N nodes is created by placing a link between each of the N(N−1)/2 pairs of nodes with probability p. The probability distribution of the number of links per node, or the degree distribution, is Poissonian in the limit of large network size. Recent investigations of the structure of networks such as the internet have revealed a power law in the degree distribution of the network. The question then arises as to how the presence of this power law affects the behavior of static and dynamic properties of a network and how this behavior differs from that seen in ER random graphs. In general, irrespective of other details of their structure, networks having a power law degree distribution are known as "scale-free" (SF) networks. In this thesis, we focus on the simplest model of SF networks, known as the configuration model. In the first chapter, we introduce ER and SF networks, and define central concepts that will be used throughout this thesis. In the second chapter we address the problem of optimal paths on weighted networks, formulated as follows. On a network with weighted links, where link weights represent transit times along the link, we define the optimal path as the path between two nodes with the least total transit time. We study the scaling of the optimal path length ℓ_opt as a function of the network size N, and as a function of the parameters in the weight distribution. We show that when link weights are highly disordered, only paths on the "minimal spanning tree" (the tree with the lowest total link weight) are used, and this leads to a crossover between two regimes of scaling behavior for ℓ_opt. For a simple distribution of link weights, we derive for ER
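
    To make the contrast above concrete, the sketch below (illustrative parameters, assuming networkx and numpy are available) samples an ER graph G(N, p) and a configuration-model graph with a truncated power-law degree sequence and reports summary degree statistics.

```python
# Sketch: degree statistics of an Erdos-Renyi graph vs. a scale-free
# configuration-model graph (illustrative parameters; networkx/numpy assumed).
import numpy as np
import networkx as nx
from collections import Counter

N = 10_000
p = 4.0 / N                      # mean degree ~ 4 for the ER graph
er = nx.gnp_random_graph(N, p, seed=1)

# Power-law degree sequence k^(-gamma), truncated to [k_min, k_max].
gamma, k_min, k_max = 2.5, 2, int(np.sqrt(N))
ks = np.arange(k_min, k_max + 1)
weights = ks ** (-gamma)
weights = weights / weights.sum()
rng = np.random.default_rng(1)
degree_seq = rng.choice(ks, size=N, p=weights)
if degree_seq.sum() % 2:         # the configuration model needs an even degree sum
    degree_seq[0] += 1
sf = nx.configuration_model(degree_seq.tolist(), seed=1)

for name, g in [("ER", er), ("SF", sf)]:
    hist = Counter(d for _, d in g.degree())
    print(name, "mean degree:", sum(d for _, d in g.degree()) / N,
          "max degree:", max(hist))
```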

  5. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach to the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
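
    The abstract does not reproduce the algorithm itself; as a baseline for what such samplers do, here is a hypothetical sketch of a plain Metropolis MCMC for a toy ERGM with edge and triangle statistics (one dyad toggled per step). It is not the auxiliary-parameter method of the paper; all parameters and the use of networkx are assumptions.

```python
# Sketch: plain Metropolis sampling from a toy ERGM
#   P(graph) ∝ exp(theta_edge * #edges + theta_tri * #triangles)
# by toggling one dyad per step. Illustrative only; not the paper's method.
import math
import random
import networkx as nx

def ergm_sample(n, theta_edge, theta_tri, steps, seed=0):
    random.seed(seed)
    g = nx.empty_graph(n)
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        # Change in the sufficient statistics if the dyad (i, j) is toggled.
        common = len(set(g[i]) & set(g[j]))          # triangles through (i, j)
        sign = -1 if g.has_edge(i, j) else +1
        delta = theta_edge * sign + theta_tri * sign * common
        if random.random() < math.exp(min(0.0, delta)):   # Metropolis acceptance
            if sign > 0:
                g.add_edge(i, j)
            else:
                g.remove_edge(i, j)
    return g

g = ergm_sample(n=30, theta_edge=-2.0, theta_tri=0.5, steps=20_000)
print(g.number_of_edges(), sum(nx.triangles(g).values()) // 3)
```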

  6. Local dependence in random graph models: characterization, properties and statistical inference.

    PubMed

    Schweinberger, Michael; Handcock, Mark S

    2015-06-01

    Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with 'ground truth'.

  7. Local dependence in random graph models: characterization, properties and statistical inference

    PubMed Central

    Schweinberger, Michael; Handcock, Mark S.

    2015-01-01

    Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142

  8. Synchronizability of random rectangular graphs

    SciTech Connect

    Estrada, Ernesto; Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded in hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.

  9. Random broadcast on random geometric graphs

    SciTech Connect

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs in the regimes where, with high probability, (i) the RGG is connected, or (ii) the RGG contains a giant component. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
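
    A direct simulation of the push model described above on an RGG, assuming networkx; the size and radius are illustrative, with the radius chosen above the connectivity threshold.

```python
# Sketch: push broadcast ("each informed node informs one random neighbor per
# round") on a random geometric graph. Illustrative parameters; networkx assumed.
import random
import networkx as nx

def push_broadcast_rounds(g, seed=0):
    random.seed(seed)
    nodes = list(g.nodes())
    informed = {random.choice(nodes)}
    rounds = 0
    while len(informed) < len(nodes):
        new = set()
        for u in informed:
            nbrs = list(g[u])
            if nbrs:
                new.add(random.choice(nbrs))
        informed |= new
        rounds += 1
    return rounds

n, r = 2000, 0.06                      # radius above the connectivity threshold
rgg = nx.random_geometric_graph(n, r, seed=1)
giant = rgg.subgraph(max(nx.connected_components(rgg), key=len)).copy()
print("rounds to inform the giant component:", push_broadcast_rounds(giant))
```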

  10. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
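
    The abstract does not give the generation algorithm itself; as a related illustration only, the sketch below samples a sparse inhomogeneous random graph with a product kernel (a Chung-Lu-type model) using networkx's expected-degree generator, which likewise avoids the naive quadratic pairwise loop. The heavy-tailed weights are an illustrative choice.

```python
# Sketch: sparse inhomogeneous random graph with a product kernel
# p_uv ≈ w_u * w_v / sum(w) (Chung-Lu model). Not the paper's algorithm;
# illustrative weights. Assumes networkx/numpy.
import numpy as np
import networkx as nx

n = 100_000
rng = np.random.default_rng(2)
w = (rng.pareto(2.5, size=n) + 1.0) * 3.0      # heavy-tailed expected degrees

g = nx.expected_degree_graph(w.tolist(), selfloops=False, seed=2)
print("nodes:", g.number_of_nodes(),
      "edges:", g.number_of_edges(),
      "mean degree:", 2 * g.number_of_edges() / n)
```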

  11. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  12. Effect of disorder on condensation in the lattice gas model on a random graph

    NASA Astrophysics Data System (ADS)

    Handford, Thomas P.; Dear, Alexander; Pérez-Reche, Francisco J.; Taraskin, Sergei N.

    2014-07-01

    The lattice gas model of condensation in a heterogeneous pore system, represented by a random graph of cells, is studied using an exact analytical solution. A binary mixture of pore cells with different coordination numbers is shown to exhibit two phase transitions as a function of chemical potential in a certain temperature range. Heterogeneity in interaction strengths is demonstrated to reduce the critical temperature and, for large enough degrees of disorder, to divide the cells into ones which are, on average, either occupied or unoccupied. Despite treating the pore space loops in a simplified manner, the random-graph model provides a good description of condensation in porous structures containing loops. This is illustrated by considering capillary condensation in a structural model of the mesoporous silica SBA-15.

  13. Approximating the XY model on a random graph with a q -state clock model

    NASA Astrophysics Data System (ADS)

    Lupo, Cosimo; Ricci-Tersenghi, Federico

    2017-02-01

    Numerical simulations of spin glass models with continuous variables pose the problem of a reliable but efficient discretization of such variables. In particular, the main question is how fast physical observables computed in the discretized model converge toward those of the continuous model as the number of states of the discretized model increases. We answer this question for the XY model and its discretization, the q-state clock model, in the mean-field setting provided by random graphs. It is found that the convergence of physical observables is exponentially fast in the number q of states of the clock model, thus allowing a very reliable approximation of the XY model with a rather small number of states. Furthermore, this exponential convergence is found to be independent of the disorder distribution used. Only at T = 0 is the convergence slightly slower (stretched exponential). Thanks to the analytical solution of the q-state clock model, we compute accurate phase diagrams in the temperature versus disorder strength plane. We find that, at zero temperature, spontaneous replica symmetry breaking takes place for any amount of disorder, even an infinitesimal one. We also study the one-step replica symmetry breaking (1RSB) solution in the low-temperature spin glass phase.

  14. Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs

    NASA Astrophysics Data System (ADS)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa

    2016-11-01

    We study the critical behavior of inhomogeneous versions of the Curie-Weiss model, where the coupling constant J_ij(β) for the edge ij on the complete graph is given by J_ij(β) = β w_i w_j / (Σ_{k∈[N]} w_k). We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises [with inverse temperature β replaced by sinh(β)] from the annealed Ising model on the generalized random graph. We assume that the vertex weights (w_i)_{i∈[N]} are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, the critical behavior is the same as for the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights diverges, and the weights satisfy an asymptotic power law with exponent τ ∈ (3,5), the critical exponents depend sensitively on τ. In addition, at criticality, the total spin S_N satisfies that S_N / N^{(τ−2)/(τ−1)} converges in law to a limiting random variable whose distribution we explicitly characterize.

  15. Antiferromagnetic Potts Model on the Erdős-Rényi Random Graph

    NASA Astrophysics Data System (ADS)

    Contucci, Pierluigi; Dommers, Sander; Giardinà, Cristian; Starr, Shannon

    2013-10-01

    We study the antiferromagnetic Potts model on the Poissonian Erdős-Rényi random graph. By identifying a suitable interpolation structure and an extended variational principle, together with a positive-temperature second-moment analysis, we prove the existence of a phase transition at a positive critical temperature. Upper and lower bounds on the critical temperature are obtained from the stability analysis of the replica symmetric solution (recovered in the framework of Derrida-Ruelle probability cascades) and from an entropy positivity argument.

  16. Random Graphs Associated to Some Discrete and Continuous Time Preferential Attachment Models

    NASA Astrophysics Data System (ADS)

    Pachon, Angelica; Polito, Federico; Sacerdote, Laura

    2016-03-01

    We give a common description of the Simon, Barabási-Albert, II-PA and Price growth models by introducing suitable random graph processes with preferential attachment mechanisms. Through the II-PA model, we prove the conditions under which the asymptotic degree distribution of the Barabási-Albert model coincides with the asymptotic in-degree distribution of the Simon model. Furthermore, we show that when the number of vertices in the Simon model (with parameter α) goes to infinity, a portion of them behave as a Yule model with parameters (λ, β) = (1 − α, 1), and through this relation we explain why the asymptotic properties of a random vertex in the Simon model coincide with the asymptotic properties of a random genus in the Yule model. As a by-product of our analysis, we obtain the explicit expression of the in-degree distribution for the II-PA model, given without proof in Newman (Contemp Phys 46:323-351, 2005). References to traditional and recent applications of these models are also discussed.
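
    A compact sketch of the preferential-attachment mechanism common to the growth models above, in the Barabási-Albert form (pure Python, illustrative parameters); networkx's barabasi_albert_graph implements the same procedure.

```python
# Sketch: Barabasi-Albert preferential attachment. Each new vertex attaches
# m edges to existing vertices with probability proportional to their degree.
import random

def barabasi_albert(n, m, seed=0):
    random.seed(seed)
    targets = list(range(m))          # initial vertices
    repeated = []                     # vertex list weighted by degree
    edges = []
    for new in range(m, n):
        edges.extend((new, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # Sample m distinct targets, proportionally to degree, for the next vertex.
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(repeated))
        targets = list(targets)
    return edges

edges = barabasi_albert(n=10_000, m=3)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print("max degree:", max(deg.values()))   # heavy-tailed degree distribution
```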

  17. Chromatic polynomials of random graphs

    NASA Astrophysics Data System (ADS)

    Van Bussel, Frank; Ehrlich, Christoph; Fliegner, Denny; Stolzenberg, Sebastian; Timme, Marc

    2010-04-01

    Chromatic polynomials and related graph invariants are central objects in both graph theory and statistical physics. Computational difficulties, however, have so far restricted studies of such polynomials to graphs that were either very small, very sparse or highly structured. Recent algorithmic advances (Timme et al 2009 New J. Phys. 11 023001) now make it possible to compute chromatic polynomials for moderately sized graphs of arbitrary structure and number of edges. Here we present chromatic polynomials of ensembles of random graphs with up to 30 vertices, over the entire range of edge density. We specifically focus on the locations of the zeros of the polynomial in the complex plane. The results indicate that the chromatic zeros of random graphs have a very consistent layout. In particular, the crossing point, the point at which the chromatic zeros with non-zero imaginary part approach the real axis, scales linearly with the average degree over most of the density range. While the scaling laws obtained are purely empirical, if they continue to hold in general there are significant implications: the crossing points of chromatic zeros in the thermodynamic limit separate systems with zero ground state entropy from systems with positive ground state entropy, the latter an exception to the third law of thermodynamics.

  18. Limitations of individual causal models, causal graphs, and ignorability assumptions, as illustrated by random confounding and design unfaithfulness.

    PubMed

    Greenland, Sander; Mansournia, Mohammad Ali

    2015-10-01

    We describe how ordinary interpretations of causal models and causal graphs fail to capture important distinctions among ignorable allocation mechanisms for subject selection or allocation. We illustrate these limitations in the case of random confounding and designs that prevent such confounding. In many experimental designs individual treatment allocations are dependent, and explicit population models are needed to show this dependency. In particular, certain designs impose unfaithful covariate-treatment distributions to prevent random confounding, yet ordinary causal graphs cannot discriminate between these unconfounded designs and confounded studies. Causal models for populations are better suited for displaying these phenomena than are individual-level models, because they allow representation of allocation dependencies as well as outcome dependencies across individuals. Nonetheless, even with this extension, ordinary graphical models still fail to capture distinctions between hypothetical superpopulations (sampling distributions) and observed populations (actual distributions), although potential-outcome models can be adapted to show these distinctions and their consequences.

  19. A Note on Dynamical Models on Random Graphs and Fokker-Planck Equations

    NASA Astrophysics Data System (ADS)

    Delattre, Sylvain; Giacomin, Giambattista; Luçon, Eric

    2016-11-01

    We address the issue of the proximity of interacting diffusion models on large graphs with a uniform degree property and a corresponding mean field model, i.e., a model on the complete graph with a suitably renormalized interaction parameter. Examples include Erdős-Rényi graphs with edge probability p_n, where n is the number of vertices, such that lim_{n→∞} n p_n = ∞. The purpose of this note is twofold: (1) to establish this proximity on a finite time horizon, by exploiting the fact that both systems are accurately described by a Fokker-Planck PDE (or, equivalently, by a nonlinear diffusion process) in the n → ∞ limit; (2) to remark that in reality this result is unsatisfactory when it comes to applying it to systems with n large but finite, for example the values of n that can be reached in simulations or that correspond to the typical number of interacting units in a biological system.

  20. Index statistical properties of sparse random graphs

    NASA Astrophysics Data System (ADS)

    Metz, F. L.; Stariolo, Daniel A.

    2015-10-01

    Using the replica method, we develop an analytical approach to compute the characteristic function for the probability P_N(K, λ) that a large N × N adjacency matrix of a sparse random graph has K eigenvalues below a threshold λ. The method allows one to determine, in principle, all moments of P_N(K, λ), from which the typical sample-to-sample fluctuations can be fully characterized. For random graph models with localized eigenvectors, we show that the index variance scales linearly with N ≫ 1 for |λ| > 0, with a model-dependent prefactor that can be exactly calculated. Explicit results are discussed for Erdős-Rényi and regular random graphs, both exhibiting a prefactor with a nonmonotonic behavior as a function of λ. These results contrast with rotationally invariant random matrices, where the index variance scales only as ln N, with a universal prefactor that is independent of λ. Numerical diagonalization results confirm the exactness of our approach and, in addition, strongly support the Gaussian nature of the index fluctuations.
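
    For modest N the index can be checked by direct diagonalization; the sketch below (illustrative sizes, assuming numpy and networkx) samples sparse Erdős-Rényi adjacency matrices, counts eigenvalues below a threshold λ, and estimates the sample-to-sample variance of that count.

```python
# Sketch: index I_N(λ) = number of adjacency eigenvalues below λ for sparse
# Erdos-Renyi graphs, and its sample-to-sample variance. Illustrative sizes.
import numpy as np
import networkx as nx

def index_count(n, mean_degree, lam, seed):
    g = nx.gnp_random_graph(n, mean_degree / n, seed=seed)
    a = nx.to_numpy_array(g)
    eigs = np.linalg.eigvalsh(a)          # symmetric matrix, real spectrum
    return np.sum(eigs < lam)

n, c, lam, samples = 400, 4.0, -1.0, 200
counts = np.array([index_count(n, c, lam, s) for s in range(samples)])
print("mean index:", counts.mean(), "variance:", counts.var())
# The abstract's claim: for |λ| > 0 this variance grows linearly with N.
```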

  1. Prediction of Pig Trade Movements in Different European Production Systems Using Exponential Random Graph Models

    PubMed Central

    Relun, Anne; Grosbois, Vladimir; Alexandrov, Tsviatko; Sánchez-Vizcaíno, Jose M.; Waret-Szkuta, Agnes; Molia, Sophie; Etter, Eric Marcel Charles; Martínez-López, Beatriz

    2017-01-01

    In most European countries, data regarding movements of live animals are routinely collected and can greatly aid predictive epidemic modeling. However, the use of complete movement datasets to conduct policy-relevant predictions has so far been limited by the massive amount of data that have to be processed (e.g., in intensive commercial systems) or by the restricted availability of timely and updated records on animal movements (e.g., in areas where small-scale or extensive production is predominant). The aim of this study was to use exponential random graph models (ERGMs) to reproduce, understand, and predict pig trade networks in different European production systems. Three trade networks were built by aggregating movements of pig batches among premises (farms and trade operators) over 2011 in Bulgaria, Extremadura (Spain), and Côtes-d’Armor (France), where small-scale, extensive, and intensive pig production are predominant, respectively. Three ERGMs were fitted to each network with various demographic and geographic attributes of the nodes as well as six internal network configurations. Several statistical and graphical diagnostic methods were applied to assess the goodness of fit of the models. For all systems, both exogenous (attribute-based) and endogenous (network-based) processes appeared to govern the structure of the pig trade network, and neither alone was capable of capturing all aspects of the network structure. Geographic mixing patterns strongly structured pig trade organization in the small-scale production system, whereas belonging to the same company or keeping pigs in the same housing system appeared to be key drivers of pig trade in the intensive and extensive production systems, respectively. Heterogeneous mixing between types of production also explained part of the network structure, whichever production system was considered. Limited information is thus needed to capture most of the global structure of pig trade networks. Such findings will be

  2. Random geometric graphs with general connection functions

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.; Georgiou, Orestis

    2016-03-01

    In the original (1961) Gilbert model of random geometric graphs, nodes are placed according to a Poisson point process, and links are formed between those within a fixed range. Motivated by wireless ad hoc networks, "soft" or "probabilistic" connection models have recently been introduced, involving a "connection function" H(r) that gives the probability that two nodes at distance r are directly linked. In many applications (not only wireless networks), it is desirable that the graph is connected; that is, every node is linked to every other node in a multihop fashion. Here the connection probability of a dense network in a convex domain in two or three dimensions is expressed in terms of contributions from boundary components for a very general class of connection functions. It turns out that only a few quantities such as moments of the connection function appear. Good agreement is found with special cases from previous studies and with numerical simulations.
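
    A small sketch of a soft random geometric graph with an assumed Rayleigh-type connection function H(r) = exp(−(r/r0)²) (the specific H and all parameters are illustrative choices, not taken from the paper), estimating the probability that the sampled graph is fully connected.

```python
# Sketch: soft random geometric graph with connection function
# H(r) = exp(-(r/r0)^2), and an empirical estimate of full connectivity.
# Illustrative parameters; numpy/networkx assumed.
import numpy as np
import networkx as nx

def soft_rgg(n, r0, rng):
    pts = rng.random((n, 2))          # n uniform points (stand-in for a Poisson process)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    probs = np.exp(-(d / r0) ** 2)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < probs[i, j]:
                g.add_edge(i, j)
    return g

rng = np.random.default_rng(3)
trials = 200
connected = sum(nx.is_connected(soft_rgg(150, 0.12, rng)) for _ in range(trials))
print("estimated connection probability:", connected / trials)
```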

  3. Replica methods for loopy sparse random graphs

    NASA Astrophysics Data System (ADS)

    Coolen, ACC

    2016-03-01

    I report on the development of a novel statistical mechanical formalism for the analysis of random graphs with many short loops, and of processes on such graphs. The graphs are defined via maximum entropy ensembles, in which both the degrees (via hard constraints) and the adjacency matrix spectrum (via a soft constraint) are prescribed. The sum over graphs can be done analytically, using a replica formalism with complex replica dimensions. All known results for tree-like graphs are recovered in a suitable limit. For loopy graphs, the emerging theory has an appealing and intuitive structure, suggests how message passing algorithms should be adapted, and indicates what the structure of theories describing spin systems on loopy architectures should be. However, the formalism is still largely untested, and may require further adjustment and refinement. This paper is dedicated to the memory of our colleague and friend Jun-Ichi Inoue, with whom the author has had the great pleasure and privilege of collaborating.

  4. Component evolution in general random intersection graphs

    SciTech Connect

    Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick; Percus, Allon G

    2010-01-01

    We analyze component evolution in general random intersection graphs (RIGs) and give conditions for the existence and uniqueness of the giant component. Our techniques generalize existing methods for the analysis of component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdős-Rényi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step of the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.

  5. Network Statistical Models for Language Learning Contexts: Exponential Random Graph Models and Willingness to Communicate

    ERIC Educational Resources Information Center

    Gallagher, H. Colin; Robins, Garry

    2015-01-01

    As part of the shift within second language acquisition (SLA) research toward complex systems thinking, researchers have called for investigations of social network structure. One strand of social network analysis yet to receive attention in SLA is network statistical models, whereby networks are explained in terms of smaller substructures of…

  6. A Simulation Study Comparing Epidemic Dynamics on Exponential Random Graph and Edge-Triangle Configuration Type Contact Network Models

    PubMed Central

    Rolls, David A.; Wang, Peng; McBryde, Emma; Pattison, Philippa; Robins, Garry

    2015-01-01

    We compare two broad types of empirically grounded random network models in terms of their abilities to capture both network features and simulated Susceptible-Infected-Recovered (SIR) epidemic dynamics. The types of network models are exponential random graph models (ERGMs) and extensions of the configuration model. We use three kinds of empirical contact networks, chosen to provide both variety and realistic patterns of human contact: a highly clustered network, a bipartite network and a snowball sampled network of a “hidden population”. In the case of the snowball sampled network we present a novel method for fitting an edge-triangle model. In our results, ERGMs consistently capture clustering as well or better than configuration-type models, but the latter models better capture the node degree distribution. Despite the additional computational requirements to fit ERGMs to empirical networks, the use of ERGMs provides only a slight improvement in the ability of the models to recreate epidemic features of the empirical network in simulated SIR epidemics. Generally, SIR epidemic results from using configuration-type models fall between those from a random network model (i.e., an Erdős-Rényi model) and an ERGM. The addition of subgraphs of size four to edge-triangle type models does improve agreement with the empirical network for smaller densities in clustered networks. Additional subgraphs do not make a noticeable difference in our example, although we would expect the ability to model cliques to be helpful for contact networks exhibiting household structure. PMID:26555701
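
    A minimal discrete-time SIR simulation on a given contact network, of the kind the comparison above relies on; the transmission probability beta, recovery probability gamma, and the ER test graph are illustrative assumptions, and networkx is assumed.

```python
# Sketch: discrete-time SIR epidemic on a contact network. Each time step, every
# infected node transmits to each susceptible neighbour with probability beta,
# then recovers with probability gamma. Illustrative parameters.
import random
import networkx as nx

def sir(g, beta=0.1, gamma=0.2, seed=0):
    random.seed(seed)
    status = {v: "S" for v in g}
    status[random.choice(list(g))] = "I"          # one initial infection
    while any(s == "I" for s in status.values()):
        infected = [v for v, s in status.items() if s == "I"]
        for v in infected:
            for u in g[v]:
                if status[u] == "S" and random.random() < beta:
                    status[u] = "I"               # transmits from the next step on
            if random.random() < gamma:
                status[v] = "R"
    return sum(s == "R" for s in status.values())

g = nx.erdos_renyi_graph(1000, 0.01, seed=1)
print("final epidemic size:", sir(g))
```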

  7. Quantum graphs and random-matrix theory

    NASA Astrophysics Data System (ADS)

    Pluhař, Z.; Weidenmüller, H. A.

    2015-07-01

    For simple connected graphs with incommensurate bond lengths and with unitary symmetry we prove the Bohigas-Giannoni-Schmit (BGS) conjecture in its most general form. Using supersymmetry and taking the limit of infinite graph size, we show that the generating function for every (P,Q) correlation function for both closed and open graphs coincides with the corresponding expression of random-matrix theory. We show that the classical Perron-Frobenius operator is bistochastic and possesses a single eigenvalue +1. In the quantum case that implies the existence of a zero (or massless) mode of the effective action. That mode causes universal fluctuation properties. Avoiding the saddle-point approximation we show that for graphs that are classically mixing (i.e. for which the spectrum of the classical Perron-Frobenius operator possesses a finite gap) and that do not carry a special class of bound states, the zero mode dominates in the limit of infinite graph size.

  8. Clique percolation in random graphs

    NASA Astrophysics Data System (ADS)

    Li, Ming; Deng, Youjin; Wang, Bing-Hong

    2015-10-01

    As a generalization of classical percolation, clique percolation focuses on the connection of cliques in a graph, where the connection of two k-cliques means that they share at least l vertices. We present a theoretical approach to clique percolation in random graphs, which gives not only the exact solutions of the critical point, but also the corresponding order parameter. Based on this, we prove theoretically that the fraction ψ of cliques in the giant clique cluster always makes a continuous phase transition, as in classical percolation. However, the fraction ϕ of vertices in the giant clique cluster for l > 1 makes a step-function-like discontinuous phase transition in the thermodynamic limit, and a continuous phase transition for l = 1. More interestingly, our analysis shows that at the critical point the order parameter ϕ_c for l > 1 is neither 0 nor 1, but a constant depending on k and l. All these theoretical findings are in agreement with the simulation results, which give theoretical support and clarification for previous simulation studies of clique percolation.
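
    For reference, the standard l = k − 1 overlap rule of clique percolation is available directly in networkx as k-clique communities; a sketch with illustrative Erdős-Rényi parameters (the paper treats general l, which this does not cover):

```python
# Sketch: clique percolation clusters for the common l = k-1 overlap rule,
# via networkx's k_clique_communities. Illustrative ER parameters.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

g = nx.erdos_renyi_graph(300, 0.08, seed=4)
k = 3
communities = list(k_clique_communities(g, k))
if communities:
    giant = max(communities, key=len)
    print("number of k-clique communities:", len(communities),
          "fraction of vertices in the largest:", len(giant) / g.number_of_nodes())
else:
    print("no k-clique communities found")
```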

  9. Clique percolation in random graphs.

    PubMed

    Li, Ming; Deng, Youjin; Wang, Bing-Hong

    2015-10-01

    As a generalization of classical percolation, clique percolation focuses on the connection of cliques in a graph, where the connection of two k-cliques means that they share at least l vertices. We present a theoretical approach to clique percolation in random graphs, which gives not only the exact solutions of the critical point, but also the corresponding order parameter. Based on this, we prove theoretically that the fraction ψ of cliques in the giant clique cluster always makes a continuous phase transition, as in classical percolation. However, the fraction ϕ of vertices in the giant clique cluster for l > 1 makes a step-function-like discontinuous phase transition in the thermodynamic limit, and a continuous phase transition for l = 1. More interestingly, our analysis shows that at the critical point the order parameter ϕ_c for l > 1 is neither 0 nor 1, but a constant depending on k and l. All these theoretical findings are in agreement with the simulation results, which give theoretical support and clarification for previous simulation studies of clique percolation.

  10. Network robustness and fragility: percolation on random graphs.

    PubMed

    Callaway, D S; Newman, M E; Strogatz, S H; Watts, D J

    2000-12-18

    Recent work on the Internet, social networks, and the power grid has addressed the resilience of these networks to either random or targeted deletion of network nodes or links. Such deletions include, for example, the failure of Internet routers or power transmission lines. Percolation models on random graphs provide a simple representation of this process but have typically been limited to graphs with Poisson degree distribution at their vertices. Such graphs are quite unlike real-world networks, which often possess power-law or other highly skewed degree distributions. In this paper we study percolation on graphs with completely general degree distribution, giving exact solutions for a variety of cases, including site percolation, bond percolation, and models in which occupation probabilities depend on vertex degree. We discuss the application of our theory to the understanding of network resilience.
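
    The generating-function criterion behind this kind of analysis gives, for uniform occupation, the percolation threshold p_c = ⟨k⟩ / (⟨k²⟩ − ⟨k⟩). A sketch evaluating it for a Poisson and a truncated power-law degree distribution (illustrative distributions; numpy assumed):

```python
# Sketch: uniform bond/site percolation threshold from a degree distribution,
# p_c = <k> / (<k^2> - <k>). Illustrative degree distributions.
import math
import numpy as np

def percolation_threshold(pk):
    """pk[k] = probability that a randomly chosen vertex has degree k."""
    k = np.arange(len(pk))
    mean_k = float(np.sum(k * pk))
    mean_k2 = float(np.sum(k * k * pk))
    return mean_k / (mean_k2 - mean_k)

# Poisson degree distribution with mean 4 (Erdos-Renyi-like graph): p_c ~ 1/4.
mean, kmax = 4.0, 60
poisson = np.array([math.exp(-mean) * mean**k / math.factorial(k) for k in range(kmax)])
print("Poisson <k>=4:", percolation_threshold(poisson))

# Truncated power law p_k ~ k^(-2.5), 2 <= k <= 100: a much smaller threshold.
k = np.arange(101, dtype=float)
pl = np.zeros_like(k)
pl[2:] = k[2:] ** -2.5
pl /= pl.sum()
print("power law:", percolation_threshold(pl))
```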

  11. Ancestral recombinations graph: a reconstructability perspective using random-graphs framework.

    PubMed

    Parida, Laxmi

    2010-10-01

    We present a random graphs framework to study pedigree history in an ideal (Wright-Fisher) population. This framework correlates the underlying mathematical objects in, for example, the pedigree graph, the mtDNA or NRY Chr tree, the ARG (Ancestral Recombinations Graph), and the HUD used in the literature, into a single unified random graph framework. It also gives a natural definition, based solely on the topology, of an ARG, one of the most interesting as well as useful mathematical objects in this area. The random graphs framework gives an alternative parametrization of the ARG that does not use the recombination rate q and instead uses a parameter M based on an estimate of the number of non-mixing segments in the extant units. This seems more natural in a setting that attempts to tease apart the population dynamics from the biology of the units. The framework also gives a purely topological definition of the GMRCA, analogous to the MRCA on trees (which has a purely topological description, i.e., it is, graph-theoretically speaking, the root of a tree). Secondly, with a natural extension of the ideas from random graphs, we present a sampling (simulation) algorithm to construct random instances of the ARG/unilinear transmission graph. This is the first algorithm (to the best of the author's knowledge) that guarantees uniform sampling of the space of ARG instances, reflecting the ideal population model. Finally, using a measure of reconstructability of past historical events given a collection of extant sequences, we conclude that, for a given set of extant sequences, the joint history of local segments along a chromosome is reconstructible.

  12. Absence of first order transition in the random crystal field Blume-Capel model on a fully connected graph

    NASA Astrophysics Data System (ADS)

    Sumedha; Jana, Nabin Kumar

    2017-01-01

    In this paper we solve the Blume-Capel model on a complete graph in the presence of a random crystal field with distribution P(Δ_i) = p δ(Δ_i − Δ) + (1 − p) δ(Δ_i + Δ), using large deviation techniques. We find that the first order transition of the pure system is destroyed for 0.046 < p < 0.954 for all values of the crystal field Δ. The system has a line of continuous transitions for this range of p, for −∞ < Δ < ∞. For values of p outside this interval, the phase diagram of the system is similar to that of the pure model, with a tricritical point separating the lines of first order and continuous transitions. We find that in this regime the order vanishes for large Δ for p < 0.046 (and for large −Δ for p > 0.954), even at zero temperature.

  13. Random walk on lattices: graph-theoretic approach to simulating long-range diffusion-attachment growth models.

    PubMed

    Limkumnerd, Surachate

    2014-03-01

    Interest in thin-film fabrication for industrial applications has driven both theoretical and computational aspects of modeling its growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models by repeatedly applying the local diffusion rules for each hop to simulate a large diffusion length can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation, without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case, with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
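
    The linear-algebra step described above, i.e., obtaining where a walker eventually attaches without simulating individual hops, can be illustrated with a small absorbing Markov chain: with Q the transient-to-transient block and R the transient-to-absorbing block of the transition matrix, the attachment probabilities are (I − Q)⁻¹ R. A toy one-dimensional sketch with an assumed diffusion bias (not the paper's lattice or diffusion rule):

```python
# Sketch: eventual attachment probabilities of a biased random walker on a
# 1-D lattice with absorbing sites at both ends, via the fundamental matrix
# (I - Q)^(-1) R of the absorbing Markov chain. Illustrative bias; numpy assumed.
import numpy as np

L = 20                    # transient (diffusing) sites between two traps
p_right = 0.55            # local diffusion bias

Q = np.zeros((L, L))      # transient -> transient transitions
R = np.zeros((L, 2))      # transient -> absorbing (left trap, right trap)
for i in range(L):
    if i == 0:
        R[i, 0] = 1 - p_right
    else:
        Q[i, i - 1] = 1 - p_right
    if i == L - 1:
        R[i, 1] = p_right
    else:
        Q[i, i + 1] = p_right

B = np.linalg.solve(np.eye(L) - Q, R)     # B[i, j]: prob. of attaching at trap j
print("attachment probabilities from the middle site:", B[L // 2])
```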

  14. Random walk on lattices: Graph-theoretic approach to simulating long-range diffusion-attachment growth models

    NASA Astrophysics Data System (ADS)

    Limkumnerd, Surachate

    2014-03-01

    Interest in thin-film fabrication for industrial applications has driven both theoretical and computational aspects of modeling its growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models by repeatedly applying the local diffusion rules for each hop to simulate a large diffusion length can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation, without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case, with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.

  15. Exact and approximate graph matching using random walks.

    PubMed

    Gori, Marco; Maggini, Marco; Sarti, Lorenzo

    2005-07-01

    In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, as in classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, as in connectionist and statistical approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than the top-scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well suited for dealing with partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation whose node labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.

  16. The Condensation Phase Transition in Random Graph Coloring

    NASA Astrophysics Data System (ADS)

    Bapst, Victor; Coja-Oghlan, Amin; Hetterich, Samuel; Raßmann, Felicia; Vilenchik, Dan

    2016-01-01

    Based on a non-rigorous formalism called the "cavity method", physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős-Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition (Krzakala et al. in Proc Natl Acad Sci 104:10318-10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k_0.

  17. Clifford Algebras, Random Graphs, and Quantum Random Variables

    NASA Astrophysics Data System (ADS)

    Schott, René; Staples, G. Stacey

    2008-08-01

    For fixed n > 0, the space of finite graphs on n vertices is canonically associated with an abelian, nilpotent-generated subalgebra of the Clifford algebra Cl_{2n,2n}, which is canonically isomorphic to the 2n-particle fermion algebra. Using the generators of the subalgebra, an algebraic probability space of "Clifford adjacency matrices" associated with finite graphs is defined. Each Clifford adjacency matrix is a quantum random variable whose mth moment corresponds to the number of m-cycles in the graph G. Each matrix admits a canonical "quantum decomposition" into a sum of three algebraic random variables: a = aΔ + aΥ + aΛ, where aΔ is classical while aΥ and aΛ are quantum. Moreover, within the Clifford algebra context the NP problem of cycle enumeration is reduced to matrix multiplication, requiring no more than n⁴ Clifford (geometric) multiplications within the algebra.

  18. Ensemble nonequivalence in random graphs with modular structure

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; den Hollander, Frank; Roccaverde, Andrea

    2017-01-01

    Breaking of equivalence between the microcanonical ensemble and the canonical ensemble, describing a large system subject to hard and soft constraints, respectively, was recently shown to occur in large random graphs. Hard constraints must be met by every graph, soft constraints must be met only on average, subject to maximal entropy. In Squartini, de Mol, den Hollander and Garlaschelli (2015 New J. Phys. 17 023052) it was shown that ensembles of random graphs are nonequivalent when the degrees of the nodes are constrained, in the sense of a non-zero limiting specific relative entropy as the number of nodes diverges. In that paper, the nodes were placed either on a single layer (uni-partite graphs) or on two layers (bi-partite graphs). In the present paper we consider an arbitrary number of intra-connected and inter-connected layers, thus allowing for modular graphs with a multi-partite, multiplex, time-varying, block-model or community structure. We give a full classification of ensemble equivalence in the sparse regime, proving that breakdown occurs as soon as the number of local constraints (i.e. the number of constrained degrees) is extensive in the number of nodes, irrespective of the layer structure. In addition, we derive an explicit formula for the specific relative entropy and provide an interpretation of this formula in terms of Poissonisation of the degrees.

  19. Topic Model for Graph Mining.

    PubMed

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng

    2015-12-01

    Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that existing methods cannot adequately discover the topics hidden in graph-structured data, which can be beneficial for both unsupervised and supervised learning on graphs. Although topic models have proved to be very successful in discovering latent topics, standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms latent Dirichlet allocation on classification when the unveiled topics of these two models are used to represent graphs.

  20. Efficient broadcast on random geometric graphs

    SciTech Connect

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias; Sauerwald, Thomas

    2009-01-01

    A random geometric graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of an RGG in O(√n / r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n / r).

  1. General and exact approach to percolation on random graphs

    NASA Astrophysics Data System (ADS)

    Allard, Antoine; Hébert-Dufresne, Laurent; Young, Jean-Gabriel; Dubé, Louis J.

    2015-12-01

    We present a comprehensive and versatile theoretical framework to study site and bond percolation on clustered and correlated random graphs. Our contribution can be summarized in three main points. (i) We introduce a set of iterative equations that solve the exact distribution of the size and composition of components in finite-size quenched or random multitype graphs. (ii) We define a very general random graph ensemble that encompasses most of the models published to this day and also makes it possible to model structural properties not yet included in a theoretical framework. Site and bond percolation on this ensemble is solved exactly in the infinite-size limit using probability generating functions [i.e., the percolation threshold, the size, and the composition of the giant (extensive) and small components]. Several examples and applications are also provided. (iii) Our approach can be adapted to model interdependent graphs—whose most striking feature is the emergence of an extensive component via a discontinuous phase transition—in an equally general fashion. We show how a graph can successively undergo a continuous then a discontinuous phase transition, and preliminary results suggest that clustering increases the amplitude of the discontinuity at the transition.

  2. Cross over of recurrence networks to random graphs and random geometric graphs

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2017-02-01

    Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems, where the connection between two nodes is determined by the recurrence threshold. This condition makes the topology of every recurrence network unique, with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how recurrence networks differ from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise to the time series, and to classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing small changes in the network characteristics.
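
    A recurrence network of this kind can be built in a few lines: embed the time series and link two time points whenever their states are closer than the recurrence threshold ε. The sketch below uses the logistic map as an illustrative chaotic series, with an assumed embedding and threshold; numpy and networkx are required.

```python
# Sketch: recurrence network from a chaotic time series. Nodes are time points;
# i and j are linked when the embedded states are closer than epsilon.
# Illustrative parameters (logistic map, 2-D delay embedding).
import numpy as np
import networkx as nx

# Logistic map time series in the chaotic regime.
n, r = 1000, 4.0
x = np.empty(n)
x[0] = 0.4
for t in range(n - 1):
    x[t + 1] = r * x[t] * (1 - x[t])

# Delay embedding with dimension 2 and delay 1.
emb = np.column_stack([x[:-1], x[1:]])

eps = 0.05
d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
adj = (d < eps) & ~np.eye(len(emb), dtype=bool)
g = nx.from_numpy_array(adj.astype(int))

print("mean degree:", np.mean([k for _, k in g.degree()]),
      "clustering:", nx.average_clustering(g))
```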

  3. Protein localization prediction using random walks on graphs

    PubMed Central

    2013-01-01

    Background Understanding the localization of proteins in cells is vital to characterizing their functions and possible interactions. As a result, identifying the (sub)cellular compartment within which a protein is located becomes an important problem in protein classification. This classification issue thus involves predicting labels in a dataset with a limited number of labeled data points available. By utilizing a graph representation of protein data, random walk techniques have performed well in sequence classification and functional prediction; however, this method has not yet been applied to protein localization. Accordingly, we propose a novel classifier in the site prediction of proteins based on random walks on a graph. Results We propose a graph theory model for predicting protein localization using data generated in yeast and gram-negative (Gneg) bacteria. We tested the performance of our classifier on the two datasets, optimizing the model training parameters by varying the laziness values and the number of steps taken during the random walk. Using 10-fold cross-validation, we achieved an accuracy of above 61% for yeast data and about 93% for gram-negative bacteria. Conclusions This study presents a new classifier derived from the random walk technique and applies this classifier to investigate the cellular localization of proteins. The prediction accuracy and additional validation demonstrate an improvement over previous methods, such as support vector machine (SVM)-based classifiers. PMID:23815126

  4. Connectivity of Soft Random Geometric Graphs over Annuli

    NASA Astrophysics Data System (ADS)

    Giles, Alexander P.; Georgiou, Orestis; Dettmann, Carl P.

    2016-02-01

    Nodes are randomly distributed within an annulus (and then a shell) to form a point pattern of communication terminals which are linked stochastically according to the Rayleigh fading of radio-frequency data signals. We then present analytic formulas for the connection probability of these spatially embedded graphs, describing the connectivity behaviour as a dense-network limit is approached. This extends recent work modelling ad hoc networks in non-convex domains.
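
    An illustrative simulation in the same spirit (not the paper's analytic formulas): nodes are scattered uniformly over an annulus and each pair is linked with a Rayleigh-fading-style probability exp(-(r/r0)^2); the radii, r0, and node count are placeholder values.

```python
import numpy as np
import networkx as nx

def soft_rgg_annulus(n, r_in, r_out, r0, rng):
    # uniform sampling over the annulus area (inverse CDF on the radius)
    rad = np.sqrt(rng.uniform(r_in ** 2, r_out ** 2, size=n))
    ang = rng.uniform(0.0, 2.0 * np.pi, size=n)
    pts = np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pts[i] - pts[j])
            if rng.random() < np.exp(-(r / r0) ** 2):   # soft (probabilistic) connection
                G.add_edge(i, j)
    return G

rng = np.random.default_rng(2)
trials = [nx.is_connected(soft_rgg_annulus(150, 0.5, 1.0, 0.45, rng)) for _ in range(50)]
print("estimated probability of full connectivity:", sum(trials) / len(trials))
```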

  5. Topics in networks: Community detection, random graphs, and network epidemiology

    NASA Astrophysics Data System (ADS)

    Karrer, Brian C.

    In this dissertation, we present research on several topics in networks including community detection, random graphs, and network epidemiology. Traditional stochastic blockmodels may produce inaccurate fits to complex networks with heterogeneous degree distributions, and we devise a degree-corrected blockmodel that alleviates this problematic behavior. The resulting objective function for community detection using the degree-corrected version outperforms the traditional model at finding communities on a variety of real-world and synthetic tests. Then we study a different generative model that associates communities to the edges of the network and naturally includes overlapping vertex communities. We create a fast and accurate algorithm to fit this model to empirical networks and show that it can be used to quickly find non-overlapping communities as well. We also develop random graph models for directed acyclic graphs, a class of networks including family trees and citation networks. We argue that the lack of cycles comes from an ordering constraint and then generalize the configuration model to incorporate this constraint. We calculate many properties of these models and demonstrate that some of the model predictions agree quite well with real-world networks, emphasizing the importance of vertex ordering to generating directed acyclic networks with realistic properties. Finally, we examine the spread of disease over networks, starting with a simple model of two diseases spreading with cross-immunity, where infection by one disease makes an individual immune to the other disease and vice versa. Utilizing a timescale separation argument, we map the system to consecutive bond percolation, one disease spreading after the other. The resulting phase diagram includes discontinuous and continuous phase transitions and a coexistence region where both diseases can spread to a substantial fraction of the population. Then we analyze a flexible susceptible

  6. Ancestral Graph Markov Models

    DTIC Science & Technology

    2002-04-15

    A path from α to β together with an edge β → α is called a (fully) directed cycle. An anterior path from α to β together with an edge β → α is called a partially directed cycle. A directed acyclic graph (DAG) is a mixed graph in which all edges are directed, and there are no directed cycles. … (regardless of whether α and γ are adjacent). There are no directed cycles or partially directed cycles. … Proof: follows because the condition rules

  7. Random graphs with arbitrary degree distributions and their applications

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.; Strogatz, S. H.; Watts, D. J.

    2001-08-01

    Recent work on the structure of social networks and the internet has focused attention on graphs with distributions of vertex degree that are significantly different from the Poisson degree distributions that have been widely studied in the past. In this paper we develop in detail the theory of random graphs with arbitrary degree distributions. In addition to simple undirected, unipartite graphs, we examine the properties of directed and bipartite graphs. Among other results, we derive exact expressions for the position of the phase transition at which a giant component first forms, the mean component size, the size of the giant component if there is one, the mean number of vertices a certain distance away from a randomly chosen vertex, and the average vertex-vertex distance within a graph. We apply our theory to some real-world graphs, including the world-wide web and collaboration graphs of scientists and Fortune 1000 company directors. We demonstrate that in some cases random graphs with appropriate distributions of vertex degree predict with surprising accuracy the behavior of the real world, while in others there is a measurable discrepancy between theory and reality, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
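
    A compact numerical sketch of the generating-function calculation for the giant component in the undirected, unipartite case (the degree distribution and iteration counts below are illustrative; the paper's closed-form expressions are not reproduced here): solve u = G1(u) by fixed-point iteration and report S = 1 - G0(u).

```python
import numpy as np

def poisson_pk(c, kmax=120):
    pk = np.empty(kmax)
    pk[0] = np.exp(-c)
    for k in range(1, kmax):
        pk[k] = pk[k - 1] * c / k
    return pk

def giant_component_fraction(pk):
    """pk[k] = probability that a randomly chosen vertex has degree k."""
    k = np.arange(len(pk))
    mean_k = np.sum(k * pk)
    G0 = lambda x: np.sum(pk * x ** k)
    G1 = lambda x: np.sum(k * pk * x ** np.maximum(k - 1, 0)) / mean_k  # excess-degree PGF
    u = 0.5                      # prob. that an edge fails to reach the giant component
    for _ in range(2000):
        u = G1(u)
    return 1.0 - G0(u)

for c in (0.5, 1.0, 1.5, 3.0):   # giant component appears above mean degree 1
    print(f"Poisson mean degree {c}: S ~ {giant_component_fraction(poisson_pk(c)):.3f}")
```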

  8. The Lexical Restructuring Hypothesis and Graph Theoretic Analyses of Networks Based on Random Lexicons

    ERIC Educational Resources Information Center

    Gruenenfelder, Thomas M.; Pisoni, David B.

    2009-01-01

    Purpose: The mental lexicon of words used for spoken word recognition has been modeled as a complex network or graph. Do the characteristics of that graph reflect processes involved in its growth (M. S. Vitevitch, 2008) or simply the phonetic overlap between similar-sounding words? Method: Three pseudolexicons were generated by randomly selecting…

  9. Motifs in triadic random graphs based on Steiner triple systems

    NASA Astrophysics Data System (ADS)

    Winkler, Marco; Reichardt, Jörg

    2013-08-01

    Conventionally, pairwise relationships between nodes are considered to be the fundamental building blocks of complex networks. However, over the last decade, the overabundance of certain subnetwork patterns, i.e., the so-called motifs, has attracted much attention. It has been hypothesized that these motifs, instead of links, serve as the building blocks of network structures. Although the relation between a network's topology and the general properties of the system, such as its function, its robustness against perturbations, or its efficiency in spreading information, is the central theme of network science, there is still a lack of sound generative models needed for testing the functional role of subgraph motifs. Our work aims to overcome this limitation. We employ the framework of exponential random graph models (ERGMs) to define models based on triadic substructures. The fact that only a small portion of triads can actually be set independently poses a challenge for the formulation of such models. To overcome this obstacle, we use Steiner triple systems (STSs). These are partitions of sets of nodes into pair-disjoint triads, which thus can be specified independently. Combining the concepts of ERGMs and STSs, we suggest generative models capable of generating ensembles of networks with nontrivial triadic Z-score profiles. Further, we discover inevitable correlations between the abundance of triad patterns, which occur solely for statistical reasons and need to be taken into account when discussing the functional implications of motif statistics. Moreover, we calculate the degree distributions of our triadic random graphs analytically.

  10. Parameter Tuning Patterns for Random Graph Coloring with Quantum Annealing

    PubMed Central

    Titiloye, Olawale; Crispin, Alan

    2012-01-01

    Quantum annealing is a combinatorial optimization technique inspired by quantum mechanics. Here we show that a spin model for the k-coloring of large dense random graphs can be field tuned so that its acceptance ratio diverges during Monte Carlo quantum annealing, until a ground state is reached. We also find that simulations exhibiting such a diverging acceptance ratio are generally more effective than those tuned to the more conventional pattern of a declining and/or stagnating acceptance ratio. This observation facilitates the discovery of solutions to several well-known benchmark k-coloring instances, some of which have been open for almost two decades. PMID:23166818

  11. Constrained Markovian Dynamics of Random Graphs

    NASA Astrophysics Data System (ADS)

    Coolen, A. C. C.; de Martino, A.; Annibale, A.

    2009-09-01

    We introduce a statistical mechanics formalism for the study of constrained graph evolution as a Markovian stochastic process, in analogy with that available for spin systems, deriving its basic properties and highlighting the role of the `mobility' (the number of allowed moves for any given graph). As an application of the general theory we analyze the properties of degree-preserving Markov chains based on elementary edge switchings. We give an exact yet simple formula for the mobility in terms of the graph's adjacency matrix and its spectrum. This formula allows us to define acceptance probabilities for edge switchings, such that the Markov chains become controlled Glauber-type detailed balance processes, designed to evolve to any required invariant measure (representing the asymptotic frequencies with which the allowed graphs are visited during the process). As a corollary we also derive a condition in terms of simple degree statistics, sufficient to guarantee that, in the limit where the number of nodes diverges, even for state-independent acceptance probabilities of proposed moves the invariant measure of the process will be uniform. We test our theory on synthetic graphs and on realistic larger graphs as studied in cellular biology, showing explicitly that, for instances where the simple edge swap dynamics fails to converge to the uniform measure, a suitably modified Markov chain instead generates the correct phase space sampling.
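
    A minimal illustration of the degree-preserving edge-switch dynamics analyzed above, using networkx's uniform double_edge_swap proposal (it does not implement the paper's mobility-corrected acceptance probabilities): the degree sequence is conserved while higher-order structure such as clustering is randomized away.

```python
import networkx as nx

G = nx.watts_strogatz_graph(n=500, k=6, p=0.05, seed=3)      # clustered starting graph
degrees_before = sorted(d for _, d in G.degree())
print("clustering before swaps:", round(nx.average_clustering(G), 3))

R = G.copy()
nx.double_edge_swap(R, nswap=20 * R.number_of_edges(), max_tries=10**6, seed=4)
print("clustering after swaps :", round(nx.average_clustering(R), 3))
assert degrees_before == sorted(d for _, d in R.degree())     # degrees are preserved
```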

  12. Graph modeling systems and methods

    DOEpatents

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.

  13. Exploring community structure in biological networks with random graphs

    PubMed Central

    2014-01-01

    Background Community structure is ubiquitous in biological networks. There has been an increased interest in unraveling the community structure of biological systems as it may provide important insights into a system’s functional components and the impact of local structures on dynamics at a global scale. Choosing an appropriate community detection algorithm to identify the community structure in an empirical network can be difficult, however, as the many algorithms available are based on a variety of cost functions and are difficult to validate. Even when community structure is identified in an empirical system, disentangling the effect of community structure from other network properties such as clustering coefficient and assortativity can be a challenge. Results Here, we develop a generative model to produce undirected, simple, connected graphs with specified degrees and pattern of communities, while maintaining a graph structure that is as random as possible. Additionally, we demonstrate two important applications of our model: (a) to generate networks that can be used to benchmark existing and new algorithms for detecting communities in biological networks; and (b) to generate null models to serve as random controls when investigating the impact of complex network features beyond the byproduct of degree and modularity in empirical biological networks. Conclusion Our model allows for the systematic study of the presence of community structure and its impact on network function and dynamics. This process is a crucial step in unraveling the functional consequences of the structural properties of biological systems and uncovering the mechanisms that drive these systems. PMID:24965130

  14. Ising Critical Exponents on Random Trees and Graphs

    NASA Astrophysics Data System (ADS)

    Dommers, Sander; Giardinà, Cristian; van der Hofstad, Remco

    2014-05-01

    We study the critical behavior of the ferromagnetic Ising model on random trees as well as so-called locally tree-like random graphs. We pay special attention to trees and graphs with a power-law offspring or degree distribution whose tail behavior is characterized by its power-law exponent τ > 2. We show that the critical inverse temperature of the Ising model equals the hyperbolic arctangent of the reciprocal of the mean offspring or mean forward degree; in particular, the critical inverse temperature equals zero when this mean is infinite. We further study the critical exponents δ, β and γ, describing how the (root) magnetization behaves close to criticality. We rigorously identify these critical exponents and show that they take the values as predicted by Dorogovtsev et al. (Phys Rev E 66:016104, 2002) and Leone et al. (Eur Phys J B 28:191-197, 2002). These values depend on the power-law exponent τ, taking the same values as the mean-field Curie-Weiss model (Exactly solved models in statistical mechanics, Academic Press, London, 1982) for τ > 5, but different values for τ ∈ (3, 5).

  15. Discontinuous percolation transitions in epidemic processes, surface depinning in random media, and Hamiltonian random graphs

    NASA Astrophysics Data System (ADS)

    Bizhani, Golnoosh; Paczuski, Maya; Grassberger, Peter

    2012-07-01

    Discontinuous percolation transitions and the associated tricritical points are manifest in a wide range of both equilibrium and nonequilibrium cooperative phenomena. To demonstrate this, we present and relate the continuous and first-order behaviors in two different classes of models: The first are generalized epidemic processes that describe in their spatially embedded version—either on or off a regular lattice—compact or fractal cluster growth in random media at zero temperature. A random graph version of these processes is mapped onto a model previously proposed for complex social contagion. We compute detailed phase diagrams and compare our numerical results at the tricritical point in d=3 with field theory predictions of Janssen [Phys. Rev. E 70, 026114 (2004)]. The second class consists of exponential (“Hamiltonian,” i.e., formally equilibrium) random graph models and includes the Strauss and the two-star model, where “chemical potentials” control the densities of links, triangles, or two-stars. When the chemical potentials in either graph model are O(logN), the percolation transition can coincide with a first-order phase transition in the density of links, making the former also discontinuous. Hysteresis loops can then be of mixed order, with second-order behavior for decreasing link fugacity, and a jump (first order) when it increases.

  16. The average distances in random graphs with given expected degrees

    NASA Astrophysics Data System (ADS)

    Chung, Fan; Lu, Linyuan

    2002-12-01

    Random graph theory is used to examine the "small-world phenomenon"; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n/log d̃, where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n/log d̃. There is a range 2 < β < 3 for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided some mild constraints on the average distance and maximum degree). In particular, these graphs contain a dense subgraph, which we call the core, having n^{c/log log n} vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core.


  17. Measuring edge importance: a quantitative analysis of the stochastic shielding approximation for random processes on graphs.

    PubMed

    Schmidt, Deena R; Thomas, Peter J

    2014-04-17

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin-Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán's approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process.

  18. Some features of the spread of epidemics and information on a random graph

    PubMed Central

    Durrett, Rick

    2010-01-01

    Random graphs are useful models of social and technological networks. To date, most of the research in this area has concerned geometric properties of the graphs. Here we focus on processes taking place on the network. In particular we are interested in how their behavior on networks differs from that in homogeneously mixing populations or on regular lattices of the type commonly used in ecological models. PMID:20167800

  19. The peculiar phase structure of random graph bisection

    SciTech Connect

    Percus, Allon G; Istrate, Gabriel; Goncalves, Bruno T; Sumi, Robert Z

    2008-01-01

    The mincut graph bisection problem involves partitioning the n vertices of a graph into disjoint subsets, each containing exactly n/2 vertices, while minimizing the number of 'cut' edges with an endpoint in each subset. When considered over sparse random graphs, the phase structure of the graph bisection problem displays certain familiar properties, but also some surprises. It is known that when the mean degree is below the critical value of 2 log 2, the cutsize is zero with high probability. We study how the minimum cutsize increases with mean degree above this critical threshold, finding a new analytical upper bound that improves considerably upon previous bounds. Combined with recent results on expander graphs, our bound suggests the unusual scenario that random graph bisection is replica symmetric up to and beyond the critical threshold, with a replica symmetry breaking transition possibly taking place above the threshold. An intriguing algorithmic consequence is that although the problem is NP-hard, we can find near-optimal cutsizes (whose ratio to the optimal value approaches 1 asymptotically) in polynomial time for typical instances near the phase transition.

  20. Graph-Driven Diffusion and Random Walk Schemes for Image Segmentation.

    PubMed

    Bampis, Christos G; Maragos, Petros; Bovik, Alan C

    2016-10-26

    We propose graph-driven approaches to image segmentation by developing diffusion processes defined on arbitrary graphs. We formulate a solution to the image segmentation problem modeled as the result of infectious wavefronts propagating on an image-driven graph where pixels correspond to nodes of an arbitrary graph. By relating the popular Susceptible-Infected-Recovered epidemic propagation model to the Random Walker algorithm, we develop the Normalized Random Walker and a lazy random walker variant. The underlying iterative solutions of these methods are derived as the result of infections transmitted on this arbitrary graph. The main idea is to incorporate a degree-aware term into the original Random Walker algorithm in order to account for the node centrality of every neighboring node and to weigh the contribution of every neighbor to the underlying diffusion process. Our lazy random walk variant models the tendency of patients or nodes to resist changes in their infection status. We also show how previous work can be naturally extended to take advantage of this degree-aware term, which enables the design of other novel methods. Through an extensive experimental analysis, we demonstrate the reliability of our approach, its small computational burden and the dimensionality reduction capabilities of graph-driven approaches. Without applying any regular grid constraint, the proposed graph clustering scheme allows us to consider pixel-level and node-level approaches and multidimensional input data by naturally integrating the importance of each node to the final clustering or segmentation solution. A software release containing implementations of this work and supplementary material can be found at: http://cvsp.cs.ntua.gr/research/GraphClustering/.

  1. Critical behaviour of spanning forests on random planar graphs

    NASA Astrophysics Data System (ADS)

    Bondesan, Roberto; Caracciolo, Sergio; Sportiello, Andrea

    2017-02-01

    As a follow-up of previous work of the authors, we analyse the statistical mechanics model of random spanning forests on random planar graphs. Special emphasis is given to the analysis of the critical behaviour. Exploiting an exact relation with a model of O(-2) loops and dimers, previously solved by Kostov and Staudacher, we identify critical and multicritical loci, and find them consistent with recent results of Bousquet-Mélou and Courtiel. This is also consistent with the KPZ relation, and the Berker-Kadanoff phase in the anti-ferromagnetic regime of the Potts model on periodic lattices, predicted by Saleur. To our knowledge, this is the first known example in which the KPZ relation is seen to apply explicitly within a Berker-Kadanoff phase. We set up equations for the generating function at the value t = -1 of the fugacity, which is of combinatorial interest, and we investigate the resulting numerical series, a favourite problem of Tony Guttmann's. Dedicated to Tony Guttmann on the occasion of his 70th birthday.

  2. Nonergodic Phases in Strongly Disordered Random Regular Graphs.

    PubMed

    Altshuler, B L; Cuevas, E; Ioffe, L B; Kravtsov, V E

    2016-10-07

    We combine numerical diagonalization with semianalytical calculations to prove the existence of the intermediate nonergodic but delocalized phase in the Anderson model on disordered hierarchical lattices. We suggest a new generalized population dynamics that is able to detect the violation of ergodicity of the delocalized states within the Abou-Chakra, Anderson, and Thouless recursive scheme. This result is supplemented by statistics of random wave functions extracted from exact diagonalization of the Anderson model on an ensemble of disordered random regular graphs (RRG) of N sites with the connectivity K=2. By extrapolation of the results of both approaches to N→∞ we obtain the fractal dimensions D_{1}(W) and D_{2}(W) as well as the population dynamics exponent D(W) with the accuracy sufficient to claim that they are nontrivial in the broad interval of disorder strength W_{E}<W<W_{c}. Analysis of the exact diagonalization results for RRG with N>10^{5} reveals a singularity in the D_{1,2}(W) dependencies which provides clear evidence for the first order transition between the two delocalized phases on RRG at W_{E}≈10.0. We discuss the implications of these results for quantum and classical nonintegrable and many-body systems.

  3. Nonergodic Phases in Strongly Disordered Random Regular Graphs

    NASA Astrophysics Data System (ADS)

    Altshuler, B. L.; Cuevas, E.; Ioffe, L. B.; Kravtsov, V. E.

    2016-10-01

    We combine numerical diagonalization with semianalytical calculations to prove the existence of the intermediate nonergodic but delocalized phase in the Anderson model on disordered hierarchical lattices. We suggest a new generalized population dynamics that is able to detect the violation of ergodicity of the delocalized states within the Abou-Chakra, Anderson, and Thouless recursive scheme. This result is supplemented by statistics of random wave functions extracted from exact diagonalization of the Anderson model on an ensemble of disordered random regular graphs (RRG) of N sites with the connectivity K = 2. By extrapolation of the results of both approaches to N → ∞ we obtain the fractal dimensions D1(W) and D2(W) as well as the population dynamics exponent D(W) with the accuracy sufficient to claim that they are nontrivial in the broad interval of disorder strength W_E < W < W_c. Analysis of the exact diagonalization results for RRG with N > 10^5 reveals a singularity in the D1,2(W) dependencies which provides clear evidence for the first order transition between the two delocalized phases on RRG at W_E ≈ 10.0. We discuss the implications of these results for quantum and classical nonintegrable and many-body systems.

  4. Graph Theoretical Model of a Sensorimotor Connectome in Zebrafish

    PubMed Central

    Stobb, Michael; Peterson, Joshua M.; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome. PMID:22624008

  5. Graph theoretical model of a sensorimotor connectome in zebrafish.

    PubMed

    Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.

  6. Generalized Random Sequential Adsorption on Erdős-Rényi Random Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2016-09-01

    We investigate random sequential adsorption (RSA) on a random graph via the following greedy algorithm: Order the n vertices at random, and sequentially declare each vertex either active or frozen, depending on some local rule in terms of the state of the neighboring vertices. The classical RSA rule declares a vertex active if none of its neighbors is, in which case the set of active nodes forms an independent set of the graph. We generalize this nearest-neighbor blocking rule in three ways and apply it to the Erdős-Rényi random graph. We consider these generalizations in the large-graph limit n→ ∞ and characterize the jamming constant, the limiting proportion of active vertices in the maximal greedy set.
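
    The classical nearest-neighbour blocking rule described above is easy to simulate; the sketch below (sizes and mean degrees are illustrative) estimates the jamming constant on Erdős-Rényi graphs by activating vertices in random order.

```python
import numpy as np
import networkx as nx

def rsa_jamming_fraction(n, mean_degree, rng):
    G = nx.fast_gnp_random_graph(n, mean_degree / n, seed=int(rng.integers(1 << 30)))
    active = set()
    for v in rng.permutation(n):                 # random sequential order
        if not any(u in active for u in G.neighbors(v)):
            active.add(int(v))                   # activate only if no neighbour is active
    return len(active) / n

rng = np.random.default_rng(5)
for c in (1.0, 2.0, 5.0):
    est = np.mean([rsa_jamming_fraction(5000, c, rng) for _ in range(10)])
    print(f"mean degree {c}: jamming constant ~ {est:.3f}")
```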

  7. Anderson localization and ergodicity on random regular graphs

    NASA Astrophysics Data System (ADS)

    Tikhonov, K. S.; Mirlin, A. D.; Skvortsov, M. A.

    2016-12-01

    A numerical study of Anderson transition on random regular graphs (RRGs) with diagonal disorder is performed. The problem can be described as a tight-binding model on a lattice with N sites that is locally a tree with constant connectivity. In a certain sense, the RRG ensemble can be seen as an infinite-dimensional (d → ∞) cousin of the Anderson model in d dimensions. We focus on the delocalized side of the transition and stress the importance of finite-size effects. We show that the data can be interpreted in terms of the finite-size crossover from a small (N ≪ N_c) to a large (N ≫ N_c) system, where N_c is the correlation volume diverging exponentially at the transition. A distinct feature of this crossover is a nonmonotonicity of the spectral and wave-function statistics, which is related to properties of the critical phase in the studied model and renders the finite-size analysis highly nontrivial. Our results support an analytical prediction that states in the delocalized phase (and at N ≫ N_c) are ergodic in the sense that their inverse participation ratio scales as 1/N.
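
    A toy numerical sketch of the model described above (a single small realization, not the paper's finite-size-scaling analysis): a tight-binding Hamiltonian on a random regular graph of degree 3 with on-site disorder uniform in [-W/2, W/2]; the inverse participation ratio of mid-spectrum eigenstates gives a rough handle on delocalized versus localized behaviour. The system size and disorder values are placeholders.

```python
import numpy as np
import networkx as nx

def midspectrum_ipr(n, W, seed):
    rng = np.random.default_rng(seed)
    G = nx.random_regular_graph(d=3, n=n, seed=seed)
    H = nx.to_numpy_array(G) + np.diag(rng.uniform(-W / 2, W / 2, size=n))
    vals, vecs = np.linalg.eigh(H)
    mid = np.argsort(np.abs(vals))[: n // 20]      # eigenstates closest to E = 0
    return float(np.mean(np.sum(vecs[:, mid] ** 4, axis=0)))   # average IPR

for W in (2.0, 10.0, 25.0):
    print(f"W = {W:5.1f}  mean mid-spectrum IPR = {midspectrum_ipr(2000, W, seed=7):.4f}")
```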

  8. Random geometric graph description of connectedness percolation in rod systems.

    PubMed

    Chatterjee, Avik P; Grimaldi, Claudio

    2015-09-01

    The problem of continuum percolation in dispersions of rods is reformulated in terms of weighted random geometric graphs. Nodes (or sites or vertices) in the graph represent spatial locations occupied by the centers of the rods. The probability that an edge (or link) connects any randomly selected pair of nodes depends upon the rod volume fraction as well as the distribution over their sizes and shapes, and also upon quantities that characterize their state of dispersion (such as the orientational distribution function). We employ the observation that contributions from closed loops of connected rods are negligible in the limit of large aspect ratios to obtain percolation thresholds that are fully equivalent to those calculated within the second-virial approximation of the connectedness Ornstein-Zernike equation. Our formulation can account for effects due to interactions between the rods, and many-body features can be partially addressed by suitable choices for the edge probabilities.

  9. Random geometric graph description of connectedness percolation in rod systems

    NASA Astrophysics Data System (ADS)

    Chatterjee, Avik P.; Grimaldi, Claudio

    2015-09-01

    The problem of continuum percolation in dispersions of rods is reformulated in terms of weighted random geometric graphs. Nodes (or sites or vertices) in the graph represent spatial locations occupied by the centers of the rods. The probability that an edge (or link) connects any randomly selected pair of nodes depends upon the rod volume fraction as well as the distribution over their sizes and shapes, and also upon quantities that characterize their state of dispersion (such as the orientational distribution function). We employ the observation that contributions from closed loops of connected rods are negligible in the limit of large aspect ratios to obtain percolation thresholds that are fully equivalent to those calculated within the second-virial approximation of the connectedness Ornstein-Zernike equation. Our formulation can account for effects due to interactions between the rods, and many-body features can be partially addressed by suitable choices for the edge probabilities.

  10. Horizontal visibility graphs: exact results for random time series.

    PubMed

    Luque, B; Lacasa, L; Ballesteros, F; Luque, J

    2009-10-01

    The visibility algorithm has been recently introduced as a mapping between time series and complex networks. This procedure allows us to apply methods of complex network theory for characterizing time series. In this work we present the horizontal visibility algorithm, a geometrically simpler and analytically solvable version of our former algorithm, focusing on the mapping of random series (series of independent identically distributed random variables). After presenting some properties of the algorithm, we present exact results on the topological properties of graphs associated with random series, namely, the degree distribution, the clustering coefficient, and the mean path length. We show that the horizontal visibility algorithm stands as a simple method to discriminate randomness in time series, since any random series maps to a graph with an exponential degree distribution of the shape P(k) = (1/3)(2/3)^(k-2), independent of the probability distribution from which the series was generated. Accordingly, visibility graphs with other P(k) are related to nonrandom series. Numerical simulations confirm the accuracy of the theorems for finite series. In a second part, we show that the method is able to distinguish chaotic series from independent and identically distributed (i.i.d.) series, studying the following situations: (i) noise-free low-dimensional chaotic series, (ii) low-dimensional noisy chaotic series, even in the presence of large amounts of noise, and (iii) high-dimensional chaotic series (coupled map lattice), without the need for additional techniques such as surrogate data or noise reduction methods. Finally, heuristic arguments are given to explain the topological properties of chaotic series, and several sequences that are conjectured to be random are analyzed.
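
    The exponential law quoted above is easy to check numerically; the sketch below builds the horizontal visibility graph of an i.i.d. uniform series and compares the empirical degree distribution with (1/3)(2/3)^(k-2) (the series length and the uniform distribution are illustrative choices).

```python
import numpy as np
from collections import Counter

def hvg_degrees(x):
    """Horizontal visibility: i and j are linked if every value strictly
    between them is smaller than both x[i] and x[j]."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        blocker = -np.inf                      # tallest value seen between i and j
        for j in range(i + 1, n):
            if blocker < min(x[i], x[j]):
                deg[i] += 1
                deg[j] += 1
            blocker = max(blocker, x[j])
            if blocker >= x[i]:                # nothing further right is visible from i
                break
    return deg

rng = np.random.default_rng(11)
deg = hvg_degrees(rng.random(20000))
counts = Counter(deg)
for k in range(2, 8):
    print(k, round(counts[k] / len(deg), 4), " theory:", round((1 / 3) * (2 / 3) ** (k - 2), 4))
```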

  11. Measuring Edge Importance: A Quantitative Analysis of the Stochastic Shielding Approximation for Random Processes on Graphs

    PubMed Central

    2014-01-01

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin–Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán’s approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process. PMID:24742077

  12. Computational Graph Theoretical Model of the Zebrafish Sensorimotor Pathway

    NASA Astrophysics Data System (ADS)

    Peterson, Joshua M.; Stobb, Michael; Mazzag, Bori; Gahtan, Ethan

    2011-11-01

    Mapping the detailed connectivity patterns of neural circuits is a central goal of neuroscience and has been the focus of extensive current research [4, 3]. The best quantitative approach to analyze the acquired data is still unclear but graph theory has been used with success [3, 1]. We present a graph theoretical model with vertices and edges representing neurons and synaptic connections, respectively. Our system is the zebrafish posterior lateral line sensorimotor pathway. The goal of our analysis is to elucidate mechanisms of information processing in this neural pathway by comparing the mathematical properties of its graph to those of other, previously described graphs. We create a zebrafish model based on currently known anatomical data. The degree distributions and small-world measures of this model are compared to small-world, random and 3-compartment random graphs of the same size (with over 2500 nodes and 160,000 connections). We find that the zebrafish graph shows small-worldness similar to other neural networks and does not have a scale-free distribution of connections.

  13. Using Combinatorica/Mathematica for Student Projects in Random Graph Theory

    ERIC Educational Resources Information Center

    Pfaff, Thomas J.; Zaret, Michele

    2006-01-01

    We give an example of a student project that experimentally explores a topic in random graph theory. We use the "Combinatorica" package in "Mathematica" to estimate the minimum number of edges needed in a random graph to have a 50 percent chance that the graph is connected. We provide the "Mathematica" code and compare it to the known theoretical…
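
    A Python analogue of the experiment described in that project (hedged: the original uses Combinatorica in Mathematica, and the graph size and trial counts below are illustrative): estimate by Monte Carlo the probability that G(n, m) is connected as the number of edges m grows, and look for the 50 percent point, which theory places near (n/2) ln n.

```python
import networkx as nx

def connection_probability(n, m, trials=200, seed=0):
    return sum(nx.is_connected(nx.gnm_random_graph(n, m, seed=seed + t))
               for t in range(trials)) / trials

n = 50                                   # (n/2) ln n is roughly 98 edges for n = 50
for m in range(60, 181, 20):
    print(f"edges = {m:3d}  P(connected) ~ {connection_probability(n, m):.2f}")
```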

  14. Graph-Based Transform for 2D Piecewise Smooth Signals With Random Discontinuity Locations.

    PubMed

    Zhang, Dong; Liang, Jie

    2017-04-01

    The graph-based block transform recently emerged as an effective tool for compressing some special signals such as depth images in 3D videos. However, in existing methods, overheads are required to describe the graph of the block, from which the decoder has to calculate the transform via time-consuming eigendecomposition. To address these problems, in this paper, we aim to develop a single graph-based transform for a class of 2D piecewise smooth signals with similar discontinuity patterns. We first consider the deterministic case with a known discontinuity location in each row. We propose a 2D first-order autoregression (2D AR1) model and a 2D graph for this type of signals. We show that the closed-form expression of the inverse of a biased Laplacian matrix of the proposed 2D graph is exactly the covariance matrix of the proposed 2D AR1 model. Therefore, the optimal transform for the signal is given by the eigenvectors of the proposed graph Laplacian. Next, we show that similar results hold in the random case, where the locations of the discontinuities in different rows are randomly distributed within a confined region, and we derive the closed-form expression of the corresponding optimal 2D graph Laplacian. The theory developed in this paper can be used to design both pre-computed transforms and signal-dependent transforms with low complexities. Finally, depth image coding experiments demonstrate that our methods can achieve similar performance to the state-of-the-art method, but our complexity is much lower.
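
    A one-dimensional sketch of the central idea (not the paper's 2D AR1 derivation or its random-discontinuity analysis): for a piecewise-smooth row with a jump between samples d-1 and d, build a path-graph Laplacian whose weight across the jump is small and use its eigenvectors as the transform; most of the signal energy then falls into a few coefficients. The signal, the weak-link weight, and the sizes are placeholder choices.

```python
import numpy as np

def discontinuity_aware_basis(n, d, weak=0.01):
    w = np.ones(n - 1)
    w[d - 1] = weak                      # weak edge across the discontinuity
    L = np.zeros((n, n))
    for i, wi in enumerate(w):           # assemble the weighted path-graph Laplacian
        L[i, i] += wi
        L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi
        L[i + 1, i] -= wi
    _, U = np.linalg.eigh(L)             # columns form the graph Fourier basis
    return U

n, d = 16, 6
row = np.concatenate([np.linspace(0.0, 0.2, d), np.linspace(1.0, 1.3, n - d)])
coeffs = discontinuity_aware_basis(n, d).T @ row
energy = np.sort(coeffs ** 2)[::-1]
print("fraction of energy in the 3 largest coefficients:",
      round(energy[:3].sum() / energy.sum(), 4))
```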

  15. Ensembles of physical states and random quantum circuits on graphs

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-11-01

    In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma et al. [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes for the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular here we focus on proxies of quantum entanglement as purity and α-Renyi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, choice of probability measure over the local unitaries, and circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated to the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property (that is, it holds on average and the fluctuations around the average vanish for large systems) of physical states. The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law arises as is typical when the evolution time scales like O(L).

  16. Large Deviation Function for the Number of Eigenvalues of Sparse Random Graphs Inside an Interval

    NASA Astrophysics Data System (ADS)

    Metz, Fernando L.; Pérez Castillo, Isaac

    2016-09-01

    We present a general method to obtain the exact rate function Ψ_[a,b](k) controlling the large deviation probability Prob[I_N[a,b] = kN] ≍ e^{-N Ψ_[a,b](k)} that an N×N sparse random matrix has I_N[a,b] = kN eigenvalues inside the interval [a,b]. The method is applied to study the eigenvalue statistics in two distinct examples: (i) the shifted index number of eigenvalues for an ensemble of Erdős-Rényi graphs and (ii) the number of eigenvalues within a bounded region of the spectrum for the Anderson model on regular random graphs. A salient feature of the rate function in both cases is that, unlike rotationally invariant random matrices, it is asymmetric with respect to its minimum. The asymmetric character depends on the disorder in a way that is compatible with the distinct eigenvalue statistics corresponding to localized and delocalized eigenstates. The results also show that the level compressibility κ_2/κ_1 for the Anderson model on a regular graph satisfies 0 < κ_2/κ_1 < 1 in the bulk regime, in contrast with the behavior found in Gaussian random matrices. Our theoretical findings are thoroughly compared to numerical diagonalization in both cases, showing reasonably good agreement.

  17. Large Deviation Function for the Number of Eigenvalues of Sparse Random Graphs Inside an Interval.

    PubMed

    Metz, Fernando L; Pérez Castillo, Isaac

    2016-09-02

    We present a general method to obtain the exact rate function Ψ_{[a,b]}(k) controlling the large deviation probability Prob[I_{N}[a,b]=kN]≍e^{-NΨ_{[a,b]}(k)} that an N×N sparse random matrix has I_{N}[a,b]=kN eigenvalues inside the interval [a,b]. The method is applied to study the eigenvalue statistics in two distinct examples: (i) the shifted index number of eigenvalues for an ensemble of Erdös-Rényi graphs and (ii) the number of eigenvalues within a bounded region of the spectrum for the Anderson model on regular random graphs. A salient feature of the rate function in both cases is that, unlike rotationally invariant random matrices, it is asymmetric with respect to its minimum. The asymmetric character depends on the disorder in a way that is compatible with the distinct eigenvalue statistics corresponding to localized and delocalized eigenstates. The results also show that the level compressibility κ_{2}/κ_{1} for the Anderson model on a regular graph satisfies 0<κ_{2}/κ_{1}<1 in the bulk regime, in contrast with the behavior found in Gaussian random matrices. Our theoretical findings are thoroughly compared to numerical diagonalization in both cases, showing reasonably good agreement.

  18. A conjecture on the maximum cut and bisection width in random regular graphs

    NASA Astrophysics Data System (ADS)

    Zdeborová, Lenka; Boettcher, Stefan

    2010-02-01

    The asymptotic properties of random regular graphs are objects of extensive study in mathematics and physics. In this paper we argue, using the theory of spin glasses in physics, that in random regular graphs the maximum cut size asymptotically equals the number of edges in the graph minus the minimum bisection size. Maximum cut and minimal bisection are two famous NP-complete problems with no known general relation between them; hence our conjecture is a surprising property for random regular graphs. We further support the conjecture with numerical simulations. A rigorous proof of this relation is an obvious challenge.

  19. Evolution of tag-based cooperation on Erdős-Rényi random graphs

    NASA Astrophysics Data System (ADS)

    Lima, F. W. S.; Hadzibeganovic, Tarik; Stauffer, Dietrich

    2014-12-01

    Here, we study an agent-based model of the evolution of tag-mediated cooperation on Erdős-Rényi random graphs. In our model, agents with heritable phenotypic traits play pairwise Prisoner's Dilemma-like games and follow one of the four possible strategies: Ethnocentric, altruistic, egoistic and cosmopolitan. Ethnocentric and cosmopolitan strategies are conditional, i.e. their selection depends upon the shared phenotypic similarity among interacting agents. The remaining two strategies are always unconditional, meaning that egoists always defect while altruists always cooperate. Our simulations revealed that ethnocentrism can win in both early and later evolutionary stages on directed random graphs when reproduction of artificial agents was asexual; however, under the sexual mode of reproduction on a directed random graph, we found that altruists dominate initially for a rather short period of time, whereas ethnocentrics and egoists suppress other strategists and compete for dominance in the intermediate and later evolutionary stages. Among our results, we also find surprisingly regular oscillations which are not damped in the course of time even after half a million Monte Carlo steps. Unlike most previous studies, our findings highlight conditions under which ethnocentrism is less stable or suppressed by other competing strategies.

  20. The XXZ Heisenberg model on random surfaces

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Sedrakyan, A.

    2013-09-01

    We consider integrable models, or in general any model defined by an R-matrix, on random surfaces, which are discretized using random Manhattan lattices. The set of random Manhattan lattices is defined as the set dual to the lattice random surfaces embedded on a regular d-dimensional lattice. They can also be associated with the random graphs of multiparticle scattering nodes. As an example we formulate a random matrix model where the partition function reproduces the annealed average of the XXZ Heisenberg model over all random Manhattan lattices. A technique is presented which reduces the random matrix integration in the partition function to an integration over the eigenvalues.

  1. Modeling Transmission Line Networks Using Quantum Graphs

    NASA Astrophysics Data System (ADS)

    Koch, Trystan; Antonsen, Thomas

    Quantum graphs--one dimensional edges, connecting nodes, that support propagating Schrödinger wavefunctions--have been studied extensively as tractable models of wave chaotic behavior (Smilansky and Gnutzmann 2006, Berkolaiko and Kuchment 2013). Here we consider the electrical analog, in which the graph represents an electrical network where the edges are transmission lines (Hul et al. 2004) and the nodes contain either discrete circuit elements or intricate circuit elements best represented by arbitrary scattering matrices. Including these extra degrees of freedom at the nodes leads to phenomena that do not arise in simpler graph models. We investigate the properties of eigenfrequencies and eigenfunctions on these graphs, and relate these to the statistical description of voltages on the transmission lines when driving the network externally. The study of electromagnetic compatibility, the effect of external radiation on complicated systems with numerous interconnected cables, motivates our research into this extension of the graph model. Work supported by the Office of Naval Research (N0014130474) and the Air Force Office of Scientific Research.

  2. A weak zero-one law for sequences of random distance graphs

    SciTech Connect

    Zhukovskii, Maksim E

    2012-07-31

    We study zero-one laws for properties of random distance graphs. Properties written in a first-order language are considered. For p(N) such that pN^α → ∞ as N → ∞, and (1-p)N^α → ∞ as N → ∞ for any α > 0, we succeed in refuting the law. In this connection, we consider a weak zero-one j-law. For this law, we obtain results for random distance graphs which are similar to the assertions concerning the classical zero-one law for random graphs. Bibliography: 18 titles.

  3. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    DOE PAGES

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; ...

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  4. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    SciTech Connect

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  5. Neural Population Dynamics Modeled by Mean-Field Graphs

    NASA Astrophysics Data System (ADS)

    Kozma, Robert; Puljic, Marko

    2011-09-01

    In this work we apply random graph theory approach to describe neural population dynamics. There are important advantages of using random graph theory approach in addition to ordinary and partial differential equations. The mathematical theory of large-scale random graphs provides an efficient tool to describe transitions between high- and low-dimensional spaces. Recent advances in studying neural correlates of higher cognition indicate the significance of sudden changes in space-time neurodynamics, which can be efficiently described as phase transitions in the neuropil medium. Phase transitions are rigorously defined mathematically on random graph sequences and they can be naturally generalized to a class of percolation processes called neuropercolation. In this work we employ mean-field graphs with given vertex degree distribution and edge strength distribution. We demonstrate the emergence of collective oscillations in the style of brains.

  6. Identifying subcellular localizations of mammalian protein complexes based on graph theory with a random forest algorithm.

    PubMed

    Li, Zhan-Chao; Lai, Yan-Hua; Chen, Li-Li; Chen, Chao; Xie, Yun; Dai, Zong; Zou, Xiao-Yong

    2013-04-05

    In the post-genome era, one of the most important and challenging tasks is to identify the subcellular localizations of protein complexes, and further elucidate their functions in human health with applications to understand disease mechanisms, diagnosis and therapy. Although various experimental approaches have been developed and employed to identify the subcellular localizations of protein complexes, the laboratory technologies fall far behind the rapid accumulation of protein complexes. Therefore, it is highly desirable to develop a computational method to rapidly and reliably identify the subcellular localizations of protein complexes. In this study, a novel method is proposed for predicting subcellular localizations of mammalian protein complexes based on graph theory with a random forest algorithm. Protein complexes are modeled as weighted graphs containing nodes and edges, where nodes represent proteins, edges represent protein-protein interactions and weights are descriptors of protein primary structures. Some topological structure features are proposed and adopted to characterize protein complexes based on graph theory. Random forest is employed to construct a model and predict subcellular localizations of protein complexes. Accuracies on a training set by a 10-fold cross-validation test for predicting plasma membrane/membrane attached, cytoplasm and nucleus are 84.78%, 71.30%, and 82.00%, respectively. And accuracies for the independent test set are 81.31%, 69.95% and 81.00%, respectively. These high prediction accuracies exhibit the state-of-the-art performance of the current method. It is anticipated that the proposed method may become a useful high-throughput tool and plays a complementary role to the existing experimental techniques in identifying subcellular localizations of mammalian protein complexes. The source code of Matlab and the dataset can be obtained freely on request from the authors.

  7. Degree distributions of the visibility graphs mapped from fractional Brownian motions and multifractal random walks

    NASA Astrophysics Data System (ADS)

    Ni, Xiao-Hui; Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2009-10-01

    The dynamics of a complex system is usually recorded in the form of time series, which can be studied through its visibility graph from a complex network perspective. We investigate the visibility graphs extracted from fractional Brownian motions and multifractal random walks, and find that the degree distributions exhibit power-law behaviors, in which the power-law exponent α is a linear function of the Hurst index H of the time series. We also find that the degree distribution of the visibility graph is mainly determined by the temporal correlation of the original time series, with minor influence from its possible multifractal nature. As an example, we study the visibility graphs constructed from three Chinese stock market indexes and find that the degree distributions have power-law tails, where the tail exponents of the visibility graphs and the Hurst indexes of the underlying index series are close to the α ∼ H linear relationship.
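
    The mapping used here is the natural visibility graph: two time points are connected when every intermediate sample lies strictly below the straight line joining them. The minimal sketch below builds that graph for an arbitrary series; plain Gaussian noise is used as a placeholder input, not the fractional Brownian motions or multifractal walks studied in the paper.

      # Sketch: natural visibility graph of a time series and its degree sequence.
      import numpy as np

      def visibility_graph(series):
          """Return the edge set of the natural visibility graph of `series`."""
          n = len(series)
          edges = set()
          for a in range(n):
              for b in range(a + 1, n):
                  ya, yb = series[a], series[b]
                  # (a, b) are mutually visible if every intermediate sample lies
                  # strictly below the segment joining (a, ya) and (b, yb).
                  visible = all(
                      series[c] < ya + (yb - ya) * (c - a) / (b - a)
                      for c in range(a + 1, b)
                  )
                  if visible:
                      edges.add((a, b))
          return edges

      rng = np.random.default_rng(0)
      series = rng.standard_normal(200)      # placeholder for an fBm sample path
      edges = visibility_graph(series)
      degree = np.zeros(len(series), dtype=int)
      for a, b in edges:
          degree[a] += 1
          degree[b] += 1
      print("mean degree:", degree.mean())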

  8. Bonabeau model on a fully connected graph

    NASA Astrophysics Data System (ADS)

    Malarz, K.; Stauffer, D.; Kułakowski, K.

    2006-03-01

    Numerical simulations are reported for the Bonabeau model on a fully connected graph, where spatial degrees of freedom are absent. The control parameter is the memory factor f. The phase transition is observed in the dispersion of the agents' powers h_i. The critical value f_C shows hysteretic behavior with respect to the initial distribution of h_i. f_C decreases with the system size; this decrease can be compensated by a greater number of fights between successive global reductions of the distribution width of h_i. The latter step is equivalent to a partial forgetting.

  9. Graph Partitioning Models for Parallel Computing

    SciTech Connect

    Hendrickson, B.; Kolda, T.G.

    1999-03-02

    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.

  10. Fragmentation properties of two-dimensional proximity graphs considering random failures and targeted attacks

    NASA Astrophysics Data System (ADS)

    Norrenbrock, C.; Melchert, O.; Hartmann, A. K.

    2016-12-01

    The pivotal quality of proximity graphs is connectivity, i.e., all nodes in the graph are connected to one another either directly or via intermediate nodes. These types of graphs are often robust, i.e., they are able to function well even if they are subject to limited removal of elementary building blocks, as may occur for random failures or targeted attacks. Here, we study how the structure of these graphs is affected when nodes get removed successively until an extensive fraction is removed such that the graphs fragment. We study different types of proximity graphs for various node-removal strategies. We use different types of observables to monitor the fragmentation process, simple ones like the number and sizes of connected components and more complex ones like the hop diameter and the backup capacity, which is needed to make a network N-1 resilient. The actual fragmentation turns out to be described by a second-order phase transition. Using finite-size scaling analyses we numerically assess the threshold fraction of removed nodes, which is characteristic for the particular graph type and node deletion scheme; this suffices to decompose the underlying graphs.
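
    The basic fragmentation experiment described here can be sketched directly: remove nodes one at a time and track the relative size of the largest connected component. The sketch below uses a random geometric graph as a simple proximity-type graph and random failures only; the paper's targeted-attack strategies and finite-size scaling analysis are not reproduced.

      # Sketch: random node removal from a proximity-like graph, monitoring the
      # size of the largest connected component as the removed fraction grows.
      import random
      import networkx as nx

      random.seed(1)
      g = nx.random_geometric_graph(500, radius=0.08, seed=1)
      n0 = g.number_of_nodes()

      order = list(g.nodes())
      random.shuffle(order)                      # random-failure removal order
      for removed, node in enumerate(order, start=1):
          g.remove_node(node)
          if removed % 100 == 0 and g.number_of_nodes() > 0:
              giant = max(nx.connected_components(g), key=len)
              print(f"removed {removed / n0:.0%}  largest component {len(giant) / n0:.2f}")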

  11. Random Forest classification based on star graph topological indices for antioxidant proteins.

    PubMed

    Fernández-Blanco, Enrique; Aguiar-Pulido, Vanessa; Munteanu, Cristian Robert; Dorado, Julian

    2013-01-21

    Aging and quality of life are important research topics nowadays in areas such as life sciences, chemistry and pharmacology. People live longer and thus want to spend that extra time with a better quality of life. In this regard, there exists a tiny subset of molecules in nature, known as antioxidant proteins, that may influence the aging process. However, testing every single protein in order to identify its properties is quite expensive and inefficient. For this reason, this work proposes a model in which the primary structure of the protein is represented using complex network graphs, which can be used to reduce the number of proteins to be tested for antioxidant biological activity. The graph obtained as a representation helps describe the complex system by using topological indices. More specifically, in this work, Randić's Star Networks have been used as well as the associated indices, calculated with the S2SNet tool. In order to simulate the existing proportion of antioxidant proteins in nature, a dataset containing 1999 proteins, of which 324 are antioxidant proteins, was created. Using these data as input, Star Graph Topological Indices were calculated with the S2SNet tool. These indices were then used as input to several classification techniques. Among the techniques utilised, the Random Forest showed the best performance, achieving a score of 94% correctly classified instances. Although the target class (antioxidant proteins) represents a tiny subset of the dataset, the proposed model is able to achieve 81.8% correctly classified instances for this class, with a precision of 81.3%.

  12. Simple graph models of information spread in finite populations

    PubMed Central

    Voorhees, Burton; Ryder, Bergerud

    2015-01-01

    We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call ‘single-link’ graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian and another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components and these are compared to the partial bipartite graphs. PMID:26064661
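
    The Moran probability referred to above has a simple closed form for the complete graph, and fixation probabilities on other graphs can be estimated by direct simulation of the Moran birth-death process. The sketch below shows both; the fitness value and graph are illustrative, and the simulation is a brute-force check rather than the paper's analytical treatment.

      # Sketch: Moran fixation probability on the complete graph (closed form)
      # versus direct simulation of the Moran birth-death process on a graph.
      import random
      import networkx as nx

      def moran_probability(N, r):
          """Fixation probability of one mutant of relative fitness r in a
          well-mixed population of size N (complete graph)."""
          if r == 1.0:
              return 1.0 / N
          return (1 - 1 / r) / (1 - 1 / r ** N)

      def simulate_fixation(graph, r, trials=2000, seed=0):
          rng = random.Random(seed)
          nodes = list(graph.nodes())
          fixed = 0
          for _ in range(trials):
              mutant = {rng.choice(nodes)}
              while 0 < len(mutant) < len(nodes):
                  # Birth: pick a reproducing node proportionally to fitness.
                  weights = [r if v in mutant else 1.0 for v in nodes]
                  parent = rng.choices(nodes, weights=weights, k=1)[0]
                  # Death: a uniformly chosen neighbour is replaced by the offspring.
                  child = rng.choice(list(graph.neighbors(parent)))
                  if parent in mutant:
                      mutant.add(child)
                  else:
                      mutant.discard(child)
              fixed += len(mutant) == len(nodes)
          return fixed / trials

      g = nx.complete_graph(8)
      print(moran_probability(8, r=1.5), simulate_fixation(g, r=1.5))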

  13. Complete graph model for community detection

    NASA Astrophysics Data System (ADS)

    Sun, Peng Gang; Sun, Xiya

    2017-04-01

    Community detection poses many considerable problems and has attracted much attention for many years. This paper develops a new framework that measures the interior and the exterior of a community using the same metric, the complete graph model. In particular, the exterior is modeled as a complete bipartite graph. We partition a network into subnetworks by maximizing the difference between the interior and the exterior of the subnetworks. In addition, we compare our approach with some state-of-the-art methods on computer-generated networks based on the LFR benchmark as well as on real-world networks. The experimental results indicate that our approach obtains better results for community detection, is capable of splitting irregular networks and achieves perfect results on the karate network and the dolphin network.

  14. 3D Mesh Segmentation Based on Markov Random Fields and Graph Cuts

    NASA Astrophysics Data System (ADS)

    Shi, Zhenfeng; Le, Dan; Yu, Liyang; Niu, Xiamu

    3D mesh segmentation has become an important research field in computer graphics during the past few decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented. However, only a few algorithms based on Markov Random Fields (MRF) have been presented for 3D object segmentation. In this letter, we present a definition of mesh segmentation as a labeling problem. Inspired by the capability of MRFs to combine the geometric and topological information of a 3D mesh, we propose a novel 3D mesh segmentation model based on MRFs and Graph Cuts. Experimental results show that our MRF-based scheme achieves effective segmentation.

  15. A formal definition of data flow graph models

    NASA Technical Reports Server (NTRS)

    Kavi, Krishna M.; Buckles, Bill P.; Bhat, U. Narayan

    1986-01-01

    In this paper, a new model for parallel computations and parallel computer systems that is based on data flow principles is presented. Uninterpreted data flow graphs can be used to model computer systems including data driven and parallel processors. A data flow graph is defined to be a bipartite graph with actors and links as the two vertex classes. Actors can be considered similar to transitions in Petri nets, and links similar to places. The nondeterministic nature of uninterpreted data flow graphs necessitates the derivation of liveness conditions.

  16. Reducing Redundancies in Reconfigurable Antenna Structures Using Graph Models

    SciTech Connect

    Costantine, Joseph; al-Saffar, Sinan; Christodoulou, Christos G.; Abdallah, Chaouki T.

    2010-04-23

    Many reconfigurable antennas have redundant components in their structures. In this paper we present an approach for reducing redundancies in reconfigurable antenna structures using graph models. We study reconfigurable antennas, which are grouped, categorized and modeled according to a set of proposed graph rules. Several examples are presented and discussed to demonstrate the validity of this new technique.

  17. Robust Spectral Clustering Using Statistical Sub-Graph Affinity Model

    PubMed Central

    Eichel, Justin A.; Wong, Alexander; Fieguth, Paul; Clausi, David A.

    2013-01-01

    Spectral clustering methods have been shown to be effective for image segmentation. Unfortunately, the presence of image noise as well as textural characteristics can have a significant negative effect on the segmentation performance. To accommodate for image noise and textural characteristics, this study introduces the concept of sub-graph affinity, where each node in the primary graph is modeled as a sub-graph characterizing the neighborhood surrounding the node. The statistical sub-graph affinity matrix is then constructed based on the statistical relationships between sub-graphs of connected nodes in the primary graph, thus counteracting the uncertainty associated with the image noise and textural characteristics by utilizing more information than traditional spectral clustering methods. Experiments using both synthetic and natural images under various levels of noise contamination demonstrate that the proposed approach can achieve improved segmentation performance when compared to existing spectral clustering methods. PMID:24386111

  18. Phase Transitions for the Cavity Approach to the Clique Problem on Random Graphs

    NASA Astrophysics Data System (ADS)

    Gaudillière, Alexandre; Scoppola, Benedetto; Scoppola, Elisabetta; Viale, Massimiliano

    2011-12-01

    We give a rigorous proof of two phase transitions for a disordered statistical mechanics system used to define an algorithm to find large cliques inside Erdős random graphs. Such a system is a conservative probabilistic cellular automaton inspired by the cavity method originally introduced in spin glass theory.

  19. Absolutely continuous spectrum implies ballistic transport for quantum particles in a random potential on tree graphs

    NASA Astrophysics Data System (ADS)

    Aizenman, Michael; Warzel, Simone

    2012-09-01

    We discuss the dynamical implications of the recent proof that for a quantum particle in a random potential on a regular tree graph absolutely continuous (ac) spectrum occurs non-perturbatively through rare fluctuation-enabled resonances. The main result is spelled in the title.

  20. Absolutely continuous spectrum implies ballistic transport for quantum particles in a random potential on tree graphs

    SciTech Connect

    Aizenman, Michael; Warzel, Simone

    2012-09-15

    We discuss the dynamical implications of the recent proof that for a quantum particle in a random potential on a regular tree graph absolutely continuous (ac) spectrum occurs non-perturbatively through rare fluctuation-enabled resonances. The main result is spelled in the title.

  1. Bounding the Edge Cover Time of Random Walks on Graphs

    DTIC Science & Technology

    2011-07-21


  2. Using graph approach for managing connectivity in integrative landscape modelling

    NASA Astrophysics Data System (ADS)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

    In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (modification of connections or elements). This graph approach allows greater genericity in landscape representation, management of complex connections, and easier development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers or modelers, and several graph tools are available, such as graph traversal algorithms or graph displays. Graph representation can be managed i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format, or ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). Open

  3. Emergence of the giant weak component in directed random graphs with arbitrary degree distributions

    NASA Astrophysics Data System (ADS)

    Kryven, Ivan

    2016-07-01

    The weak component generalizes the idea of connected components to directed graphs. In this paper, an exact criterion for the existence of the giant weak component is derived for directed graphs with arbitrary bivariate degree distributions. In addition, we consider a random process for evolving directed graphs with bounded degrees. The bounds are not the same for different vertices but satisfy a predefined distribution. The analytic expression obtained for the evolving degree distribution is then combined with the weak-component criterion to obtain the exact time of the phase transition. The phase-transition time is obtained as a function of the distribution that bounds the degrees. Remarkably, when viewed from the step-polymerization formalism, the new results yield Flory-Stockmayer gelation theory and generalize it to a broader scope.

  4. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such a limit law is stationary, in the sense that glueing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on ℝ+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  5. Monadic structures over an ordered universal random graph and finite automata

    NASA Astrophysics Data System (ADS)

    Dudakov, Sergey M.

    2011-10-01

    We continue the investigation of the expressive power of the language of predicate logic for finite algebraic systems embedded in infinite systems. This investigation stems from papers of M. A. Taitslin, M. Benedikt and L. Libkin, among others. We study the properties of a finite monadic system which can be expressed by formulae if such a system is embedded in a random graph that is totally ordered in an arbitrary way. The Büchi representation is used to connect monadic structures and formal languages. It is shown that, if one restricts attention to formulae that are -invariant in totally ordered random graphs, then these formulae correspond to finite automata. We show that =-invariant formulae expressing the properties of the embedded system itself can express only Boolean combinations of properties of the form `the cardinality of an intersection of one-place predicates belongs to one of finitely many fixed finite or infinite arithmetic progressions'.

  6. Voter model on the two-clique graph

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki

    2014-07-01

    I examine the mean consensus time (i.e., exit time) of the voter model in the so-called two-clique graph. The two-clique graph is composed of two cliques interconnected by some links and considered as a toy model of networks with community structure or multilayer networks. I analytically show that, as the number of interclique links per node is varied, the mean consensus time experiences a crossover between a fast consensus regime [i.e., O (N)] and a slow consensus regime [i.e., O (N2)], where N is the number of nodes. The fast regime is consistent with the result for homogeneous well-mixed graphs such as the complete graph. The slow regime appears only when the entire network has O (1) interclique links. The present results suggest that the effect of community structure on the consensus time of the voter model is fairly limited.

  7. Voter model on the two-clique graph.

    PubMed

    Masuda, Naoki

    2014-07-01

    I examine the mean consensus time (i.e., exit time) of the voter model in the so-called two-clique graph. The two-clique graph is composed of two cliques interconnected by some links and considered as a toy model of networks with community structure or multilayer networks. I analytically show that, as the number of interclique links per node is varied, the mean consensus time experiences a crossover between a fast consensus regime [i.e., O(N)] and a slow consensus regime [i.e., O(N(2))], where N is the number of nodes. The fast regime is consistent with the result for homogeneous well-mixed graphs such as the complete graph. The slow regime appears only when the entire network has O(1) interclique links. The present results suggest that the effect of community structure on the consensus time of the voter model is fairly limited.
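
    The setting described above is easy to explore numerically: two equal cliques joined by a chosen number of interclique links, with the voter model run until consensus. The sketch below is an illustrative brute-force estimate of the mean consensus time for small parameters; the analytical O(N) versus O(N^2) crossover derived in the paper is not reproduced here.

      # Sketch: voter model on a two-clique graph; estimate the mean consensus
      # time for a given number of interclique links.
      import random
      import networkx as nx

      def two_clique_graph(n_per_clique, interclique_links, seed=0):
          rng = random.Random(seed)
          g = nx.disjoint_union(nx.complete_graph(n_per_clique),
                                nx.complete_graph(n_per_clique))
          left = list(range(n_per_clique))
          right = list(range(n_per_clique, 2 * n_per_clique))
          target = n_per_clique * (n_per_clique - 1) + interclique_links
          while g.number_of_edges() < target:
              g.add_edge(rng.choice(left), rng.choice(right))
          return g

      def mean_consensus_time(g, trials=200, seed=0):
          rng = random.Random(seed)
          nodes = list(g.nodes())
          total = 0
          for _ in range(trials):
              opinion = {v: rng.randint(0, 1) for v in nodes}
              steps = 0
              while len(set(opinion.values())) > 1:
                  v = rng.choice(nodes)          # update a randomly chosen voter
                  opinion[v] = opinion[rng.choice(list(g.neighbors(v)))]
                  steps += 1
              total += steps
          return total / trials

      g = two_clique_graph(20, interclique_links=2)
      print("mean consensus time:", mean_consensus_time(g))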

  8. The Replica Symmetric Solution for Potts Models on d-Regular Graphs

    NASA Astrophysics Data System (ADS)

    Dembo, Amir; Montanari, Andrea; Sly, Allan; Sun, Nike

    2014-04-01

    We establish an explicit formula for the limiting free energy density (log-partition function divided by the number of vertices) for ferromagnetic Potts models on uniformly sparse graph sequences converging locally to the d-regular tree for d even, covering all temperature regimes. This formula coincides with the Bethe free energy functional evaluated at a suitable fixed point of the belief propagation recursion on the d-regular tree, the so-called replica symmetric solution. For uniformly random d-regular graphs we further show that the replica symmetric Bethe formula is an upper bound for the asymptotic free energy for any model with permissive interactions.

  9. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images

    NASA Astrophysics Data System (ADS)

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of JPEG compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed DCT coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors---Laplacian prior for DCT coefficients, sparsity prior and graph-signal smoothness prior for image patches---to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error (MMSE) initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the over-complete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared to previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth (PWS) signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations noticeably.

  10. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors-Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches-to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with the previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal outperforms the state-of-the-art soft decoding algorithms in both objective and subjective evaluations noticeably.
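
    The key ingredient named in both records above, the left eigenvectors of the random walk graph Laplacian, can be computed directly once a weighted patch graph is available. The sketch below forms L_rw = I - D^{-1}W for a toy 1-D "patch" with Gaussian edge weights and extracts its left eigenvectors ordered by graph frequency; this is an illustrative fragment, not the paper's full soft-decoding pipeline.

      # Sketch: random walk graph Laplacian of a small weighted graph and its
      # left eigenvectors, ordered by graph frequency (eigenvalue).
      import numpy as np

      # Toy "patch": a 1-D signal; connect neighbouring samples with Gaussian weights.
      signal = np.array([0.0, 0.1, 0.0, 5.0, 5.2, 5.1])
      n = len(signal)
      W = np.zeros((n, n))
      for i in range(n - 1):
          w = np.exp(-(signal[i] - signal[i + 1]) ** 2 / 2.0)
          W[i, i + 1] = W[i + 1, i] = w

      D = np.diag(W.sum(axis=1))
      L_rw = np.eye(n) - np.linalg.inv(D) @ W      # random walk graph Laplacian

      # Left eigenvectors of L_rw are the (right) eigenvectors of its transpose.
      eigvals, left_vecs = np.linalg.eig(L_rw.T)
      order = np.argsort(eigvals.real)             # low graph frequencies first
      print("graph frequencies:", np.round(eigvals.real[order], 3))
      print("lowest-frequency left eigenvector:",
            np.round(left_vecs[:, order[0]].real, 3))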

  11. A graph theory practice on transformed image: a random image steganography.

    PubMed

    Thanikaiselvan, V; Arulmozhivarman, P; Subashanthini, S; Amirtharajan, Rengarajan

    2013-01-01

    Modern day information age is enriched with the advanced network communication expertise but unfortunately at the same time encounters infinite security issues when dealing with secret and/or private information. The storage and transmission of the secret information become highly essential and have led to a deluge of research in this field. In this paper, an optimistic effort has been taken to combine graceful graph along with integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation part begins with the conversion of cover image into wavelet coefficients through IWT and is followed by embedding secret image in the randomly selected coefficients through graph theory. Finally stegoimage is obtained by applying inverse IWT. This method provides a maximum of 44 dB peak signal to noise ratio (PSNR) for 266646 bits. Thus, the proposed method gives high imperceptibility through high PSNR value and high embedding capacity in the cover image due to adaptive embedding scheme and high robustness against blind attack through graph theoretic random selection of coefficients.

  12. A Graph Theory Practice on Transformed Image: A Random Image Steganography

    PubMed Central

    Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan

    2013-01-01

    Modern day information age is enriched with the advanced network communication expertise but unfortunately at the same time encounters infinite security issues when dealing with secret and/or private information. The storage and transmission of the secret information become highly essential and have led to a deluge of research in this field. In this paper, an optimistic effort has been taken to combine graceful graph along with integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation part begins with the conversion of cover image into wavelet coefficients through IWT and is followed by embedding secret image in the randomly selected coefficients through graph theory. Finally stegoimage is obtained by applying inverse IWT. This method provides a maximum of 44 dB peak signal to noise ratio (PSNR) for 266646 bits. Thus, the proposed method gives high imperceptibility through high PSNR value and high embedding capacity in the cover image due to adaptive embedding scheme and high robustness against blind attack through graph theoretic random selection of coefficients. PMID:24453857

  13. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing

    PubMed Central

    2015-01-01

    We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a single framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length. PMID:26710102

  14. A graph-theoretic approach to modeling metabolic pathways

    NASA Astrophysics Data System (ADS)

    Gifford, Eric; Johnson, Mark; Tsai, Chun-che

    1991-08-01

    The metabolic pathways of medazepam, oxazepam, and diazepam were modeled using graph-theoretic transforms which are incorporable into computer-assisted metabolic analysis programs. The information, represented in the form of a graph-theoretic transform kit, which was obtained from these pathways was then used to predict the metabolites of other benzodiazepine compounds. The transform kits gave statistically significant predictions with respect to a statistical method for evaluating the performance of the transform kits.

  15. Termination of Multipartite Graph Series Arising from Complex Network Modelling

    NASA Astrophysics Data System (ADS)

    Latapy, Matthieu; Phan, Thi Ha Duong; Crespelle, Christophe; Nguyen, Thanh Qui

    An intense activity is nowadays devoted to the definition of models capturing the properties of complex networks. Among the most promising approaches, it has been proposed to model these graphs via their clique incidence bipartite graphs. However, this approach has, until now, severe limitations resulting from its incapacity to reproduce a key property of this object: the overlapping nature of cliques in complex networks. In order to get rid of these limitations we propose to encode the structure of clique overlaps in a network thanks to a process consisting in iteratively factorising the maximal bicliques between the upper level and the other levels of a multipartite graph. We show that the most natural definition of this factorising process leads to infinite series for some instances. Our main result is to design a restriction of this process that terminates for any arbitrary graph. Moreover, we show that the resulting multipartite graph has remarkable combinatorial properties and is closely related to another fundamental combinatorial object. Finally, we show that, in practice, this multipartite graph is computationally tractable and has a size that makes it suitable for complex network modelling.

  16. Exact epidemic models on graphs using graph-automorphism driven lumping.

    PubMed

    Simon, Péter L; Taylor, Michael; Kiss, Istvan Z

    2011-04-01

    The dynamics of disease transmission strongly depends on the properties of the population contact network. Pair-approximation models and individual-based network simulation have been used extensively to model contact networks with non-trivial properties. In this paper, using a continuous time Markov chain, we start from the exact formulation of a simple epidemic model on an arbitrary contact network and rigorously derive and prove some known results that were previously mainly justified based on some biological hypotheses. The main result of the paper is the illustration of the link between graph automorphisms and the process of lumping whereby the number of equations in a system of linear differential equations can be significantly reduced. The main advantage of lumping is that the simplified lumped system is not an approximation of the original system but rather an exact version of this. For a special class of graphs, we show how the lumped system can be obtained by using graph automorphisms. Finally, we discuss the advantages and possible applications of exact epidemic models and lumping.

  17. Dynamic modeling of electrochemical systems using linear graph theory

    NASA Astrophysics Data System (ADS)

    Dao, Thanh-Son; McPhee, John

    An electrochemical cell is a multidisciplinary system which involves complex chemical, electrical, and thermodynamical processes. The primary objective of this paper is to develop a linear graph-theoretical model for the dynamic description of electrochemical systems through the representation of the system topologies. After a brief introduction to the topic and a review of linear graphs, an approach to develop linear graphs for electrochemical systems using a circuitry representation is discussed, followed in turn by the use of the branch and chord transformation techniques to generate the final dynamic equations governing the system. As an example, the application of linear graph theory to modelling a nickel metal hydride (NiMH) battery is presented. Results show that not only is the number of equations reduced significantly, but the linear graph model also simulates faster than the original lumped-parameter model. The approach presented in this paper can be extended to modelling complex systems such as an electric or hybrid electric vehicle where a battery pack is interconnected with other components in many different domains.

  18. A maxent-stress model for graph layout.

    PubMed

    Gansner, Emden R; Hu, Yifan; North, Stephen

    2013-06-01

    In some applications of graph visualization, input edges have associated target lengths. Dealing with these lengths is a challenge, especially for large graphs. Stress models are often employed in this situation. However, the traditional full stress model is not scalable due to its reliance on an initial all-pairs shortest path calculation. A number of fast approximation algorithms have been proposed. While they work well for some graphs, the results are less satisfactory on graphs of intrinsically high dimension, because some nodes may be placed too close together, or even share the same position. We propose a solution, called the maxent-stress model, which applies the principle of maximum entropy to cope with the extra degrees of freedom. We describe a force-augmented stress majorization algorithm that solves the maxent-stress model. Numerical results show that the algorithm scales well, and provides acceptable layouts for large, nonrigid graphs. This also has potential applications to scalable algorithms for statistical multidimensional scaling (MDS) with variable distances.
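
    For reference, the maxent-stress objective named above combines the usual edge stress (with target lengths d_ij) and an entropy-like penalty over non-adjacent node pairs, traded off by a parameter α. The formula below is a sketch of that functional; the inverse-square weighting w_ij = d_ij^{-2} is the common stress-majorization convention and is an assumption here, not necessarily the paper's exact normalization.

      % Maxent-stress objective (sketch): stress on edges with target lengths d_ij,
      % minus alpha times an entropy-like term over non-adjacent pairs.
      \[
        \min_{x}\;
        \sum_{(i,j)\in E} w_{ij}\,\bigl(\lVert x_i - x_j\rVert - d_{ij}\bigr)^{2}
        \;-\;\alpha \sum_{(i,j)\notin E} \ln \lVert x_i - x_j\rVert ,
        \qquad w_{ij} = d_{ij}^{-2}.
      \]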

  19. O(N) Random Tensor Models

    NASA Astrophysics Data System (ADS)

    Carrozza, Sylvain; Tanasa, Adrian

    2016-11-01

    We define in this paper a class of three-index tensor models endowed with O(N)⊗O(N)⊗O(N) invariance (N being the size of the tensor). This allows one to generate, via the usual QFT perturbative expansion, a class of Feynman tensor graphs which is strictly larger than the class of Feynman graphs of both the multi-orientable model (and hence of the colored model) and the U(N) invariant models. We first exhibit the existence of a large N expansion for such a model with general interactions. We then focus on the quartic model and we identify the leading and next-to-leading order (NLO) graphs of the large N expansion. Finally, we prove the existence of a critical regime and we compute the critical exponents, both at leading order and at NLO. This is achieved through the use of various analytic combinatorics techniques.

  20. Independence numbers and chromatic numbers of the random subgraphs of some distance graphs

    NASA Astrophysics Data System (ADS)

    Bogolubsky, L. I.; Gusev, A. S.; Pyaderkin, M. M.; Raigorodskii, A. M.

    2015-10-01

    This work is concerned with the Nelson-Hadwiger classical problem of finding the chromatic numbers of distance graphs in ℝ^n. Most consideration is given to the class of graphs G(n, r, s) = (V(n, r), E(n, r, s)) defined as follows: V(n, r) = {x = (x_1, ..., x_n) : x_i ∈ {0, 1}, x_1 + ... + x_n = r}, E(n, r, s) = {{x, y} : (x, y) = s}, where (x, y) is the Euclidean inner product. In particular, the chromatic number of G(n, 3, 1) was recently found by Balogh, Kostochka and Raigorodskii. The objects of the study are the random subgraphs 𝒢(G(n, r, s), p) whose edges are independently taken from the set E(n, r, s), each with probability p. The independence number and the chromatic number of such graphs are estimated both from below and from above. In the case when r - s is a prime power and r ≤ 2s + 1, the order of growth of α(𝒢(G(n, r, s), p)) and χ(𝒢(G(n, r, s), p)) is established. Bibliography: 51 titles.

  1. Graph and Network for Model Elicitation (GNOME Phase 2)

    DTIC Science & Technology

    2013-02-01

    Figure 1: GNOME Overview 3.1 Framework The models and model data for NOEM are stored using Ptolemy, which offers a strong actor-oriented...Overall, the structure of actors within a particular Ptolemy model allows the model to be treated as both a graph and a simulation. The data...structure developed through GNOME represents the graphical aspects of the Ptolemy model in a more interoperable manner with additional analysis

  2. An approach to multiscale modelling with graph grammars

    PubMed Central

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-01-01

    Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929

  3. GraphCrunch 2: Software tool for network modeling, alignment and clustering

    PubMed Central

    2011-01-01

    Background Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. Results We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarities far larger than any other existing tool. Finally, Graph

  4. Random Walk and Graph Cut for Co-Segmentation of Lung Tumor on PET-CT Images.

    PubMed

    Ju, Wei; Xiang, Dehui; Zhang, Bin; Wang, Lirong; Kopriva, Ivica; Chen, Xinjian

    2015-12-01

    Accurate lung tumor delineation plays an important role in radiotherapy treatment planning. Since the lung tumor has poor boundary definition in positron emission tomography (PET) images and low contrast in computed tomography (CT) images, segmentation of the tumor in PET and CT images is a challenging task. In this paper, we effectively integrate the two modalities by making full use of the superior contrast of PET images and the superior spatial resolution of CT images. Random walk and graph cut methods are integrated to solve the segmentation problem, in which the random walk is utilized as an initialization tool to provide object seeds for graph cut segmentation on the PET and CT images. The co-segmentation problem is formulated as an energy minimization problem which is solved by the max-flow/min-cut method. A graph, including two sub-graphs and a special link, is constructed, in which one sub-graph is for PET and another is for CT, and the special link encodes a context term which penalizes differences between the tumor segmentations on the two modalities. To fully utilize the characteristics of PET and CT images, a novel energy representation is devised. For PET, a downhill cost and a 3D derivative cost are proposed. For CT, a shape penalty cost is integrated into the energy function, which helps to constrain the tumor region during the segmentation. We validate our algorithm on a data set which consists of 18 PET-CT images. The experimental results indicate that the proposed method is superior to the graph cut method using solely PET or CT, and is more accurate than the random walk method, the random walk co-segmentation method, and the non-improved graph cut method.

  5. Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen E.; Humble, Travis S.

    2017-04-01

    Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. We show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
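
    The clique-minor statement above (K_{N+1} identified as the largest clique minor of K_{N,N}) can be illustrated, though not proved maximal, by a small contraction argument: contracting all but one pair of a perfect matching of K_{N,N} leaves N+1 pairwise-adjacent vertices. The sketch below checks this with networkx purely as a demonstration; it is not the authors' embedding or minor-set-cover construction.

      # Sketch: verify that K_{N+1} is a minor of K_{N,N} by contracting the
      # matching edges (a_i, b_i) for i = 1..N-1, leaving a_0 and b_0 alone.
      import networkx as nx

      N = 6
      g = nx.complete_bipartite_graph(N, N)        # parts {0..N-1} and {N..2N-1}
      for i in range(1, N):
          g = nx.contracted_nodes(g, i, N + i, self_loops=False)

      print(g.number_of_nodes(), "vertices")       # N + 1
      print(nx.is_isomorphic(g, nx.complete_graph(N + 1)))   # True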

  6. Centrifuge Rotor Models: A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach

    NASA Technical Reports Server (NTRS)

    Granda, Jose J.; Ramakrishnan, Jayant; Nguyen, Louis H.

    2006-01-01

    A viewgraph presentation on centrifuge rotor models with a comparison using Euler-Lagrange and bond graph methods is shown. The topics include: 1) Objectives; 2) Modeling Approach Comparisons; 3) Model Structures; and 4) Application.

  7. Graph's Topology and Free Energy of a Spin Model on the Graph

    NASA Astrophysics Data System (ADS)

    Choi, Jeong-Mo; Gilson, Amy I.; Shakhnovich, Eugene I.

    2017-02-01

    In this Letter we investigate a direct relationship between a graph's topology and the free energy of a spin system on the graph. We develop a method of separating topological and energetic contributions to the free energy, and find that considering the topology is sufficient to qualitatively compare the free energies of different graph systems at high temperature, even when the energetics are not fully known. This method was applied to the metal lattice system with defects, and we found that it partially explains why point defects are more stable than high-dimensional defects. Given the energetics, we can even quantitatively compare free energies of different graph structures via a closed form of linear graph contributions. The closed form is applied to predict the sequence-space free energy of lattice proteins, which is a key factor determining the designability of a protein structure.

  8. A novel 3D graph cut based co-segmentation of lung tumor on PET-CT images with Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Yu, Kai; Chen, Xinjian; Shi, Fei; Zhu, Weifang; Zhang, Bin; Xiang, Dehui

    2016-03-01

    Positron Emission Tomography (PET) and Computed Tomography (CT) have been widely used in clinical practice for radiation therapy. Most existing methods only used one image modality, either PET or CT, which suffers from the low spatial resolution in PET or low contrast in CT. In this paper, a novel 3D graph cut method is proposed, which integrated Gaussian Mixture Models (GMMs) into the graph cut method. We also employed the random walk method as an initialization step to provide object seeds for the improvement of the graph cut based segmentation on PET and CT images. The constructed graph consists of two sub-graphs and a special link between the sub-graphs which penalize the difference segmentation between the two modalities. Finally, the segmentation problem is solved by the max-flow/min-cut method. The proposed method was tested on 20 patients' PET-CT images, and the experimental results demonstrated the accuracy and efficiency of the proposed algorithm.

  9. Departure of some parameter-dependent spectral statistics of irregular quantum graphs from random matrix theory predictions.

    PubMed

    Hul, Oleh; Seba, Petr; Sirko, Leszek

    2009-06-01

    Parameter-dependent statistical properties of spectra of totally connected irregular quantum graphs with Neumann boundary conditions are studied. The autocorrelation functions of level velocities c(x) and c̄(ω, x) as well as the distributions of level curvatures and avoided crossing gaps are calculated. The numerical results are compared with the predictions of random matrix theory for the Gaussian orthogonal ensemble (GOE) and for coupled GOE matrices. The application of coupled GOE matrices was justified by studying localization phenomena in the graphs' wave functions Ψ(x) using the inverse participation ratio and the amplitude distribution P(Ψ(x)).

  10. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
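
    The core routing idea summarized above, Dijkstra-style search on earliest arrival time over a contact plan, can be sketched compactly. The contact-plan format, one-way light times, and the tiny example network below are illustrative assumptions, not the MACHETE or ION implementation.

      # Sketch: earliest-arrival-time routing over a contact plan, Dijkstra-style.
      # Each contact is (from_node, to_node, start_time, end_time, one_way_light_time).
      import heapq

      def earliest_arrival_route(contacts, source, destination, start_time=0.0):
          best = {source: start_time}
          parent = {source: None}
          heap = [(start_time, source)]
          while heap:
              arrival, node = heapq.heappop(heap)
              if node == destination:
                  break
              if arrival > best.get(node, float("inf")):
                  continue
              for frm, to, t_start, t_end, owlt in contacts:
                  if frm != node or arrival > t_end:
                      continue                      # contact unusable from here
                  depart = max(arrival, t_start)    # wait for the contact to open
                  new_arrival = depart + owlt
                  if new_arrival < best.get(to, float("inf")):
                      best[to] = new_arrival
                      parent[to] = node
                      heapq.heappush(heap, (new_arrival, to))
          if destination not in best:
              return None
          path, node = [], destination
          while node is not None:
              path.append(node)
              node = parent[node]
          return best[destination], list(reversed(path))

      contacts = [
          ("lander", "orbiter", 100, 200, 1.0),
          ("orbiter", "earth", 150, 400, 600.0),
          ("lander", "relay", 0, 50, 1.0),
          ("relay", "earth", 60, 80, 600.0),
      ]
      print(earliest_arrival_route(contacts, "lander", "earth", start_time=10.0))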

  11. Some generalisations of linear-graph modelling for dynamic systems

    NASA Astrophysics Data System (ADS)

    de Silva, Clarence W.; Pourazadi, Shahram

    2013-11-01

    Proper modelling of a dynamic system can benefit analysis, simulation, design, evaluation and control of the system. The linear-graph (LG) approach is suitable for modelling lumped-parameter dynamic systems. By using the concepts of graph trees, it provides a graphical representation of the system, with a direct correspondence to the physical component topology. This paper systematically extends the application of LGs to multi-domain (mixed-domain or multi-physics) dynamic systems by presenting a unified way to represent different domains - mechanical, electrical, thermal and fluid. Preservation of the structural correspondence across domains is a particular advantage of LGs when modelling mixed-domain systems. The generalisation of Thevenin and Norton equivalent circuits to mixed-domain systems, using LGs, is presented. The structure of an LG model may follow a specific pattern. Vector LGs are introduced to take advantage of such patterns, giving a general LG representation for them. Through these vector LGs, the model representation becomes simpler and rather compact, both topologically and parametrically. A new single LG element is defined to facilitate the modelling of distributed-parameter (DP) systems. Examples are presented using multi-domain systems (a motion-control system and a flow-controlled pump), a multi-body mechanical system (robot manipulator) and DP systems (structural rods) to illustrate the application and advantages of the methodologies developed in the paper.

  12. Model predictive control of P-time event graphs

    NASA Astrophysics Data System (ADS)

    Hamri, H.; Kara, R.; Amari, S.

    2016-12-01

    This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by using the dater evolution model written in the standard algebra. Then, for the control law, we used the finite-horizon model predictive control. For the closed-loop control, we used the infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains which allows the stability of the closed-loop system while respecting the constraints on the control vector. The problem of IH-MPC is formulated as a linear convex programming subject to a linear matrix inequality problem. Finally, the proposed methodology is applied to a transportation system.

  13. Spectral correlations of individual quantum graphs

    SciTech Connect

    Gnutzmann, Sven; Altland, Alexander

    2005-11-01

    We investigate the spectral properties of chaotic quantum graphs. We demonstrate that the energy average over the spectrum of individual graphs can be traded for the functional average over a supersymmetric nonlinear σ-model action. This proves that spectral correlations of individual quantum graphs behave according to the predictions of Wigner-Dyson random matrix theory. We explore the stability of the universal random matrix behavior with regard to perturbations, and discuss the crossover between different types of symmetries.

  14. Exact two-point resistance, and the simple random walk on the complete graph minus N edges

    SciTech Connect

    Chair, Noureddine

    2012-12-15

    An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. - Highlights: • We obtain exact formulas for the two-point resistance of the complete graph minus N edges. • We also obtain the total effective resistance of this graph. • We modified Schwatt's formula on trigonometrical power sums to suit our computations. • We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. • The first passage and mean first passage times of the random walks have exact expressions.
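
    The quantities named above can be cross-checked numerically with the standard identity R_ij = L⁺_ii + L⁺_jj − 2 L⁺_ij, where L⁺ is the Moore-Penrose pseudoinverse of the graph Laplacian. The sketch below interprets "complete graph minus N edges of the opposite vertices" as the complete graph on 2N vertices minus a perfect matching, which is an assumption; the paper's closed-form Bejaia/Pisa-number expressions are not reproduced.

      # Sketch: two-point and total effective resistance from the Laplacian
      # pseudoinverse, on K_{2N} minus a perfect matching (interpretation of
      # "complete graph minus N edges of the opposite vertices").
      import numpy as np
      import networkx as nx

      N = 5
      g = nx.complete_graph(2 * N)
      for i in range(N):
          g.remove_edge(i, i + N)                  # remove the "opposite vertex" edges

      L = nx.laplacian_matrix(g).toarray().astype(float)
      L_pinv = np.linalg.pinv(L)

      def resistance(i, j):
          return L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j]

      print("R(0, N):", resistance(0, N))          # a removed (opposite) pair
      print("R(0, 1):", resistance(0, 1))          # an adjacent pair
      total = sum(resistance(i, j) for i in range(2 * N) for j in range(i + 1, 2 * N))
      print("total effective resistance:", total)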

  15. Scientist-Centered Graph-Based Models of Scientific Knowledge

    SciTech Connect

    Chin, George; Stephan, Eric G.; Gracio, Deborah K.; Kuchar, Olga A.; Whitney, Paul D.; Schuchardt, Karen L.

    2005-07-01

    At the Pacific Northwest National Laboratory, we are researching and developing visual models and paradigms that will allow scientists to capture and represent conceptual models in a computational form that may be linked to and integrated with scientific data sets and applications. Captured conceptual models may be logical in conveying how individual concepts tie together to form a higher theory, analytical in conveying intermediate or final analysis results, or temporal in describing the experimental process in which concepts are physically and computationally explored. In this paper, we describe and contrast three different research and development systems that allow scientists to capture and interact with computational graph-based models of scientific knowledge. Through these examples, we explore and examine ways in which researchers may graphically encode and apply scientific theory and practice on computer systems.

  16. A new graph model and algorithms for consistent superstring problems.

    PubMed

    Na, Joong Chae; Cho, Sukhyeun; Choi, Siwon; Kim, Jin Wook; Park, Kunsoo; Sim, Jeong Seop

    2014-05-28

    Problems related to string inclusion and non-inclusion have been vigorously studied in diverse fields such as data compression, molecular biology and computer security. Given a finite set of positive strings P and a finite set of negative strings N, a string α is a consistent superstring if every positive string is a substring of α and no negative string is a substring of α. The shortest (resp. longest) consistent superstring problem is to find a string α that is the shortest (resp. longest) among all the consistent superstrings for the given sets of strings. In this paper, we first propose a new graph model for consistent superstrings for given P and N. In our graph model, the set of strings represented by paths satisfying some conditions is the same as the set of consistent superstrings for P and N. We also present algorithms for the shortest and the longest consistent superstring problems. Our algorithms solve the consistent superstring problems for all cases, including cases that are not considered in previous work. Moreover, our algorithms solve in polynomial time the consistent superstring problems for more cases than the previous algorithms. For the polynomially solvable cases, our algorithms are more efficient than the previous ones.
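
    The graph model and the polynomial-time algorithms are the paper's contribution; as a purely illustrative companion, the brute-force helper below just checks the defining property of a consistent superstring and searches exhaustively for a shortest one over a toy alphabet. The function names and the tiny example are assumptions, and the search is exponential.

```python
# Brute-force helper, not the paper's graph-based algorithm: given positive strings P
# and negative strings N, check whether a candidate string is a consistent superstring
# (every positive string occurs as a substring, no negative string does).
from itertools import product

def is_consistent_superstring(s, P, N):
    return all(p in s for p in P) and not any(n in s for n in N)

def shortest_consistent_superstring(P, N, alphabet, max_len=12):
    # Exhaustive search over short strings -- exponential, for toy examples only.
    for length in range(max_len + 1):
        for chars in product(alphabet, repeat=length):
            s = "".join(chars)
            if is_consistent_superstring(s, P, N):
                return s
    return None

P = {"ab", "ba"}
N = {"aa", "bb"}
print(shortest_consistent_superstring(P, N, "ab"))   # e.g. "aba"
```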

  17. A new graph model and algorithms for consistent superstring problems†

    PubMed Central

    Na, Joong Chae; Cho, Sukhyeun; Choi, Siwon; Kim, Jin Wook; Park, Kunsoo; Sim, Jeong Seop

    2014-01-01

    Problems related to string inclusion and non-inclusion have been vigorously studied in diverse fields such as data compression, molecular biology and computer security. Given a finite set of positive strings P and a finite set of negative strings N, a string α is a consistent superstring if every positive string is a substring of α and no negative string is a substring of α. The shortest (resp. longest) consistent superstring problem is to find a string α that is the shortest (resp. longest) among all the consistent superstrings for the given sets of strings. In this paper, we first propose a new graph model for consistent superstrings for given P and N. In our graph model, the set of strings represented by paths satisfying some conditions is the same as the set of consistent superstrings for P and N. We also present algorithms for the shortest and the longest consistent superstring problems. Our algorithms solve the consistent superstring problems for all cases, including cases that are not considered in previous work. Moreover, our algorithms solve in polynomial time the consistent superstring problems for more cases than the previous algorithms. For the polynomially solvable cases, our algorithms are more efficient than the previous ones. PMID:24751868

  18. Adjusting protein graphs based on graph entropy.

    PubMed

    Peng, Sheng-Lung; Tsay, Yu-Wei

    2014-01-01

    Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling, instead of looking for a minimum superimposition for pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified. Therefore, a criterion is needed to determine if a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that, when applied, graph entropy helps in assessing the conformation captured by a protein graph model. Furthermore, it indirectly contributes to protein structural comparison if the protein graph is sound.
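
    As a minimal sketch of attaching an entropy value to a graph (not necessarily the exact entropy measure used in the paper), the snippet below computes the Shannon entropy of the degree-based probability distribution p_i = deg(i)/Σ_j deg(j) for two toy graphs; higher values indicate a more even spread of connectivity.

```python
# One common structural entropy for graphs: the Shannon entropy of p_i = deg(i)/sum_j deg(j).
# A simplified illustration of scoring graphs by entropy, not necessarily the measure
# defined in the paper.
import math
import networkx as nx

def degree_entropy(G):
    degs = [d for _, d in G.degree() if d > 0]
    total = sum(degs)
    probs = [d / total for d in degs]
    return -sum(p * math.log2(p) for p in probs)

chain_like = nx.path_graph(10)                     # chain-like "protein graph"
dense = nx.erdos_renyi_graph(10, 0.5, seed=1)      # denser remodeled graph
print(degree_entropy(chain_like), degree_entropy(dense))
```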

  19. Sequence design in lattice models by graph theoretical methods

    NASA Astrophysics Data System (ADS)

    Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.

    2001-01-01

    A general strategy has been developed based on graph theoretical methods, for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices for the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights for the vertices on a lattice site in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, in identifying the target conformation as native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).

  20. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.

    PubMed

    Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J

    2010-01-01

    We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subject to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95%, and average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.

  1. Semi-Markov Graph Dynamics

    PubMed Central

    Raberto, Marco; Rapallo, Fabio; Scalas, Enrico

    2011-01-01

    In this paper, we outline a model of graph (or network) dynamics based on two ingredients. The first ingredient is a Markov chain on the space of possible graphs. The second ingredient is a semi-Markov counting process of renewal type. The model consists in subordinating the Markov chain to the semi-Markov counting process. In simple words, this means that the chain transitions occur at random time instants called epochs. The model is quite rich and its possible connections with algebraic geometry are briefly discussed. Moreover, for the sake of simplicity, we focus on the space of undirected graphs with a fixed number of nodes. However, in an example, we present an interbank market model where it is meaningful to use directed graphs or even weighted graphs. PMID:21887245
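
    A minimal simulation sketch of this construction follows: a Markov chain on undirected graphs (here simply toggling a uniformly chosen vertex pair) is subordinated to a renewal counting process with heavy-tailed waiting times between epochs. Both the particular chain and the Pareto waiting-time law are illustrative assumptions, not the interbank example from the paper.

```python
# Sketch of semi-Markov graph dynamics: a Markov chain on undirected graphs (toggle a
# uniformly chosen edge) subordinated to a renewal process with Pareto waiting times.
# The chain and the waiting-time law are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def simulate(n_nodes=6, t_max=100.0):
    G = nx.empty_graph(n_nodes)
    t, epochs = 0.0, []
    while True:
        t += rng.pareto(1.5) + 1.0            # renewal waiting time (heavy-tailed)
        if t > t_max:
            break
        u, v = rng.choice(n_nodes, size=2, replace=False)
        if G.has_edge(u, v):                  # Markov-chain step: toggle an edge
            G.remove_edge(u, v)
        else:
            G.add_edge(u, v)
        epochs.append(t)
    return G, epochs

G, epochs = simulate()
print(len(epochs), "transitions; final edge count:", G.number_of_edges())
```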

  2. A Graph Decomposition Technique Based on a High-Density Clustering Model on Graphs.

    DTIC Science & Technology

    1980-07-01

    Report from the Center for Information Systems Research, Sloan School of Management, M.I.T., Cambridge, MA 02139. See Kernighan and Lin (1970), Lukes (1974, 1975), and Christofides and Brooker (1976) for methods that operate under some size constraints on the

  3. A comparative study of theoretical graph models for characterizing structural networks of human brain.

    PubMed

    Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang

    2013-01-01

    Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by the recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL) to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties of the two graph models, namely the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that show higher similarity to the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform better in characterizing the structural human brain network.

  4. Graph Modeling for Quadratic Assignment Problems Associated with the Hypercube

    NASA Astrophysics Data System (ADS)

    Mittelmann, Hans; Peng, Jiming; Wu, Xiaolin

    2009-07-01

    In the paper we consider the quadratic assignment problem arising from channel coding in communications where one coefficient matrix is the adjacency matrix of a hypercube in a finite dimensional space. By using the geometric structure of the hypercube, we first show that there exist at least n different optimal solutions to the underlying QAPs. Moreover, the inherent symmetries in the associated hypercube allow us to obtain partial information regarding the optimal solutions and thus shrink the search space and improve all the existing QAP solvers for the underlying QAPs. Secondly, we use a graph modeling technique to derive a new integer linear programming (ILP) model for the underlying QAPs. The new ILP model has n(n-1) binary variables and O(n^3 log(n)) linear constraints. This yields the smallest known number of binary variables for the ILP reformulation of QAPs. Various relaxations of the new ILP model are obtained based on the graphical characterization of the hypercube, and the lower bounds provided by the LP relaxations of the new model are analyzed and compared with those provided by several classical LP relaxations of QAPs in the literature.

  5. Cell-graph mining for breast tissue modeling and classification.

    PubMed

    Bilgin, Cagatay; Demir, Cigdem; Nagi, Chandandeep; Yener, Bulent

    2007-01-01

    We consider the problem of automated cancer diagnosis in the context of breast tissues. We present graph theoretical techniques that identify and compute quantitative metrics for tissue characterization and classification. We segment digital images of histopathological tissue samples using the k-means algorithm. For each segmented image we generate different cell-graphs using positional coordinates of cells and surrounding matrix components. These cell-graphs have 500-2000 cells (nodes) with 1000-10000 links depending on the tissue and the type of cell-graph being used. We calculate a set of global metrics from cell-graphs and use them as the feature set for learning. We compare our technique, hierarchical cell graphs, with other techniques based on intensity values of images, Delaunay triangulation of the cells, the previous technique we proposed for brain tissue images, and with the hybrid approach that we introduce in this paper. Among the compared techniques, the hierarchical-graph approach gives 81.8% accuracy whereas we obtain 61.0%, 54.1% and 75.9% accuracy with intensity-based features, Delaunay triangulation and our previous technique, respectively.
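
    To make the pipeline concrete, the sketch below builds a cell-graph by linking segmented cell centroids that lie within a distance threshold and computes a few global graph metrics that could serve as classification features. The coordinates, threshold and chosen metrics are illustrative assumptions, and the k-means image segmentation step from the paper is omitted.

```python
# Sketch of cell-graph construction from segmented cell centroids: link cells closer
# than a distance threshold and compute global graph metrics usable as features.
# Coordinates, threshold and metrics are illustrative; the segmentation step is omitted.
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(42)
coords = rng.uniform(0, 1000, size=(500, 2))      # surrogate cell centroids (pixels)

D = squareform(pdist(coords))
threshold = 60.0
G = nx.Graph()
G.add_nodes_from(range(len(coords)))
G.add_edges_from(zip(*np.nonzero((D < threshold) & (D > 0))))

features = {
    "n_edges": G.number_of_edges(),
    "avg_degree": 2 * G.number_of_edges() / G.number_of_nodes(),
    "clustering": nx.average_clustering(G),
    "n_components": nx.number_connected_components(G),
}
print(features)   # feature vector for a downstream classifier
```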

  6. Estimating Causal Effects with Ancestral Graph Markov Models

    PubMed Central

    Malinsky, Daniel; Spirtes, Peter

    2017-01-01

    We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244

  7. Free energy disconnectivity graphs: Application to peptide models

    NASA Astrophysics Data System (ADS)

    Krivov, Sergei V.; Karplus, Martin

    2002-12-01

    Disconnectivity graphs are widely used for understanding the multidimensional potential energy surfaces (PES) of complex systems. Since entropic contributions to the free energy can be important, particularly for polypeptide chains and other polymers, conclusions concerning the equilibrium properties and kinetics of the system based on potential energy disconnectivity graphs (PE DG) can be misleading. We present an approach for constructing free energy surfaces (FES) and free energy disconnectivity graphs (FE DG) and give examples of their applications to peptides. They show that the FES and FE DG can differ significantly from the PES and PE DG.

  8. Graph-Switching Based Modeling of Mode Transition Constraints for Model Predictive Control of Hybrid Systems

    NASA Astrophysics Data System (ADS)

    Kobayashi, Koichi; Hiraishi, Kunihiko

    The model predictive/optimal control problem for hybrid systems is reduced to a mixed integer quadratic programming (MIQP) problem. However, the MIQP approach has one serious weakness: the computation time needed to solve the MIQP problem is too long for practical plants. Several approaches exist for overcoming this technical issue. In this paper, we focus on the modeling of mode transition constraints, which are expressed by a directed graph, and propose a new method to represent such a directed graph. The effectiveness of the proposed method is shown by numerical examples on linear switched systems and piecewise linear systems.

  9. A Poisson model for random multigraphs

    PubMed Central

    Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth

    2010-01-01

    Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
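
    The likelihood structure of this model is simple enough to sketch: with edge counts X_ij ~ Poisson(p_i p_j), setting the score to zero gives the stationarity condition p_i = d_i / Σ_{j≠i} p_j, where d_i is the summed edge multiplicity at node i. The snippet below solves this by fixed-point iteration on simulated data; it is an illustrative sketch, and the paper's own estimation algorithm may differ in details.

```python
# Sketch of maximum-likelihood estimation for the Poisson multigraph model
# X_ij ~ Poisson(p_i * p_j): the score equations give p_i = d_i / sum_{j != i} p_j,
# solved here by fixed-point iteration. Illustrative only; the paper's algorithm may differ.
import numpy as np

rng = np.random.default_rng(0)

n = 50
true_p = rng.uniform(0.2, 2.0, size=n)
X = rng.poisson(np.outer(true_p, true_p))
X = np.triu(X, 1)
X = X + X.T                                  # symmetric multigraph counts, zero diagonal

d = X.sum(axis=1)                            # summed edge multiplicities per node
p = np.ones(n)
for _ in range(200):                         # fixed-point iteration of the score equations
    p_new = d / (p.sum() - p)
    if np.max(np.abs(p_new - p)) < 1e-10:
        p = p_new
        break
    p = p_new

print(np.corrcoef(true_p, p)[0, 1])          # estimated propensities track the truth
```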

  10. Stochastic dynamics of model proteins on a directed graph.

    PubMed

    Bongini, Lorenzo; Casetti, Lapo; Livi, Roberto; Politi, Antonio; Torcini, Alessandro

    2009-06-01

    A method for reconstructing the potential energy landscape of simple polypeptidic chains is described. We show how to obtain a faithful representation of the energy landscape in terms of a suitable directed graph. Topological and dynamical indicators of the graph are shown to yield an effective estimate of the time scales associated with both folding and equilibration processes. This conclusion is drawn by comparing molecular dynamics simulations at constant temperature with the dynamics on the graph, defined as a temperature-dependent Markov process. The main advantage of the graph representation is that its dynamics can be naturally renormalized by collecting nodes into "hubs" while redefining their connectivity. We show that the dynamical properties at large time scales are preserved by the renormalization procedure. Moreover, we obtain clear indications that the heteropolymers exhibit common topological properties, at variance with the homopolymer, whose peculiar graph structure stems from its spatial homogeneity. In order to distinguish between "fast" and "slow" folders, one has to look at the kinetic properties of the corresponding directed graphs. In particular, we find that the average time needed by the fast folder to reach its native configuration is two orders of magnitude smaller than its equilibration time, while for the bad folder these time scales are comparable.

  11. Non-perturbative corrections to mean-field critical behavior: the spherical model on a spider-web graph

    NASA Astrophysics Data System (ADS)

    Balram, Ajit C.; Dhar, Deepak

    2012-03-01

    We consider the spherical model on a spider-web graph. This graph is effectively infinite dimensional, similar to the Bethe lattice, but has loops. We show that these lead to non-trivial corrections to the simple mean-field behavior. We first determine all normal modes of the coupled springs problem on this graph, using its large symmetry group. In the thermodynamic limit, the spectrum is a set of δ-functions, and all the modes are localized. The fractional number of modes with frequency less than ω varies as exp(-C/ω) for ω tending to zero, where C is a constant. For an unbiased random walk on the vertices of this graph, this implies that the probability of return to the origin at time t varies as exp(-C't^{1/3}) for large t, where C' is a constant. For the spherical model, we show that while the critical exponents take the values expected from the mean-field theory, the free energy per site at temperature T, near and above the critical temperature Tc, also has an essential singularity of the type exp[-K(T - Tc)^{-1/2}].

  12. Coloring geographical threshold graphs

    SciTech Connect

    Bradonjic, Milan; Percus, Allon; Muller, Tobias

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ ln ln n / ln n = 1 + o(1), i.e. χ = (1 + o(1)) ln n / ln ln n. Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.
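
    For a hands-on feel for the objects involved, the sketch below generates a small GTG using one common choice of threshold function, (w_u + w_v)/dist^2 >= θ, which may differ from the exact form analyzed in the paper, and then colors it greedily; the greedy color count is an upper bound on the chromatic number.

```python
# Sketch of a geographical threshold graph and a greedy coloring. Nodes get random
# positions and exponential weights; an edge is placed when (w_u + w_v)/dist^2 >= theta.
# The threshold function and parameters are illustrative assumptions.
import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n, theta = 300, 80.0
pos = rng.uniform(0, 1, size=(n, 2))
w = rng.exponential(1.0, size=n)

G = nx.Graph()
G.add_nodes_from(range(n))
for u, v in itertools.combinations(range(n), 2):
    dist2 = np.sum((pos[u] - pos[v]) ** 2)
    if (w[u] + w[v]) / dist2 >= theta:
        G.add_edge(u, v)

coloring = nx.coloring.greedy_color(G, strategy="largest_first")
n_colors = max(coloring.values()) + 1
max_deg = max(d for _, d in G.degree())
print("greedy colors:", n_colors, "(upper bound); max degree + 1 =", max_deg + 1)
```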

  13. A componential model of human interaction with graphs: 1. Linear regression modeling

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.

  14. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs

    NASA Astrophysics Data System (ADS)

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C.; Hütt, Marc-Thorsten

    2017-02-01

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience.
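
    A minimal simulation of this kind of dynamics is easy to sketch: on a random graph, a susceptible node becomes excited when the fraction of excited neighbours reaches a relative threshold, stays excited for one step, is refractory for one step, and then becomes susceptible again. The graph size, threshold value and run length below are illustrative assumptions, not the parameters studied in the paper.

```python
# Minimal sketch of three-state excitable dynamics on a random graph: a susceptible
# node (S) becomes excited (E) when the fraction of excited neighbours reaches the
# relative threshold kappa; E turns refractory (R) for one step, then back to S.
# Graph size, kappa and run length are illustrative assumptions.
import networkx as nx

G = nx.erdos_renyi_graph(200, 0.03, seed=3)
kappa = 0.15                                   # relative excitation threshold

state = {v: "S" for v in G}
state[0] = "E"                                 # single initial excitation

activity = []
for _ in range(60):
    new = {}
    for v in G:
        if state[v] == "E":
            new[v] = "R"
        elif state[v] == "R":
            new[v] = "S"
        else:
            nbrs = list(G[v])
            frac = sum(state[u] == "E" for u in nbrs) / len(nbrs) if nbrs else 0.0
            new[v] = "E" if frac >= kappa else "S"
    state = new
    activity.append(sum(s == "E" for s in state.values()))

print(activity)    # sustained (non-zero tail) vs. dying activity depends on kappa
```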

  15. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs

    PubMed Central

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C.; Hütt, Marc-Thorsten

    2017-01-01

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience. PMID:28186182

  16. Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation.

    PubMed

    Zhang, Lei; Zeng, Zhi; Ji, Qiang

    2011-09-01

    Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, together with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.

  17. Graph theory as a proxy for spatially explicit population models in conservation planning.

    PubMed

    Minor, Emily S; Urban, Dean L

    2007-09-01

    Spatially explicit population models (SEPMs) are often considered the best way to predict and manage species distributions in spatially heterogeneous landscapes. However, they are computationally intensive and require extensive knowledge of species' biology and behavior, limiting their application in many cases. An alternative to SEPMs is graph theory, which has minimal data requirements and efficient algorithms. Although only recently introduced to landscape ecology, graph theory is well suited to ecological applications concerned with connectivity or movement. This paper compares the performance of graph theory to a SEPM in selecting important habitat patches for Wood Thrush (Hylocichla mustelina) conservation. We use both models to identify habitat patches that act as population sources and persistent patches and also use graph theory to identify patches that act as stepping stones for dispersal. Correlations of patch rankings were very high between the two models. In addition, graph theory offers the ability to identify patches that are very important to habitat connectivity and thus long-term population persistence across the landscape. We show that graph theory makes very similar predictions in most cases and in other cases offers insight not available from the SEPM, and we conclude that graph theory is a suitable and possibly preferable alternative to SEPMs for species conservation in heterogeneous landscapes.

  18. Automated Modeling and Simulation Using the Bond Graph Method for the Aerospace Industry

    NASA Technical Reports Server (NTRS)

    Granda, Jose J.; Montgomery, Raymond C.

    2003-01-01

    Bond graph modeling was originally developed in the late 1950s by the late Prof. Henry M. Paynter of M.I.T. Prof. Paynter was well ahead of his time, as the main advantage of his creation, other than the modeling insight that it provides and the ability to deal effectively with mechatronics, came to fruition only with the recent advent of modern computer technology and the tools derived from it, including symbolic manipulation, MATLAB, SIMULINK, and the Computer Aided Modeling Program (CAMPG). Thus, only recently have these tools been available, allowing one to fully utilize the advantages that the bond graph method has to offer. The purpose of this paper is to help fill the knowledge void concerning the use of bond graphs in the aerospace industry. The paper first presents simple examples to serve as a tutorial on bond graphs for those not familiar with the technique. The reader is given the basic understanding needed to appreciate the applications that follow. After that, several aerospace applications are developed, such as modeling of an arresting system for aircraft carrier landings, suspension models used for landing gears, and multibody dynamics. The paper also presents an update on NASA's progress in modeling the International Space Station (ISS) using bond graph techniques, and an advanced actuation system utilizing shape memory alloys. The latter covers the mechatronics advantages of the bond graph method, in applications that simultaneously involve mechanical, hydraulic, thermal, and electrical subsystem modeling.

  19. A Comparison of Video Modeling, Text-Based Instruction, and No Instruction for Creating Multiple Baseline Graphs in Microsoft Excel

    ERIC Educational Resources Information Center

    Tyner, Bryan C.; Fienup, Daniel M.

    2015-01-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance.…

  20. A comparison of video modeling, text-based instruction, and no instruction for creating multiple baseline graphs in Microsoft Excel.

    PubMed

    Tyner, Bryan C; Fienup, Daniel M

    2015-09-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed.

  1. Convex Optimization Methods for Graphs and Statistical Modeling

    DTIC Science & Technology

    2011-06-01

    extraneous polylogarithmic factors. In the next section we describe a new mechanism for estimating Gaussian widths, which provides near-optimal guarantees...so-called Quadratic Assignment Problem (QAP) [32]. Solving QAP is hard in general, because it includes as a special case the Hamiltonian cycle problem...only if the graph contains a Hamiltonian cycle. However there are well-studied spectral and semidefinite relaxations for QAP, which we discuss next

  2. graph-GPA: A graphical model for prioritizing GWAS results and investigating pleiotropic architecture

    PubMed Central

    Zhao, Hongyu

    2017-01-01

    Genome-wide association studies (GWAS) have identified tens of thousands of genetic variants associated with hundreds of phenotypes and diseases, which have provided clinical and medical benefits to patients with novel biomarkers and therapeutic targets. However, identification of risk variants associated with complex diseases remains challenging as they are often affected by many genetic variants with small or moderate effects. There has been accumulating evidence suggesting that different complex traits share a common risk basis, namely pleiotropy. Recently, several statistical methods have been developed to improve statistical power to identify risk variants for complex traits through a joint analysis of multiple GWAS datasets by leveraging pleiotropy. While these methods were shown to improve statistical power for association mapping compared to separate analyses, they are still limited in the number of phenotypes that can be integrated. In order to address this challenge, in this paper, we propose a novel statistical framework, graph-GPA, to integrate a large number of GWAS datasets for multiple phenotypes using a hidden Markov random field approach. Application of graph-GPA to a joint analysis of GWAS datasets for 12 phenotypes shows that graph-GPA improves statistical power to identify risk variants compared to statistical methods based on a smaller number of GWAS datasets. In addition, graph-GPA also promotes better understanding of genetic mechanisms shared among phenotypes, which can potentially be useful for the development of improved diagnosis and therapeutics. The R implementation of graph-GPA is currently available at https://dongjunchung.github.io/GGPA/. PMID:28212402

  3. Evolutionary Games of Multiplayer Cooperation on Graphs

    PubMed Central

    Arranz, Jordi; Traulsen, Arne

    2016-01-01

    There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has been usually tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946

  4. Bond graph modeling, simulation, and reflex control of the Mars planetary automatic vehicle

    NASA Astrophysics Data System (ADS)

    Amara, Maher; Friconneau, Jean Pierre; Micaelli, Alain

    1993-01-01

    The bond graph modeling, simulation, and reflex control study of the Planetary Automatic Vehicle are considered. A simulator derived from a complete bond graph model of the vehicle is presented. This model includes both knowledge and representation models of the mechanical structure, the floor contact, and the Mars site. The MACSYMEN (French acronym for aided design method of multi-energetic systems) is used and applied to study the input-output power transfers. The reflex control is then considered. Controller architecture and locomotion specificity are described. A numerical stage highlights some interesting results of the robot and the controller capabilities.

  5. a Graph Based Model for the Detection of Tidal Channels Using Marked Point Processes

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2015-08-01

    In this paper we propose a new method for the automatic extraction of tidal channels in digital terrain models (DTM) using a sampling approach based on marked point processes. In our model, the tidal channel system is represented by an undirected, acyclic graph. The graph is iteratively generated and fitted to the data using stochastic optimization based on a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler and simulated annealing. The nodes of the graph represent junction points of the channel system, and the edges represent straight line segments of a certain width between them. In each sampling step, the current configuration of nodes and edges is modified. The changes are accepted or rejected depending on the probability density function for the configuration, which evaluates the conformity of the current status with a pre-defined model for tidal channels. In this model we favour high DTM gradient magnitudes at the edge borders and penalize a graph configuration consisting of non-connected components, overlapping segments and edges with atypical intersection angles. We present the method of our graph based model and show results for lidar data, which serve as a proof of concept of our approach.

  6. An Interactive Teaching System for Bond Graph Modeling and Simulation in Bioengineering

    ERIC Educational Resources Information Center

    Roman, Monica; Popescu, Dorin; Selisteanu, Dan

    2013-01-01

    The objective of the present work was to implement a teaching system useful in modeling and simulation of biotechnological processes. The interactive system is based on applications developed using 20-sim modeling and simulation software environment. A procedure for the simulation of bioprocesses modeled by bond graphs is proposed and simulators…

  7. Bond Graph Modeling and Validation of an Energy Regenerative System for Emulsion Pump Tests

    PubMed Central

    Li, Yilei; Zhu, Zhencai; Chen, Guoan

    2014-01-01

    The test system for emulsion pump is facing serious challenges due to its huge energy consumption and waste nowadays. To settle this energy issue, a novel energy regenerative system (ERS) for emulsion pump tests is briefly introduced at first. Modeling such an ERS of multienergy domains needs a unified and systematic approach. Bond graph modeling is well suited for this task. The bond graph model of this ERS is developed by first considering the separate components before assembling them together and so is the state-space equation. Both numerical simulation and experiments are carried out to validate the bond graph model of this ERS. Moreover the simulation and experiments results show that this ERS not only satisfies the test requirements, but also could save at least 25% of energy consumption as compared to the original test system, demonstrating that it is a promising method of energy regeneration for emulsion pump tests. PMID:24967428

  8. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  9. Non-Hermitian random matrix models: Free random variable approach

    SciTech Connect

    Janik, R. A.; Nowak, M. A.; Papp, G.; Wambach, J.; Zahed, I.

    1997-04-01

    Using the standard concepts of free random variables, we show that for a large class of non-Hermitian random matrix models, the support of the eigenvalue distribution follows from their Hermitian analogs using a conformal transformation. We also extend the concepts of free random variables to the class of non-Hermitian matrices, and apply them to the models discussed by Ginibre-Girko (elliptic ensemble) [J. Ginibre, J. Math. Phys. 6, 1440 (1965); V. L. Girko, Spectral Theory of Random Matrices (in Russian) (Nauka, Moscow, 1988)] and Mahaux-Weidenmüller (chaotic resonance scattering) [C. Mahaux and H. A. Weidenmüller, Shell-Model Approach to Nuclear Reactions (North-Holland, Amsterdam, 1969)]. © 1997 The American Physical Society
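
    As a quick numerical companion (a simulation sketch, not the free-probability derivation used in the paper), the snippet below samples a complex Ginibre matrix and checks that its scaled eigenvalues fill the unit disk roughly uniformly, as the circular law predicts.

```python
# Sample a complex Ginibre matrix and check its eigenvalues against the circular law.
# Simulation sketch only; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
N = 1000
J = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

eig = np.linalg.eigvals(J)
print("max |lambda|:", np.abs(eig).max())                        # close to 1 for large N
print("fraction inside |z| < 0.5:", np.mean(np.abs(eig) < 0.5))  # ~0.25 under the circular law
```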

  10. Hierarchical graphs for better annotations of rule-based models of biochemical systems

    SciTech Connect

    Hu, Bin; Hlavacek, William

    2009-01-01

    In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.

  11. Connections between the Sznajd model with general confidence rules and graph theory.

    PubMed

    Timpanaro, André M; Prado, Carmen P C

    2012-10-01

    The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).

  12. Connections between the Sznajd model with general confidence rules and graph theory

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2012-10-01

    The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).

  13. Sparsified-dynamics modeling of discrete point vortices with graph theory

    NASA Astrophysics Data System (ADS)

    Taira, Kunihiko; Nair, Aditya

    2014-11-01

    We utilize graph theory to derive a sparsified interaction-based model that captures unsteady point vortex dynamics. The present model builds upon the Biot-Savart law and keeps the number of vortices (graph nodes) intact and reduces the number of inter-vortex interactions (graph edges). We achieve this reduction in vortex interactions by spectral sparsification of graphs. This approach drastically reduces the computational cost to predict the dynamical behavior, sharing characteristics of reduced-order models. Sparse vortex dynamics are illustrated through an example of point vortex clusters interacting amongst themselves. We track the centroids of the individual vortex clusters to evaluate the error in bulk motion of the point vortices in the sparsified setup. To further improve the accuracy in predicting the nonlinear behavior of the vortices, resparsification strategies are employed for the sparsified interaction-based models. The model retains the nonlinearity of the interaction and also conserves the invariants of discrete vortex dynamics; namely the Hamiltonian, linear impulse, and angular impulse as well as circulation. Work supported by US Army Research Office (W911NF-14-1-0386) and US Air Force Office of Scientific Research (YIP: FA9550-13-1-0183).
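
    A rough sense of the trade-off can be had with the sketch below: point-vortex velocities are computed from the Biot-Savart law, once with all pairwise interactions and once with interactions beyond a distance cutoff dropped. The cutoff is only an illustrative stand-in; the paper selects which interactions (graph edges) to keep by spectral graph sparsification, which this sketch does not implement.

```python
# Point-vortex velocities from the Biot-Savart law, with a crude sparsification that
# drops interactions between vortices farther apart than a cutoff. The cutoff is an
# illustrative stand-in for the spectral graph sparsification used in the paper.
import numpy as np

rng = np.random.default_rng(5)
n = 100
pos = rng.standard_normal((n, 2))        # vortex positions
gamma = rng.uniform(-1, 1, size=n)       # circulations

def velocities(pos, gamma, cutoff=np.inf):
    u = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - pos                 # displacement from every other vortex
        r2 = np.einsum("ij,ij->i", d, d)
        keep = (r2 > 0) & (r2 < cutoff ** 2)
        # perpendicular kernel: induced velocity is the rotated displacement / r^2
        u[i, 0] = np.sum(-gamma[keep] * d[keep, 1] / r2[keep]) / (2 * np.pi)
        u[i, 1] = np.sum(gamma[keep] * d[keep, 0] / r2[keep]) / (2 * np.pi)
    return u

full = velocities(pos, gamma)
sparse = velocities(pos, gamma, cutoff=2.0)
err = np.linalg.norm(full - sparse) / np.linalg.norm(full)
print("relative velocity error from dropping long-range interactions:", err)
```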

  14. Bim-Gis Integrated Geospatial Information Model Using Semantic Web and Rdf Graphs

    NASA Astrophysics Data System (ADS)

    Hor, A.-H.; Jadidi, A.; Sohn, G.

    2016-06-01

    In recent years, 3D virtual indoor/outdoor urban modelling has become a key spatial information framework for many civil and engineering applications such as evacuation planning, emergency and facility management. For accomplishing such sophisticated decision tasks, there is a large demand for building multi-scale and multi-sourced 3D urban models. Currently, Building Information Model (BIM) and Geographical Information Systems (GIS) are broadly used as the modelling sources. However, sharing data and exchanging information between the two modelling domains is still a huge challenge, as existing syntactic or semantic approaches do not fully support the exchange of rich semantic and geometric information from BIM into GIS or vice versa. This paper proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graphs. The novelty of the proposed solution comes from the benefits of integrating BIM and GIS technologies into one unified model, the so-called Integrated Geospatial Information Model (IGIM). The proposed approach consists of three main modules: construction of BIM-RDF and GIS-RDF graphs, integration of the two RDF graphs, and querying of information through the IGIM-RDF graph using SPARQL. The IGIM generates queries from both the BIM and GIS RDF graphs, resulting in a semantically integrated model with entities representing both BIM classes and GIS feature objects with respect to the target-client application. The linkage between BIM-RDF and GIS-RDF is achieved through SPARQL endpoints and defined by a query using a set of datasets and entity classes with complementary properties, relationships and geometries. To validate the proposed approach and its performance, a case study based on the IGIM system design was also tested.

  15. Analysis of Business Connections Utilizing Theory of Topology of Random Graphs

    NASA Astrophysics Data System (ADS)

    Trelewicz, Jennifer Q.; Volovich, Igor V.

    2006-03-01

    A business ecosystem is a system that describes interactions between organizations. In this paper, we build a theoretical framework that defines a model which can be used to analyze the business ecosystem. The basic concepts within the framework are organizations, business connections, and market, that are all defined in the paper. Many researchers analyze the performance and structure of business using the workflow of the business. Our work in business connections answers a different set of questions, concerning the monetary value in the business ecosystem, rather than the task-interaction view that is provided by workflow analysis. We apply methods for analysis of the topology of complex networks, characterized by the concepts of small path length, clustering, and scale-free degree distributions. To model the dynamics of the business ecosystem we analyze the notion of the state of an organization at a given instant of time. We point out that the notion of state in this case is fundamentally different from the concept of state of the system which is used in classical or quantum physics. To describe the state of the organization at a given time one has to know the probability of payments to contracts which in fact depend on the future behavior of the agents on the market. Therefore methods of p-adic analysis are appropriate to explore such a behavior. Microeconomic and macroeconomic factors are indivisible and moreover the actual state of the organization depends on the future. In this framework some simple models are analyzed in detail. Company strategy can be influenced by analysis of models, which can provide a probabilistic understanding of the market, giving degrees of predictability.

  16. New Graph Models and Algorithms for Detecting Salient Structures from Cluttered Images

    DTIC Science & Technology

    2010-02-24

    Development of graph models and algorithms to detect boundaries that show certain levels of symmetry, an important geometric property of many...

  17. Physics Students' Performance Using Computational Modelling Activities to Improve Kinematics Graphs Interpretation

    ERIC Educational Resources Information Center

    Araujo, Ives Solano; Veit, Eliane Angela; Moreira, Marco Antonio

    2008-01-01

    The purpose of this study was to investigate undergraduate students' performance while exposed to complementary computational modelling activities to improve physics learning, using the software "Modellus." Interpretation of kinematics graphs was the physics topic chosen for investigation. The theoretical framework adopted was based on Halloun's…

  18. Graphing Reality

    NASA Astrophysics Data System (ADS)

    Beeken, Paul

    2014-11-01

    Graphing is an essential skill that forms the foundation of any physical science.1 Understanding the relationships between measurements ultimately determines which modeling equations are successful in predicting observations.2 Over the years, science and math teachers have approached teaching this skill with a variety of techniques. For secondary school instruction, the job of graphing skills falls heavily on physics teachers. By virtue of the nature of the topics we cover, it is our mission to develop this skill to the fine art that it is.

  19. Protein and gene model inference based on statistical modeling in k-partite graphs.

    PubMed

    Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter

    2010-07-06

    One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.

  20. Graphing Reality

    ERIC Educational Resources Information Center

    Beeken, Paul

    2014-01-01

    Graphing is an essential skill that forms the foundation of any physical science. Understanding the relationships between measurements ultimately determines which modeling equations are successful in predicting observations. Over the years, science and math teachers have approached teaching this skill with a variety of techniques. For secondary…

  1. Cycle graph analysis for 3D roof structure modelling: Concepts and performance

    NASA Astrophysics Data System (ADS)

    Perera, Gamage Sanka Nirodha; Maas, Hans-Gerd

    2014-07-01

    The paper presents a cycle graph analysis approach to the automatic reconstruction of 3D roof models from airborne laser scanner data. The nature of convergences of topological relations of plane adjacencies, allowing for the reconstruction of roof corner geometries with preserved topology, can be derived from cycles in roof topology graphs. The topology between roof adjacencies is defined in terms of ridge-lines and step-edges. In the proposed method, the input point cloud is first segmented and roof topology is derived while extracting roof planes from identified non-terrain segments. Orientation and placement regularities are applied on weakly defined edges using a piecewise regularization approach prior to the reconstruction, which assists in preserving symmetries in building geometry. Roof corners are geometrically modelled using the shortest closed cycles and the outermost cycle derived from roof topology graph in which external target graphs are no longer required. Based on test results, we show that the proposed approach can handle complexities with nearly 90% of the detected roof faces reconstructed correctly. The approach allows complex height jumps and various types of building roofs to be firmly reconstructed without prior knowledge of primitive building types.

  2. A Spectral Graph Regression Model for Learning Brain Connectivity of Alzheimer’s Disease

    PubMed Central

    Hu, Chenhui; Cheng, Lin; Sepulcre, Jorge; Johnson, Keith A.; Fakhri, Georges E.; Lu, Yue M.; Li, Quanzheng

    2015-01-01

    Understanding network features of brain pathology is essential to reveal underpinnings of neurodegenerative diseases. In this paper, we introduce a novel graph regression model (GRM) for learning structural brain connectivity of Alzheimer's disease (AD) measured by amyloid-β deposits. The proposed GRM regards 11C-labeled Pittsburgh Compound-B (PiB) positron emission tomography (PET) imaging data as smooth signals defined on an unknown graph. This graph is then estimated through an optimization framework, which fits the graph to the data with an adjustable level of uniformity of the connection weights. Under the assumed data model, results based on simulated data illustrate that our approach can accurately reconstruct the underlying network, often with better reconstruction than those obtained by both sample correlation and ℓ1-regularized partial correlation estimation. Evaluations performed upon PiB-PET imaging data of 30 AD and 40 elderly normal control (NC) subjects demonstrate that the connectivity patterns revealed by the GRM are easy to interpret and consistent with known pathology. Moreover, the hubs of the reconstructed networks match the cortical hubs given by functional MRI. The discriminative network features including both global connectivity measurements and degree statistics of specific nodes discovered from the AD and NC amyloid-beta networks provide new potential biomarkers for preclinical and clinical AD. PMID:26024224

  3. The role of reliability graph models in assuring dependable operation of complex hardware/software systems

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Davis, Gloria J.; Pedar, A.

    1991-01-01

    The complexity of computer systems currently being designed for critical applications in the scientific, commercial, and military arenas requires the development of new techniques for utilizing models of system behavior in order to assure 'ultra-dependability'. The complexity of these systems, such as Space Station Freedom and the Air Traffic Control System, stems from their highly integrated designs containing both hardware and software as critical components. Reliability graph models, such as fault trees and digraphs, are used frequently to model hardware systems. Their applicability for software systems has also been demonstrated for software safety analysis and the analysis of software fault tolerance. This paper discusses further uses of graph models in the design and implementation of fault management systems for safety critical applications.

  4. Using a high-dimensional graph of semantic space to model relationships among words

    PubMed Central

    Jackson, Alice F.; Bolger, Donald J.

    2014-01-01

    The GOLD model (Graph Of Language Distribution) is a network model constructed based on co-occurrence in a large corpus of natural language that may be used to explore what information may be present in a graph-structured model of language, and what information may be extracted through theoretically-driven algorithms as well as standard graph analysis methods. The present study will employ GOLD to examine two types of relationship between words: semantic similarity and associative relatedness. Semantic similarity refers to the degree of overlap in meaning between words, while associative relatedness refers to the degree to which two words occur in the same schematic context. It is expected that a graph structured model of language constructed based on co-occurrence should easily capture associative relatedness, because this type of relationship is thought to be present directly in lexical co-occurrence. However, it is hypothesized that semantic similarity may be extracted from the intersection of the set of first-order connections, because two words that are semantically similar may occupy similar thematic or syntactic roles across contexts and thus would co-occur lexically with the same set of nodes. Two versions the GOLD model that differed in terms of the co-occurence window, bigGOLD at the paragraph level and smallGOLD at the adjacent word level, were directly compared to the performance of a well-established distributional model, Latent Semantic Analysis (LSA). The superior performance of the GOLD models (big and small) suggest that a single acquisition and storage mechanism, namely co-occurrence, can account for associative and conceptual relationships between words and is more psychologically plausible than models using singular value decomposition (SVD). PMID:24860525
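
    A minimal sketch of the two word-relationship measures described above on a toy co-occurrence graph: associative relatedness is read directly from co-occurrence counts, while semantic similarity is estimated from the overlap of first-order neighbourhoods. The toy contexts and the Jaccard overlap measure are illustrative assumptions, not the GOLD implementation.

```python
from collections import Counter
from itertools import combinations

# Toy corpus; each "context" stands in for one co-occurrence window.
contexts = [
    "doctor nurse hospital patient".split(),
    "doctor patient medicine".split(),
    "nurse patient hospital".split(),
    "teacher student school lesson".split(),
    "teacher school lesson homework".split(),
    "student homework lesson".split(),
]

cooc = Counter()
for ctx in contexts:
    for a, b in combinations(sorted(set(ctx)), 2):
        cooc[(a, b)] += 1

def neighbours(word):
    return {b if a == word else a for (a, b), c in cooc.items() if word in (a, b)}

def associative(w1, w2):
    """Associative relatedness: direct co-occurrence count."""
    return cooc[tuple(sorted((w1, w2)))]

def similarity(w1, w2):
    """Semantic similarity: Jaccard overlap of first-order neighbourhoods."""
    n1, n2 = neighbours(w1), neighbours(w2)
    return len(n1 & n2) / len(n1 | n2)

print(associative("doctor", "nurse"), round(similarity("doctor", "nurse"), 2))
print(associative("doctor", "teacher"), round(similarity("doctor", "teacher"), 2))
```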

  5. A graph model for preventing railway accidents based on the maximal information coefficient

    NASA Astrophysics Data System (ADS)

    Shao, Fubo; Li, Keping

    2017-01-01

    A number of factors influence railway safety. Identifying the important influencing factors and building the relationship between railway accidents and these factors is therefore an important task. The maximal information coefficient (MIC) is a good measure of dependence for two-variable relationships which can capture a wide range of associations. Employing MIC, a graph model that avoids complex mathematical computation is proposed for preventing railway accidents. In the graph, nodes denote influencing factors of railway accidents and edges represent dependence between the two linked factors. As the dependence level increases, the graph changes from a globally coupled graph to isolated points. Moreover, the important influencing factors, which are the key quantities to monitor, are identified from among the many candidates. Then the relationship between railway accidents and the important influencing factors is obtained by employing artificial neural networks. Using this relationship, a warning mechanism is built by defining a danger zone. If the related factors fall into this zone during railway operations, the warning level is raised. The warning mechanism can help prevent railway accidents and promote railway safety.
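
    A hedged sketch of the graph-construction step described above: influencing factors become nodes and an edge is drawn when the pairwise dependence exceeds a threshold, so that raising the threshold moves the graph from globally coupled towards isolated points. A histogram-based mutual information estimate stands in for MIC here, and the factor names, data, and threshold values are invented for illustration.

```python
import numpy as np
import networkx as nx

def dependence(x, y, bins=8):
    """Histogram-based mutual information, a simple stand-in for MIC."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

def build_factor_graph(samples, names, threshold):
    """Nodes are influencing factors; an edge links two factors whose
    pairwise dependence exceeds the chosen threshold."""
    g = nx.Graph()
    g.add_nodes_from(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if dependence(samples[:, i], samples[:, j]) >= threshold:
                g.add_edge(names[i], names[j])
    return g

# Illustrative data: four hypothetical influencing factors.
rng = np.random.default_rng(0)
track = rng.normal(size=500)
speed = 0.8 * track + 0.2 * rng.normal(size=500)
weather = rng.normal(size=500)
signal = 0.5 * weather + 0.5 * rng.normal(size=500)
data = np.column_stack([track, speed, weather, signal])
names = ["track_condition", "speed", "weather", "signalling"]

# Raising the threshold moves the graph from globally coupled to isolated points.
for t in (0.0, 0.2, 0.6):
    g = build_factor_graph(data, names, t)
    print(t, g.number_of_edges())
```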

  6. Stability and optimization in structured population models on graphs.

    PubMed

    Colombo, Rinaldo M; Garavello, Mauro

    2015-04-01

    We prove existence and uniqueness of solutions, continuous dependence on the initial datum and stability with respect to the boundary condition in a class of initial-boundary value problems for systems of balance laws. The particular choice of the boundary condition allows the framework to encompass models with very different structures. In particular, we consider a juvenile-adult model, the problem of the optimal mating ratio and a model for the optimal management of biological resources. The stability result obtained allows us to tackle various optimal management/control problems, providing sufficient conditions for the existence of optimal choices/controls.

  7. International Space Station Centrifuge Rotor Models A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach

    NASA Technical Reports Server (NTRS)

    Nguyen, Louis H.; Ramakrishnan, Jayant; Granda, Jose J.

    2006-01-01

    The assembly and operation of the International Space Station (ISS) require extensive testing and engineering analysis to verify that the Space Station system of systems would work together without any adverse interactions. Since the dynamic behavior of an entire Space Station cannot be tested on earth, math models of the Space Station structures and mechanical systems have to be built and integrated in computer simulations and analysis tools to analyze and predict what will happen in space. The ISS Centrifuge Rotor (CR) is one of many mechanical systems that need to be modeled and analyzed to verify the ISS integrated system performance on-orbit. This study investigates using Bond Graph modeling techniques as quick and simplified ways to generate models of the ISS Centrifuge Rotor. This paper outlines the steps used to generate simple and more complex models of the CR using Bond Graph Computer Aided Modeling Program with Graphical Input (CAMP-G). Comparisons of the Bond Graph CR models with those derived from Euler-Lagrange equations in MATLAB and those developed using multibody dynamic simulation at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are presented to demonstrate the usefulness of the Bond Graph modeling approach for aeronautics and space applications.

  8. Human connectome module pattern detection using a new multi-graph MinMax cut model.

    PubMed

    De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng

    2014-01-01

    Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph min-max cut model to detect the consistent network modules from the brain connectivity networks of all studied subjects. A new multi-graph MinMax cut model is introduced to solve this challenging computational neuroscience problem and an efficient optimization algorithm is derived. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, which potentially associate with specific brain functions shared by all subjects. We validate our method by analyzing the weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.

  9. Intrinsic graph structure estimation using graph Laplacian.

    PubMed

    Noda, Atsushi; Hino, Hideitsu; Tatsuno, Masami; Akaho, Shotaro; Murata, Noboru

    2014-07-01

    A graph is a mathematical representation of a set of variables where some pairs of the variables are connected by edges. Common examples of graphs are railroads, the Internet, and neural networks. It is both theoretically and practically important to estimate the intensity of direct connections between variables. In this study, a problem of estimating the intrinsic graph structure from observed data is considered. The observed data in this study are a matrix with elements representing dependency between nodes in the graph. The dependency represents more than direct connections because it includes influences of various paths. For example, each element of the observed matrix represents a co-occurrence of events at two nodes or a correlation of variables corresponding to two nodes. In this setting, spurious correlations make the estimation of direct connections difficult. To alleviate this difficulty, a digraph Laplacian is used for characterizing a graph. A generative model of this observed matrix is proposed, and a parameter estimation algorithm for the model is also introduced. The notable advantage of the proposed method is its ability to deal with directed graphs, while conventional graph structure estimation methods such as covariance selection are applicable only to undirected graphs. The algorithm is experimentally shown to be able to identify the intrinsic graph structure.

  10. Formal modeling of Gene Ontology annotation predictions based on factor graphs

    NASA Astrophysics Data System (ADS)

    Spetale, Flavio; Murillo, Javier; Tapia, Elizabeth; Arce, Débora; Ponce, Sergio; Bulacio, Pilar

    2016-04-01

    Gene Ontology (GO) is a hierarchical vocabulary for gene product annotation. Its synergy with machine learning classification methods has been widely used for the prediction of protein functions. Current classification methods rely on heuristic solutions to check the consistency with some aspects of the underlying GO structure. In this work we formalize the GO is-a relationship through predicate logic. Moreover, an ontology model based on Forney Factor Graph (FFG) is shown on a general fragment of Cellular Component GO.

  11. DARPA Ensemble-Based Modeling Large Graphs & Applications to Social Networks

    DTIC Science & Technology

    2015-07-29

    Report AFRL-OSR-VA-TR-2015-0212; PI: Zoltan Toroczkai, University of Notre Dame du Lac. Only fragments of the abstract are recoverable from the record: the project studies dynamical processes on social networks, noting that specific connectivity schemes affect influence propagation and epidemic spread, and are also responsible for Web page…

  12. Structure-based Low-Rank Model with Graph Nuclear Norm Regularization for Noise Removal.

    PubMed

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhihui; Xiao, Liang; Shao, Wenze; Yue, Dong; Li, Haibo

    2016-12-15

    Nonlocal image representation methods, including group-based sparse coding and BM3D, have shown strong performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, introduces disturbance and inaccuracy into the estimation of the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
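
    Under the equivalence stated above, the core solver step reduces to weighted singular-value thresholding; the numpy sketch below shows that operator on a synthetic noisy low-rank matrix. The weight rule and test data are illustrative assumptions, not the paper's graph-derived weights.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular-value thresholding: shrink each singular value
    of Y by its own weight and truncate at zero."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt

# Illustrative use on a noisy low-rank "patch matrix".
rng = np.random.default_rng(1)
low_rank = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 48))
noisy = low_rank + 0.5 * rng.normal(size=(64, 48))

# Larger weights on smaller singular values shrink noise more aggressively.
sv = np.linalg.svd(noisy, compute_uv=False)
weights = 5.0 / (sv + 1e-3)
denoised = weighted_svt(noisy, weights)
print(np.linalg.norm(denoised - low_rank) < np.linalg.norm(noisy - low_rank))
```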

  13. Ranking Medical Subject Headings using a factor graph model.

    PubMed

    Wei, Wei; Demner-Fushman, Dina; Wang, Shuang; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2015-01-01

    Automatically assigning MeSH (Medical Subject Headings) to articles is an active research topic. Recent work demonstrated the feasibility of improving the existing automated Medical Text Indexer (MTI) system, developed at the National Library of Medicine (NLM). Encouraged by this work, we propose a novel data-driven approach that uses semantic distances in the MeSH ontology for automated MeSH assignment. Specifically, we developed a graphical model to propagate belief through a citation network to provide robust MeSH main heading (MH) recommendation. Our preliminary results indicate that this approach can reach high Mean Average Precision (MAP) in some scenarios.

  14. Graph theoretical analysis of the energy landscape of model polymers.

    PubMed

    Baiesi, Marco; Bongini, Lorenzo; Casetti, Lapo; Tattini, Lorenzo

    2009-07-01

    In systems characterized by a rough potential-energy landscape, local energetic minima and saddles define a network of metastable states whose topology strongly influences the dynamics. Changes in temperature, causing the merging and splitting of metastable states, have nontrivial effects on such networks and must be taken into account. We do this by means of a recently proposed renormalization procedure. This method is applied to analyze the topology of the network of metastable states for different polypeptidic sequences in a minimalistic polymer model. A smaller spectral dimension emerges as a hallmark of stability of the global energy minimum and highlights a nonobvious link between dynamic and thermodynamic properties.

  15. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

    …into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. In order to do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling, a parametric, polynomial equation representing a tool. A surrogate model must be informed by data from the tool with enough points to represent the solution space for the chosen number of variables with an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space to maximize information gained on the design space while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work which would take an inordinate amount of time via the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work, the authors discuss issues with the method and their solutions.

  16. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.

  17. The Random-Effect DINA Model

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…

  18. Graph Library

    SciTech Connect

    Schulz, Martin; Arnold, Dorian

    2007-06-12

    GraphLib is a support library used by other tools to create, manipulate, store, and export graphs. It provides a simple interface to specify arbitrary directed and undirected graphs by adding nodes and edges. Each node and edge can be associated with a set of attributes describing size, color, and shape. Once created, graphs can be manipulated using a set of graph analysis algorithms, including merge, prune, and path coloring operations. GraphLib also has the ability to export graphs into various open formats such as DOT and GML.

  19. A geometric graph model for citation networks of exponentially growing scientific papers

    NASA Astrophysics Data System (ADS)

    Xie, Zheng; Ouyang, Zhenzheng; Liu, Qi; Li, Jianping

    2016-08-01

    In citation networks, the relatedness of papers' content is a precondition for citations, which is hard to model with a purely topological graph. A geometric graph is proposed to predict some features of citation networks with exponentially growing numbers of papers. It addresses this precondition by using node coordinates to model the research content of papers, and geometric distances between nodes to model differences in research content between papers. Citations between modeled papers are drawn according to a geometric rule, which accounts for this precondition as well as some other factors engendering citations, namely the academic influences of papers, the aging of those influences, and incomplete copying of references. Rather than relying on cumulative advantage of degree, the model illustrates that the scale-free property of the modeled networks arises from the inhomogeneous academic influences of the papers. The model can also reproduce some other statistical features of citation networks, e.g. in- and out-assortativities, which shows that the model provides a suitable tool to understand some aspects of citation networks through geometry.
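
    A hedged sketch of such a geometric citation rule under simplified assumptions: papers arrive one per time step with a random content coordinate on a circle, each carries a heavy-tailed "academic influence" radius that decays with age, and a new paper cites an older one that lies inside the aged influence zone. The specific functional forms and parameters are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000                                     # papers, one added per time step
theta = rng.uniform(0, 2 * np.pi, T)         # research-content coordinate on a circle
influence = rng.pareto(2.5, T) + 0.01        # inhomogeneous academic influence
aging = 0.002                                # influence decays with age

in_degree = np.zeros(T, dtype=int)
for t in range(1, T):
    older = np.arange(t)
    # angular distance models the difference in research content
    d = np.abs(theta[t] - theta[older])
    d = np.minimum(d, 2 * np.pi - d)
    radius = influence[older] * np.exp(-aging * (t - older))
    cited = older[d < radius]                # cite older papers whose zone covers us
    in_degree[cited] += 1

# Heavy-tailed influences yield a heavy-tailed citation (in-degree) distribution.
print(np.percentile(in_degree, [50, 90, 99, 100]))
```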

  20. Random-effects models for longitudinal data

    SciTech Connect

    Laird, N.M.; Ware, J.H.

    1982-12-01

    Models for the analysis of longitudinal data must recognize the relationship between serial observations on the same unit. Multivariate models with general covariance structure are often difficult to apply to highly unbalanced data, whereas two-stage random-effects models can be used easily. In two-stage models, the probability distributions for the response vectors of different individuals belong to a single family, but some random-effects parameters vary across individuals, with a distribution specified at the second stage. A general family of models is discussed, which includes both growth models and repeated-measures models as special cases. A unified approach to fitting these models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM algorithm, is discussed. Two examples are taken from a current epidemiological study of the health effects of air pollution.

  1. Guidelines for a graph-theoretic implementation of structural equation modeling

    USGS Publications Warehouse

    Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William

    2012-01-01

    Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for

  2. Evolutionary stability on graphs

    PubMed Central

    Ohtsuki, Hisashi; Nowak, Martin A.

    2008-01-01

    Evolutionary stability is a fundamental concept in evolutionary game theory. A strategy is called an evolutionarily stable strategy (ESS), if its monomorphic population rejects the invasion of any other mutant strategy. Recent studies have revealed that population structure can considerably affect evolutionary dynamics. Here we derive the conditions of evolutionary stability for games on graphs. We obtain analytical conditions for regular graphs of degree k > 2. Those theoretical predictions are compared with computer simulations for random regular graphs and for lattices. We study three different update rules: birth-death (BD), death-birth (DB), and imitation (IM) updating. Evolutionary stability on sparse graphs does not imply evolutionary stability in a well-mixed population, nor vice versa. We provide a geometrical interpretation of the ESS condition on graphs. PMID:18295801

  3. Parabolic Anderson Model in a Dynamic Random Environment: Random Conductances

    NASA Astrophysics Data System (ADS)

    Erhard, D.; den Hollander, F.; Maillard, G.

    2016-06-01

    The parabolic Anderson model is defined as the partial differential equation ∂u(x,t)/∂t = κΔu(x,t) + ξ(x,t)u(x,t), x ∈ ℤ^d, t ≥ 0, where κ ∈ [0, ∞) is the diffusion constant, Δ is the discrete Laplacian, and ξ is a dynamic random environment that drives the equation. The initial condition u(x,0) = u_0(x), x ∈ ℤ^d, is typically taken to be non-negative and bounded. The solution of the parabolic Anderson equation describes the evolution of a field of particles performing independent simple random walks with binary branching: particles jump at rate 2dκ, split into two at rate ξ ∨ 0, and die at rate (−ξ) ∨ 0. In earlier work we looked at the Lyapunov exponents λ_p(κ) = lim_{t→∞} (1/t) log E([u(0,t)]^p)^{1/p}, p ∈ ℕ, and λ_0(κ) = lim_{t→∞} (1/t) log u(0,t). For the former we derived quantitative results on the κ-dependence for four choices of ξ: space-time white noise, independent simple random walks, the exclusion process and the voter model. For the latter we obtained qualitative results under certain space-time mixing conditions on ξ. In the present paper we investigate what happens when κΔ is replaced by Δ_𝓚, where 𝓚 = {𝓚(x,y) : x, y ∈ ℤ^d, x ∼ y} is a collection of random conductances between neighbouring sites replacing the constant conductances κ in the homogeneous model. We show that the associated annealed Lyapunov exponents λ_p(𝓚), p ∈ ℕ, are given by the formula λ_p(𝓚) = sup{λ_p(κ) : κ ∈ Supp(𝓚)}, where, for a fixed realisation of 𝓚, Supp(𝓚) is the set of values taken by the 𝓚-field. We also show that for the associated quenched Lyapunov exponent λ_0(𝓚) this formula only provides a lower bound, and we conjecture that an upper bound holds when Supp(𝓚) is replaced by its convex hull. Our proof is valid for three classes of reversible ξ, and for all 𝓚
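
    A minimal Euler discretisation of the parabolic Anderson equation on a one-dimensional torus, with an i.i.d. Gaussian potential resampled at every step standing in for the dynamic environment ξ. It only illustrates how the drift κΔu + ξu acts and how a crude quenched Lyapunov exponent estimate is read off; the environment, discretisation, and parameters are illustrative assumptions and not the paper's conductance model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, kappa, dt, steps = 200, 0.5, 0.01, 2000
u = np.ones(N)                       # bounded non-negative initial condition

for _ in range(steps):
    xi = rng.normal(0.0, 1.0, N)     # dynamic random environment, resampled each step
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # discrete Laplacian on the torus
    u = u + dt * (kappa * lap + xi * u)
    u = np.maximum(u, 0.0)           # numerical safeguard; the exact solution stays >= 0

# Crude estimate of the quenched exponent lambda_0 ~ (1/t) log u(0, t)
t_final = steps * dt
print(np.log(max(u[0], 1e-300)) / t_final)
```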

  4. Synchronization in the random-field Kuramoto model on complex networks

    NASA Astrophysics Data System (ADS)

    Lopes, M. A.; Lopes, E. M.; Yoon, S.; Mendes, J. F. F.; Goltsev, A. V.

    2016-07-01

    We study the impact of random pinning fields on the emergence of synchrony in the Kuramoto model on complete graphs and uncorrelated random complex networks. We consider random fields with uniformly distributed directions and homogeneous and heterogeneous (Gaussian) field magnitude distributions. In our analysis, we apply the Ott-Antonsen method and the annealed-network approximation to find the critical behavior of the order parameter. In the case of homogeneous fields, we find a tricritical point above which a second-order phase transition gives way to a first-order phase transition when the network is either fully connected or scale-free with degree exponent γ > 5. Interestingly, for scale-free networks with 2 < γ ≤ 5, the phase transition is of second order at any field magnitude, except for degree distributions with γ = 3, when the transition is of infinite order at K_c = 0 independent of the random fields. Contrary to the Ising model, even strong Gaussian random fields do not suppress the second-order phase transition in either complete graphs or scale-free networks, although the fields increase the critical coupling for γ > 3. Our simulations support these analytical results.
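
    A hedged sketch of the fully connected case with homogeneous field magnitude h and uniformly distributed field directions: each oscillator obeys dθ_i/dt = ω_i + K r sin(φ − θ_i) + h sin(ψ_i − θ_i), and the order parameter r is read off at the end. The frequency distribution, parameters, and integration scheme are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, h, dt, steps = 1000, 2.0, 0.5, 0.05, 4000
omega = rng.normal(0.0, 1.0, N)           # natural frequencies
psi = rng.uniform(0, 2 * np.pi, N)        # random pinning-field directions
theta = rng.uniform(0, 2 * np.pi, N)      # initial phases

for _ in range(steps):
    z = np.exp(1j * theta).mean()                             # complex order parameter
    coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)    # mean-field coupling
    pinning = h * np.sin(psi - theta)                         # homogeneous field magnitude h
    theta = theta + dt * (omega + coupling + pinning)

print("order parameter r =", np.abs(np.exp(1j * theta).mean()))
```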

  5. Mathematic Modeling of Complex Hydraulic Machinery Systems When Evaluating Reliability Using Graph Theory

    NASA Astrophysics Data System (ADS)

    Zemenkova, M. Yu; Shipovalov, A. N.; Zemenkov, Yu D.

    2016-04-01

    Hydraulic machines are the main technological equipment of hydrocarbon pipeline transport. Oil transportation relies mainly on centrifugal pumps designed to work in the “pumping station-pipeline” system. A standard pumping station consists of several pumps and complex hydraulic piping. The authors have developed a set of models and algorithms for calculating the system reliability of pumps, based on reliability theory. As an example, one of the estimation methods applying graph theory is considered.

  6. Randomly stopped sums: models and psychological applications

    PubMed Central

    Smithson, Michael; Shou, Yiyun

    2014-01-01

    This paper describes an approach to modeling the sums of a continuous random variable over a number of measurement occasions when the number of occasions also is a random variable. A typical example is summing the amounts of time spent attending to pieces of information in an information search task leading to a decision to obtain the total time taken to decide. Although there is a large literature on randomly stopped sums in financial statistics, it is largely absent from psychology. The paper begins with the standard modeling approaches used in financial statistics, and then extends them in two ways. First, the randomly stopped sums are modeled as “life distributions” such as the gamma or log-normal distribution. A simulation study investigates Type I error rate accuracy and power for gamma and log-normal versions of this model. Second, a Bayesian hierarchical approach is used for constructing an appropriate general linear model of the sums. Model diagnostics are discussed, and three illustrations are presented from real datasets. PMID:25426090
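
    A hedged sketch of the modeling idea: simulate randomly stopped sums (a Poisson number of gamma-distributed attention episodes per subject) and then fit a "life distribution", here a gamma, directly to the totals. The distributional choices and parameters are illustrative, not the paper's models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def stopped_sums(n_subjects, mean_stops=6.0, shape=2.0, scale=1.5):
    """Each subject attends to a Poisson-distributed number of items; each
    attention episode has a gamma-distributed duration; the total decision
    time is the randomly stopped sum."""
    totals = []
    for _ in range(n_subjects):
        n = rng.poisson(mean_stops) + 1          # at least one occasion
        totals.append(rng.gamma(shape, scale, n).sum())
    return np.array(totals)

totals = stopped_sums(500)
# Model the stopped sums directly with a life distribution (here a gamma).
a, loc, sc = stats.gamma.fit(totals, floc=0.0)
print("fitted gamma shape/scale:", round(a, 2), round(sc, 2))
```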

  7. Quantum walks on quotient graphs

    SciTech Connect

    Krovi, Hari; Brun, Todd A.

    2007-06-15

    A discrete-time quantum walk on a graph Γ is the repeated application of a unitary evolution operator to a Hilbert space corresponding to the graph. If this unitary evolution operator has an associated group of symmetries, then for certain initial states the walk will be confined to a subspace of the original Hilbert space. Symmetries of the original graph, given by its automorphism group, can be inherited by the evolution operator. We show that a quantum walk confined to the subspace corresponding to this symmetry group can be seen as a different quantum walk on a smaller quotient graph. We give an explicit construction of the quotient graph for any subgroup H of the automorphism group and illustrate it with examples. The automorphisms of the quotient graph which are inherited from the original graph are the original automorphism group modulo the subgroup H used to construct it. The quotient graph is constructed by removing the symmetries of the subgroup H from the original graph. We then analyze the behavior of hitting times on quotient graphs. Hitting time is the average time it takes a walk to reach a given final vertex from a given initial vertex. It has been shown in earlier work [Phys. Rev. A 74, 042334 (2006)] that the hitting time for certain initial states of a quantum walk can be infinite, in contrast to classical random walks. We give a condition which determines whether the quotient graph has infinite hitting times given that they exist in the original graph. We apply this condition to the examples discussed and determine which quotient graphs have infinite hitting times. All known examples of quantum walks with hitting times which are short compared to classical random walks correspond to systems with quotient graphs much smaller than the original graph; we conjecture that the existence of a small quotient graph with finite hitting times is necessary for a walk to exhibit a quantum speedup.

  8. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1988-08-01

    Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.

  9. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1987-01-01

    The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the ''bootstrap'' in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their ''kriging variance,'' provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.

  10. Reaction spreading on graphs.

    PubMed

    Burioni, Raffaella; Chibbaro, Sergio; Vergni, Davide; Vulpiani, Angelo

    2012-11-01

    We study reaction-diffusion processes on graphs through an extension of the standard reaction-diffusion equation starting from first principles. We focus on reaction spreading, i.e., on the time evolution of the reaction product M(t). At variance with pure diffusive processes, characterized by the spectral dimension d_s, the important quantity for reaction spreading is found to be the connectivity dimension d_l. Numerical data, in agreement with analytical estimates based on the features of n independent random walkers on the graph, show that M(t) ∼ t^{d_l}. In the case of Erdős-Rényi random graphs, the reaction product is characterized by an exponential growth M(t) ∼ e^{αt} with α proportional to ln⟨k⟩, where ⟨k⟩ is the average degree of the graph.
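
    A hedged sketch of reaction spreading on an Erdős-Rényi graph: starting from a single reacted node, every neighbour of a reacted node reacts at the next step, and M(t) is the number of reacted nodes; on an ER graph the early growth rate of log M(t) is roughly constant, consistent with exponential growth. The infinitely fast reaction rule and the parameters are simplifying assumptions, not the paper's dynamics.

```python
import networkx as nx
import numpy as np

def reaction_spreading(g, source, steps):
    """Infinitely fast autocatalytic reaction A + B -> 2A: at each time step
    every neighbour of a reacted node reacts. M(t) = number of reacted nodes."""
    reacted = {source}
    m = [len(reacted)]
    for _ in range(steps):
        frontier = {nbr for u in reacted for nbr in g.neighbors(u)}
        reacted |= frontier
        m.append(len(reacted))
    return np.array(m)

# Erdos-Renyi graph with mean degree 4; restrict to the giant component.
g = nx.fast_gnp_random_graph(20000, 4 / 20000, seed=6)
g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
m = reaction_spreading(g, next(iter(g.nodes)), 8)
print(m)
print(np.round(np.diff(np.log(m)), 2))   # growth rate, roughly constant before saturation
```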

  11. Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids

    PubMed Central

    José, Marco V.; Morgado, Eberto R.; Guimarães, Romeu Cardoso; Zamudio, Gabriel S.; de Farías, Sávio Torres; Bobadilla, Juan R.; Sosa, Daniela

    2014-01-01

    Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state. PMID:25370377

  12. The aggregate path coupling method for the Potts model on bipartite graph

    NASA Astrophysics Data System (ADS)

    Hernández, José C.; Kovchegov, Yevgeniy; Otto, Peter T.

    2017-02-01

    In this paper, we derive the large deviation principle for the Potts model on the complete bipartite graph Kn,n as n increases to infinity. Next, for the Potts model on Kn,n, we provide an extension of the method of aggregate path coupling that was originally developed in the work of Kovchegov, Otto, and Titus [J. Stat. Phys. 144(5), 1009-1027 (2011)] for the mean-field Blume-Capel model and in Kovchegov and Otto [J. Stat. Phys. 161(3), 553-576 (2015)] for a general mean-field setting that included the generalized Curie-Weiss-Potts model analyzed in the work of Jahnel et al. [Markov Process. Relat. Fields 20, 601-632 (2014)]. We use the aggregate path coupling method to identify and determine the threshold value βs separating the rapid and slow mixing regimes for the Glauber dynamics of the Potts model on Kn,n.

  13. Spectral fluctuations of quantum graphs

    SciTech Connect

    Pluhař, Z.; Weidenmüller, H. A.

    2014-10-15

    We prove the Bohigas-Giannoni-Schmit conjecture in its most general form for completely connected simple graphs with incommensurate bond lengths. We show that for graphs that are classically mixing (i.e., graphs for which the spectrum of the classical Perron-Frobenius operator possesses a finite gap), the generating functions for all (P,Q) correlation functions for both closed and open graphs coincide (in the limit of infinite graph size) with the corresponding expressions of random-matrix theory, both for orthogonal and for unitary symmetry.

  14. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.

    PubMed

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-03-29

    In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata, which we name Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of dynamic densities in road segments, and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology and size. Next we analyze the mode types and their number in the model of the whole freeway network and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices is computed using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing Third Ring Road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing Third Ring Road. Practical application to a large-scale road network will be addressed in future research via a decentralized modeling approach and distributed observer design.
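
    A hedged sketch of the CTM building block referred to above: on a line of cells, the flow across each interface is the minimum of a piecewise-linear sending (demand) and receiving (supply) function, and it is exactly these min operations that generate the switching modes collected into the piecewise affine system. The fundamental-diagram parameters and boundary inflow are illustrative, not those of the Beijing network.

```python
import numpy as np

def ctm_step(rho, v_free, w_cong, rho_max, q_max, dx, dt, inflow):
    """One Cell Transmission Model update on a line of cells. The flow across
    each interface is min(sending, receiving); both are piecewise linear in
    the densities, which is what creates the multiple modes."""
    sending = np.minimum(v_free * rho, q_max)                 # demand of each cell
    receiving = np.minimum(w_cong * (rho_max - rho), q_max)   # supply of each cell
    interior = np.minimum(sending[:-1], receiving[1:])        # interior interfaces
    flux = np.concatenate(([min(inflow, receiving[0])], interior, [sending[-1]]))
    return rho + (dt / dx) * (flux[:-1] - flux[1:])

rho = np.full(20, 20.0)    # veh/km in each of 20 cells
for _ in range(300):
    rho = ctm_step(rho, v_free=100.0, w_cong=20.0, rho_max=150.0,
                   q_max=2000.0, dx=0.5, dt=0.004, inflow=1800.0)
print(np.round(rho[:5], 1))
```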

  15. On the mixing time of geographical threshold graphs

    SciTech Connect

    Bradonjic, Milan

    2009-01-01

    In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
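
    A hedged sketch of the GTG construction: nodes receive random positions and weights, and an edge appears when the combined weight, attenuated by distance, clears the threshold θ. The r^(-2) interaction kernel, exponential weights, and the value of θ are illustrative choices; the paper's analysis concerns mixing times near the connectivity threshold rather than this particular parameterization.

```python
import numpy as np
import networkx as nx

def geographical_threshold_graph(n, theta, rng):
    """Nodes get uniform positions in the unit square and exponential weights;
    u and v are joined when (w_u + w_v) / dist(u, v)^2 >= theta."""
    pos = rng.uniform(size=(n, 2))
    w = rng.exponential(size=n)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for u in range(n):
        for v in range(u + 1, n):
            r2 = np.sum((pos[u] - pos[v]) ** 2)
            if (w[u] + w[v]) / max(r2, 1e-12) >= theta:
                g.add_edge(u, v)
    return g

rng = np.random.default_rng(7)
g = geographical_threshold_graph(400, theta=200.0, rng=rng)
print(g.number_of_nodes(), g.number_of_edges(), nx.number_connected_components(g))
```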

  16. The effects of node exclusion on the centrality measures in graph models of interacting economic agents

    NASA Astrophysics Data System (ADS)

    Caetano, Marco Antonio Leonel; Yoneyama, Takashi

    2015-07-01

    This work concerns the study of the effects felt by a network as a whole when a specific node is perturbed. Many real world systems can be described by network models in which the interactions of the various agents can be represented as an edge of a graph. With a graph model in hand, it is possible to evaluate the effect of deleting some of its edges on the architecture and values of nodes of the network. Eventually a node may end up isolated from the rest of the network and an interesting problem is to have a quantitative measure of the impact of such an event. For instance, in the field of finance, the network models are very popular and the proposed methodology allows to carry out "what if" tests in terms of weakening the links between the economic agents, represented as nodes. The two main concepts employed in the proposed methodology are (i) the vibrational IC-Information Centrality, which can provide a measure of the relative importance of a particular node in a network and (ii) autocatalytic networks that can indicate the evolutionary trends of the network. Although these concepts were originally proposed in the context of other fields of knowledge, they were also found to be useful in analyzing financial networks. In order to illustrate the applicability of the proposed methodology, a case of study using the actual data comprising stock market indices of 12 countries is presented.

  17. Two-Stage Modelling Of Random Phenomena

    NASA Astrophysics Data System (ADS)

    Barańska, Anna

    2015-12-01

    The main objective of this publication was to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, using as examples the modelling of the real estate market for valuation purposes and the estimation of model parameters of vertical displacements of foundations. The first stage of the presented algorithm includes the selection of a suitable form of the function model. In classical algorithms based on function modelling, the prediction of the dependent variable is its value obtained directly from the model. The better the model reflects the relationship between the independent variables and their effect on the dependent variable, the more reliable is the model value. In this paper, an algorithm is proposed which adjusts the value obtained from the model with a random correction determined from the residuals of the model for those cases which, in a separate analysis, were considered most similar to the object for which the dependent variable is modelled. The effect of the developed quantitative procedures for calculating the corrections, and of the qualitative methods for assessing similarity, on the final outcome of the prediction and its accuracy was examined by statistical methods, mainly appropriate parametric significance tests. The presented algorithm is designed to bring the modelled value of the dependent variable close to its real value and, at the same time, to have it "smoothed out" by a well-fitted modelling function.

  18. Higher-order graph wavelets and sparsity on circulant graphs

    NASA Astrophysics Data System (ADS)

    Kotzagiannidis, Madeleine S.; Dragotti, Pier Luigi

    2015-08-01

    The notion of a graph wavelet gives rise to more advanced processing of data on graphs due to its ability to operate in a localized manner, across newly arising data-dependency structures, with respect to the graph signal and underlying graph structure, thereby taking into consideration the inherent geometry of the data. In this work, we tackle the problem of creating graph wavelet filterbanks on circulant graphs for a sparse representation of certain classes of graph signals. The underlying graph can hereby be data-driven as well as fixed, for applications including image processing and social network theory, whereby clusters can be modelled as circulant graphs, respectively. We present a set of novel graph wavelet filter-bank constructions, which annihilate higher-order polynomial graph signals (up to a border effect) defined on the vertices of undirected, circulant graphs, and are localised in the vertex domain. We give preliminary results on their performance for non-linear graph signal approximation and denoising. Furthermore, we provide extensions to our previously developed segmentation-inspired graph wavelet framework for non-linear image approximation, by incorporating notions of smoothness and vanishing moments, which further improve performance compared to traditional methods.
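
    A hedged sketch of the setting: build a circulant graph from its generating set, form the graph Laplacian, and split a graph signal into smooth and detail parts with a crude two-channel spectral filter pair. This is only meant to make the objects concrete; it is not the vertex-localised, polynomial-annihilating filterbank construction of the paper.

```python
import numpy as np

def circulant_adjacency(n, connections):
    """Adjacency matrix of a circulant graph: node i is linked to i +/- s
    (mod n) for every s in the generating set `connections`."""
    a = np.zeros((n, n))
    for s in connections:
        for i in range(n):
            a[i, (i + s) % n] = a[(i + s) % n, i] = 1.0
    return a

n = 64
A = circulant_adjacency(n, (1, 2))          # circulant graph with generating set {1, 2}
L = np.diag(A.sum(axis=1)) - A              # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)

# A crude two-channel spectral split: the low-pass channel keeps small
# Laplacian eigenvalues (smooth content), the high-pass keeps the rest.
x = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * np.random.default_rng(8).normal(size=n)
xh = U.T @ x
cutoff = np.median(lam)
low = U @ (xh * (lam <= cutoff))
high = U @ (xh * (lam > cutoff))
print(np.allclose(low + high, x))           # the two channels reconstruct the signal
```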

  19. Mining and Indexing Graph Databases

    ERIC Educational Resources Information Center

    Yuan, Dayu

    2013-01-01

    Graphs are widely used to model structures and relationships of objects in various scientific and commercial fields. Chemical molecules, proteins, malware system-call dependencies and three-dimensional mechanical parts are all modeled as graphs. In this dissertation, we propose to mine and index those graph data to enable fast and scalable search.…

  20. A combined crystal plasticity and graph-based vertex model of dynamic recrystallization at large deformations

    NASA Astrophysics Data System (ADS)

    Mellbin, Y.; Hallberg, H.; Ristinmaa, M.

    2015-06-01

    A mesoscale model of microstructure evolution is formulated in the present work by combining a crystal plasticity model with a graph-based vertex algorithm. This provides a versatile formulation capable of capturing finite-strain deformations, development of texture and microstructure evolution through recrystallization. The crystal plasticity model is employed in a finite element setting and allows tracing of stored energy build-up in the polycrystal microstructure and concurrent reorientation of the crystal lattices in the grains. This influences the progression of recrystallization as nucleation occurs at sites with sufficient stored energy and since the grain boundary mobility and energy is allowed to vary with crystallographic misorientation across the boundaries. The proposed graph-based vertex model describes the topological changes to the grain microstructure and keeps track of the grain inter-connectivity. Through homogenization, the macroscopic material response is also obtained. By the proposed modeling approach, grain structure evolution at large deformations as well as texture development are captured. This is in contrast to most other models of recrystallization which are usually limited by assumptions of one or the other of these factors. In simulation examples, the model is in the present study shown to capture the salient features of dynamic recrystallization, including the effects of varying initial grain size and strain rate on the transitions between single-peak and multiple-peak oscillating flow stress behavior. Also the development of recrystallization texture and the influence of different assumptions on orientation of recrystallization nuclei are investigated. Further, recrystallization kinetics are discussed and compared to classical JMAK theory. To promote computational efficiency, the polycrystal plasticity algorithm is parallelized through a GPU implementation that was recently proposed by the authors.

  1. Combining computational models, semantic annotations and simulation experiments in a graph database.

    PubMed

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models' structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves the access of computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/

  2. Representation of measured ejector characteristics by simple Eulerian bond graph models

    NASA Astrophysics Data System (ADS)

    Paynter, H. M.

    1985-12-01

    For some time, a purely fluidic type of pump or compressor has existed. This device possesses no solid moving parts. Such an ejector or jet-pump employs the momentum of a high velocity jet from the drive flow to entrain and pressurize a secondary suction flow stream. One application of such ejectors is in the nuclear industry. Certain accidents have drawn attention to the grossly inadequate data base for ejectors operating under extreme pathological conditions including reverse flows. In connection with these developments, extensive tests were conducted. The present paper primarily uses data obtained in these tests. Attention is given to an analysis of the test results, aspects of bond graph representation, internal Eulerian flows satisfying a condition of Eulerian similitude, a canonical model, moduli functions, a near-perfect Eulerian device, and constant and variable parameter models.

  3. Automatic detection of inundation-related change areas in TerraSAR-X data using Markov image modeling on irregular graphs

    NASA Astrophysics Data System (ADS)

    Martinis, Sandro; Twele, André

    2010-05-01

    The worldwide increasing occurrence of flooding and the short-time monitoring capability of the new generation of high resolution synthetic aperture radar (SAR) sensors (TerraSAR-X, COSMO-SkyMed) require accurate and automatic methods for the detection of flood dynamics. This is especially important for operational rapid mapping purposes where the near-real time provision of precise information about the extent of a disaster and its spatio-temporal evolution is of key importance to support decision makers and humanitarian relief organizations. A split based parametric thresholding approach under the generalized Gaussian assumption is developed on normalized change index data to automatically solve the three-class change detection problem in large-size images with small class a priori probabilities. The thresholding result is used for the initialization of a hybrid Markov model which integrates both scale-dependent and spatial context into the classification process by combining hierarchical with noncausal Markov image modeling on irregular graphs. Hierarchical Markov modeling is accomplished by hierarchical maximum a posteriori (HMAP) estimation using Markov Chains in scale. Since this method requires only one bottom-up and one top-down pass on the graph, it offers high computational performance. To reduce the computational demand of the iterative optimization process related to noncausal Markov image models, we define a partial Markov Random Field (MRF) approach, which is applied on a restricted region of the lowest level of the graph. The selection of this region is based on a confidence map generated by combining the HMAP labeling result from the different graph levels. The proposed unsupervised change detection method is applied on a bi-temporal TerraSAR-X StripMap data set (3 m pixel spacing) of a real flood event. The effectiveness of the hybrid Markov image model in comparison to the sole application of the HMAP estimation is evaluated. Additionally, the

  4. Combining computational models, semantic annotations and simulation experiments in a graph database

    PubMed Central

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves the access of computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863

  5. Conformational transitions in random heteropolymer models

    NASA Astrophysics Data System (ADS)

    Blavatska, Viktoria; Janke, Wolfhard

    2014-01-01

    We study the conformational properties of heteropolymers containing two types of monomers A and B, modeled as self-attracting self-avoiding random walks on a regular lattice. Such a model can describe in particular the sequences of hydrophobic and hydrophilic residues in proteins [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] and polyampholytes with oppositely charged groups [Y. Kantor and M. Kardar, Europhys. Lett. 28, 169 (1994)]. Treating the sequences of the two types of monomers as quenched random variables, we provide a systematic analysis of possible generalizations of this model. To this end we apply the pruned-enriched Rosenbluth chain-growth algorithm, which allows us to obtain the phase diagrams of extended and compact states coexistence as function of both the temperature and fraction of A and B monomers along the heteropolymer chain.

  6. Random walk in degree space and the time-dependent Watts-Strogatz model.

    PubMed

    Casa Grande, H L; Cotacallapa, M; Hase, M O

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact in some regimes.
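
    A direct simulation makes the time dependence of the degree distribution easy to explore. The toy update rule below (each node independently initiates one new edge per step with probability p, to a uniformly chosen partner) is an assumption standing in for the dynamical Erdős-Rényi update studied in the paper; under that rule the mean degree grows as roughly 2pt.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, p, T = 2000, 0.01, 200        # nodes, per-step attachment probability, time steps
    degree = np.zeros(N, dtype=int)

    for _ in range(T):
        initiators = np.flatnonzero(rng.random(N) < p)       # nodes adding one edge this step
        partners = rng.integers(0, N, size=initiators.size)  # uniformly chosen partners
        degree[initiators] += 1      # rare self-loops/multi-edges are tolerated in this toy rule
        np.add.at(degree, partners, 1)

    print("empirical mean degree:", degree.mean())
    print("expected under the assumed rule (about 2*p*T):", 2 * p * T)
    ```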

  7. Random walk in degree space and the time-dependent Watts-Strogatz model

    NASA Astrophysics Data System (ADS)

    Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact in some regimes.

  8. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^−3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533
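
    As a back-of-the-envelope check of how an SFER translates into an MTTF, the snippet below multiplies the reported error rate per unit fluence by a hypothetical on-orbit flux; the flux value is made up (chosen so the result lands near the reported ~110 h) and is not taken from the paper.

    ```python
    # only the SFER value comes from the abstract above; the flux is illustrative
    sfer = 1e-3          # errors per unit fluence, i.e. errors per (particle/cm^2)
    flux = 2.5e-3        # assumed particle flux in particles/(cm^2 * s)

    error_rate = sfer * flux                  # errors per second
    mttf_hours = 1.0 / error_rate / 3600.0    # mean time to failure in hours
    print(f"error rate = {error_rate:.2e} errors/s, MTTF = {mttf_hours:.1f} h")
    ```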

  9. Exploring and Making Sense of Large Graphs

    DTIC Science & Technology

    2015-08-01

    our fast algorithmic methodologies, we also contribute graph-theoretical ideas and models, and real-world applications in two main areas. Single-Graph Exploration: we show how to interpretably summarize a single graph by identifying its important graph structures. We complement summarization with ... effectively learn information about the unknown entities. Multiple-Graph Exploration: we extend the idea of single-graph summarization to time

  10. Optimized Graph Search Using Multi-Level Graph Clustering

    NASA Astrophysics Data System (ADS)

    Kala, Rahul; Shukla, Anupam; Tiwari, Ritu

    Graphs find a variety of uses in numerous domains, especially because of their capability to model common problems. The social networking graphs used for social network analysis, a feature offered by various social networking sites, are one example. Graphs can also be used in search engines to carry out search operations and provide results. Various algorithms have been developed for searching in graphs. In this paper we propose that the entire network graph be clustered. The larger graphs are clustered to make smaller graphs, which can again be clustered to further reduce the size of the graph. The search is performed on the smallest graph to identify a general path, which may then be expanded to actual nodes by working on the individual clusters involved. Since many searches are carried out on the same graph, clustering may be done once and the result reused for multiple searches over time. Only if the graph changes considerably do we need to re-cluster it.
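
    The coarse-then-refine search strategy can be illustrated with standard library tools. The sketch below uses NetworkX community detection and a quotient graph as a stand-in for the paper's multi-level clustering, so it demonstrates the idea rather than the authors' exact algorithm; it assumes the detected clusters are internally connected.

    ```python
    import networkx as nx
    from networkx.algorithms import community

    G = nx.connected_watts_strogatz_graph(400, 6, 0.1, seed=0)    # stand-in network
    blocks = list(community.greedy_modularity_communities(G))     # one clustering level
    coarse = nx.quotient_graph(G, blocks)                         # cluster-level graph

    def clustered_search(G, coarse, blocks, src, dst):
        """Shortest-path search performed first on the cluster graph, then refined
        inside the union of the clusters touched by the coarse path."""
        block_of = {v: frozenset(b) for b in blocks for v in b}
        coarse_path = nx.shortest_path(coarse, block_of[src], block_of[dst])
        allowed = set().union(*coarse_path)      # nodes of all clusters on the coarse path
        return nx.shortest_path(G.subgraph(allowed), src, dst)

    path = clustered_search(G, coarse, blocks, 0, 399)
    print(len(path) - 1, "hops on the refined path")
    ```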

  11. Graph ensemble boosting for imbalanced noisy graph stream classification.

    PubMed

    Pan, Shirui; Wu, Jia; Zhu, Xingquan; Zhang, Chengqi

    2015-05-01

    Many applications involve stream data with structural dependency, graph representations, and continuously increasing volumes. For these applications, it is very common that their class distributions are imbalanced with minority (or positive) samples being only a small portion of the population, which imposes significant challenges for learning models to accurately identify minority samples. This problem is further complicated with the presence of noise, because noisy samples are similar to minority samples and any treatment for the class imbalance may falsely focus on the noise and result in deterioration of accuracy. In this paper, we propose a classification model to tackle imbalanced graph streams with noise. Our method, graph ensemble boosting, employs an ensemble-based framework to partition the graph stream into chunks, each containing a number of noisy graphs with imbalanced class distributions. For each individual chunk, we propose a boosting algorithm to combine discriminative subgraph pattern selection and model learning as a unified framework for graph classification. To tackle concept drifting in graph streams, an instance-level weighting mechanism is used to dynamically adjust the instance weight, through which the boosting framework can emphasize difficult graph samples. The classifiers built from different graph chunks form an ensemble for graph stream classification. Experiments on real-life imbalanced graph streams demonstrate clear benefits of our boosting design for handling imbalanced noisy graph streams.

  12. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
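
    A common closed form for a random-walk kernel between two graphs sums walks of all lengths geometrically, k(A1, A2) = 1^T (I - lam*A_x)^{-1} 1, where A_x is the Kronecker (direct-product) adjacency matrix and lam is small enough for convergence. The snippet implements that textbook form; the GRAPE variant may differ in details such as edge labelling and normalization.

    ```python
    import numpy as np

    def random_walk_kernel(A1, A2, lam=0.05):
        """Geometric random-walk kernel between two graphs given as adjacency
        matrices: sum over n of lam^n times the number of common walks of length n."""
        Ax = np.kron(A1, A2)                 # direct-product (Kronecker) adjacency
        n = Ax.shape[0]
        ones = np.ones(n)
        # solve (I - lam*Ax) x = 1, then kernel = 1^T x; needs lam < 1/spectral_radius(Ax)
        x = np.linalg.solve(np.eye(n) - lam * Ax, ones)
        return ones @ x

    # toy local environments: a triangle and a path graph on three atoms
    A_tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    print(random_walk_kernel(A_tri, A_tri), random_walk_kernel(A_tri, A_path))
    ```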

  13. Figure-Ground Segmentation Using Factor Graphs.

    PubMed

    Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr

    2009-06-04

    Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach.

  14. Temporal Representation in Semantic Graphs

    SciTech Connect

    Levandoski, J J; Abdulla, G M

    2007-08-07

    A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning temporal evolution of concepts and relationships are not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.

  15. Assortativity of complementary graphs

    NASA Astrophysics Data System (ADS)

    Wang, H.; Winterbach, W.; van Mieghem, P.

    2011-09-01

    Newman's measure for (dis)assortativity, the linear degree correlation ρD, is widely studied although analytic insight into the assortativity of an arbitrary network remains far from well understood. In this paper, we derive the general relation (2), (3) and Theorem 1 between the assortativity ρD(G) of a graph G and the assortativity ρD(Gc) of its complement Gc. Both ρD(G) and ρD(Gc) are linearly related by the degree distribution in G. When the graph G(N,p) possesses a binomial degree distribution as in the Erdős-Rényi random graphs Gp(N), its complementary graph Gp^c(N) = G1-p(N) follows a binomial degree distribution as in the Erdős-Rényi random graphs G1-p(N). We prove that the maximum and minimum assortativity of a class of graphs with a binomial distribution are asymptotically antisymmetric: ρmax(N,p) = -ρmin(N,p) for N → ∞. The general relation (3) nicely leads to (a) the relation (10) and (16) between the assortativity range ρmax(G)-ρmin(G) of a graph with a given degree distribution and the range ρmax(Gc)-ρmin(Gc) of its complementary graph and (b) new bounds (6) and (15) of the assortativity. These results together with our numerical experiments in over 30 real-world complex networks illustrate that the assortativity range ρmax-ρmin is generally large in sparse networks, which underlines the importance of assortativity as a network characterizer.
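
    The relationship between the assortativity of a graph and that of its complement is easy to probe numerically; below is a minimal NetworkX check on an Erdős-Rényi graph (the closed-form relations referred to above are in the paper and not reproduced here).

    ```python
    import networkx as nx

    G = nx.gnp_random_graph(500, 0.3, seed=42)
    Gc = nx.complement(G)

    print("rho_D(G)  =", round(nx.degree_assortativity_coefficient(G), 4))
    print("rho_D(Gc) =", round(nx.degree_assortativity_coefficient(Gc), 4))
    # for G(N, p) both values are close to zero, and the complement of G(N, p)
    # is again an Erdos-Renyi graph G(N, 1 - p), consistent with the text above
    ```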

  16. Random Walks in Model Brain Tissue

    NASA Astrophysics Data System (ADS)

    Grinberg, Farida; Farrher, Ezequiel; Oros-Peusquens, Ana-Maria; Shah, N. Jon

    2011-03-01

    The propagation of water molecules in the brain and the corresponding NMR response are affected by many factors such as compartmentalization, restrictions and anisotropy imposed by the cellular microstructure. Interfacial interactions with cell membranes and exchange additionally come into play. Due to the complexity of the underlying factors, a differentiation between the various contributions to the average NMR signal in in vivo studies represents a difficult task. In this work we perform random-walk Monte Carlo simulations in well-defined model systems aiming at establishing quantitative relations between dynamics and microstructure. The results are compared with experimental data obtained for artificial anisotropic model systems.

  17. Effect of random field disorder on the first order transition in p-spin interaction model

    NASA Astrophysics Data System (ADS)

    Sumedha; Singh, Sushant K.

    2016-01-01

    We study the random field p-spin model with Ising spins on a fully connected graph using the theory of large deviations in this paper. This is a good model to study the effect of quenched random field on systems which have a sharp first order transition in the pure state. For p = 2, the phase-diagram of the model, for bimodal distribution of the random field, has been well studied and is known to undergo a continuous transition for lower values of the random field (h) and a first order transition beyond a threshold, h_tp (≈ 0.439). We find the phase diagram of the model, for all p ≥ 2, with bimodal random field distribution, using large deviation techniques. We also look at the fluctuations in the system by calculating the magnetic susceptibility. For p = 2, beyond the tricritical point in the regime of first order transition, we find that for h_tp < h < 0.447, magnetic susceptibility increases rapidly (even though it never diverges) as one approaches the transition from the high temperature side. On the other hand, for 0.447 < h ≤ 0.5, the high temperature behaviour is well described by the Curie-Weiss law. For all p ≥ 2, we find that for larger magnitudes of the random field (h > h_o = 1/p!), the system does not show ferromagnetic order even at zero temperature. We find that the magnetic susceptibility for p ≥ 3 is discontinuous at the transition point for h

  18. Learning graph matching.

    PubMed

    Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J

    2009-06-01

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
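
    The simplest member of the family discussed here, linear assignment with a node-compatibility function, takes only a few lines; the feature construction and the weight vector below are placeholders rather than the compatibility function actually learned in the paper.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)

    # node features for two graphs to be matched (placeholder descriptors)
    X1 = rng.normal(size=(6, 4))                                   # 6 nodes, 4-d features
    X2 = X1[[2, 0, 5, 1, 4, 3]] + 0.05 * rng.normal(size=(6, 4))   # permuted, noisy copy

    w = np.ones(4)                       # stand-in for a learned feature weighting

    # node compatibility = negative weighted squared distance; assignment maximizes it
    diff = X1[:, None, :] - X2[None, :, :]
    compat = -(diff ** 2 @ w)            # shape (6, 6)
    rows, cols = linear_sum_assignment(-compat)   # SciPy minimizes, so negate
    print("recovered matching:", list(zip(rows.tolist(), cols.tolist())))
    ```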

  19. Global dynamic modeling of electro-hydraulic 3-UPS/S parallel stabilized platform by bond graph

    NASA Astrophysics Data System (ADS)

    Zhang, Lijie; Guo, Fei; Li, Yongquan; Lu, Wenjuan

    2016-08-01

    Dynamic modeling of a parallel manipulator (PM) is an important issue. A complete PM system is actually composed of multiple physical domains. As PMs are widely used in various fields, the importance of modeling the global dynamic model of the PM system becomes increasingly prominent. Currently, research on global dynamic modeling is still lacking. A unified modeling approach for the multi-energy domains PM system is proposed based on bond graph and a global dynamic model of the 3-UPS/S parallel stabilized platform involving mechanical and electrical-hydraulic elements is built. Firstly, the screw bond graph theory is improved based on the screw theory, the modular joint model is modeled and the normalized dynamic model of the mechanism is established. Secondly, combined with the electro-hydraulic servo system model built by traditional bond graph, the global dynamic model of the system is obtained, and then the motion, force and power of any element can be obtained directly. Lastly, the experiments and simulations of the driving forces, pressure and flow are performed, and the results show that the theoretical driving forces are in accord with the experimental ones, and that the pressure and flow of the first and third limbs are symmetric with each other. The results are reasonable and verify the correctness and effectiveness of the model and the method. The proposed dynamic modeling method provides a reference for modeling other multi-energy-domain systems containing complex PMs.

  20. Highlighting the structure-function relationship of the brain with the Ising model and graph theory.

    PubMed

    Das, T K; Abeyasinghe, P M; Crone, J S; Sosnowski, A; Laureys, S; Owen, A M; Soddu, A

    2014-01-01

    With the advent of neuroimaging techniques, it has become feasible to explore the structure-function relationships in the brain. When the brain is not involved in any cognitive task or stimulated by any external output, it preserves important activities which follow well-defined spatial distribution patterns. To understand the self-organization of the brain from its anatomical structure, it has recently been suggested to model the observed functional pattern from the structure of white matter fiber bundles. Different models which study synchronization (e.g., the Kuramoto model) or global dynamics (e.g., the Ising model) have shown success in capturing fundamental properties of the brain. In particular, these models can explain the competition between modularity and specialization and the need for integration in the brain. Graphing the functional and structural brain organization supports the model and can also highlight the strategy used to process and organize the large amounts of information traveling between the different modules. Studying how the flow of information can be prevented or partially destroyed in pathological states, as in severely brain-injured patients with disorders of consciousness or by pharmacological induction as in anaesthesia, will also help us to better understand how global or integrated behavior can emerge from local and modular interactions.

  1. Cascades on clique-based graphs.

    PubMed

    Hackett, Adam; Gleeson, James P

    2013-06-01

    We present an analytical approach to determining the expected cascade size in a broad range of dynamical models on the class of highly clustered random graphs introduced by Gleeson [J. P. Gleeson, Phys. Rev. E 80, 036107 (2009)]. A condition for the existence of global cascades is also derived. Applications of this approach include analyses of percolation, and Watts's model. We show how our techniques can be used to study the effects of in-group bias in cascades on social networks.
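
    A quick way to build intuition for cascade sizes on clustered networks is direct simulation of the Watts threshold model; the sketch below uses NetworkX's powerlaw_cluster_graph as a convenient highly clustered substrate rather than the clique-based ensemble analyzed in the paper, and it simulates the dynamics rather than applying the analytical approach.

    ```python
    import random
    import networkx as nx

    def watts_cascade(G, phi=0.18, seed_fraction=0.01, seed=0):
        """Watts threshold model: a node activates once the active fraction of
        its neighbours reaches the threshold phi; returns the final cascade size."""
        rng = random.Random(seed)
        n = G.number_of_nodes()
        active = set(rng.sample(list(G.nodes()), max(1, int(seed_fraction * n))))
        changed = True
        while changed:
            changed = False
            for v in G.nodes():
                if v in active:
                    continue
                nbrs = list(G[v])
                if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= phi:
                    active.add(v)
                    changed = True
        return len(active) / n

    G = nx.powerlaw_cluster_graph(2000, 3, 0.7, seed=1)   # highly clustered random graph
    print("final cascade size (fraction of active nodes):", watts_cascade(G))
    ```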

  2. Model reduction for stochastic CaMKII reaction kinetics in synapses by graph-constrained correlation dynamics.

    PubMed

    Johnson, Todd; Bartol, Tom; Sejnowski, Terrence; Mjolsness, Eric

    2015-06-18

    A stochastic reaction network model of Ca(2+) dynamics in synapses (Pepke et al PLoS Comput. Biol. 6 e1000675) is expressed and simulated using rule-based reaction modeling notation in dynamical grammars and in MCell. The model tracks the response of calmodulin and CaMKII to calcium influx in synapses. Data from numerically intensive simulations is used to train a reduced model that, out of sample, correctly predicts the evolution of interaction parameters characterizing the instantaneous probability distribution over molecular states in the much larger fine-scale models. The novel model reduction method, 'graph-constrained correlation dynamics', requires a graph of plausible state variables and interactions as input. It parametrically optimizes a set of constant coefficients appearing in differential equations governing the time-varying interaction parameters that determine all correlations between variables in the reduced model at any time slice.

  3. Graph and circuit theory connectivity models of conservation biological control agents.

    PubMed

    Koh, Insu; Rowe, Helen I; Holland, Jeffrey D

    2013-10-01

    The control of agricultural pests is an important ecosystem service provided by predacious insects. In Midwestern USA, areas of remnant tallgrass prairie and prairie restorations may serve as relatively undisturbed sources of natural predators, and smaller areas of non-crop habitats such as seminatural areas and conservation plantings (CP) may serve as stepping stones across landscapes dominated by intensive agriculture. However, little is known about the flow of beneficial insects across large habitat networks. We measured abundance of soybean aphids and predators in 15 CP and adjacent soybean fields. We tested two hypotheses: (1) landscape connectivity enhances the flow of beneficial insects; and (2) prairies act as a source of sustaining populations of beneficial insects in well-connected habitats, by using adaptations of graph and circuit theory, respectively. For graph connectivity, incoming fluxes to the 15 CP from connected habitats were measured using an area- and distance-weighted flux metric with a range of negative exponential dispersal kernels. Distance was weighted by the percentage of seminatural area within ellipse-shaped landscapes, the shape of which was determined with correlated random walks. For circuit connectivity, effective conductance from the prairie to the individual 15 CP was measured by regarding the flux as conductance in a circuit. We used these two connectivity measures to predict the abundance of natural enemies in the selected sites. The most abundant predators were Anthocoridae, followed by exotic Coccinellidae, and native Coccinellidae. Predator abundances were explained well by aphid abundance. However, only native Coccinellidae were influenced by the flux and conductance. Interestingly, exotic Coccinellidae were negatively related to the flux, and native Coccinellidae were highly influenced by the interaction between exotic Coccinellidae and aphids. Our area- and distance-weighted flux and the conductance variables showed better

  4. A Systematic Composite Service Design Modeling Method Using Graph-Based Theory

    PubMed Central

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards the design quality measurement such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems. PMID:25928358

  5. A systematic composite service design modeling method using graph-based theory.

    PubMed

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards the design quality measurement such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.

  6. Neurally and ocularly informed graph-based models for searching 3D environments

    NASA Astrophysics Data System (ADS)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.

  7. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
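
    The two basic computations in this framework, the steady-state distribution of a Markov transition matrix and the entropy of that distribution, are straightforward; the transition matrix below is a small made-up example, not one of the autopsy-derived matrices.

    ```python
    import numpy as np

    # made-up 4-site transition matrix (rows sum to 1); in the real models the
    # sites would be anatomical locations where a metastatic tumor could develop
    P = np.array([[0.5, 0.2, 0.2, 0.1],
                  [0.1, 0.6, 0.2, 0.1],
                  [0.2, 0.2, 0.5, 0.1],
                  [0.1, 0.3, 0.2, 0.4]])

    # steady state: left eigenvector of P with eigenvalue 1, normalized to sum to 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()

    entropy = -np.sum(pi * np.log(pi))   # Shannon entropy of the stationary distribution
    print("steady state:", np.round(pi, 3), " entropy:", round(float(entropy), 3))
    ```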

  8. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  9. Robust deformable and occluded object tracking with dynamic graph.

    PubMed

    Cai, Zhaowei; Wen, Longyin; Lei, Zhen; Vasconcelos, Nuno; Li, Stan Z

    2014-12-01

    While some efforts have been paid to handle deformation and occlusion in visual tracking, they are still great challenges. In this paper, a dynamic graph-based tracker (DGT) is proposed to address these two challenges in a unified framework. In the dynamic target graph, nodes are the target local parts encoding appearance information, and edges are the interactions between nodes encoding inner geometric structure information. This graph representation provides much more information for tracking in the presence of deformation and occlusion. The target tracking is then formulated as tracking this dynamic undirected graph, which is also a matching problem between the target graph and the candidate graph. The local parts within the candidate graph are separated from the background with Markov random field, and spectral clustering is used to solve the graph matching. The final target state is determined through a weighted voting procedure according to the reliability of part correspondence, and refined with recourse to a foreground/background segmentation. An effective online updating mechanism is proposed to update the model, allowing DGT to robustly adapt to variations of target structure. Experimental results show improved performance over several state-of-the-art trackers, in various challenging scenarios.

  10. Graphing Predictions

    ERIC Educational Resources Information Center

    Connery, Keely Flynn

    2007-01-01

    Graphing predictions is especially important in classes where relationships between variables need to be explored and derived. In this article, the author describes how his students sketch the graphs of their predictions before they begin their investigations on two laboratory activities: Distance Versus Time Cart Race Lab and Resistance; and…

  11. A mathematical model for generating bipartite graphs and its application to protein networks

    NASA Astrophysics Data System (ADS)

    Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.

    2009-12-01

    Complex systems arise in many different contexts from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) growth process and (2) copy mechanism.

  12. Topological structure of dictionary graphs

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk; Krzemiński, Mark

    2009-09-01

    We investigate the topological structure of the subgraphs of dictionary graphs constructed from WordNet and Moby thesaurus data. In the process of learning a foreign language, the learner knows only a subset of all words of the language, corresponding to a subgraph of a dictionary graph. When this subgraph grows with time, its topological properties change. We introduce the notion of the pseudocore and argue that the growth of the vocabulary roughly follows decreasing pseudocore numbers—that is, one first learns words with a high pseudocore number followed by smaller pseudocores. We also propose an alternative strategy for vocabulary growth, involving decreasing core numbers as opposed to pseudocore numbers. We find that as the core or pseudocore grows in size, the clustering coefficient first decreases, then reaches a minimum and starts increasing again. The minimum occurs when the vocabulary reaches a size between 10^3 and 10^4. A simple model exhibiting similar behavior is proposed. The model is based on a generalized geometric random graph. Possible implications for language learning are discussed.

  13. Reducing RANS Model Error Using Random Forest

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
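
    The machine-learning step described here, regressing Reynolds-stress discrepancies on mean-flow features, can be sketched with scikit-learn; the features and targets below are synthetic placeholders standing in for the Galilean-invariant mean-flow features and discrepancy components used in the study.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # synthetic stand-ins: rows = mesh cells, columns = invariant mean-flow features
    X_train = rng.normal(size=(5000, 10))
    # synthetic discrepancy target (one Reynolds-stress component's discrepancy)
    y_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2 + 0.1 * rng.normal(size=5000)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    X_new = rng.normal(size=(100, 10))       # features extracted from a new flow
    dtau_pred = model.predict(X_new)         # predicted Reynolds-stress discrepancy
    # the predicted discrepancies would then be added to the baseline RANS stresses
    # and propagated through the RANS equations to obtain a corrected velocity field
    ```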

  14. Interactive graph-cut segmentation for fast creation of finite element models from clinical ct data for hip fracture prediction.

    PubMed

    Pauchard, Yves; Fitze, Thomas; Browarnik, Diego; Eskandari, Amiraslan; Pauchard, Irene; Enns-Bray, William; Pálsson, Halldór; Sigurdsson, Sigurdur; Ferguson, Stephen J; Harris, Tamara B; Gudnason, Vilmundur; Helgason, Benedikt

    2016-12-01

    In this study, we propose interactive graph cut image segmentation for fast creation of femur finite element (FE) models from clinical computed tomography scans for hip fracture prediction. Using a sample of N = 48 bone scans representing normal, osteopenic and osteoporotic subjects, the proximal femur was segmented using manual (gold standard) and graph cut segmentation. Segmentations were subsequently used to generate FE models to calculate overall stiffness and peak force in a sideways fall simulations. Results show that, comparable FE results can be obtained with the graph cut method, with a reduction from 20 to 2-5 min interaction time. Average differences between segmentation methods of 0.22 mm were not significantly correlated with differences in FE derived stiffness (R(2) = 0.08, p = 0.05) and weakly correlated to differences in FE derived peak force (R(2) = 0.16, p = 0.01). We further found that changes in automatically assigned boundary conditions as a consequence of small segmentation differences were significantly correlated with FE derived results. The proposed interactive graph cut segmentation software MITK-GEM is freely available online at https://simtk.org/home/mitk-gem .

  15. Kinetic Models with Randomly Perturbed Binary Collisions

    NASA Astrophysics Data System (ADS)

    Bassetti, Federico; Ladelli, Lucia; Toscani, Giuseppe

    2011-02-01

    We introduce a class of Kac-like kinetic equations on the real line, with general random collisional rules which, in some special cases, identify models for granular gases with a background heat bath (Carrillo et al. in Discrete Contin. Dyn. Syst. 24(1):59-81, 2009), and models for wealth redistribution in an agent-based market (Bisi et al. in Commun. Math. Sci. 7:901-916, 2009). Conditions on these collisional rules which guarantee both the existence and uniqueness of equilibrium profiles and their main properties are found. The characterization of these stationary states is of independent interest, since we show that they are stationary solutions of different evolution problems, both in the kinetic theory of rarefied gases (Cercignani et al. in J. Stat. Phys. 105:337-352, 2001; Villani in J. Stat. Phys. 124:781-822, 2006) and in the econophysical context (Bisi et al. in Commun. Math. Sci. 7:901-916, 2009).

  16. Scenario Graphs and Attack Graphs

    DTIC Science & Technology

    2004-04-14

    Contents include 6.1 Vulnerability Analysis of a Network and 6.2 Sandia Red Team Attack Graph ... asymptotic bound. The test machine was a 1 GHz Pentium III with 1 GB of RAM, running Red Hat Linux 7.3. Figure 4.1(a) plots running time of the implementation ... host scanning tools, network information, vulnerability, attack graph, network, Red Team

  17. Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph

    DTIC Science & Technology

    2014-07-01

    the generating-function form of the Markov propagator of the random walk. This can be easily generalized to other models simply by specifying the ... detailed information about the propagator than the bound on consensus. Conclusions: we have successfully derived exact solutions to the voter

  18. A random effects epidemic-type aftershock sequence model.

    PubMed

    Lin, Feng-Chang

    2011-04-01

    We consider an extension of the temporal epidemic-type aftershock sequence (ETAS) model with random effects as a special case of a well-known doubly stochastic self-exciting point process. The new model arises from a deterministic function that is randomly scaled by a nonnegative random variable, which is unobservable but assumed to follow either positive stable or one-parameter gamma distribution with unit mean. Both random effects models are of interest although the one-parameter gamma random effects model is more popular when modeling associated survival times. Our estimation is based on the maximum likelihood approach with marginalized intensity. The methods are shown to perform well in simulation experiments. When applied to an earthquake sequence on the east coast of Taiwan, the extended model with positive stable random effects provides a better model fit, compared to the original ETAS model and the extended model with one-parameter gamma random effects.
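
    For reference, the temporal ETAS conditional intensity underlying this extension has the standard form lambda(t) = mu + sum over past events of K * exp(alpha*(m_i - m0)) / (t - t_i + c)^p, and the random-effects version scales this deterministic intensity by an unobserved nonnegative frailty with unit mean. The snippet evaluates the baseline intensity with illustrative, not fitted, parameter values.

    ```python
    import numpy as np

    def etas_intensity(t, times, mags, mu=0.2, K=0.05, alpha=1.2, c=0.01, p=1.1, m0=3.0):
        """Temporal ETAS conditional intensity at time t given past event times and
        magnitudes; parameter values are illustrative, not fitted to any catalogue."""
        past = times < t
        trig = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
        return mu + trig.sum()

    times = np.array([1.0, 1.5, 4.0])    # toy event times (days)
    mags = np.array([5.1, 4.2, 4.8])     # toy magnitudes
    print(etas_intensity(5.0, times, mags))
    # the random-effects extension multiplies this deterministic intensity by a
    # positive-stable or gamma-distributed frailty with unit mean
    ```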

  19. Graph Theory

    SciTech Connect

    Sanfilippo, Antonio P.

    2005-12-27

    Graph theory is a branch of discrete combinatorial mathematics that studies the properties of graphs. The theory was pioneered by the Swiss mathematician Leonhard Euler in the 18th century, commenced its formal development during the second half of the 19th century, and has witnessed substantial growth during the last seventy years, with applications in areas as diverse as engineering, computer science, physics, sociology, chemistry and biology. Graph theory has also had a strong impact in computational linguistics by providing the foundations for the theory of feature structures that has emerged as one of the most widely used frameworks for the representation of grammar formalisms.

  20. Evolutionary Dynamics on Degree-Heterogeneous Graphs

    NASA Astrophysics Data System (ADS)

    Antal, T.; Redner, S.; Sood, V.

    2006-05-01

    The evolution of two species with different fitness is investigated on degree-heterogeneous graphs. The population evolves either by one individual dying and being replaced by the offspring of a random neighbor (voter model dynamics) or by an individual giving birth to an offspring that takes over a random neighbor node (invasion process dynamics). The fixation probability for one species to take over a population of N individuals depends crucially on the dynamics and on the local environment. Starting with a single fitter mutant at a node of degree k, the fixation probability is proportional to k for voter model dynamics and to 1/k for invasion process dynamics.
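
    The degree dependence of the fixation probability is easy to check with a Monte Carlo simulation of voter-model dynamics on a star graph, whose hub and leaves have very different degrees. The sketch below uses a neutral mutant (no fitness advantage) and omits the invasion process to keep the example short, so it illustrates the setting rather than reproducing the paper's results for fitter mutants.

    ```python
    import random
    import networkx as nx

    def voter_fixation(G, start, trials=2000, seed=0):
        """Estimate the probability that a single (neutral) mutant placed at `start`
        takes over the graph under voter-model dynamics: a random node dies and is
        replaced by the offspring of a uniformly chosen neighbour."""
        rng = random.Random(seed)
        nodes = list(G.nodes())
        wins = 0
        for _ in range(trials):
            state = {v: 0 for v in nodes}
            state[start] = 1
            mutants = 1
            while 0 < mutants < len(nodes):
                v = rng.choice(nodes)             # individual that dies
                u = rng.choice(list(G[v]))        # neighbour providing the offspring
                mutants += state[u] - state[v]
                state[v] = state[u]
            wins += mutants == len(nodes)
        return wins / trials

    G = nx.star_graph(10)            # node 0 is the hub (degree 10), the rest are leaves
    print("fixation from hub :", voter_fixation(G, 0))
    print("fixation from leaf:", voter_fixation(G, 1))
    ```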

  1. Linear Wegner estimate for alloy-type Schrödinger operators on metric graphs

    NASA Astrophysics Data System (ADS)

    Helm, Mario; Veselić, Ivan

    2007-09-01

    We study spectra of alloy-type random Schrödinger operators on metric graphs. For finite edge subsets we prove a Wegner estimate which is linear in the volume (i.e., the total length of the edges) and the length of the energy interval. The single site potential needs to have fixed sign; the metric graph does not need to have a periodic structure. A further result is the existence of the integrated density of states for ergodic random Hamiltonians on metric graphs with a Z^ν structure. For certain models the two above results together imply the Lipschitz continuity of the integrated density of states.

  2. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to pattern's specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on the graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.

  3. A Study towards Building An Optimal Graph Theory Based Model For The Design of Tourism Website

    NASA Astrophysics Data System (ADS)

    Panigrahi, Goutam; Das, Anirban; Basu, Kajla

    2010-10-01

    An effective tourism website is key to attracting tourists from different parts of the world. Here we identify factors that improve the effectiveness of a website by considering it as a graph, where web pages, including the homepage, are the nodes and hyperlinks are the edges between the nodes. In this model, the design constraints for building a tourism website are taken into consideration. Our objectives are to build a framework for an effective tourism website that provides an adequate level of information and service, and to enable users to reach the desired page with minimal loading time. An information hierarchy specifying the upper limit on outgoing links per page is also proposed. Following this hierarchy, a web developer can prepare an effective tourism website. Loading time depends on page size and network traffic; we assume uniform network traffic, so loading time is directly proportional to page size. The approach quantifies the link structure of a tourism website, and we also propose a page-size distribution pattern for a tourism website.

  4. Handling Correlations between Covariates and Random Slopes in Multilevel Models

    ERIC Educational Resources Information Center

    Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders

    2014-01-01

    This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…

  5. Medical image segmentation by combining graph cuts and oriented active appearance models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua

    2012-04-01

    In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) The overall segmentation accuracy of true positive volume fraction TPVF > 94.3% and false positive volume fraction can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithm. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.

  6. Random matrix model of adiabatic quantum computing

    SciTech Connect

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-05-15

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.

  7. Prediction of Growth Factor-Dependent Cleft Formation During Branching Morphogenesis Using A Dynamic Graph-Based Growth Model.

    PubMed

    Dhulekar, Nimit; Ray, Shayoni; Yuan, Daniel; Baskaran, Abhirami; Oztan, Basak; Larsen, Melinda; Yener, Bulent

    2016-01-01

    This study considers the problem of describing and predicting cleft formation during the early stages of branching morphogenesis in mouse submandibular salivary glands (SMG) under the influence of varied concentrations of epidermal growth factors (EGF). Given a time-lapse video of a growing SMG, first we build a descriptive model that captures the underlying biological process and quantifies the ground truth. Tissue-scale (global) and morphological features related to regions of interest (local features) are used to characterize the biological ground truth. Second, we devise a predictive growth model that simulates EGF-modulated branching morphogenesis using a dynamic graph algorithm, which is driven by biological parameters such as EGF concentration, mitosis rate, and cleft progression rate. Given the initial configuration of the SMG, the evolution of the dynamic graph predicts the cleft formation, while maintaining the local structural characteristics of the SMG. We determined that higher EGF concentrations cause the formation of higher number of buds and comparatively shallow cleft depths. Third, we compared the prediction accuracy of our model to the Glazier-Graner-Hogeweg (GGH) model, an on-lattice Monte-Carlo simulation model, under a specific energy function parameter set that allows new rounds of de novo cleft formation. The results demonstrate that the dynamic graph model yields comparable simulations of gland growth to that of the GGH model with a significantly lower computational complexity. Fourth, we enhanced this model to predict the SMG morphology for an EGF concentration without the assistance of a ground truth time-lapse biological video data; this is a substantial benefit of our model over other similar models that are guided and terminated by information regarding the final SMG morphology. Hence, our model is suitable for testing the impact of different biological parameters involved with the process of branching morphogenesis in silico, while

  8. Prediction of Growth Factor Dependent Cleft Formation During Branching Morphogenesis Using A Dynamic Graph-Based Growth Model

    PubMed Central

    Dhulekar, Nimit; Ray, Shayoni; Yuan, Daniel; Baskaran, Abhirami; Oztan, Basak; Larsen, Melinda; Yener, Bülent

    2016-01-01

    This study considers the problem of describing and predicting cleft formation during the early stages of branching morphogenesis in mouse submandibular salivary glands (SMG) under the influence of varied concentrations of epidermal growth factors (EGF). Given a time-lapse video of a growing SMG, first we build a descriptive model that captures the underlying biological process and quantifies the ground truth. Tissue-scale (global) and morphological features related to regions of interest (local features) are used to characterize the biological ground truth. Second, we devise a predictive growth model that simulates EGF-modulated branching morphogenesis using a dynamic graph algorithm, which is driven by biological parameters such as EGF concentration, mitosis rate, and cleft progression rate. Given the initial configuration of the SMG, the evolution of the dynamic graph predicts the cleft formation, while maintaining the local structural characteristics of the SMG. We determined that higher EGF concentrations cause the formation of higher number of buds and comparatively shallow cleft depths. Third, we compared the prediction accuracy of our model to the Glazier-Graner-Hogeweg (GGH) model, an on-lattice Monte-Carlo simulation model, under a specific energy function parameter set that allows new rounds of de novo cleft formation. The results demonstrate that the dynamic graph model yields comparable simulations of gland growth to that of the GGH model with a significantly lower computational complexity. Fourth, we enhanced this model to predict the SMG morphology for an EGF concentration without the assistance of a ground truth time-lapse biological video data; this is a substantial benefit of our model over other similar models that are guided and terminated by information regarding the final SMG morphology. Hence, our model is suitable for testing the impact of different biological parameters involved with the process of branching morphogenesis in silico, while

  9. Graph Theory of Tower Tasks

    PubMed Central

    Hinz, Andreas M.

    2012-01-01

    The appropriate mathematical model for the problem space of tower transformation tasks is the state graph representing positions of discs or balls and their moves. Graph theoretical quantities like distance, eccentricities or degrees of vertices and symmetries of graphs support the choice of problems, the selection of tasks and the analysis of performance of subjects whose solution paths can be projected onto the graph. The mathematical model is also at the base of a computerized test tool to administer various types of tower tasks. PMID:22207419

  10. Test of Graphing and Graph Interpretation Skills.

    ERIC Educational Resources Information Center

    Hermann, J.

    This monograph is a test of graphing and graph interpretation skills which assesses performance on all the skills of graphing which are contained in the AAAS program, Science - A Process Approach. The testing includes construction of bar graphs, interpreting information on graphs, the use of the Cartesian coordinate system, making predictions from…

  11. Random-walk model of homologous recombination

    NASA Astrophysics Data System (ADS)

    Fujitani, Youhei; Kobayashi, Ichizo

    1995-12-01

    Interaction between two homologous (i.e., identical or nearly identical) DNA sequences leads to their homologous recombination in the cell. We present the following stochastic model to explain the dependence of the frequency of homologous recombination on the length of the homologous region. The branch point connecting the two DNAs in a reaction intermediate follows a random-walk process along the homology (N base pairs). If the branch point reaches either of the homology ends, it bounces back into the homologous region with probability γ (reflection coefficient) and is destroyed with probability 1-γ. When γ is small, the frequency of homologous recombination is found to be proportional to N^3 for smaller N and a linear function of N for larger N. The exponent of the nonlinear dependence for smaller N decreases from three as γ increases. When γ=1, only the linear dependence is left. These theoretical results can explain many experimental data in various systems. (c) 1995 The American Physical Society
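
    A minimal Monte Carlo sketch of the branch-point dynamics described above: a symmetric random walk on N sites that, on reaching either end of the homology, is reflected with probability gamma and destroyed otherwise. The starting position, trial count, and the use of mean lifetime as the reported quantity are illustrative assumptions, not the authors' calculation.

```python
import random

def mean_lifetime(N, gamma, trials=2000, max_steps=100_000):
    """Average number of steps the branch point survives on a homology of
    N sites, with reflection probability gamma at either end."""
    total = 0
    for _ in range(trials):
        pos = random.randrange(N)          # start somewhere inside the homology
        steps = 0
        while steps < max_steps:
            steps += 1
            pos += random.choice((-1, 1))
            if pos < 0 or pos >= N:        # branch point reached an end of the homology
                if random.random() < gamma:
                    pos = 0 if pos < 0 else N - 1   # reflected back
                else:
                    break                  # reaction intermediate destroyed
        total += steps
    return total / trials

for N in (10, 20, 40, 80):
    print(N, round(mean_lifetime(N, gamma=0.1), 1))
```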

  12. Model ecosystems with random nonlinear interspecies interactions

    NASA Astrophysics Data System (ADS)

    Santos, Danielle O. C.; Fontanari, José F.

    2004-12-01

    The principle of competitive exclusion in ecology establishes that two species living together cannot occupy the same ecological niche. Here we present a model ecosystem in which the species are described by a series of phenotypic characters and the strength of the competition between two species is given by a nondecreasing (modulating) function of the number of common characters. Using analytical tools of statistical mechanics we find that the ecosystem diversity, defined as the fraction of species that coexist at equilibrium, decreases as the complexity (i.e., number of characters) of the species increases, regardless of the modulating function. By considering both selective and random elimination of the links in the community web, we show that ecosystems composed of simple species are more robust than those composed of complex species. In addition, we show that the puzzling result that there exists either rich or poor ecosystems for a linear modulating function is not typical of communities in which the interspecies interactions are determined by a complementarity rule.

  13. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.

  14. The Random-Effect Generalized Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wu, Shiu-Lien

    2011-01-01

    Rating scale items have been widely used in educational and psychological tests. These items require people to make subjective judgments, and these subjective judgments usually involve randomness. To account for this randomness, Wang, Wilson, and Shih proposed the random-effect rating scale model in which the threshold parameters are treated as…

  15. Quantifying randomness in real networks

    PubMed Central

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-01-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs. PMID:26482121

  16. Quantifying randomness in real networks

    NASA Astrophysics Data System (ADS)

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-10-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.

  17. Quantifying randomness in real networks.

    PubMed

    Orsini, Chiara; Dankulov, Marija M; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-10-20

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.
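
    The simplest member of the dk-series, a degree-preserving (1k) randomization, can be sketched with networkx as below. The Watts-Strogatz graph merely stands in for a real network, and the software released by the authors additionally fixes degree correlations and clustering, which this sketch does not.

```python
import networkx as nx

# Stand-in for a real network; in practice this would be, e.g., the AS-level Internet.
G = nx.watts_strogatz_graph(500, 6, 0.1, seed=1)

# 1k-randomization: rewire edges while preserving every node's degree.
R = G.copy()
nx.double_edge_swap(R, nswap=10 * G.number_of_edges(), max_tries=10**6, seed=1)

# Properties not fixed by the degree sequence (here, clustering) typically change.
print("original clustering :", round(nx.average_clustering(G), 3))
print("1k-random clustering:", round(nx.average_clustering(R), 3))
```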

  18. Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis

    DTIC Science & Technology

    2015-01-01

    [Report front matter: table-of-contents fragments listing the Ginzburg-Landau functional; the MBO scheme; Ginzburg-Landau relaxation of the discrete problem; and MBO scheme, convex splitting, and spectral methods.] The authors of [8] introduced a binary semi-supervised segmentation method based on minimizing the Ginzburg-Landau functional on a graph. Inspired by [8], a

  19. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    SciTech Connect

    Purvine, Emilie AH; Monson, Kyle E.; Jurrus, Elizabeth R.; Star, Keith T.; Baker, Nathan A.

    2016-09-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of maximum flow-minimum cut graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.

  20. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    PubMed

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
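
    A toy sketch of the max-flow/min-cut idea on a three-site, two-state energy: cutting the edge into the sink pays the state-0 energy, cutting the edge from the source pays the state-1 energy, and cutting an inter-site edge pays the disagreement penalty, so the minimum s-t cut value equals the minimum total energy. The site names, energy values, and use of networkx are illustrative assumptions, not the paper's protein-titration implementation.

```python
import networkx as nx

# Toy "interaction energy graph": three sites, each with two states (0/1).
# unary[i] = (energy of state 0, energy of state 1); pair[(i, j)] = penalty if states differ.
unary = {"A": (0.2, 1.0), "B": (0.8, 0.3), "C": (0.5, 0.4)}
pair = {("A", "B"): 0.6, ("B", "C"): 0.4}

# Standard s-t construction for a submodular binary energy.
F = nx.DiGraph()
for i, (e0, e1) in unary.items():
    F.add_edge("s", i, capacity=e1)   # cut if site i takes state 1
    F.add_edge(i, "t", capacity=e0)   # cut if site i takes state 0
for (i, j), w in pair.items():
    F.add_edge(i, j, capacity=w)      # cut if the two sites disagree
    F.add_edge(j, i, capacity=w)

cut_value, (source_side, sink_side) = nx.minimum_cut(F, "s", "t")
states = {i: (0 if i in source_side else 1) for i in unary}
print("minimum energy:", cut_value, "states:", states)
```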

  1. Filling in the Gaps: Modelling Incomplete CBL Data Using a Graphing Calculator.

    ERIC Educational Resources Information Center

    Swingle, David A.; Pachnowski, Lynne M.

    2003-01-01

    Discusses a real-world problem-solving lesson that emerged when a high school math teacher used a motion detector with a CBL and graphing calculator to obtain the bounce data of a ping-pong ball. Describes the lesson in which students collect bad data then fill in the missing parabolas that result using critical components of parabolas and…

  2. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    PubMed

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

    The molecular information encoding into molecular descriptors is the first step into in silico Chemoinformatics methods in Drug Design. The Machine Learning methods are a complex solution to find prediction models for specific biological properties of molecules. These models connect the molecular structure information such as atom connectivity (molecular graphs) or physical-chemical properties of an atom/group of atoms to the molecular activity (Quantitative Structure - Activity Relationship, QSAR). Due to the complexity of the proteins, the prediction of their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug - protein target interactions and the other three calculate protein - protein interactions. The input information is based on the protein 3D structure for nine models, 1D peptide amino acid sequence for three tools and drug SMILES formulas for two servers. The molecular graph descriptor-based Machine Learning models could be useful tools for in silico screening of new peptides/proteins as future drug targets for specific treatments.

  3. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  4. GraphBench

    SciTech Connect

    Sukumar, Sreenivas R.; Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan

    2016-06-01

    GraphBench is a benchmark suite for graph pattern mining and graph analysis systems. The benchmark suite is a significant addition to conducting apples-apples comparison of graph analysis software (databases, in-memory tools, triple stores, etc.)

  5. Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems

    SciTech Connect

    Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan; Sukumar, Sreenivas Rangan; Vatsavai, Raju

    2016-01-01

    Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining has become a popular area of research and has led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is challenging for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces, and the diverse environments in which they operate. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present a performance comparison of nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications of loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) the different sensitivity of each system to query selectivity. We envision that this study will pave the way for (i) data scientists to select suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.

  6. Trapping in the Random Conductance Model

    NASA Astrophysics Data System (ADS)

    Biskup, M.; Louidor, O.; Rozinov, A.; Vandenberg-Rodes, A.

    2013-01-01

    We consider random walks on ℤ^d among nearest-neighbor random conductances which are i.i.d., positive, bounded uniformly from above but whose support extends all the way to zero. Our focus is on the detailed properties of the paths of the random walk conditioned to return back to the starting point at time 2n. We show that in the situations when the heat kernel exhibits subdiffusive decay—which is known to occur in dimensions d ≥ 4—the walk gets trapped for a time of order n in a small spatial region. This shows that the strategy used earlier to infer subdiffusive lower bounds on the heat kernel in specific examples is in fact dominant. In addition, we settle a conjecture concerning the worst possible subdiffusive decay in four dimensions.

  7. Line graphs for fractals

    NASA Astrophysics Data System (ADS)

    Warchalowski, Wiktor; Krawczyk, Malgorzata J.

    2017-03-01

    We found the Lindenmayer systems for line graphs built on selected fractals. We show that the fractal dimension of such obtained graphs in all analysed cases is the same as for their original graphs. Both for the original graphs and for their line graphs we identified classes of nodes which reflect symmetry of the graph.

  8. Detecting labor using graph theory on connectivity matrices of uterine EMG.

    PubMed

    Al-Omar, S; Diab, A; Nader, N; Khalil, M; Karlsson, B; Marque, C

    2015-08-01

    Premature labor is one of the most serious health problems in the developed world. One of the main reasons for this is that no good way exists to distinguish true labor from normal pregnancy contractions. The aim of this paper is to investigate whether the application of graph theory techniques to multi-electrode uterine EMG signals can improve the discrimination between pregnancy contractions and labor. To test our methods we first applied them to synthetic graphs, where we detected some differences in the parameter results and changes in the graph model from pregnancy-like graphs to labor-like graphs. Then, we applied the same methods to real signals. We obtained the best differentiation between pregnancy and labor through the same parameters. Major improvements in differentiating between pregnancy and labor were obtained using a low-pass windowing preprocessing step. Results show that real graphs generally became more organized when moving from pregnancy, where the graph showed random characteristics, to labor, where the graph became more small-world like.
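
    A minimal sketch of the last analysis step: thresholding a channel-by-channel connectivity matrix into a graph and reading off standard graph parameters. The random matrix stands in for an actual synchronization estimate from multi-electrode uterine EMG, and the 80th-percentile threshold is an arbitrary illustrative choice.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in for a 16-channel connectivity matrix; in practice this would come
# from a linear or nonlinear synchronization estimator applied to uterine EMG.
n = 16
C = np.abs(rng.standard_normal((n, n)))
C = (C + C.T) / 2
np.fill_diagonal(C, 0)

# Keep only the strongest connections and compute simple graph-theoretic parameters.
threshold = np.quantile(C, 0.8)
G = nx.from_numpy_array((C > threshold).astype(int))

print("clustering coefficient:", round(nx.average_clustering(G), 3))
if nx.is_connected(G):
    print("characteristic path length:", round(nx.average_shortest_path_length(G), 3))
```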

  9. Modelling graph dynamics of flaring active regions using SDO/HMI data

    NASA Astrophysics Data System (ADS)

    Lukyanov, A. D.; Makarenko, N. G.; Knyazeva, I. S.

    2017-01-01

    Large solar flares define parameters of space weather near the Earth, and the normal operation of spacecraft depends substantially on the flux of cosmic particles in the solar wind. Therefore, the search for predictors of large flares is an important problem. We propose to approximate the topology of the magnetic field of an active region by a graph whose vertices are minima and maxima of a scalar field and whose edges form a so-called critical net. The localization and number of critical points change during the evolution, and it is therefore possible to track dynamic regimes of an active region by considering the dynamics of these graphs. A numerical characteristic of the critical net is the spectrum of eigenvalues of its discrete Laplacian. We present examples which show that, apparently, the Laplacian spectrum is closely related to flaring activity. This will allow us to use the critical net in prognostic systems for the prediction of large solar flares.
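
    A small sketch of the numerical characteristic mentioned above: the eigenvalue spectrum of the discrete Laplacian of a graph. A grid graph stands in for a critical net extracted from an SDO/HMI magnetogram; in the proposed scheme such spectra would be tracked over time for an evolving active region.

```python
import networkx as nx
import numpy as np

# Toy stand-in for a critical net: a small grid whose nodes would correspond to
# extrema of the magnetic field and whose edges form the critical net.
G = nx.grid_2d_graph(6, 6)

L = nx.laplacian_matrix(G).toarray()
eigenvalues = np.sort(np.linalg.eigvalsh(L))

# The low end of the spectrum and the spectral gap are the kind of summary
# statistics one would relate to flaring activity.
print(np.round(eigenvalues[:5], 3), "...", round(float(eigenvalues[-1]), 3))
```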

  10. High-precision percolation thresholds and Potts-model critical manifolds from graph polynomials

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jesper Lykke

    2014-04-01

    The critical curves of the q-state Potts model can be determined exactly for regular two-dimensional lattices G that are of the three-terminal type. This comprises the square, triangular, hexagonal and bow-tie lattices. Jacobsen and Scullard have defined a graph polynomial P_B(q, v) that gives access to the critical manifold for general lattices. It depends on a finite repeating part of the lattice, called the basis B, and its real roots in the temperature variable v = e^K - 1 provide increasingly accurate approximations to the critical manifolds upon increasing the size of B. Using transfer matrix techniques, these authors computed P_B(q, v) for large bases (up to 243 edges), obtaining determinations of the ferromagnetic critical point v_c > 0 for the (4, 8^2), kagome, and (3, 12^2) lattices to a precision (of the order 10^-8) slightly superior to that of the best available Monte Carlo simulations. In this paper we describe a more efficient transfer matrix approach to the computation of P_B(q, v) that relies on a formulation within the periodic Temperley-Lieb algebra. This makes possible computations for substantially larger bases (up to 882 edges), and the precision on v_c is hence taken to the range 10^-13. We further show that a large variety of regular lattices can be cast in a form suitable for this approach. This includes all Archimedean lattices, their duals and their medials. For all these lattices we tabulate high-precision estimates of the bond percolation thresholds p_c and Potts critical points v_c. We also trace and discuss the full Potts critical manifold in the (q, v) plane, paying special attention to the antiferromagnetic region v < 0. Finally, we adapt the technique to site percolation as well, and compute the polynomials P_B(p) for certain Archimedean and dual lattices (those having only cubic and quartic vertices), using very large bases (up to 243 vertices). This produces the site percolation thresholds p_c to a precision of the order of 10^-9.

  11. Helping Students Make Sense of Graphs: An Experimental Trial of SmartGraphs Software

    NASA Astrophysics Data System (ADS)

    Zucker, Andrew; Kay, Rachel; Staudt, Carolyn

    2014-06-01

    Graphs are commonly used in science, mathematics, and social sciences to convey important concepts; yet students at all ages demonstrate difficulties interpreting graphs. This paper reports on an experimental study of free, Web-based software called SmartGraphs that is specifically designed to help students overcome their misconceptions regarding graphs. SmartGraphs allows students to interact with graphs and provides hints and scaffolding to help students, if they need help. SmartGraphs activities can be authored to be useful in teaching and learning a variety of topics that use graphs (such as slope, velocity, half-life, and global warming). A 2-year experimental study in physical science classrooms was conducted with dozens of teachers and thousands of students. In the first year, teachers were randomly assigned to experimental or control conditions. Data show that students of teachers who use SmartGraphs as a supplement to normal instruction make greater gains understanding graphs than control students studying the same content using the same textbooks, but without SmartGraphs. Additionally, teachers believe that the SmartGraphs activities help students meet learning goals in the physical science course, and a great majority reported they would use the activities with students again. In the second year of the study, several specific variations of SmartGraphs were researched to help determine what makes SmartGraphs effective.

  12. A Model for Random Student Drug Testing

    ERIC Educational Resources Information Center

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  13. A novel model for DNA sequence similarity analysis based on graph theory.

    PubMed

    Qi, Xingqin; Wu, Qin; Zhang, Yusen; Fuller, Eddie; Zhang, Cun-Quan

    2011-01-01

    Determination of sequence similarity is one of the major steps in computational phylogenetic studies. During evolutionary history, not only mutations of individual nucleotides but also subsequent rearrangements occurred. It has been one of the major tasks of computational biologists to develop novel mathematical descriptors for similarity analysis such that information about these various mutation phenomena is incorporated simultaneously. In this paper, instead of the traditional bases for constructing mathematical descriptors (e.g., nucleotide frequency, geometric representations), we construct novel mathematical descriptors based on graph theory. In particular, for each DNA sequence we set up a weighted directed graph. The adjacency matrix of the directed graph is used to induce a representative vector for the DNA sequence. This new approach measures similarity based on both the ordering and the frequency of nucleotides, so that much more information is involved. As an application, the method is tested on a set of 0.9-kb mtDNA sequences of twelve different primate species. All output phylogenetic trees with various distance estimations have the same topology and are generally consistent with the reported results from early studies, which demonstrates the new method's efficiency. We also test the new method on a simulated data set, which shows that it performs better than the traditional global alignment method when subsequent rearrangements happen frequently during evolutionary history.
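
    A hedged sketch of the general idea (not the paper's exact construction): each DNA sequence is turned into a weighted directed graph over the four nucleotides, whose adjacency matrix, flattened into a vector, serves as a descriptor reflecting both the ordering and the frequency of nucleotides. The position-dependent weighting and the Euclidean comparison used here are illustrative assumptions.

```python
import numpy as np

BASES = "ACGT"

def descriptor(seq):
    """Toy graph-based descriptor: a weighted directed graph on the four
    nucleotides, with edge (x, y) accumulating a position-weighted count of
    the transition x -> y, flattened into a 16-dimensional vector."""
    A = np.zeros((4, 4))
    for pos, (x, y) in enumerate(zip(seq, seq[1:]), start=1):
        A[BASES.index(x), BASES.index(y)] += 1.0 / pos   # order-aware weighting
    return A.flatten()

def distance(s1, s2):
    return np.linalg.norm(descriptor(s1) - descriptor(s2))

print(distance("ACGTACGTGG", "ACGTACGTGC"))   # near-identical sequences
print(distance("ACGTACGTGG", "GGGGCCCCAA"))   # unrelated sequences
```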

  14. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.

  15. Algebraic distance on graphs.

    SciTech Connect

    Chen, J.; Safro, I.

    2011-01-01

    Measuring the connection strength between a pair of vertices in a graph is one of the most important concerns in many graph applications. Simple measures such as edge weights may not be sufficient for capturing the effects associated with short paths of lengths greater than one. In this paper, we consider an iterative process that smooths an associated value for nearby vertices, and we present a measure of the local connection strength (called the algebraic distance; see [D. Ron, I. Safro, and A. Brandt, Multiscale Model. Simul., 9 (2011), pp. 407-423]) based on this process. The proposed measure is attractive in that the process is simple, linear, and easily parallelized. An analysis of the convergence property of the process reveals that the local neighborhoods play an important role in determining the connectivity between vertices. We demonstrate the practical effectiveness of the proposed measure through several combinatorial optimization problems on graphs and hypergraphs.
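
    A compact sketch of the iterative smoothing process behind the algebraic distance: vertex values start random and are repeatedly relaxed toward neighbour averages, so strongly connected vertices end up with similar values. A single random start and unweighted edges are simplifying assumptions; the cited construction combines several random starts and supports edge weights.

```python
import numpy as np
import networkx as nx

def algebraic_distance(G, sweeps=20, omega=0.5, seed=0):
    """Jacobi over-relaxation of random vertex values toward neighbour averages;
    after a few sweeps, |x_u - x_v| measures local connection strength."""
    rng = np.random.default_rng(seed)
    x = {v: rng.uniform(-0.5, 0.5) for v in G}
    for _ in range(sweeps):
        new = {}
        for v in G:
            nbrs = list(G[v])
            avg = sum(x[u] for u in nbrs) / len(nbrs) if nbrs else x[v]
            new[v] = (1 - omega) * x[v] + omega * avg
        x = new
    return lambda u, v: abs(x[u] - x[v])

G = nx.karate_club_graph()
d = algebraic_distance(G)
print(d(0, 1), d(0, 33))   # tightly connected pair vs. a pair in different communities
```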

  16. DNA Rearrangements through Spatial Graphs

    NASA Astrophysics Data System (ADS)

    Jonoska, Nataša; Saito, Masahico

    The paper is a short overview of a recent model of homologous DNA recombination events guided by RNA templates that have been observed in certain species of ciliates. This model uses spatial graphs to describe DNA rearrangements and show how gene recombination can be modeled as topological braiding of the DNA. We show that a graph structure, which we refer to as an assembly graph, containing only 1- and 4-valent rigid vertices can provide a physical representation of the DNA at the time of recombination. With this representation, 4-valent vertices correspond to the alignment of the recombination sites, and we model the actual recombination event as smoothing of these vertices.

  17. Generating graphs for visual analytics through interactive sketching.

    PubMed

    Wong, Pak Chung; Foote, Harlan; Mackey, Patrick; Perrine, Ken; Chin, George

    2006-01-01

    We introduce an interactive graph generator, GreenSketch, designed to facilitate the creation of descriptive graphs required for different visual analytics tasks. The human-centric design approach of GreenSketch enables users to master the creation process without specific training or prior knowledge of graph model theory. The customized user interface encourages users to gain insight into the connection between the compact matrix representation and the topology of a graph layout when they sketch their graphs. Both the human-enforced and machine-generated randomnesses supported by GreenSketch provide the flexibility needed to address the uncertainty factor in many analytical tasks. This paper describes more than two dozen examples that cover a wide variety of graph creations from a single line of nodes to a real-life small-world network that describes a snapshot of telephone connections. While the discussion focuses mainly on the design of GreenSketch, we include a case study that applies the technology in a visual analytics environment and a usability study that evaluates the strengths and weaknesses of our design approach.

  18. A study of structural properties of gene network graphs for mathematical modeling of integrated mosaic gene networks.

    PubMed

    Petrovskaya, Olga V; Petrovskiy, Evgeny D; Lavrik, Inna N; Ivanisenko, Vladimir A

    2016-12-22

    Gene network modeling is one of the widely used approaches in systems biology. It allows for the study of complex genetic systems function, including so-called mosaic gene networks, which consist of functionally interacting subnetworks. We conducted a study of a mosaic gene networks modeling method based on integration of models of gene subnetworks by linear control functionals. An automatic modeling of 10,000 synthetic mosaic gene regulatory networks was carried out using computer experiments on gene knockdowns/knockouts. Structural analysis of graphs of generated mosaic gene regulatory networks has revealed that the most important factor for building accurate integrated mathematical models, among those analyzed in the study, is data on expression of genes corresponding to the vertices with high properties of centrality.

  19. Cascades on clique-based graphs

    NASA Astrophysics Data System (ADS)

    Hackett, Adam; Gleeson, James P.

    2013-06-01

    We present an analytical approach to determining the expected cascade size in a broad range of dynamical models on the class of highly clustered random graphs introduced by Gleeson [J. P. Gleeson, Phys. Rev. E 80, 036107 (2009)]. A condition for the existence of global cascades is also derived. Applications of this approach include analyses of percolation, and Watts's model. We show how our techniques can be used to study the effects of in-group bias in cascades on social networks.
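
    A simple simulation of the kind of cascade dynamics analysed above, using Watts's threshold model on a clustered random graph. The powerlaw_cluster_graph generator, the threshold, and the seed fraction are illustrative stand-ins; the paper's treatment is analytical and applies to its specific clique-based ensemble.

```python
import random
import networkx as nx

def watts_cascade(G, phi=0.18, seed_fraction=0.01, rng=random.Random(1)):
    """Watts threshold cascade: a node activates once the fraction of its
    active neighbours reaches phi. Returns the final cascade size (fraction)."""
    active = set(rng.sample(list(G), max(1, int(seed_fraction * len(G)))))
    changed = True
    while changed:
        changed = False
        for v in G:
            if v in active or G.degree(v) == 0:
                continue
            if sum(u in active for u in G[v]) / G.degree(v) >= phi:
                active.add(v)
                changed = True
    return len(active) / len(G)

# A clustered random graph stands in for the clique-based ensemble of the paper.
G = nx.powerlaw_cluster_graph(2000, 3, 0.3, seed=1)
print("cascade size:", watts_cascade(G))
```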

  20. Blind Identification of Graph Filters

    NASA Astrophysics Data System (ADS)

    Segarra, Santiago; Mateos, Gonzalo; Marques, Antonio G.; Ribeiro, Alejandro

    2017-03-01

    Network processes are often represented as signals defined on the vertices of a graph. To untangle the latent structure of such signals, one can view them as outputs of linear graph filters modeling underlying network dynamics. This paper deals with the problem of joint identification of a graph filter and its input signal, thus broadening the scope of classical blind deconvolution of temporal and spatial signals to the less-structured graph domain. Given a graph signal $\mathbf{y}$ modeled as the output of a graph filter, the goal is to recover the vector of filter coefficients $\mathbf{h}$, and the input signal $\mathbf{x}$ which is assumed to be sparse. While $\mathbf{y}$ is a bilinear function of $\mathbf{x}$ and $\mathbf{h}$, the filtered graph signal is also a linear combination of the entries of the lifted rank-one, row-sparse matrix $\mathbf{x} \mathbf{h}^T$. The blind graph-filter identification problem can thus be tackled via rank and sparsity minimization subject to linear constraints, an inverse problem amenable to convex relaxations offering provable recovery guarantees under simplifying assumptions. Numerical tests using both synthetic and real-world networks illustrate the merits of the proposed algorithms, as well as the benefits of leveraging multiple signals to aid the blind identification task.
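
    A numerical sketch of the forward model that the blind identification problem inverts: a sparse input x passed through a graph filter y = sum_k h_k S^k x, together with a check that y is linear in h for fixed x (and hence in the lifted matrix x h^T). The graph, filter order, and coefficients are illustrative assumptions; the recovery algorithm itself is not shown.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Graph-shift operator S: here the adjacency matrix of a small random graph.
G = nx.erdos_renyi_graph(30, 0.15, seed=0)
S = nx.to_numpy_array(G)

# Sparse input signal x and filter coefficients h.
x = np.zeros(30)
x[rng.choice(30, size=3, replace=False)] = rng.standard_normal(3)
h = np.array([1.0, 0.5, 0.25])

# Observed signal: y = sum_k h_k S^k x.
y = sum(hk * np.linalg.matrix_power(S, k) @ x for k, hk in enumerate(h))

# y is a linear combination, with weights h, of the vectors S^k x.
Z = np.stack([np.linalg.matrix_power(S, k) @ x for k in range(len(h))], axis=1)
print(np.allclose(y, Z @ h))   # True
```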

  1. Conceptual graphs for semantics and knowledge processing

    SciTech Connect

    Fargues, J.; Landau, M.C.; Dugourd, A.; Catach, L.

    1986-01-01

    This paper discusses the representational and algorithmic power of the conceptual graph model for natural language semantics and knowledge processing. Also described is a Prolog-like resolution method for conceptual graphs, which allows deduction to be performed on very large semantic domains. The interpreter developed is similar to a Prolog interpreter in which the terms are arbitrary conceptual graphs and in which the unification algorithm is replaced by a specialized algorithm for conceptual graphs.

  2. A Clustering Graph Generator

    SciTech Connect

    Winlaw, Manda; De Sterck, Hans; Sanders, Geoffrey

    2015-10-26

    In very simple terms a network can be defined as a collection of points joined together by lines. Thus, networks can be used to represent connections between entities in a wide variety of fields including engineering, science, medicine, and sociology. Many large real-world networks share a surprising number of properties, leading to a strong interest in model development research, and techniques for building synthetic networks that capture these similarities and replicate real-world graphs have been developed. Modeling these real-world networks serves two purposes. First, building models that mimic the patterns and properties of real networks helps to understand the implications of these patterns and helps determine which patterns are important. If we develop a generative process to synthesize real networks we can also examine which growth processes are plausible and which are not. Secondly, high-quality, large-scale network data is often not available, because of economic, legal, technological, or other obstacles [7]. Thus, there are many instances where the systems of interest cannot be represented by a single exemplar network. As one example, consider the field of cybersecurity, where systems require testing across diverse threat scenarios and validation across diverse network structures. In these cases, where there is no single exemplar network, the systems must instead be modeled as a collection of networks in which the variation among them may be just as important as their common features. By developing processes to build synthetic models, so-called graph generators, we can build synthetic networks that capture both the essential features of a system and realistic variability. Then we can use such synthetic graphs to perform tasks such as simulations, analysis, and decision making. We can also use synthetic graphs to performance-test graph analysis algorithms, including clustering algorithms and anomaly detection algorithms.

  3. Mechatronic modeling of a 750kW fixed-speed wind energy conversion system using the Bond Graph Approach.

    PubMed

    Khaouch, Zakaria; Zekraoui, Mustapha; Bengourram, Jamaa; Kouider, Nourreeddine; Mabrouki, Mustapha

    2016-11-01

    In this paper, we would like to focus on modeling main parts of the wind turbines (blades, gearbox, tower, generator and pitching system) from a mechatronics viewpoint using the Bond-Graph Approach (BGA). Then, these parts are combined together in order to simulate the complete system. Moreover, the real dynamic behavior of the wind turbine is taken into account and with the new model; final load simulation is more realistic offering benefits and reliable system performance. This model can be used to develop control algorithms to reduce fatigue loads and enhance power production. Different simulations are carried-out in order to validate the proposed wind turbine model, using real data provided in the open literature (blade profile and gearbox parameters for a 750 kW wind turbine).

  4. Analog model for quantum gravity effects: phonons in random fluids.

    PubMed

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  5. Approximate Graph Edit Distance in Quadratic Time.

    PubMed

    Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst

    2015-09-14

    Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity, which restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n^3) time. For large-scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph-based pattern classification.
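
    A minimal sketch of the assignment step that underlies this family of approximations: an optimal node assignment (with dummy rows and columns for deletions and insertions) computed by the Hungarian method via scipy. Real implementations enrich the cost matrix with local edge structure and may replace the cubic-time solver with the greedy quadratic-time variants discussed above; the label costs used here are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def approx_node_assignment_cost(labels1, labels2, sub_cost=1.0, indel_cost=1.0):
    """Cost of an optimal node assignment between two labeled node sets,
    allowing substitutions, deletions, and insertions. Edge costs are ignored,
    so this is only the unary part of a full local-structure cost matrix."""
    n, m = len(labels1), len(labels2)
    BIG = 10**6
    C = np.zeros((n + m, n + m))
    C[:n, :m] = [[0.0 if a == b else sub_cost for b in labels2] for a in labels1]
    C[:n, m:] = BIG; np.fill_diagonal(C[:n, m:], indel_cost)   # deletions
    C[n:, :m] = BIG; np.fill_diagonal(C[n:, :m], indel_cost)   # insertions
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

print(approx_node_assignment_cost(list("ABCD"), list("ABD")))   # one deletion -> 1.0
```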

  6. Mild traumatic brain injury: graph-model characterization of brain networks for episodic memory.

    PubMed

    Tsirka, Vasso; Simos, Panagiotis G; Vakis, Antonios; Kanatsouli, Kassiani; Vourkas, Michael; Erimaki, Sofia; Pachou, Ellie; Stam, Cornelis Jan; Micheloyannis, Sifis

    2011-02-01

    Episodic memory is among the cognitive functions that can be affected in the acute phase following mild traumatic brain injury (MTBI). The present study used EEG recordings to evaluate global synchronization and network organization of rhythmic activity during the encoding and recognition phases of an episodic memory task varying in stimulus type (kaleidoscope images, pictures, words, and pseudowords). Synchronization of oscillatory activity was assessed using a linear and nonlinear connectivity estimator and network analyses were performed using algorithms derived from graph theory. Twenty five MTBI patients (tested within days post-injury) and healthy volunteers were closely matched on demographic variables, verbal ability, psychological status variables, as well as on overall task performance. Patients demonstrated sub-optimal network organization, as reflected by changes in graph parameters in the theta and alpha bands during both encoding and recognition. There were no group differences in spectral energy during task performance or on network parameters during a control condition (rest). Evidence of less optimally organized functional networks during memory tasks was more prominent for pictorial than for verbal stimuli.

  7. Fluorescence microscopy image noise reduction using a stochastically-connected random field model

    PubMed Central

    Haider, S. A.; Cameron, A.; Siva, P.; Lui, D.; Shafiee, M. J.; Boroomand, A.; Haider, N.; Wong, A.

    2016-01-01

    Fluorescence microscopy is an essential part of a biologist’s toolkit, allowing assaying of many parameters like subcellular localization of proteins, changes in cytoskeletal dynamics, protein-protein interactions, and the concentration of specific cellular ions. A fundamental challenge with using fluorescence microscopy is the presence of noise. This study introduces a novel approach to reducing noise in fluorescence microscopy images. The noise reduction problem is posed as a Maximum A Posteriori estimation problem, and solved using a novel random field model called stochastically-connected random field (SRF), which combines random graph and field theory. Experimental results using synthetic and real fluorescence microscopy data show the proposed approach achieving strong noise reduction performance when compared to several other noise reduction algorithms, using quantitative metrics. The proposed SRF approach was able to achieve strong performance in terms of signal-to-noise ratio in the synthetic results, high signal to noise ratio and contrast to noise ratio in the real fluorescence microscopy data results, and was able to maintain cell structure and subtle details while reducing background and intra-cellular noise. PMID:26884148

  8. Boosting for multi-graph classification.

    PubMed

    Wu, Jia; Pan, Shirui; Zhu, Xingquan; Cai, Zhihua

    2015-03-01

    In this paper, we formulate a novel graph-based learning problem, multi-graph classification (MGC), which aims to learn a classifier from a set of labeled bags each containing a number of graphs inside the bag. A bag is labeled positive, if at least one graph in the bag is positive, and negative otherwise. Such a multi-graph representation can be used for many real-world applications, such as webpage classification, where a webpage can be regarded as a bag with texts and images inside the webpage being represented as graphs. This problem is a generalization of multi-instance learning (MIL) but with vital differences, mainly because instances in MIL share a common feature space whereas no feature is available to represent graphs in a multi-graph bag. To solve the problem, we propose a boosting based multi-graph classification framework (bMGC). Given a set of labeled multi-graph bags, bMGC employs dynamic weight adjustment at both bag- and graph-levels to select one subgraph in each iteration as a weak classifier. In each iteration, bag and graph weights are adjusted such that an incorrectly classified bag will receive a higher weight because its predicted bag label conflicts to the genuine label, whereas an incorrectly classified graph will receive a lower weight value if the graph is in a positive bag (or a higher weight if the graph is in a negative bag). Accordingly, bMGC is able to differentiate graphs in positive and negative bags to derive effective classifiers to form a boosting model for MGC. Experiments and comparisons on real-world multi-graph learning tasks demonstrate the algorithm performance.

  9. Application of Poisson random effect models for highway network screening.

    PubMed

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only temporal random effects, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification.

  10. Models construction for acetone-butanol-ethanol fermentations with acetate/butyrate consecutively feeding by graph theory.

    PubMed

    Li, Zhigang; Shi, Zhongping; Li, Xin

    2014-05-01

    Several fermentations with consecutive feeding of acetate/butyrate were conducted in a 7 L fermentor, and the results indicated that exogenous acetate/butyrate enhanced solvent productivities by 47.1% and 39.2%, respectively, and changed butyrate/acetate ratios greatly. Extracellular butyrate/acetate ratios were then utilized for the calculation of acid formation rates, and the results revealed that the acetate and butyrate formation pathways were almost blocked by feeding of the corresponding acids. In addition, models for the acetate/butyrate feeding fermentations were constructed by graph theory, based on the calculation results and relevant reports. Solvent concentrations and butanol/acetone ratios of these fermentations were also calculated, and the results of the model calculations matched the fermentation data accurately, which demonstrated that the models were constructed in a reasonable way.

  11. Universal spectral statistics in quantum graphs.

    PubMed

    Gnutzmann, Sven; Altland, Alexander

    2004-11-05

    We prove that the spectrum of an individual chaotic quantum graph shows universal spectral correlations, as predicted by random-matrix theory. The stability of these correlations with regard to nonuniversal corrections is analyzed in terms of the linear operator governing the classical dynamics on the graph.

  12. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random forest uses a decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper concerns attribute split measures and follows a two-step process: first, a theoretical study of the five selected split measures is carried out and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are then verified by empirical analysis, in which a random forest is generated using each of the five selected split measures, chosen one at a time, i.e., a random forest using information gain, a random forest using gain ratio, and so on. Based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, the individual decision trees in the random forest are generated using different split measures, and the model is augmented by weighted voting based on the strength of each individual tree. The new approach shows a notable increase in the accuracy of the random forest.
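
    A rough sketch of the proposed idea using scikit-learn: trees grown with different split measures (only Gini and entropy are exposed there), combined by votes weighted by each tree's strength, here measured on a held-out validation split. The dataset, tree count, and weighting scheme are illustrative assumptions rather than the paper's exact protocol.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
trees, weights = [], []
for i in range(50):
    criterion = "gini" if i % 2 == 0 else "entropy"       # alternate split measures
    idx = rng.integers(0, len(X_fit), len(X_fit))          # bootstrap sample
    t = DecisionTreeClassifier(criterion=criterion, max_features="sqrt", random_state=i)
    t.fit(X_fit[idx], y_fit[idx])
    trees.append(t)
    weights.append(t.score(X_val, y_val))                  # strength of the tree

# Weighted voting over the binary predictions of all trees.
votes = np.array([w * t.predict(X_te) for t, w in zip(trees, weights)])
y_hat = (votes.sum(axis=0) > 0.5 * sum(weights)).astype(int)
print("weighted-vote accuracy:", round((y_hat == y_te).mean(), 3))
```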

  13. Superstatistical analysis and modelling of heterogeneous random walks

    NASA Astrophysics Data System (ADS)

    Metzner, Claus; Mark, Christoph; Steinwachs, Julian; Lautscham, Lena; Stadler, Franz; Fabry, Ben

    2015-06-01

    Stochastic time series are ubiquitous in nature. In particular, random walks with time-varying statistical properties are found in many scientific disciplines. Here we present a superstatistical approach to analyse and model such heterogeneous random walks. The time-dependent statistical parameters can be extracted from measured random walk trajectories with a Bayesian method of sequential inference. The distributions and correlations of these parameters reveal subtle features of the random process that are not captured by conventional measures, such as the mean-squared displacement or the step width distribution. We apply our new approach to migration trajectories of tumour cells in two and three dimensions, and demonstrate the superior ability of the superstatistical method to discriminate cell migration strategies in different environments. Finally, we show how the resulting insights can be used to design simple and meaningful models of the underlying random processes.

  14. Superstatistical analysis and modelling of heterogeneous random walks

    PubMed Central

    Metzner, Claus; Mark, Christoph; Steinwachs, Julian; Lautscham, Lena; Stadler, Franz; Fabry, Ben

    2015-01-01

    Stochastic time series are ubiquitous in nature. In particular, random walks with time-varying statistical properties are found in many scientific disciplines. Here we present a superstatistical approach to analyse and model such heterogeneous random walks. The time-dependent statistical parameters can be extracted from measured random walk trajectories with a Bayesian method of sequential inference. The distributions and correlations of these parameters reveal subtle features of the random process that are not captured by conventional measures, such as the mean-squared displacement or the step width distribution. We apply our new approach to migration trajectories of tumour cells in two and three dimensions, and demonstrate the superior ability of the superstatistical method to discriminate cell migration strategies in different environments. Finally, we show how the resulting insights can be used to design simple and meaningful models of the underlying random processes. PMID:26108639

  15. Random exchange models and the distribution of wealth

    NASA Astrophysics Data System (ADS)

    Scalas, Enrico

    2016-12-01

    I am presenting my personal point of view on what is interesting in Econophysics. In particular, I focus on random exchange models for the distribution of wealth in order to illustrate the concept of statistical equilibrium in Economics.
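
    A minimal random exchange simulation of the type referred to above: two randomly chosen agents repeatedly pool and randomly re-split their wealth, conserving the total. Under this rule the wealth distribution relaxes to an exponential (Boltzmann-Gibbs-like) statistical equilibrium; the agent count and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Kinetic exchange economy: pairs of agents pool their wealth and split it
# uniformly at random; total wealth is conserved.
n_agents, n_steps = 1000, 200_000
w = np.ones(n_agents)
for _ in range(n_steps):
    i, j = rng.integers(0, n_agents, size=2)
    if i != j:
        total = w[i] + w[j]
        r = rng.random()
        w[i], w[j] = r * total, (1 - r) * total

# For an exponential equilibrium, roughly 1 - 1/e ~ 63% of agents sit below the mean.
print("mean wealth:", round(w.mean(), 3), "fraction below the mean:", round((w < w.mean()).mean(), 3))
```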

  16. What is the difference between the breakpoint graph and the de Bruijn graph?

    PubMed

    Lin, Yu; Nurk, Sergey; Pevzner, Pavel A

    2014-01-01

    The breakpoint graph and the de Bruijn graph are two key data structures in the studies of genome rearrangements and genome assembly. However, the classical breakpoint graphs are defined on two genomes (represented as sequences of synteny blocks), while the classical de Bruijn graphs are defined on a single genome (represented as DNA strings). Thus, the connection between these two graph models is not explicit. We generalize the notions of both the breakpoint graph and the de Bruijn graph, and make it transparent that the breakpoint graph and the de Bruijn graph are mathematically equivalent. The explicit description of the connection between these important data structures provides a bridge between two previously separated bioinformatics communities studying genome rearrangements and genome assembly.
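
    For concreteness, a few lines constructing the classical single-genome de Bruijn graph from reads (nodes are (k-1)-mers, edges are k-mers); the generalized construction that makes the equivalence with the breakpoint graph explicit is developed in the paper, not here.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn (multi)graph: every k-mer in the reads contributes an
    edge from its (k-1)-mer prefix to its (k-1)-mer suffix."""
    edges = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[kmer[:-1]].append(kmer[1:])
    return edges

graph = de_bruijn(["ACGTACG", "GTACGTT"], k=4)
for prefix, suffixes in sorted(graph.items()):
    print(prefix, "->", ", ".join(suffixes))
```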

  17. A Gompertzian model with random effects to cervical cancer growth

    SciTech Connect

    Mazlan, Mazma Syahidatul Ayuni; Rosli, Norhayati

    2015-05-15

    In this paper, a Gompertzian model with random effects is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via maximum likelihood estimation. We apply a 4-stage Runge-Kutta (SRK4) scheme to solve the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with clinical data on cervical cancer growth. Low values of the root mean-square error (RMSE) of the Gompertzian model with random effects indicate a good fit.
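
    A hedged numerical sketch of one common stochastic Gompertz formulation, dX = r X ln(K/X) dt + sigma X dW, integrated here with a plain Euler-Maruyama step rather than the 4-stage stochastic Runge-Kutta scheme used in the paper; all parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: growth rate r, carrying capacity K, noise intensity sigma.
r, K, sigma = 0.08, 100.0, 0.05
dt, n_steps, x0 = 0.1, 1000, 5.0

x = np.empty(n_steps + 1)
x[0] = x0
for t in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))                      # Brownian increment
    drift = r * x[t] * np.log(K / x[t]) * dt               # Gompertz drift
    x[t + 1] = max(x[t] + drift + sigma * x[t] * dW, 1e-6) # keep the state positive

print("final tumour size (arbitrary units):", round(x[-1], 2))
```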

  18. A Gompertzian model with random effects to cervical cancer growth

    NASA Astrophysics Data System (ADS)

    Mazlan, Mazma Syahidatul Ayuni; Rosli, Norhayati

    2015-05-01

    In this paper, a Gompertzian model with random effects is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via maximum likelihood estimation. We apply a 4-stage Runge-Kutta (SRK4) scheme to solve the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with clinical data on cervical cancer growth. Low values of the root mean-square error (RMSE) of the Gompertzian model with random effects indicate a good fit.

  19. Bayesian nonparametric centered random effects models with variable selection.

    PubMed

    Yang, Mingan

    2013-03-01

    In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.

  20. Multibody graph transformations and analysis

    PubMed Central

    2013-01-01

    This two-part paper uses graph transformation methods to develop methods for partitioning, aggregating, and constraint embedding for multibody systems. This first part focuses on tree-topology systems and reviews the key notion of spatial kernel operator (SKO) models for such systems. It develops systematic and rigorous techniques for partitioning SKO models in terms of the SKO models of the component subsystems based on the path-induced property of the component subgraphs. It shows that the sparsity structure of key matrix operators and the mass matrix for the multibody system can be described using partitioning transformations. Subsequently, the notions of node contractions and subgraph aggregation and their role in coarsening graphs are discussed. It is shown that the tree property of a graph is preserved after subgraph aggregation if and only if the subgraph satisfies an aggregation condition. These graph theory ideas are used to develop SKO models for the aggregated tree multibody systems. PMID:24288438

  1. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model.

    PubMed

    Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena

    2015-01-01

    We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, the node degree, and the effective radius, i.e., the maximal distance at which two neurons are expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate the different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible, and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, a property that might affect the network's capacity to exhibit different dynamical regimes.
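
    A minimal sketch of the graph measures named above (clustering coefficient, path length and a crude small-world coefficient), computed with networkx on a random geometric graph that stands in for neurons connected within an effective radius. The graph size, the radius and the Erdős-Rényi reference used for the small-world coefficient are illustrative choices, not the paper's model.

      import networkx as nx

      def network_summary(G):
          """Clustering, path length and a simple small-world coefficient,
          computed on the largest connected component."""
          H = G.subgraph(max(nx.connected_components(G), key=len))
          C = nx.average_clustering(H)
          L = nx.average_shortest_path_length(H)
          # reference: Erdos-Renyi graph with the same number of nodes and edges
          R = nx.gnm_random_graph(H.number_of_nodes(), H.number_of_edges(), seed=1)
          R = R.subgraph(max(nx.connected_components(R), key=len))
          C_rand, L_rand = nx.average_clustering(R), nx.average_shortest_path_length(R)
          sigma = (C / C_rand) / (L / L_rand) if C_rand > 0 else float("nan")
          return C, L, sigma

      # neurons at random positions, connected if closer than the effective radius
      G = nx.random_geometric_graph(200, radius=0.15, seed=0)
      print(network_summary(G))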

  2. Hierarchical structure of the logical Internet graph

    NASA Astrophysics Data System (ADS)

    Ge, Zihui; Figueiredo, Daniel R.; Jaiswal, Sharad; Gao, Lixin

    2001-07-01

    The study of the Internet topology has recently received much attention from the research community, in particular the observation that the network graph has interesting properties, such as power laws, that might be exploited in a myriad of ways. Most of the work in characterizing the Internet graph is based on the physical network graph, i.e., the connectivity graph. In this paper we investigate how logical relationships between nodes of the AS graph can be used to gain insight into its structure. We characterize the logical graph using various metrics and identify the presence of power laws in the number of customers that a provider has. Using these logical relationships we define a structural model of the AS graph. The model highlights the hierarchical nature of logical relationships and the preferential connection to larger providers. We also investigate the consistency of this model over time and observe interesting properties of the hierarchical structure.

  3. Preserving Differential Privacy in Degree-Correlation based Graph Generation.

    PubMed

    Wang, Yue; Wu, Xintao

    2013-08-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model.
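
    A minimal sketch of the perturbation step described above, using the Laplace mechanism on the 1K parameters (the degree histogram) for simplicity; the paper's 2K model and its smooth-sensitivity calibration are not reproduced here. The sensitivity value, the privacy budget epsilon and the test graph are illustrative assumptions.

      from collections import Counter
      import numpy as np
      import networkx as nx

      def dk1_parameters(G):
          """1K parameters: a histogram of node degrees."""
          return Counter(dict(G.degree()).values())

      def privatize(counts, epsilon, sensitivity, rng):
          """Laplace mechanism: add noise of scale sensitivity/epsilon to each
          count and clip at zero so the result is still a valid histogram."""
          return {k: max(v + rng.laplace(0.0, sensitivity / epsilon), 0.0)
                  for k, v in counts.items()}

      rng = np.random.default_rng(0)
      G = nx.barabasi_albert_graph(500, 3, seed=0)
      noisy = privatize(dk1_parameters(G), epsilon=1.0, sensitivity=4.0, rng=rng)
      print(sorted(noisy.items())[:5])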

  4. Preserving Differential Privacy in Degree-Correlation based Graph Generation

    PubMed Central

    Wang, Yue; Wu, Xintao

    2014-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987

  5. Mechanical device for determining the stiffness and the viscous friction coefficient of shock absorber elements modelled by bond graph

    NASA Astrophysics Data System (ADS)

    Ibănescu, R.; Ibănescu, M.

    2016-11-01

    This paper presents a mechanical device for the assessment of the fundamental parameters of a shock absorber, the spring stiffness and the viscous friction coefficient, without disassembling the absorber. The device produces an oscillatory motion of the shock absorber and can measure its amplitudes and angular velocities. The dynamic model of the system, consisting of the mechanical device and the shock absorber, is built using the bond-graph method. Based on this model, the equations of motion are obtained, which by integration lead to the law of motion. The two previously mentioned parameters are determined by using this law and the measured values of two amplitudes and of their corresponding angular velocities. They result as solutions of a system of two non-linear algebraic equations.

  6. Probabilistic solution of random SI-type epidemiological models using the Random Variable Transformation technique

    NASA Astrophysics Data System (ADS)

    Casabán, M.-C.; Cortés, J.-C.; Romero, J.-V.; Roselló, M.-D.

    2015-07-01

    This paper presents a full probabilistic description of the solution of random SI-type epidemiological models which are based on nonlinear differential equations. This description consists of determining: the first probability density function of the solution in terms of the density functions of the diffusion coefficient and the initial condition, which are assumed to be independent random variables; the expectation and variance functions of the solution as well as confidence intervals; and, finally, the distribution of the time until a given proportion of susceptibles remains in the population. The obtained formulas are general since they are valid regardless of the probability distributions assigned to the random inputs. We also present a pair of illustrative examples, one of which applies the theoretical results to model the diffusion of a technology using real data.
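
    A minimal sketch of the Random Variable Transformation idea for a logistic SI model di/dt = beta*i*(1-i), assuming for simplicity a deterministic initial proportion i0 and a single random input beta ~ Uniform(a, b); the paper treats the more general case in which the initial condition is random as well. Parameter values are illustrative.

      import numpy as np

      def pdf_infected(i, t, i0, a, b):
          """First probability density function of I(t) for di/dt = beta*i*(1-i)
          with beta ~ Uniform(a, b), obtained by the change of variables
          beta(i) = ln(i*(1-i0)/(i0*(1-i)))/t and its Jacobian."""
          beta = np.log(i * (1 - i0) / (i0 * (1 - i))) / t
          jac = 1.0 / (t * i * (1 - i))          # |d beta / d i|
          f_beta = np.where((beta >= a) & (beta <= b), 1.0 / (b - a), 0.0)
          return f_beta * jac

      i0, t, a, b = 0.01, 5.0, 0.2, 0.6
      grid = np.linspace(1e-4, 1.0 - 1e-4, 200_001)
      pdf = pdf_infected(grid, t, i0, a, b)
      print((pdf * (grid[1] - grid[0])).sum())   # integrates to roughly 1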

  7. A random spatial network model based on elementary postulates

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1989-01-01

    In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. -from Authors

  8. Large deviation approach to the generalized random energy model

    NASA Astrophysics Data System (ADS)

    Dorlas, T. C.; Dukes, W. M. B.

    2002-05-01

    The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.

  9. Positive and Unlabeled Multi-Graph Learning.

    PubMed

    Wu, Jia; Pan, Shirui; Zhu, Xingquan; Zhang, Chengqi; Wu, Xindong

    2016-03-23

    In this paper, we advance graph classification to handle multi-graph learning for complicated objects, where each object is represented as a bag of graphs and the label is only available to each bag but not individual graphs. In addition, when training classifiers, users are only given a handful of positive bags and many unlabeled bags, and the learning objective is to train models to classify previously unseen graph bags with maximum accuracy. To achieve the goal, we propose a positive and unlabeled multi-graph learning (puMGL) framework to first select informative subgraphs to convert graphs into a feature space. To utilize unlabeled bags for learning, puMGL assigns a confidence weight to each bag and dynamically adjusts its weight value to select "reliable negative bags." A number of representative graphs, selected from positive bags and identified reliable negative graph bags, form a "margin graph pool" which serves as the base for deriving subgraph patterns, training graph classifiers, and further updating the bag weight values. A closed-loop iterative process helps discover optimal subgraphs from positive and unlabeled graph bags for learning. Experimental comparisons demonstrate the performance of puMGL for classifying real-world complicated objects.

  10. Using convex quadratic programming to model random media with Gaussian random fields

    SciTech Connect

    Quintanilla, John A.; Jones, W. Max

    2007-04-15

    Excursion sets of Gaussian random fields (GRFs) have been frequently used in the literature to model two-phase random media with measurable phase autocorrelation functions. The goal of successful modeling is finding the optimal field autocorrelation function that best approximates the prescribed phase autocorrelation function. In this paper, we present a technique which uses convex quadratic programming to find the best admissible field autocorrelation function under a prescribed discretization. Unlike previous methods, this technique efficiently optimizes over all admissible field autocorrelation functions, instead of optimizing only over a predetermined parametrized family. The results from using this technique indicate that the GRF model is significantly more versatile than observed in previous studies. An application to modeling a base-catalyzed tetraethoxysilane aerogel system given small-angle neutron scattering data is also presented.

  11. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  12. Stability and dynamical properties of material flow systems on random networks

    NASA Astrophysics Data System (ADS)

    Anand, K.; Galla, T.

    2009-04-01

    The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.

  13. Aligning graphs and finding substructures by a cavity approach

    NASA Astrophysics Data System (ADS)

    Bradde, S.; Braunstein, A.; Mahmoudi, H.; Tria, F.; Weigt, M.; Zecchina, R.

    2010-02-01

    We introduce a new distributed algorithm for aligning graphs or finding substructures within a given graph. It is based on the cavity method and is used to study the maximum-clique and the graph-alignment problems in random graphs. The algorithm allows one to analyze large graphs and may find applications in fields such as computational biology. As a proof of concept, we use our algorithm to align the similarity graphs of two interacting protein families involved in bacterial signal transduction, and to predict actually interacting protein partners between these families.

  14. The melting phenomenon in random-walk model of DNA

    SciTech Connect

    Hayrapetyan, G. N.; Mamasakhlisov, E. Sh.; Papoyan, Vl. V.; Poghosyan, S. S.

    2012-10-15

    The melting phenomenon in a double-stranded homopolypeptide is considered. The relative distance between the corresponding monomers of the two polymer chains is modeled by a two-dimensional random walk on the square lattice. Returns of the random walk to the origin describe the formation of hydrogen bonds between complementary units. Taking into account the two competing interactions of monomers inside the chains, we obtain a completely denatured state at a finite temperature T_c.

  15. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data; thus, a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information (gradient, intensity distributions, and regional-property terms) is used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  16. Graph anomalies in cyber communications

    SciTech Connect

    Vander Wiel, Scott A; Storlie, Curtis B; Sandine, Gary; Hagberg, Aric A; Fisk, Michael

    2011-01-11

    Enterprises monitor cyber traffic for viruses, intruders and stolen information. Detection methods look for known signatures of malicious traffic or search for anomalies with respect to a nominal reference model. Traditional anomaly detection focuses on aggregate traffic at central nodes or on user-level monitoring. More recently, however, traffic is being viewed more holistically as a dynamic communication graph. Attention to the graph nature of the traffic has expanded the types of anomalies that are being sought. We give an overview of several cyber data streams collected at Los Alamos National Laboratory and discuss current work in modeling the graph dynamics of traffic over the network. We consider global properties and local properties within the communication graph. A method for monitoring relative entropy on multiple correlated properties is discussed in detail.

  17. Interacting particle systems on graphs

    NASA Astrophysics Data System (ADS)

    Sood, Vishal

    In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems that are drawn from physics, mathematics and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, are insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, the emergence of fads, social cooperation, or the invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate. This selective advantage leads to a dynamical competition in which selection dominates for large populations

  18. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.

    1994-01-01

    A methodology for simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and non-reacting shear layer present in the facility given basic assumptions about turbulence properties.

  19. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.

  20. Multi-channel MRI segmentation with graph cuts using spectral gradient and multidimensional Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Lecoeur, Jérémy; Ferré, Jean-Christophe; Collins, D. Louis; Morrisey, Sean P.; Barillot, Christian

    2009-02-01

    A new segmentation framework is presented, taking advantage of the multimodal image signature of the different brain tissues (healthy and/or pathological). This is achieved by merging three different modalities of gray-level MRI sequences into a single RGB-like MRI, hence creating a unique 3-dimensional signature for each tissue by utilising the complementary information of each MRI sequence. Using the scale-space spectral gradient operator, we can obtain a spatial gradient robust to intensity inhomogeneity. Even though it is based on psycho-visual color theory, it can be very efficiently applied to RGB colored images. Moreover, it is not influenced by the channel assignment of each MRI sequence. Its optimisation by the graph cuts paradigm provides a powerful and accurate tool to segment either healthy or pathological tissues in a short time (average time about ninety seconds for a brain-tissue classification). As it is a semi-automatic method, we run experiments to quantify the amount of seeds needed to perform a correct segmentation (Dice similarity score above 0.85). Depending on the different sets of MRI sequences used, this amount of seeds (expressed as a relative number, in percentage of the number of voxels of the ground truth) is between 6% and 16%. We tested this algorithm on BrainWeb for validation purposes (healthy tissue classification and MS lesion segmentation) and also on clinical data for tumour and MS lesion detection and tissue classification.

  1. The Knowledge or Random Guessing Model for Matching Tests.

    ERIC Educational Resources Information Center

    van der Ven, A. H. G. S.; Gremmen, F. M.

    1992-01-01

    A statistical test of the knowledge or random guessing model is presented. A version of the model is introduced in which it is assumed that alternatives can be ordered according to a Guttman scale. Three examples illustrate its application to data from a total of 590 college students. (Author/SLD)

  2. Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search

    PubMed Central

    2014-01-01

    Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher order understanding. Multidimensional and shape-based observational data of regenerative biology presents a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph-edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion The work presented here, including our algorithm for converting cell
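
    A minimal sketch of the fitness idea described above, using networkx's graph_edit_distance as a stand-in for the paper's graph-edit-distance calculation; the toy "morphology" graphs are purely illustrative and are not PlanformDB data.

      import networkx as nx

      def morphology_fitness(simulated, target):
          """Smaller graph edit distance means a closer match; fitness is taken
          here as the negative distance (an illustrative convention)."""
          return -nx.graph_edit_distance(simulated, target)

      # toy region-adjacency graphs standing in for worm morphologies
      target = nx.path_graph(["head", "trunk", "tail"])
      candidate = nx.path_graph(["head", "trunk", "trunk2", "tail"])
      print(morphology_fitness(candidate, target))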

  3. QSAR as a random event: modeling of nanoparticles uptake in PaCa2 cancer cells.

    PubMed

    Toropov, Andrey A; Toropova, Alla P; Puzyn, Tomasz; Benfenati, Emilio; Gini, Giuseppina; Leszczynska, Danuta; Leszczynski, Jerzy

    2013-06-01

    Quantitative structure-property/activity relationships (QSPRs/QSARs) are a tool to predict various endpoints for various substances. The "classic" QSPR/QSAR analysis is based on the representation of the molecular structure by the molecular graph. However, the simplified molecular input-line entry system (SMILES) is gradually becoming the most popular representation of molecular structure in the databases available on the Internet. Under such circumstances, the development of molecular descriptors calculated directly from SMILES becomes an attractive alternative to "classic" descriptors. The CORAL software (http://www.insilico.eu/coral) provides SMILES-based optimal molecular descriptors which are aimed at correlating with various endpoints. We analyzed a data set on nanoparticle uptake in PaCa2 pancreatic cancer cells. The data set includes 109 nanoparticles with the same core but different surface modifiers (small organic molecules). The concept of a QSAR as a random event is suggested in opposition to "classic" QSARs, which are based on only one distribution of the available data into the training and validation sets. In other words, five random splits into a "visible" training set and an "invisible" validation set were examined. The SMILES-based optimal descriptors (obtained by the Monte Carlo technique) for these splits are calculated with the CORAL software. The statistical quality of all these models is good.

  4. Magnetization-driven random-field Ising model at T=0

    NASA Astrophysics Data System (ADS)

    Illa, Xavier; Rosinberg, Martin-Luc; Shukla, Prabodh; Vives, Eduard

    2006-12-01

    We study the hysteretic evolution of the random field Ising model at T=0 when the magnetization M is controlled externally and the magnetic field H becomes the output variable. The dynamics is a simple modification of the single-spin-flip dynamics used in the H -driven situation and consists in flipping successively the spins with the largest local field. This allows one to perform a detailed comparison between the microscopic trajectories followed by the system with the two protocols. Simulations are performed on random graphs with connectivity z=4 (Bethe lattice) and on the three-dimensional cubic lattice. The same internal energy U(M) is found with the two protocols when there is no macroscopic avalanche and it does not depend on whether the microscopic states are stable or not. On the Bethe lattice, the energy inside the macroscopic avalanche also coincides with the one that is computed analytically with the H -driven algorithm along the unstable branch of the hysteresis loop. The output field, defined here as ΔU/ΔM , exhibits very large fluctuations with the magnetization and is not self-averaging. The relation to the experimental situation is discussed.

  5. Exactly solvable interacting two-particle quantum graphs

    NASA Astrophysics Data System (ADS)

    Bolte, Jens; Garforth, George

    2017-03-01

    We construct models of exactly solvable two-particle quantum graphs with certain non-local two-particle interactions, establishing appropriate boundary conditions via suitable self-adjoint realisations of the two-particle Laplacian. Showing compatibility with the Bethe ansatz method, we calculate quantisation conditions in the form of secular equations from which the spectra can be deduced. We compare spectral statistics of some examples to well known results in random matrix theory, analysing the chaotic properties of their classical counterparts.

  6. [Some exact results for random walk models with applications].

    PubMed

    Schwarz, W

    1989-01-01

    This article presents a random walk model that can be analyzed without recourse to Wald's (1947) approximation, which neglects the excess over the absorbing barriers. Hence, the model yields exact predictions for the absorption probabilities and all mean conditional absorption times. We derive these predictions in some detail and fit them to the extensive data of an identification experiment published by Green et al. (1983). The fit of the model seems satisfactory. The relationship of the model to existing classes of random walk models (SPRT and SSR; see Luce, 1986) is discussed; for certain combinations of its parameters, the model belongs either to the SPRT or to the SSR class, or to both. We stress the theoretical significance of exact results for the evaluation of Wald's approximation and of the general properties of the several proposed models derived from this approximation.
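
    A minimal Monte Carlo sketch of the kind of random walk analyzed above: a biased walk between two absorbing barriers, yielding empirical absorption probabilities and mean conditional absorption times that can be checked against exact formulas. The step probability and barrier positions are illustrative assumptions.

      import random

      def absorb(p_up=0.55, start=0, lower=-5, upper=5, n_walks=100_000, seed=0):
          """Estimate the probability of absorption at the upper barrier and the
          mean conditional absorption times at both barriers."""
          rng = random.Random(seed)
          hits_upper, t_upper, t_lower = 0, 0, 0
          for _ in range(n_walks):
              pos, t = start, 0
              while lower < pos < upper:
                  pos += 1 if rng.random() < p_up else -1
                  t += 1
              if pos == upper:
                  hits_upper += 1
                  t_upper += t
              else:
                  t_lower += t
          return (hits_upper / n_walks,
                  t_upper / hits_upper,
                  t_lower / (n_walks - hits_upper))

      print(absorb())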

  7. On a programming language for graph algorithms

    NASA Technical Reports Server (NTRS)

    Rheinboldt, W. C.; Basili, V. R.; Mesztenyi, C. K.

    1971-01-01

    An algorithmic language, GRAAL, is presented for describing and implementing graph algorithms of the type primarily arising in applications. The language is based on a set algebraic model of graph theory which defines the graph structure in terms of morphisms between certain set algebraic structures over the node set and arc set. GRAAL is modular in the sense that the user specifies which of these mappings are available with any graph. This allows flexibility in the selection of the storage representation for different graph structures. In line with its set theoretic foundation, the language introduces sets as a basic data type and provides for the efficient execution of all set and graph operators. At present, GRAAL is defined as an extension of ALGOL 60 (revised) and its formal description is given as a supplement to the syntactic and semantic definition of ALGOL. Several typical graph algorithms are written in GRAAL to illustrate various features of the language and to show its applicability.

  8. Modelling the random effects covariance matrix in longitudinal data.

    PubMed

    Daniels, Michael J; Zhao, Yan D

    2003-05-30

    A common class of models for longitudinal data are random effects (mixed) models. In these models, the random effects covariance matrix is typically assumed constant across subject. However, in many situations this matrix may differ by measured covariates. In this paper, we propose an approach to model the random effects covariance matrix by using a special Cholesky decomposition of the matrix. In particular, we will allow the parameters that result from this decomposition to depend on subject-specific covariates and also explore ways to parsimoniously model these parameters. An advantage of this parameterization is that there is no concern about the positive definiteness of the resulting estimator of the covariance matrix. In addition, the parameters resulting from this decomposition have a sensible interpretation. We propose fully Bayesian modelling for which a simple Gibbs sampler can be implemented to sample from the posterior distribution of the parameters. We illustrate these models on data from depression studies and examine the impact of heterogeneity in the covariance matrix on estimation of both fixed and random effects.
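
    A minimal numpy sketch of the parameterization idea described above: build a subject-specific Cholesky factor whose entries depend on a covariate, so that the implied random effects covariance matrix is positive definite by construction. The linear forms and parameter values are illustrative, not the paper's model.

      import numpy as np

      def random_effects_cov(x, gamma_diag, gamma_off, q=3):
          """Sigma(x) = L L^T with a covariate-dependent Cholesky factor L;
          the diagonal of L is exp(linear predictor), so Sigma(x) is positive
          definite whatever the parameter values."""
          L = np.zeros((q, q))
          for i in range(q):
              L[i, i] = np.exp(gamma_diag[i, 0] + gamma_diag[i, 1] * x)
              for j in range(i):
                  L[i, j] = gamma_off[i, j, 0] + gamma_off[i, j, 1] * x
          return L @ L.T

      q = 3
      gamma_diag = np.array([[0.0, 0.2], [-0.5, 0.1], [-1.0, 0.05]])
      gamma_off = np.zeros((q, q, 2))
      gamma_off[1, 0] = [0.3, -0.1]
      gamma_off[2, 1] = [0.2, 0.0]
      Sigma = random_effects_cov(1.5, gamma_diag, gamma_off)
      print(np.all(np.linalg.eigvalsh(Sigma) > 0))   # True by construction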

  9. Using a CBL Unit, a Temperature Sensor, and a Graphing Calculator to Model the Kinetics of Consecutive First-Order Reactions as Safe In-Class Demonstrations

    ERIC Educational Resources Information Center

    Moore-Russo, Deborah A.; Cortes-Figueroa, Jose E.; Schuman, Michael J.

    2006-01-01

    The use of Calculator-Based Laboratory (CBL) technology, the graphing calculator, and the cooling and heating of water to model the behavior of consecutive first-order reactions is presented, where B is the reactant, I is the intermediate, and P is the product for an in-class demonstration. The activity demonstrates the spontaneous and consecutive…

  10. Real line strength distributions for random band models

    NASA Technical Reports Server (NTRS)

    Kim, S. J.; Caldwell, J.

    1983-01-01

    An improved random band-model method, which makes allowance for the real line-strength distribution, is proposed. The model is shown to be useful for low-resolution, infrared observational data of the outer solar system. The method can be used as easily as conventional random band calculations. In the illustrative examples cited here, the variation of line width with J, the rotational quantum number, is small. Other effects which can, in principle, cause the model to deviate from laboratory observations are discussed. These include the assumption that line positions are random, ignoring the effects of the Lorentz wings of lines immediately outside the specific interval for which the mean transmission is calculated, and ignoring the effects of instrumental slit functions.

  11. Graphing Polar Curves

    ERIC Educational Resources Information Center

    Lawes, Jonathan F.

    2013-01-01

    Graphing polar curves typically involves a combination of three traditional techniques, all of which can be time-consuming and tedious. However, an alternative method--graphing the polar function on a rectangular plane--simplifies graphing, increases student understanding of the polar coordinate system, and reinforces graphing techniques learned…

  12. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

    Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs, algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks, which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
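
    A minimal networkx sketch of the comparison discussed above: algebraic connectivity (the Fiedler value, i.e. the second smallest Laplacian eigenvalue) against node connectivity on a small mesh graph and a ring. The graph sizes are illustrative.

      import networkx as nx

      def connectivity_report(G, name):
          fiedler = nx.algebraic_connectivity(G)   # 2nd smallest Laplacian eigenvalue
          kappa = nx.node_connectivity(G)          # min nodes whose removal disconnects G
          print(f"{name}: algebraic connectivity = {fiedler:.4f}, node connectivity = {kappa}")

      connectivity_report(nx.grid_2d_graph(6, 6), "6x6 mesh")
      connectivity_report(nx.cycle_graph(36), "36-node ring")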

  13. Using Random Forest Models to Predict Organizational Violence

    NASA Technical Reports Server (NTRS)

    Levine, Burton; Bobashev, Georgly

    2012-01-01

    We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data are longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally a summary of the forest in a "meta-tree."

  14. Random-anisotropy Blume-Emery-Griffiths model

    NASA Technical Reports Server (NTRS)

    Maritan, Amos; Cieplak, Marek; Swift, Michael R.; Toigo, Flavio; Banavar, Jayanth R.

    1992-01-01

    The results are described of studies of a random-anisotropy Blume-Emery-Griffiths spin-1 Ising model using mean-field theory, transfer-matrix calculations, and position-space renormalization-group calculations. The interplay between the quenched randomness of the anisotropy and the annealed disorder introduced by the spin-1 model leads to a rich phase diagram with a variety of phase transitions and reentrant behavior. The results may be relevant to the study of the phase separation of He-3 - He-4 mixtures in porous media in the vicinity of the superfluid transition.

  15. Geometric critical exponent inequalities for general random cluster models

    NASA Astrophysics Data System (ADS)

    Tasaki, Hal

    1987-11-01

    A set of new critical exponent inequalities, d(1 - 1/δ) ≥ 2 - η, dν(1 - 1/δ) ≥ γ, and dμ > 1, is proved for a general class of random cluster models, which includes (independent or dependent) percolations, lattice animals (with any interactions), and various stochastic cluster growth models. The inequalities imply that the critical phenomena in the models are inevitably not mean-field-like in the dimensions one, two, and three.

  16. Discrete Modeling of the Worm Spread with Random Scanning

    NASA Astrophysics Data System (ADS)

    Uchida, Masato

    In this paper, we derive a set of discrete-time difference equations that models the spreading process of computer worms such as Code-Red and Slammer, which use a common strategy called “random scanning” to spread through the Internet. We show that the derived set of discrete-time difference equations has an exact relationship with the Kermack and McKendrick susceptible-infectious-removed (SIR) model, which is known as a standard continuous-time model for worm spreading.
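
    A minimal sketch of the kind of discrete-time difference equations referred to above, for a random-scanning worm with the removal term omitted: each infected host scans scan_rate random addresses per step in an address space of size omega, so the expected number of new infections per step is roughly I*scan_rate*S/omega. The constants are illustrative, not those of Code-Red or Slammer.

      def worm_spread(N=360_000, omega=2**32, scan_rate=4000, steps=300):
          """Discrete-time susceptible-infectious difference equations for a
          random-scanning worm; S and I count susceptible and infected hosts."""
          S, I = N - 1, 1
          history = [(0, I)]
          for t in range(1, steps + 1):
              new_infections = min(S, I * scan_rate * S / omega)
              S -= new_infections
              I += new_infections
              history.append((t, I))
          return history

      for t, infected in worm_spread()[::50]:
          print(f"t={t:4d}  infected={infected:,.0f}")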

  17. Bead-rod-spring models in random flows

    NASA Astrophysics Data System (ADS)

    Plan, Emmanuel Lance Christopher Medillo, VI; Ali, Aamir; Vincenzi, Dario

    2016-08-01

    Bead-rod-spring models are the foundation of the kinetic theory of polymer solutions. We derive the diffusion equation for the probability density function of the configuration of a general bead-rod-spring model in short-correlated Gaussian random flows. Under isotropic conditions, we solve this equation analytically for the elastic rhombus model introduced by Curtiss, Bird, and Hassager [Adv. Chem. Phys. 35, 31 (1976)].

  18. Study of Double-Weighted Graph Model and Optimal Path Planning for Tourist Scenic Area Oriented Intelligent Tour Guide

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Long, Y.; Wi, X. L.

    2014-04-01

    When tourists visit multiple scenic spots, the travel line actually followed is usually the most effective route through the road network, and it may differ from the planned travel line. In the field of navigation, a proposed travel line is normally generated automatically by a path planning algorithm that considers the scenic spots' positions and the road network. But when a scenic spot covers a certain area and has multiple entrances or exits, the traditional description by a single point coordinate cannot reflect these structural features. To solve this problem, this paper focuses on the influence that such structural features, for example multiple entrances or exits, have on the path planning process, and then proposes a double-weighted graph model in which the weights of both vertices and edges can be selected dynamically. It then discusses the model building method and the optimal path planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal planned travel line derived from the proposed model and algorithm is more reasonable, and that the travelling order and distance are further optimized.
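
    A minimal sketch of shortest-path search on a graph where both vertices (scenic spots, gates) and edges (road segments) carry weights, in the spirit of the double-weighted model described above. It is plain Dijkstra with the node weight folded into the relaxation step, not the authors' exact algorithm, and the toy scenic-area graph is made up.

      import heapq

      def dijkstra_double_weighted(node_w, edges, source, target):
          """Shortest path where the cost of a route is the sum of its edge
          weights plus the weights of the vertices it passes through."""
          adj = {}
          for u, v, w in edges:
              adj.setdefault(u, []).append((v, w))
              adj.setdefault(v, []).append((u, w))
          dist = {source: node_w[source]}
          queue = [(dist[source], source)]
          prev = {}
          while queue:
              d, u = heapq.heappop(queue)
              if u == target:
                  break
              if d > dist.get(u, float("inf")):
                  continue                      # stale queue entry
              for v, w in adj.get(u, []):
                  nd = d + w + node_w[v]
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(queue, (nd, v))
          path, node = [], target
          while node != source:
              path.append(node)
              node = prev[node]
          return dist[target], [source] + path[::-1]

      node_w = {"gate_A": 0, "gate_B": 0, "lake": 15, "temple": 20, "garden": 10}
      edges = [("gate_A", "lake", 5), ("lake", "temple", 7),
               ("gate_A", "garden", 9), ("garden", "temple", 4),
               ("temple", "gate_B", 6)]
      print(dijkstra_double_weighted(node_w, edges, "gate_A", "gate_B"))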

  19. GPD: a graph pattern diffusion kernel for accurate graph classification with applications in cheminformatics.

    PubMed

    Smalter, Aaron; Huan, Jun Luke; Jia, Yi; Lushington, Gerald

    2010-01-01

    Graph data mining is an active research area. Graphs are general modeling tools to organize information from heterogeneous sources and have been applied in many scientific, engineering, and business fields. With the fast accumulation of graph data, building highly accurate predictive models for graph data emerges as a new challenge that has not been fully explored in the data mining community. In this paper, we demonstrate a novel technique called the graph pattern diffusion (GPD) kernel. Our idea is to leverage existing frequent pattern discovery methods and to explore the application of kernel classifiers (e.g., support vector machines) in building highly accurate graph classification. In our method, we first identify all frequent patterns from a graph database. We then map subgraphs to graphs in the graph database and use a process we call "pattern diffusion" to label nodes in the graphs. Finally, we design a graph alignment algorithm to compute the inner product of two graphs. We have tested our algorithm using a number of chemical structure data sets. The experimental results demonstrate that our method is significantly better than competing methods such as kernel functions based on paths, cycles, and subgraphs.

  20. Quantum walk coherences on a dynamical percolation graph.

    PubMed

    Elster, Fabian; Barkhofen, Sonja; Nitsche, Thomas; Novotný, Jaroslav; Gábris, Aurél; Jex, Igor; Silberhorn, Christine

    2015-08-27

    Coherent evolution governs the behaviour of all quantum systems, but in nature it is often subjected to the influence of a classical environment. For analysing quantum transport phenomena, quantum walks emerge as suitable model systems. In particular, quantum walks on percolation structures constitute an attractive platform for studying open system dynamics of random media. Here, we present an implementation of quantum walks differing from the previous experiments by achieving dynamical control of the underlying graph structure. We demonstrate the evolution of an optical time-multiplexed quantum walk over six double steps, revealing the intricate interplay between the internal and external degrees of freedom. The observation of clear non-Markovian signatures in the coin space testifies to the high coherence of the implementation and the extraordinary degree of control of all system parameters. Our work is a proof-of-principle experiment of a quantum walk on a dynamical percolation graph, paving the way towards complex simulation of quantum transport in random media.

  1. Dynamical properties of random-field Ising model.

    PubMed

    Sinha, Suman; Mandal, Pradipta Kumar

    2013-02-01

    Extensive Monte Carlo simulations are performed on a two-dimensional random field Ising model. The purpose of the present work is to study the disorder-induced changes in the properties of disordered spin systems. The time evolution of the domain growth, the order parameter, and the spin-spin correlation functions are studied in the nonequilibrium regime. The dynamical evolution of the order parameter and the domain growth shows a power law scaling with disorder-dependent exponents. It is observed that for weak random fields, the two-dimensional random field Ising model possesses long-range order. Except for weak disorder, exchange interaction never wins over pinning interaction to establish long-range order in the system.

  2. Initial Status in Growth Curve Modeling for Randomized Trials

    PubMed Central

    Chou, Chih-Ping; Chi, Felicia; Weisner, Constance; Pentz, MaryAnn; Hser, Yih-Ing

    2010-01-01

    The growth curve modeling (GCM) technique has been widely adopted in longitudinal studies to investigate progression over time. The simplest growth profile involves two growth factors, initial status (intercept) and growth trajectory (slope). Conventionally, all repeated measures of outcome are included as components of the growth profile, and the first measure is used to reflect the initial status. Selection of the initial status, however, can greatly influence study findings, especially for randomized trials. In this article, we propose an alternative GCM approach involving only post-intervention measures in the growth profile and treating the first wave after intervention as the initial status. We discuss and empirically illustrate how choices of initial status may influence study conclusions in addressing research questions in randomized trials using two longitudinal studies. Data from two randomized trials are used to illustrate that the alternative GCM approach proposed in this article offers better model fitting and more meaningful results. PMID:21572585

  3. Tests of random density models of terrestrial planets

    SciTech Connect

    Kaula, W.M. ); Asimow, P.D. )

    1991-05-01

    Random density models are analyzed to determine the low degree harmonics of the gravity field of a planet, and therefrom two properties: an axiality P_l, the percent of the degree variance in the zonal term referred to an axis through the maximum for degree l; and an angularity E_ln, the angle between the maxima for two degrees l, n. The random density distributions give solutions reasonably consistent with the axialities and angularities for the low degrees, l < 5, of Earth, Venus, and Moon, but not for Mars, which has improbably large axialities and small angularities. Hence the random density model is an unreliable predictor for the non-hydrostatic second-degree gravity of Mars, and thus for the moment-of-inertia, which is more plausibly close to 0.365MR^2.

  4. Constructing and sampling graphs with a given joint degree distribution.

    SciTech Connect

    Pinar, Ali; Stanton, Isabelle

    2010-09-01

    One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. An important tool for such studies are algorithms that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via end point switches. We empirically evaluate the mixing time of this Markov Chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov Chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
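
    A minimal sketch of the endpoint-switch move mentioned above: pick two edges whose chosen endpoints have equal degree and swap them, which leaves both the degree sequence and the joint degree distribution unchanged; repeating the move yields a Markov chain sampler. The test graph, the number of moves and the absence of mixing-time diagnostics are simplifications.

      import random
      import networkx as nx

      def endpoint_switch_step(G, rng):
          """Attempt one switch: take edges (u, v) and (x, y) with
          deg(v) == deg(y) and rewire to (u, y), (x, v), provided the
          graph stays simple."""
          (u, v), (x, y) = rng.sample(list(G.edges()), 2)
          if G.degree(v) != G.degree(y):
              return False
          if len({u, v, x, y}) < 4 or G.has_edge(u, y) or G.has_edge(x, v):
              return False
          G.remove_edges_from([(u, v), (x, y)])
          G.add_edges_from([(u, y), (x, v)])
          return True

      rng = random.Random(0)
      G = nx.barabasi_albert_graph(100, 3, seed=0)
      before = sorted(d for _, d in G.degree())
      accepted = sum(endpoint_switch_step(G, rng) for _ in range(5000))
      after = sorted(d for _, d in G.degree())
      print("accepted moves:", accepted, "| degree sequence unchanged:", before == after)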

  5. Statistical mechanics of the spherical hierarchical model with random fields

    NASA Astrophysics Data System (ADS)

    Metz, Fernando L.; Rocchi, Jacopo; Urbani, Pierfrancesco

    2014-09-01

    We study analytically the equilibrium properties of the spherical hierarchical model in the presence of random fields. The expression for the critical line separating a paramagnetic from a ferromagnetic phase is derived. The critical exponents characterising this phase transition are computed analytically and compared with those of the corresponding D-dimensional short-range model, leading to conclude that the usual mapping between one dimensional long-range models and D-dimensional short-range models holds exactly for this system, in contrast to models with Ising spins. Moreover, the critical exponents of the pure model and those of the random field model satisfy a relationship that mimics the dimensional reduction rule. The absence of a spin-glass phase is strongly supported by the local stability analysis of the replica symmetric saddle-point as well as by an independent computation of the free-energy using a renormalization-like approach. This latter result enlarges the class of random field models for which the spin-glass phase has been recently ruled out.

  6. Random-effects models for meta-analytic structural equation modeling: review, issues, and illustrations.

    PubMed

    Cheung, Mike W-L; Cheung, Shu Fai

    2016-06-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM. Random-effects models are well known in conventional meta-analysis but are less studied in MASEM. The primary objective of this paper was to address issues related to random-effects models in MASEM. Specifically, we compared two different random-effects models in MASEM-correlation-based MASEM and parameter-based MASEM-and explored their strengths and limitations. Two examples were used to illustrate the similarities and differences between these models. We offered some practical guidelines for choosing between these two models. Future directions for research on random-effects models in MASEM were also discussed. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Network meta-analysis, electrical networks and graph theory.

    PubMed

    Rücker, Gerta

    2012-12-01

    Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
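
    A minimal numpy sketch of the graph-theoretic estimate described above: with an edge-by-vertex incidence matrix B and inverse-variance weights W, consistent treatment effects come out of the Moore-Penrose pseudoinverse of the Laplacian L = B^T W B, and the edge variances are the analogue of effective resistances. The three-treatment data are made up for illustration.

      import numpy as np

      # Treatments: 0 = placebo, 1 = drug A, 2 = drug B (illustrative data).
      edges = [(0, 1), (0, 2), (1, 2)]           # randomized comparisons
      y = np.array([0.50, 0.80, 0.25])           # observed mean differences
      v = np.array([0.04, 0.06, 0.05])           # their variances

      B = np.zeros((len(edges), 3))              # edge-by-vertex incidence matrix
      for e, (i, j) in enumerate(edges):
          B[e, i], B[e, j] = 1.0, -1.0
      W = np.diag(1.0 / v)                       # weights = inverse variances

      L = B.T @ W @ B                            # graph Laplacian
      L_plus = np.linalg.pinv(L)                 # Moore-Penrose pseudoinverse
      theta = L_plus @ B.T @ W @ y               # treatment effects (centred)
      fitted = B @ theta                         # consistent edge effects
      var_edges = np.diag(B @ L_plus @ B.T)      # analogue of effective resistances

      print(np.round(theta, 3), np.round(fitted, 3), np.round(var_edges, 3))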

  8. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  9. Asthma Self-Management Model: Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo

    2016-01-01

    Information for patients provided by the pharmacist is reflected in adhesion to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…

  10. Non-parametric Bayesian graph models reveal community structure in resting state fMRI.

    PubMed

    Andersen, Kasper Winther; Madsen, Kristoffer H; Siebner, Hartwig Roman; Schmidt, Mikkel N; Mørup, Morten; Hansen, Lars Kai

    2014-10-15

    Modeling of resting state functional magnetic resonance imaging (rs-fMRI) data using network models is of increasing interest. It is often desirable to group nodes into clusters to interpret the communication patterns between nodes. In this study we consider three different nonparametric Bayesian models for node clustering in complex networks. In particular, we test their ability to predict unseen data and their ability to reproduce clustering across datasets. The three generative models considered are the Infinite Relational Model (IRM), Bayesian Community Detection (BCD), and the Infinite Diagonal Model (IDM). The models define probabilities of generating links within and between clusters and the difference between the models lies in the restrictions they impose upon the between-cluster link probabilities. IRM is the most flexible model with no restrictions on the probabilities of links between clusters. BCD restricts the between-cluster link probabilities to be strictly lower than within-cluster link probabilities to conform to the community structure typically seen in social networks. IDM only models a single between-cluster link probability, which can be interpreted as a background noise probability. These probabilistic models are compared against three other approaches for node clustering, namely Infomap, Louvain modularity, and hierarchical clustering. Using 3 different datasets comprising healthy volunteers' rs-fMRI we found that the BCD model was in general the most predictive and reproducible model. This suggests that rs-fMRI data exhibits community structure and furthermore points to the significance of modeling heterogeneous between-cluster link probabilities.
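
    The three generative models above differ only in how they constrain the between-cluster link probabilities. The toy sampler below makes that difference concrete; the cluster assignments and all probability values are arbitrary choices for illustration, not estimates from fMRI data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_adjacency(z, eta):
    """Sample an undirected adjacency matrix given cluster labels z and a
    matrix eta of cluster-pair link probabilities."""
    n = len(z)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = rng.random() < eta[z[i], z[j]]
    return A

K, n = 3, 30
z = rng.integers(0, K, size=n)            # node-to-cluster assignments

# IRM: no restriction on between-cluster link probabilities.
eta_irm = rng.random((K, K))
eta_irm = np.triu(eta_irm) + np.triu(eta_irm, 1).T

# BCD: between-cluster probabilities strictly below within-cluster ones.
within = rng.uniform(0.6, 0.9, size=K)
eta_bcd = rng.uniform(0.0, within.min() * 0.5, size=(K, K))
eta_bcd = np.triu(eta_bcd) + np.triu(eta_bcd, 1).T
np.fill_diagonal(eta_bcd, within)

# IDM: a single background probability for all between-cluster links.
background = 0.05
eta_idm = np.full((K, K), background)
np.fill_diagonal(eta_idm, within)

for name, eta in [("IRM", eta_irm), ("BCD", eta_bcd), ("IDM", eta_idm)]:
    A = sample_adjacency(z, eta)
    print(name, "mean degree:", A.sum(axis=1).mean())
```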

  11. Random matrices as models for the statistics of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit.
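
    A minimal numerical sketch of the setup described above: sample a GUE matrix, let it generate the unitary evolution U(t) = exp(-iHt), and compute the trace autocorrelation of a Hermitian observable. The observable and the matrix size are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

# Sample a matrix from the Gaussian unitary ensemble (complex Hermitian).
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (X + X.conj().T) / 2

evals, V = np.linalg.eigh(H)

# A hypothetical dynamical variable: an arbitrary random Hermitian observable.
Y = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (Y + Y.conj().T) / 2

def autocorrelation(t):
    """Normalized trace autocorrelation Tr[A(t) A] / Tr[A A]."""
    U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T   # U(t) = exp(-i H t)
    A_t = U.conj().T @ A @ U
    return (np.trace(A_t @ A) / np.trace(A @ A)).real

for t in [0.0, 0.1, 0.5, 1.0, 5.0]:
    print(f"t = {t:4.1f}  C(t) = {autocorrelation(t):.4f}")
```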

  12. Comparing pedigree graphs.

    PubMed

    Kirkpatrick, Bonnie; Reshef, Yakir; Finucane, Hilary; Jiang, Haitao; Zhu, Binhai; Karp, Richard M

    2012-09-01

    Pedigree graphs, or family trees, are typically constructed by an expensive process of examining genealogical records to determine which pairs of individuals are parent and child. New methods to automate this process take as input genetic data from a set of extant individuals and reconstruct ancestral individuals. There is a great need to evaluate the quality of these methods by comparing the estimated pedigree to the true pedigree. In this article, we consider two main pedigree comparison problems. The first is the pedigree isomorphism problem, for which we present a linear-time algorithm for leaf-labeled pedigrees. The second is the pedigree edit distance problem, for which we present (1) several algorithms that are fast and exact in various special cases, and (2) a general, randomized heuristic algorithm. In the negative direction, we first prove that the pedigree isomorphism problem is as hard as the general graph isomorphism problem, and that the sub-pedigree isomorphism problem is NP-hard. We then show that the pedigree edit distance problem is APX-hard in general and NP-hard on leaf-labeled pedigrees. We use simulated pedigrees to compare our edit-distance algorithms to each other as well as to a branch-and-bound algorithm that always finds an optimal solution.

  13. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  14. Natural/random protein classification models based on star network topological indices.

    PubMed

    Munteanu, Cristian Robert; González-Díaz, Humberto; Borges, Fernanda; de Magalhães, Alexandre Lopes

    2008-10-21

    The development of the complex network graphs permits us to describe any real system such as social, neural, computer or genetic networks by transforming real properties into topological indices (TIs). This work uses Randic's star networks in order to convert the protein primary structure data into specific topological indices that are used to construct a natural/random protein classification model. The set of natural proteins contains 1046 protein chains selected from the pre-compiled CulledPDB list from PISCES Dunbrack's Web Lab. This set is characterized by a protein homology of 20%, a structure resolution of 1.6 Å and R-factor lower than 25%. The set of random amino acid chains contains 1046 sequences which were generated by a Python script according to the same type of residues and average chain length found in the natural set. A new Sequence to Star Networks (S2SNet) wxPython GUI application (with a Graphviz graphics back-end) was designed by our group in order to transform any character sequence into the following star network topological indices: Shannon entropy of Markov matrices, trace of connectivity matrices, Harary number, Wiener index, Gutman index, Schultz index, Moreau-Broto indices, Balaban distance connectivity index, Kier-Hall connectivity indices and Randic connectivity index. The model was constructed with the General Discriminant Analysis methods from the STATISTICA package and gave training/predicting set accuracies of 90.77% for the forward stepwise model type. In conclusion, this study extends for the first time the classical TIs to protein star network TIs by proposing a model that can predict if a protein/fragment of protein is natural or random using only the amino acid sequence data. This classification can be used in studies of protein function, by replacing some fragments with random amino acid sequences, or to detect fake amino acid sequences or errors in proteins. These results promote the use of the S2SNet application not only for
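
    The following sketch illustrates the general idea of mapping a character sequence to a star network and computing a classical topological index on it, under one hedged reading of the construction (one branch per residue type, with optional sequence-order edges); the exact S2SNet layout and index definitions may differ. The example sequence is hypothetical.

```python
import networkx as nx
from collections import Counter

def star_network(sequence, embedded=True):
    """Star-like graph from a character sequence: one branch per distinct
    symbol radiates from a central node, successive occurrences of a symbol
    extend that branch, and (optionally) sequence-consecutive symbols are
    linked to keep order information.  A hedged reading of Randic-style
    star networks, not necessarily the exact S2SNet layout."""
    G = nx.Graph()
    G.add_node("center")
    counts, prev_node = Counter(), None
    for symbol in sequence:
        counts[symbol] += 1
        node = (symbol, counts[symbol])
        branch_prev = "center" if counts[symbol] == 1 else (symbol, counts[symbol] - 1)
        G.add_edge(branch_prev, node)
        if embedded and prev_node is not None:
            G.add_edge(prev_node, node)
        prev_node = node
    return G

def wiener_index(G):
    """Sum of shortest-path distances over all unordered node pairs."""
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    return sum(d for row in lengths.values() for d in row.values()) // 2

natural = "MKTAYIAKQRQISFVKSHFSRQ"            # hypothetical short fragment
scrambled = "".join(sorted(natural))          # same composition, order destroyed

for name, seq in [("natural  ", natural), ("scrambled", scrambled)]:
    G = star_network(seq)
    print(name, "nodes:", G.number_of_nodes(), " Wiener index:", wiener_index(G))
```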

  15. Cascading failures in bi-partite graphs: model for systemic risk propagation.

    PubMed

    Huang, Xuqing; Vodenska, Irena; Havlin, Shlomo; Stanley, H Eugene

    2013-01-01

    As economic entities become increasingly interconnected, a shock in a financial network can provoke significant cascading failures throughout the system. To study the systemic risk of financial systems, we create a bi-partite banking network model composed of banks and bank assets and propose a cascading failure model to describe the risk propagation process during crises. We empirically test the model with 2007 US commercial banks balance sheet data and compare the model prediction of the failed banks with the real failed banks after 2007. We find that our model efficiently identifies a significant portion of the actual failed banks reported by Federal Deposit Insurance Corporation. The results suggest that this model could be useful for systemic risk stress testing for financial systems. The model also identifies that commercial rather than residential real estate assets are major culprits for the failure of over 350 US commercial banks during 2008-2011.
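
    A toy version of the cascading-failure mechanism can be written in a few lines: banks hold random portfolios of assets, an initial shock devalues one asset, banks whose mark-to-market losses exceed their equity fail, and their liquidated holdings depress prices further. All balance-sheet numbers, the equity buffer, and the price-impact constant below are assumptions for illustration, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(3)

n_banks, n_assets = 10, 4
# Holdings: H[i, j] = amount of asset j held by bank i (hypothetical values).
H = rng.uniform(0, 10, size=(n_banks, n_assets))
equity = 0.06 * H.sum(axis=1)           # assumed equity buffer (6% of assets)
price = np.ones(n_assets)               # relative asset prices, start at 1

alpha = 0.3        # assumed price impact per fraction of an asset liquidated
price[0] = 0.75    # initial shock: asset 0 loses 25% of its value

alive = np.ones(n_banks, dtype=bool)
while True:
    # Mark-to-market loss relative to the original book value.
    loss = (H * (1.0 - price)).sum(axis=1)
    newly_failed = alive & (loss > equity)
    if not newly_failed.any():
        break
    # Failed banks liquidate: their holdings push prices down further.
    liquidated = H[newly_failed].sum(axis=0)
    price = np.maximum(price - alpha * liquidated / H.sum(axis=0), 0.0)
    alive &= ~newly_failed

print("surviving banks:", int(alive.sum()), "of", n_banks)
print("final asset prices:", price.round(3))
```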

  16. Cascading Failures in Bi-partite Graphs: Model for Systemic Risk Propagation

    PubMed Central

    Huang, Xuqing; Vodenska, Irena; Havlin, Shlomo; Stanley, H. Eugene

    2013-01-01

    As economic entities become increasingly interconnected, a shock in a financial network can provoke significant cascading failures throughout the system. To study the systemic risk of financial systems, we create a bi-partite banking network model composed of banks and bank assets and propose a cascading failure model to describe the risk propagation process during crises. We empirically test the model with 2007 US commercial banks balance sheet data and compare the model prediction of the failed banks with the real failed banks after 2007. We find that our model efficiently identifies a significant portion of the actual failed banks reported by Federal Deposit Insurance Corporation. The results suggest that this model could be useful for systemic risk stress testing for financial systems. The model also identifies that commercial rather than residential real estate assets are major culprits for the failure of over 350 US commercial banks during 2008–2011. PMID:23386974

  17. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise the model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced allowing the verification of data model views. Description logics specify constraints that the models have to check. An experimental evaluation of the approach is presented through an application developed in the ArgoUML IDE.

  18. Numerical and Analytic Studies of Random-Walk Models.

    NASA Astrophysics Data System (ADS)

    Li, Bin

    We begin by recapitulating the universality approach to problems associated with critical systems, and discussing the role that random-walk models play in the study of phase transitions and critical phenomena. As our first numerical simulation project, we perform high-precision Monte Carlo calculations for the exponents of the intersection probability of pairs and triplets of ordinary random walks in 2 dimensions, in order to test the conformal-invariance theory predictions. Our numerical results strongly support the theory. Our second numerical project aims to test the hyperscaling relation dν = 2Δ₄ − γ for self-avoiding walks in 2 and 3 dimensions. We apply the pivot method to generate pairs of self-avoiding walks, and then for each pair, using the Karp-Luby algorithm, perform an inner-loop Monte Carlo calculation of the number of different translates of one walk that makes at least one intersection with the other. Applying a least-squares fit to estimate the exponents, we have obtained strong numerical evidence that the hyperscaling relation is true in 3 dimensions. Our large amount of data for walks of unprecedented length (up to 80,000 steps) yields an updated value for the end-to-end distance and radius of gyration exponent ν = 0.588 ± 0.001 (95% confidence limit), which comes out in good agreement with the renormalization-group prediction. In an analytic study of random-walk models, we introduce multi-colored random-walk models and generalize the Symanzik and B.F.S. random-walk representations to the multi-colored case. We prove that the zero-component λφ²ψ² theory can be represented by a two-color mutually-repelling random-walk model, and it becomes the mutually-avoiding walk model in the limit λ → ∞. However, our main concern and major breakthrough lies in the study of the two-point correlation function for the λφ²ψ² theory with N > 0 components. By representing it as a two-color random-walk expansion

  19. Random shearing direction models for isotropic turbulent diffusion

    NASA Astrophysics Data System (ADS)

    Majda, Andrew J.

    1994-06-01

    Recently, a rigorous renormalization theory for various scalar statistics has been developed for special modes of random advection diffusion involving random shear layer velocity fields with long-range spatiotemporal correlations. New random shearing direction models for isotropic turbulent diffusion are introduced here. In these models the velocity field has the spatial second-order statistics of an arbitrary prescribed stationary incompressible isotropic random field including long-range spatial correlations with infrared divergence, but the temporal correlations have finite range. The explicit theory of renormalization for the mean and second-order statistics is developed here. With ɛ the spectral parameter, -∞<ɛ<4, measuring the strength of the infrared divergence of the spatial spectrum, the scalar mean statistics rigorously exhibit a phase transition from mean-field behavior for ɛ<2 to anomalous behavior for ɛ with 2<ɛ<4 as conjectured earlier by Avellaneda and the author. The universal inertial range renormalization for the second-order scalar statistics exhibits a phase transition from a covariance with a Gaussian functional form for ɛ with ɛ<2 to an explicit family with a non-Gaussian covariance for ɛ with 2<ɛ<4. These non-Gaussian distributions have tails that are broader than Gaussian as ɛ varies with 2<ɛ<4 and behave for large values like exp(−C_c|x|^(4−ɛ)), with C_c an explicit constant. Also, here the attractive general principle is formulated and proved that every steady, stationary, zero-mean, isotropic, incompressible Gaussian random velocity field is well approximated by a suitable superposition of random shear layers.

  20. Computing Information Value from RDF Graph Properties

    SciTech Connect

    al-Saffar, Sinan; Heileman, Gregory

    2010-11-08

    Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters, the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second parameter is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed, a target graph representing the semantics of the target body of information and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is in the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information-consumer.
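
    Treating RDF graphs simply as sets of (subject, predicate, object) triples, the valuation can be sketched as follows: impact is the edit distance between the context graph before and after learning the target (here just the count of newly added triples), trust is the overlap between target and context, and the combination rule is an assumed placeholder rather than the paper's exact formula. All triples are hypothetical.

```python
# Toy RDF graphs as sets of (subject, predicate, object) triples.
context = {
    ("Mars", "type", "Planet"),
    ("Mars", "orbits", "Sun"),
    ("Earth", "type", "Planet"),
}
target = {
    ("Mars", "orbits", "Sun"),            # already known -> builds trust
    ("Mars", "hasMoon", "Phobos"),        # new -> potential impact
    ("Mars", "hasMoon", "Deimos"),        # new -> potential impact
}

# Impact: graph edit distance between the context before and after learning
# the target; for pure triple additions this is the number of new triples.
impact = len(target - context)

# Trust: overlap between target and context, normalized by the target size.
trust = len(target & context) / len(target)

# One assumed way to combine the two into a single value score.
information_value = trust * impact
print(f"impact={impact}, trust={trust:.2f}, value={information_value:.2f}")
```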

  1. A generalized model via random walks for information filtering

    NASA Astrophysics Data System (ADS)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of random walks on bipartite networks. By taking degree information into account, the generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even their many extensions. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision.
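
    A minimal sketch of random-walk (mass-diffusion) scoring on a toy user-object bipartite network is given below; the exponent theta on the object degrees is a simple stand-in for the hybrid degree information discussed above, not the paper's exact parameterization, and the rating matrix is made up.

```python
import numpy as np

# Toy user-object bipartite network: R[u, o] = 1 if user u collected object o.
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1],
], dtype=float)

k_user = R.sum(axis=1)      # user degrees
k_obj = R.sum(axis=0)       # object degrees

def recommend(u, theta=1.0):
    """Two-step random-walk (object -> user -> object) resource spreading.
    theta = 1 recovers plain mass diffusion; smaller theta shifts weight
    toward low-degree objects, a stand-in for degree-hybrid variants."""
    resource = R[u].copy()                                       # initial resource
    to_users = R @ (resource / np.maximum(k_obj, 1) ** theta)    # objects -> users
    scores = R.T @ (to_users / np.maximum(k_user, 1))            # users -> objects
    scores[R[u] > 0] = -np.inf            # do not re-recommend known items
    return scores

scores = recommend(0)
print("recommendation scores for user 0:", np.round(scores, 3))
print("top recommendation: object", int(np.argmax(scores)))
```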

  2. Missing not at random models for latent growth curve analyses.

    PubMed

    Enders, Craig K

    2011-03-01

    The past decade has seen a noticeable shift in missing data handling techniques that assume a missing at random (MAR) mechanism, where the propensity for missing data on an outcome is related to other analysis variables. Although MAR is often reasonable, there are situations where this assumption is unlikely to hold, leading to biased parameter estimates. One such example is a longitudinal study of substance use where participants with the highest frequency of use also have the highest likelihood of attrition, even after controlling for other correlates of missingness. There is a large body of literature on missing not at random (MNAR) analysis models for longitudinal data, particularly in the field of biostatistics. Because these methods allow for a relationship between the outcome variable and the propensity for missing data, they require a weaker assumption about the missing data mechanism. This article describes 2 classic MNAR modeling approaches for longitudinal data: the selection model and the pattern mixture model. To date, these models have been slow to migrate to the social sciences, in part because they required complicated custom computer programs. These models are now quite easy to estimate in popular structural equation modeling programs, particularly Mplus. The purpose of this article is to describe these MNAR modeling frameworks and to illustrate their application on a real data set. Despite their potential advantages, MNAR-based analyses are not without problems and also rely on untestable assumptions. This article offers practical advice for implementing and choosing among different longitudinal models.

  3. Image synthesis with graph cuts: a fast model proposal mechanism in probabilistic inversion

    NASA Astrophysics Data System (ADS)

    Zahner, Tobias; Lochbühler, Tobias; Mariethoz, Grégoire; Linde, Niklas

    2016-02-01

    Geophysical inversion should ideally produce geologically realistic subsurface models that explain the available data. Multiple-point statistics is a geostatistical approach to construct subsurface models that are consistent with site-specific data, but also display the same type of patterns as those found in a training image. The training image can be seen as a conceptual model of the subsurface and is used as a non-parametric model of spatial variability. Inversion based on multiple-point statistics is challenging due to high nonlinearity and time-consuming geostatistical resimulation steps that are needed to create new model proposals. We propose an entirely new model proposal mechanism for geophysical inversion that is inspired by texture synthesis in computer vision. Instead of resimulating pixels based on higher-order patterns in the training image, we identify a suitable patch of the training image that replaces a corresponding patch in the current model without breaking the patterns found in the training image, that is, while remaining consistent with the given prior. We consider three cross-hole ground-penetrating radar examples in which the new model proposal mechanism is employed within an extended Metropolis Markov chain Monte Carlo (MCMC) inversion. The model proposal step is about 40 times faster than state-of-the-art multiple-point statistics resimulation techniques, the number of necessary MCMC steps is lower, and the final model realizations are of similar quality. The model proposal mechanism is presently limited to 2-D fields, but the method is general and can be applied to a wide range of subsurface settings and geophysical data types.

  4. ℓ1/2-norm regularized nonnegative low-rank and sparse affinity graph for remote sensing image segmentation

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan

    2016-10-01

    Segmentation of real-world remote sensing images is a challenge due to the complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noises. An ℓ1/2-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract the local histogram features. Then, an ℓ1/2-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct an ℓ1/2-regularized nonnegative low-rank and sparse graph (LNNLRS-graph), by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove the noises and corrupted data. Last, we introduce the LNNLRS-graph into the graph regularization nonnegative matrix factorization to enhance the segmentation accuracy. The experimental results using remote sensing images show that when compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  5. Relating Cortical Atrophy in Temporal Lobe Epilepsy with Graph Diffusion-Based Network Models

    PubMed Central

    Abdelnour, Farras; Mueller, Susanne; Raj, Ashish

    2015-01-01

    Mesial temporal lobe epilepsy (TLE) is characterized by stereotyped origination and spread pattern of epileptogenic activity, which is reflected in stereotyped topographic distribution of neuronal atrophy on magnetic resonance imaging (MRI). Both epileptogenic activity and atrophy spread appear to follow white matter connections. We model the networked spread of activity and atrophy in TLE from first principles via two simple first order network diffusion models. Atrophy distribution is modeled as a simple consequence of the propagation of epileptogenic activity in one model, and as a progressive degenerative process in the other. We show that the network models closely reproduce the regional volumetric gray matter atrophy distribution of two epilepsy cohorts: 29 TLE subjects with medial temporal sclerosis (TLE-MTS), and 50 TLE subjects with normal appearance on MRI (TLE-no). Statistical validation at the group level suggests high correlation with measured atrophy (R = 0.586 for TLE-MTS, R = 0.283 for TLE-no). We conclude that atrophy spread model out-performs the hyperactivity spread model. These results pave the way for future clinical application of the proposed model on individual patients, including estimating future spread of atrophy, identification of seizure onset zones and surgical planning. PMID:26513579
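
    A first-order network diffusion model of the kind described above has the closed-form solution x(t) = exp(-βLt)·x0 on the connectome Laplacian L. The sketch below seeds a hypothetical 8-region connectivity matrix in one region and propagates the pattern; the connectome, seed region, and diffusivity constant are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)

# Hypothetical symmetric connectivity matrix among 8 brain regions.
n = 8
C = rng.uniform(0, 1, size=(n, n))
C = (C + C.T) / 2
np.fill_diagonal(C, 0)

L = np.diag(C.sum(axis=1)) - C        # graph Laplacian of the connectome

# Seed the process in one region (e.g. a mesial temporal node, index 0).
x0 = np.zeros(n)
x0[0] = 1.0

beta = 0.5                            # assumed diffusivity constant
for t in [0.0, 0.5, 2.0, 10.0]:
    x_t = expm(-beta * L * t) @ x0    # first-order network diffusion
    print(f"t = {t:4.1f}  pattern = {np.round(x_t, 3)}")
```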

  6. Inference of random walk models to describe leukocyte migration

    NASA Astrophysics Data System (ADS)

    Jones, Phoebe J. M.; Sim, Aaron; Taylor, Harriet B.; Bugeon, Laurence; Dallman, Magaret J.; Pereira, Bernard; Stumpf, Michael P. H.; Liepe, Juliane

    2015-12-01

    While the majority of cells in an organism are static and remain relatively immobile in their tissue, migrating cells occur commonly during developmental processes and are crucial for a functioning immune response. The mode of migration has been described in terms of various types of random walks. To understand the details of the migratory behaviour we rely on mathematical models and their calibration to experimental data. Here we propose an approximate Bayesian inference scheme to calibrate a class of random walk models characterized by a specific, parametric particle re-orientation mechanism to observed trajectory data. We elaborate the concept of transition matrices (TMs) to detect random walk patterns and determine a statistic to quantify these TM to make them applicable for inference schemes. We apply the developed pipeline to in vivo trajectory data of macrophages and neutrophils, extracted from zebrafish that had undergone tail transection. We find that macrophage and neutrophils exhibit very distinct biased persistent random walk patterns, where the strengths of the persistence and bias are spatio-temporally regulated. Furthermore, the movement of macrophages is far less persistent than that of neutrophils in response to wounding.

  7. Comparing brain networks of different size and connectivity density using graph theory.

    PubMed

    van Wijk, Bernadette C M; Stam, Cornelis J; Daffertshofer, Andreas

    2010-10-28

    Graph theory is a valuable framework to study the organization of functional and anatomical connections in the brain. Its use for comparing network topologies, however, is not without difficulties. Graph measures may be influenced by the number of nodes (N) and the average degree (k) of the network. The explicit form of that influence depends on the type of network topology, which is usually unknown for experimental data. Direct comparisons of graph measures between empirical networks with different N and/or k can therefore yield spurious results. We list benefits and pitfalls of various approaches that intend to overcome these difficulties. We discuss the initial graph definition of unweighted graphs via fixed thresholds, average degrees or edge densities, and the use of weighted graphs. For instance, choosing a threshold to fix N and k does eliminate size and density effects but may lead to modifications of the network by enforcing (ignoring) non-significant (significant) connections. As opposed to fixing N and k, graph measures are often normalized via random surrogates but, in fact, this may even increase the sensitivity to differences in N and k for the commonly used clustering coefficient and small-world index. To avoid such a bias we tried to estimate the N,k-dependence for empirical networks, which can serve to correct for size effects, if successful. We also add a number of methods used in social sciences that build on statistics of local network structures including exponential random graph models and motif counting. We show that none of the methods investigated here allows for a reliable and fully unbiased comparison, but some perform better than others.
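
    To see the normalization issue concretely, the sketch below computes the average clustering coefficient of two small-world networks that differ in N and k, together with its ratio to the mean over degree-preserving random surrogates obtained by double-edge swaps. The network parameters, number of surrogates, and swap counts are arbitrary choices for illustration.

```python
import networkx as nx
import numpy as np

def normalized_clustering(G, n_surrogates=10, swaps_per_edge=5):
    """Average clustering divided by its mean over degree-preserving
    surrogates obtained by double-edge swaps."""
    c_obs = nx.average_clustering(G)
    c_rand = []
    for seed in range(n_surrogates):
        H = G.copy()
        nx.double_edge_swap(H, nswap=swaps_per_edge * H.number_of_edges(),
                            max_tries=100 * H.number_of_edges(), seed=seed)
        c_rand.append(nx.average_clustering(H))
    return c_obs, c_obs / np.mean(c_rand)

# Two small-world networks that differ only in size and average degree.
for n, k in [(100, 6), (400, 12)]:
    G = nx.watts_strogatz_graph(n, k, p=0.1, seed=1)
    c, c_norm = normalized_clustering(G)
    print(f"N={n:4d} k={k:2d}  C={c:.3f}  C/C_rand={c_norm:.2f}")
```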

  8. Random dynamics of the Morris-Lecar neural model.

    PubMed

    Tateno, Takashi; Pakdaman, Khashayar

    2004-09-01

    Determining the response characteristics of neurons to fluctuating noise-like inputs similar to realistic stimuli is essential for understanding neuronal coding. This study addresses this issue by providing a random dynamical system analysis of the Morris-Lecar neural model driven by a white Gaussian noise current. Depending on parameter selections, the deterministic Morris-Lecar model can be considered as a canonical prototype for widely encountered classes of neuronal membranes, referred to as class I and class II membranes. In both classes, the transitions from excitable to oscillating regimes are associated with different bifurcation scenarios. This work examines how random perturbations affect these two bifurcation scenarios. It is first numerically shown that the Morris-Lecar model driven by white Gaussian noise current tends to have a unique stationary distribution in the phase space. Numerical evaluations also reveal quantitative and qualitative changes in this distribution in the vicinity of the bifurcations of the deterministic system. However, these changes notwithstanding, our numerical simulations show that the Lyapunov exponents of the system remain negative in these parameter regions, indicating that no dynamical stochastic bifurcations take place. Moreover, our numerical simulations confirm that, regardless of the asymptotic dynamics of the deterministic system, the random Morris-Lecar model stabilizes at a unique stationary stochastic process. In terms of random dynamical system theory, our analysis shows that additive noise destroys the above-mentioned bifurcation sequences that characterize class I and class II regimes in the Morris-Lecar model. The interpretation of this result in terms of neuronal coding is that, despite the differences in the deterministic dynamics of class I and class II membranes, their responses to noise-like stimuli present a reliable feature.
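
    A minimal Euler-Maruyama simulation of the Morris-Lecar model driven by a white Gaussian noise current is sketched below. The parameter values are a representative class II set commonly used in the literature, and the mean input and noise strength are assumptions; the study's exact settings may differ.

```python
import numpy as np

rng = np.random.default_rng(6)

# Representative class II Morris-Lecar parameters (assumed, standard textbook values).
C = 20.0
g_L, V_L = 2.0, -60.0
g_Ca, V_Ca = 4.4, 120.0
g_K, V_K = 8.0, -84.0
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
phi = 0.04
I_mean, sigma = 95.0, 20.0           # assumed mean input and noise strength

def m_inf(V): return 0.5 * (1.0 + np.tanh((V - V1) / V2))
def w_inf(V): return 0.5 * (1.0 + np.tanh((V - V3) / V4))
def tau_w(V): return 1.0 / np.cosh((V - V3) / (2.0 * V4))

dt, steps = 0.05, 40000              # time step in ms
V, w = -60.0, 0.0
spikes, above = 0, False
for _ in range(steps):
    I_noise = sigma * rng.normal() / np.sqrt(dt)     # white Gaussian noise current
    dV = (I_mean + I_noise
          - g_L * (V - V_L)
          - g_Ca * m_inf(V) * (V - V_Ca)
          - g_K * w * (V - V_K)) / C
    dw = phi * (w_inf(V) - w) / tau_w(V)
    V += dt * dV
    w += dt * dw
    if V > 0 and not above:          # crude spike detection by upward crossing
        spikes += 1
    above = V > 0

print(f"firing rate ~ {spikes / (steps * dt / 1000.0):.1f} Hz over {steps * dt / 1000.0:.1f} s")
```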

  9. The fragile breakage versus random breakage models of chromosome evolution.

    PubMed

    Peng, Qian; Pevzner, Pavel A; Tesler, Glenn

    2006-02-01

    For many years, studies of chromosome evolution were dominated by the random breakage theory, which implies that there are no rearrangement hot spots in the human genome. In 2003, Pevzner and Tesler argued against the random breakage model and proposed an alternative "fragile breakage" model of chromosome evolution. In 2004, Sankoff and Trinh argued against the fragile breakage model and raised doubts that Pevzner and Tesler provided any evidence of rearrangement hot spots. We investigate whether Sankoff and Trinh indeed revealed a flaw in the arguments of Pevzner and Tesler. We show that Sankoff and Trinh's synteny block identification algorithm makes erroneous identifications even in small toy examples and that their parameters do not reflect the realities of the comparative genomic architecture of human and mouse. We further argue that if Sankoff and Trinh had fixed these problems, their arguments in support of the random breakage model would disappear. Finally, we study the link between rearrangements and regulatory regions and argue that long regulatory regions and inhomogeneity of gene distribution in mammalian genomes may be responsible for the breakpoint reuse phenomenon.

  10. API Requirements for Dynamic Graph Prediction

    SciTech Connect

    Gallagher, B; Eliassi-Rad, T

    2006-10-13

    Given a large-scale time-evolving multi-modal and multi-relational complex network (a.k.a., a large-scale dynamic semantic graph), we want to implement algorithms that discover patterns of activities on the graph and learn predictive models of those discovered patterns. This document outlines the application programming interface (API) requirements for fast prototyping of feature extraction, learning, and prediction algorithms on large dynamic semantic graphs. Since our algorithms must operate on large-scale dynamic semantic graphs, we have chosen to use the graph API developed in the CASC Complex Networks Project. This API is supported on the back end by a semantic graph database (developed by Scott Kohn and his team). The advantages of using this API are (i) we have full-control of its development and (ii) the current API meets almost all of the requirements outlined in this document.

  11. Contact graphs of disk packings as a model of spatial planar networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongzhi; Guan, Jihong; Ding, Bailu; Chen, Lichao; Zhou, Shuigeng

    2009-08-01

    Spatially constrained planar networks are frequently encountered in real-life systems. In this paper, based on a space-filling disk packing we propose a minimal model for spatial maximal planar networks, which is similar to but different from the model for Apollonian networks (Andrade et al 2005 Phys. Rev. Lett. 94 018702). We present an exhaustive analysis of various properties of our model, and obtain the analytic solutions for most of the features, including degree distribution, clustering coefficient, average path length and degree correlations. The model recovers some striking generic characteristics observed in most real networks. To address the robustness of the relevant network properties, we compare the structural features between the investigated network and the Apollonian networks. We show that topological properties of the two networks are encoded in the way of disk packing. We argue that spatial constraints of nodes are relevant to the structure of the networks.

  12. Joint Bayesian variable and graph selection for regression models with network-structured predictors.

    PubMed

    Peterson, Christine B; Stingo, Francesco C; Vannucci, Marina

    2016-03-30

    In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications because it allows the identification of pathways of functionally related genes or proteins that impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival.

  13. Optimizing spread dynamics on graphs by message passing

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.

    2013-09-01

    Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).

  14. GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems

    SciTech Connect

    Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil; Schwan, Karsten

    2015-11-15

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.

  15. GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems

    SciTech Connect

    Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen; Schwan, Karsten

    2015-09-30

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
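
    The Gather-Apply-Scatter programming model adopted by GraphReduce can be illustrated, independently of any GPU machinery, by a PageRank iteration in which each vertex gathers contributions along in-edges, applies an update, and scatters the new value to its out-neighbours. The snippet below is a plain CPU sketch of the abstraction with a made-up graph, not GraphReduce itself.

```python
import numpy as np

# Toy directed graph as an edge list (src -> dst); vertex ids 0..n-1.
edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2), (3, 0)]
n = 4
out_degree = np.zeros(n)
for s, _ in edges:
    out_degree[s] += 1

rank = np.full(n, 1.0 / n)
damping = 0.85

for _ in range(30):
    # Gather: each vertex accumulates contributions along its in-edges.
    gathered = np.zeros(n)
    for s, d in edges:
        gathered[d] += rank[s] / out_degree[s]
    # Apply: each vertex updates its own value from the gathered sum.
    new_rank = (1 - damping) / n + damping * gathered
    # Scatter: expose the updated value to out-neighbours, i.e. make
    # new_rank the value read in the next gather phase.
    rank = new_rank

print("PageRank:", np.round(rank, 3))
```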

  16. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
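
    The basic analysis can be reproduced in a few lines: simulate uniform-step random walks and correlated random walks, then compute the mean square displacement as a function of step number. The step size and turning-angle spread below are hypothetical choices, not the robot calibration values.

```python
import numpy as np

rng = np.random.default_rng(7)

def walk(n_steps, step=1.0, kappa=None):
    """Generate one 2-D walk.  kappa=None gives a simple random walk with
    uniformly random headings; otherwise the heading changes by a Gaussian
    turning angle with standard deviation kappa (a correlated random walk)."""
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        if kappa is None:
            heading = rng.uniform(0, 2 * np.pi)
        else:
            heading += rng.normal(scale=kappa)
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

def msd(walks):
    """Mean square displacement from the origin as a function of step number."""
    return np.mean([np.sum(p ** 2, axis=1) for p in walks], axis=0)

n_walks, n_steps = 200, 100
rw = [walk(n_steps) for _ in range(n_walks)]
crw = [walk(n_steps, kappa=0.3) for _ in range(n_walks)]

print("MSD after 100 steps, RW :", round(msd(rw)[-1], 1))    # ~ n * step^2
print("MSD after 100 steps, CRW:", round(msd(crw)[-1], 1), "(larger due to persistence)")
```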

  17. Random unitary evolution model of quantum Darwinism with pure decoherence

    NASA Astrophysics Data System (ADS)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E-interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of environment E that allow to store information about an open system S of interest with maximal efficiency.

  18. Random unitary evolution model of quantum Darwinism with pure decoherence

    NASA Astrophysics Data System (ADS)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E-interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of environment E that allow to store information about an open system S of interest with maximal efficiency.

  19. Random-field Ising model on isometric lattices: Ground states and non-Porod scattering.

    PubMed

    Bupathy, Arunkumar; Banerjee, Varsha; Puri, Sanjay

    2016-01-01

    We use a computationally efficient graph cut method to obtain ground state morphologies of the random-field Ising model (RFIM) on (i) simple cubic (SC), (ii) body-centered cubic (BCC), and (iii) face-centered cubic (FCC) lattices. We determine the critical disorder strength Δc at zero temperature with high accuracy. For the SC lattice, our estimate (Δc=2.278±0.002) is consistent with earlier reports. For the BCC and FCC lattices, Δc=3.316±0.002 and 5.160±0.002, respectively, which are the most accurate estimates in the literature to date. The small-r behavior of the correlation function exhibits a cusp regime characterized by a cusp exponent α signifying fractal interfaces. In the paramagnetic phase, α=0.5±0.01 for all three lattices. In the ferromagnetic phase, the cusp exponent shows small variations due to the lattice structure. Consequently, the interfacial energy Ei(L) for an interface of size L is significantly different for the three lattices. This has important implications for nonequilibrium properties.

  20. Random-field Ising model on isometric lattices: Ground states and non-Porod scattering

    NASA Astrophysics Data System (ADS)

    Bupathy, Arunkumar; Banerjee, Varsha; Puri, Sanjay

    2016-01-01

    We use a computationally efficient graph cut method to obtain ground state morphologies of the random-field Ising model (RFIM) on (i) simple cubic (SC), (ii) body-centered cubic (BCC), and (iii) face-centered cubic (FCC) lattices. We determine the critical disorder strength Δc at zero temperature with high accuracy. For the SC lattice, our estimate (Δc=2.278 ±0.002 ) is consistent with earlier reports. For the BCC and FCC lattices, Δc=3.316 ±0.002 and 5.160 ±0.002 , respectively, which are the most accurate estimates in the literature to date. The small-r behavior of the correlation function exhibits a cusp regime characterized by a cusp exponent α signifying fractal interfaces. In the paramagnetic phase, α =0.5 ±0.01 for all three lattices. In the ferromagnetic phase, the cusp exponent shows small variations due to the lattice structure. Consequently, the interfacial energy Ei(L ) for an interface of size L is significantly different for the three lattices. This has important implications for nonequilibrium properties.

  1. Spatially random models, estimation theory, and robot arm dynamics

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1987-01-01

    Spatially random models provide an alternative to the more traditional deterministic models used to describe robot arm dynamics. These alternative models can be used to establish a relationship between the methodologies of estimation theory and robot dynamics. A new class of algorithms for many of the fundamental robotics problems of inverse and forward dynamics, inverse kinematics, etc. can be developed that use computations typical in estimation theory. The algorithms make extensive use of the difference equations of Kalman filtering and Bryson-Frazier smoothing to conduct spatial recursions. The spatially random models are very easy to describe and are based on the assumption that all of the inertial (D'Alembert) forces in the system are represented by a spatially distributed white-noise model. The models can also be used to generate numerically the composite multibody system inertia matrix. This is done without resorting to the more common methods of deterministic modeling involving Lagrangian dynamics, Newton-Euler equations, etc. These methods make substantial use of human knowledge in derivation and manipulation of equations of motion for complex mechanical systems.

  2. On molecular graph comparison.

    PubMed

    Melo, Jenny A; Daza, Edgar

    2011-06-01

    Since the last half of the nineteenth century, molecular graphs have been present in several branches of chemistry. When used for molecular structure representation, they have been compared after mapping the corresponding graphs into mathematical objects. However, direct comparison of molecular graphs is a less explored research field. The goal of this mini-review is to show some distance and similarity coefficients which were proposed to directly compare molecular graphs or which could be useful to do so.

  3. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with exponential decay and infinite support, while the level-set method, a front-tracking technique, generates a sharp function with finite support. However, the two approaches can in fact be considered complementary and be reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation, and as a consequence the fire front acquires a random character, too. Hence a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key, characterizing role it has in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as flank fire and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and also fire overcoming a firebreak zone, a case not resolved by models based on the level-set method alone. Moreover, the proposed formulation yields a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.

  4. Random Resistor Network Model of Minimal Conductivity in Graphene

    NASA Astrophysics Data System (ADS)

    Cheianov, Vadim V.; Fal'Ko, Vladimir I.; Altshuler, Boris L.; Aleiner, Igor L.

    2007-10-01

    Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.
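
    A small two-terminal random resistor network can be solved directly with Kirchhoff's (Laplacian) equations, as sketched below: bonds take one of two conductances, loosely mimicking same-polarity regions versus finite-transparency p-n junctions, and the effective conductance between two bus bars follows from the node potentials. The two conductance values, the junction probability, and the lattice size are assumptions for illustration, not the paper's scaling analysis.

```python
import numpy as np

rng = np.random.default_rng(8)
L = 10                                  # lattice is L x L sites

def idx(i, j):
    return i * L + j

n = L * L
G = np.zeros((n, n))                    # symmetric bond-conductance matrix

# Random bond conductances: high within like-polarity regions, low across
# p-n junctions (the two values are assumptions for illustration).
g_high, g_low, p_junction = 1.0, 0.2, 0.5
for i in range(L):
    for j in range(L):
        for di, dj in [(0, 1), (1, 0)]:
            ii, jj = i + di, j + dj
            if ii < L and jj < L:
                g = g_low if rng.random() < p_junction else g_high
                G[idx(i, j), idx(ii, jj)] = G[idx(ii, jj), idx(i, j)] = g

Lap = np.diag(G.sum(axis=1)) - G        # Kirchhoff (Laplacian) matrix

# Boundary conditions: left column at V = 1, right column at V = 0.
left = [idx(i, 0) for i in range(L)]
right = [idx(i, L - 1) for i in range(L)]
fixed = left + right
free = [k for k in range(n) if k not in fixed]
V = np.zeros(n)
V[left] = 1.0

# Solve Kirchhoff's equations for the interior node potentials.
A = Lap[np.ix_(free, free)]
b = -Lap[np.ix_(free, fixed)] @ V[fixed]
V[free] = np.linalg.solve(A, b)

# Effective conductance = total current leaving the left bus bar / voltage drop.
current = sum(G[u, v] * (V[u] - V[v]) for u in left for v in range(n))
print(f"effective conductance: {current:.3f}")
```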

  5. Ground state nonuniversality in the random-field Ising model

    SciTech Connect

    Duxbury, P. M.; Meinke, J. H.

    2001-09-01

    Two attractive and often used ideas, namely, universality and the concept of a zero-temperature fixed point, are violated in the infinite-range random-field Ising model. In the ground state we show that the exponents can depend continuously on the disorder and so are nonuniversal. However, we also show that at finite temperature the thermal order-parameter exponent 1/2 is restored so that temperature is a relevant variable. Broader implications of these results are discussed.

  6. Social Aggregation in Pea Aphids: Experiment and Random Walk Modeling

    PubMed Central

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J.; Topaz, Chad M.

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control. PMID:24376691

  7. Social aggregation in pea aphids: experiment and random walk modeling.

    PubMed

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J; Topaz, Chad M

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.

  8. Single-cluster dynamics for the random-cluster model

    NASA Astrophysics Data System (ADS)

    Deng, Youjin; Qian, Xiaofeng; Blöte, Henk W. J.

    2009-09-01

    We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q > 1. Its results for static quantities are in a satisfactory agreement with those of the existing Swendsen-Wang-Chayes-Machta (SWCM) algorithm, which involves a full-cluster decomposition of random-cluster configurations. We explore the critical dynamics of this algorithm for several two-dimensional Potts and random-cluster models. For integer q, the single-cluster algorithm can be reduced to the Wolff algorithm, for which case we find that the autocorrelation functions decay almost purely exponentially, with dynamic exponents z_exp = 0.07(1), 0.521(7), and 1.007(9) for q = 2, 3, and 4, respectively. For noninteger q, the dynamical behavior of the single-cluster algorithm appears to be very dissimilar to that of the SWCM algorithm. For large critical systems, the autocorrelation function displays a range of power-law behavior as a function of time. The dynamic exponents are relatively large. We provide an explanation for this peculiar dynamic behavior.

  9. Single-cluster dynamics for the random-cluster model.

    PubMed

    Deng, Youjin; Qian, Xiaofeng; Blöte, Henk W J

    2009-09-01

    We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those of the existing Swendsen-Wang-Chayes-Machta (SWCM) algorithm, which involves a full-cluster decomposition of random-cluster configurations. We explore the critical dynamics of this algorithm for several two-dimensional Potts and random-cluster models. For integer q, the single-cluster algorithm can be reduced to the Wolff algorithm, for which case we find that the autocorrelation functions decay almost purely exponentially, with dynamic exponents z(exp)=0.07 (1), 0.521 (7), and 1.007 (9) for q=2, 3, and 4, respectively. For noninteger q, the dynamical behavior of the single-cluster algorithm appears to be very dissimilar to that of the SWCM algorithm. For large critical systems, the autocorrelation function displays a range of power-law behavior as a function of time. The dynamic exponents are relatively large. We provide an explanation for this peculiar dynamic behavior.
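
    For integer q the single-cluster algorithm reduces to the Wolff method, which is easy to sketch: grow a cluster of equal spins by activating bonds with probability 1 − exp(−β) and flip the whole cluster to a different state. The snippet below runs this at the exact critical coupling of the square-lattice Potts model; the noninteger-q generalization, which works in the random-cluster representation, is not shown here.

```python
import numpy as np

rng = np.random.default_rng(9)

L, q = 16, 3
beta = np.log(1 + np.sqrt(q))       # critical coupling of the square-lattice Potts model
p_add = 1.0 - np.exp(-beta)         # bond-activation probability inside the cluster

spins = rng.integers(0, q, size=(L, L))

def wolff_update(spins):
    """One Wolff single-cluster update: grow a cluster of aligned spins with
    probability p_add per satisfied bond and flip it to a new random state."""
    i, j = rng.integers(0, L, size=2)
    old = spins[i, j]
    new = (old + rng.integers(1, q)) % q      # any state different from old
    stack, in_cluster = [(i, j)], {(i, j)}
    spins[i, j] = new
    while stack:
        x, y = stack.pop()
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nx_, ny_ = (x + dx) % L, (y + dy) % L
            if (nx_, ny_) not in in_cluster and spins[nx_, ny_] == old \
                    and rng.random() < p_add:
                spins[nx_, ny_] = new
                in_cluster.add((nx_, ny_))
                stack.append((nx_, ny_))
    return len(in_cluster)

sizes = [wolff_update(spins) for _ in range(2000)]
print("mean cluster size at criticality:", round(float(np.mean(sizes)), 1))
```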

  10. Vortices and superfields on a graph

    NASA Astrophysics Data System (ADS)

    Kan, Nahomi; Kobayashi, Koichiro; Shiraishi, Kiyoshi

    2009-08-01

    We extend dimensional deconstruction by utilizing graph theory. In dimensional deconstruction, one uses the moose diagram to exhibit the structure of the “theory space.” We generalize the moose diagram to a general graph with oriented edges. In the present paper, we consider only the U(1) gauge symmetry. We also introduce supersymmetry into our model by use of superfields. We suppose that vector superfields reside at the vertices and chiral superfields at the edges of a given graph. Then we can consider multivector, multi-Higgs models. In our model, [U(1)]^p (where p is the number of vertices) is broken to a single U(1). Therefore, for specific graphs, we get vortexlike classical solutions in our model. We show some examples of the graphs admitting vortex solutions of simple structure as the Bogomolnyi solution.

  11. Vortices and superfields on a graph

    SciTech Connect

    Kan, Nahomi; Kobayashi, Koichiro; Shiraishi, Kiyoshi

    2009-08-15

    We extend dimensional deconstruction by utilizing graph theory. In dimensional deconstruction, one uses the moose diagram to exhibit the structure of the 'theory space'. We generalize the moose diagram to a general graph with oriented edges. In the present paper, we consider only the U(1) gauge symmetry. We also introduce supersymmetry into our model by use of superfields. We suppose that vector superfields reside at the vertices and chiral superfields at the edges of a given graph. Then we can consider multivector, multi-Higgs models. In our model, [U(1)]^p (where p is the number of vertices) is broken to a single U(1). Therefore, for specific graphs, we get vortexlike classical solutions in our model. We show some examples of the graphs admitting vortex solutions of simple structure as the Bogomolnyi solution.

  12. Graphing Inequalities, Connecting Meaning

    ERIC Educational Resources Information Center

    Switzer, J. Matt

    2014-01-01

    Students often have difficulty with graphing inequalities (see Filloy, Rojano, and Rubio 2002; Drijvers 2002), and J. Matt Switzer's students were no exception. Although students can produce graphs for simple inequalities, they often struggle when the format of the inequality is unfamiliar. Even when producing a correct graph of an…

  13. Graphing Important People

    ERIC Educational Resources Information Center

    Reading Teacher, 2012

    2012-01-01

    The "Toolbox" column features content adapted from ReadWriteThink.org lesson plans and provides practical tools for classroom teachers. This issue's column features a lesson plan adapted from "Graphing Plot and Character in a Novel" by Lisa Storm Fink and "Bio-graph: Graphing Life Events" by Susan Spangler. Students retell biographic events…

  14. Nonlinear system modeling with random matrices: echo state networks revisited.

    PubMed

    Zhang, Bai; Miller, David J; Wang, Yue

    2012-01-01

    Echo state networks (ESNs) are a novel form of recurrent neural networks (RNNs) that provide an efficient and powerful computational model approximating nonlinear dynamical systems. A unique feature of an ESN is that a large number of neurons (the "reservoir") are used, whose synaptic connections are generated randomly, with only the connections from the reservoir to the output modified by learning. Why a large randomly generated fixed RNN gives such excellent performance in approximating nonlinear systems is still not well understood. In this brief, we apply random matrix theory to examine the properties of random reservoirs in ESNs under different topologies (sparse or fully connected) and connection weights (Bernoulli or Gaussian). We quantify the asymptotic gap between the scaling factor bounds for the necessary and sufficient conditions previously proposed for the echo state property. We then show that the state transition mapping is contractive with high probability when only the necessary condition is satisfied, which corroborates and thus analytically explains the observation that in practice one obtains echo states when the spectral radius of the reservoir weight matrix is smaller than 1.
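
    A brief numerical sketch of the practice the abstract analyzes: draw a random reservoir, rescale it so its spectral radius is below 1, and check the echo state property empirically by driving two different initial states with the same input. The reservoir size, density, scaling factor 0.9, and input dimension are illustrative assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_reservoir, n_input, density = 200, 3, 0.1

      # Sparse Gaussian reservoir: keep each connection with probability `density`.
      W = rng.normal(size=(n_reservoir, n_reservoir))
      W *= rng.random((n_reservoir, n_reservoir)) < density

      # Rescale so the spectral radius is below 1 (the necessary-condition side);
      # the sufficient condition discussed in the literature bounds a matrix norm instead.
      W *= 0.9 / max(abs(np.linalg.eigvals(W)))
      W_in = rng.normal(scale=0.5, size=(n_reservoir, n_input))

      def update(x, u):
          # Standard ESN state transition with a tanh nonlinearity.
          return np.tanh(W @ x + W_in @ u)

      # Echo state property in practice: two trajectories started from different
      # states but driven by the same input sequence should converge.
      x_a, x_b = rng.normal(size=n_reservoir), rng.normal(size=n_reservoir)
      for _ in range(500):
          u = rng.normal(size=n_input)
          x_a, x_b = update(x_a, u), update(x_b, u)
      print("state gap after washout:", np.linalg.norm(x_a - x_b))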

  15. Mining Discriminative Patterns from Graph Data with Multiple Labels and Its Application to Quantitative Structure-Activity Relationship (QSAR) Models.

    PubMed

    Shao, Zheng; Hirayama, Yuya; Yamanishi, Yoshihiro; Saigo, Hiroto

    2015-12-28

    Graph data are becoming increasingly common in machine learning and data mining, and their applications pervade bioinformatics and cheminformatics. Accordingly, graph mining, as a method to extract patterns from graph data, has recently been studied and developed rapidly. Since the number of patterns in graph data is huge, a central issue is how to efficiently collect informative patterns suitable for subsequent tasks such as classification or regression. In this paper, we consider mining discriminative subgraphs from graph data with multiple labels. The resulting task has important applications in cheminformatics, such as finding common functional groups that trigger multiple drug side effects, or identifying ligand functional groups that hit multiple targets. In computational experiments, we first verify the effectiveness of the proposed approach on synthetic data, then apply it to a drug adverse-effect prediction problem. On the latter dataset, we compared the proposed method with L1-norm logistic regression in combination with the PubChem/Open Babel fingerprint; the proposed method showed superior performance with a much smaller number of subgraph patterns. Software is available from https://github.com/axot/GLP.

  16. Survey of Approaches to Generate Realistic Synthetic Graphs

    SciTech Connect

    Lim, Seung-Hwan; Lee, Sangkeun; Powers, Sarah S; Shankar, Mallikarjun; Imam, Neena

    2016-10-01

    A graph is a flexible data structure that can represent relationships between entities. As with other data analysis tasks, the use of realistic graphs is critical to obtaining valid research results. Unfortunately, using the actual ("real-world") graphs for research and new algorithm development is difficult due to the presence of sensitive information in the data or due to the scale of data. This results in practitioners developing algorithms and systems that employ synthetic graphs instead of real-world graphs. Generating realistic synthetic graphs that provide reliable statistical confidence to algorithmic analysis and system evaluation involves addressing technical hurdles in a broad set of areas. This report surveys the state of the art in approaches to generate realistic graphs that are derived from fitted graph models on real-world graphs.
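
    As a small illustration of one family of approaches such surveys cover, a simple graph model can be fitted to a real graph and then sampled to produce synthetic stand-ins. The sketch below (not from the report) fits only the degree sequence and regenerates it with a configuration model using networkx; realistic generators typically match many more statistics.

      import networkx as nx

      # A small "real-world" graph standing in for a sensitive or large dataset.
      G_real = nx.karate_club_graph()

      # Fit a minimal model: here, just the observed degree sequence.
      degree_sequence = [d for _, d in G_real.degree()]

      # Generate a synthetic graph with the same degree sequence, then simplify it.
      G_synth = nx.configuration_model(degree_sequence, seed=42)
      G_synth = nx.Graph(G_synth)                         # collapse parallel edges
      G_synth.remove_edges_from(nx.selfloop_edges(G_synth))

      print("real: ", G_real.number_of_nodes(), "nodes,", G_real.number_of_edges(), "edges")
      print("synth:", G_synth.number_of_nodes(), "nodes,", G_synth.number_of_edges(), "edges")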

  17. Principal Graph and Structure Learning Based on Reversed Graph Embedding.

    PubMed

    Mao, Qi; Wang, Li; Tsang, Ivor; Sun, Yijun

    2016-12-05

    Many scientific datasets are of high dimension, and the analysis usually requires retaining the most important structures of data. Principal curve is a widely used approach for this purpose. However, many existing methods work only for data with structures that are mathematically formulated by curves, which is quite restrictive for real applications. A few methods can overcome the above problem, but they either require complicated human-made rules for a specific task, lacking the flexibility to adapt to different tasks, or cannot obtain explicit structures of data. To address these issues, we develop a novel principal graph and structure learning framework that captures the local information of the underlying graph structure based on reversed graph embedding. As showcases, models that can learn a spanning tree or a weighted undirected ℓ1 graph are proposed, and a new learning algorithm is developed that learns a set of principal points and a graph structure from data simultaneously. The new algorithm is simple with guaranteed convergence. We then extend the proposed framework to deal with large-scale data. Experimental results on various synthetic and six real world datasets show that the proposed method compares favorably with baselines and can uncover the underlying structure correctly.

  18. GraphMeta: Managing HPC Rich Metadata in Graphs

    SciTech Connect

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Zhang, Wei; Ross, Robert

    2016-01-01

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This ‘rich’ metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.

  19. From time series to complex networks: the visibility graph.

    PubMed

    Lacasa, Lucas; Luque, Bartolo; Ballesteros, Fernando; Luque, Jordi; Nuño, Juan Carlos

    2008-04-01

    In this work we present a simple and fast computational method, the visibility algorithm, that converts a time series into a graph. The constructed graph inherits several properties of the series in its structure. Thereby, periodic series convert into regular graphs, and random series do so into random graphs. Moreover, fractal series convert into scale-free networks, reinforcing the fact that power-law degree distributions are related to fractality, something highly discussed recently. Some remarkable examples and analytical tools are outlined to test the method's reliability. Many different measures, recently developed in complex network theory, could, by means of this new approach, characterize time series from a new point of view.
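
    A compact sketch of the natural visibility criterion described above: two samples are connected if every intermediate sample lies strictly below the straight line joining them. This assumes evenly spaced samples and uses a simple O(n^2) scan, which is sufficient for short series; the function and variable names are illustrative.

      import numpy as np
      import networkx as nx

      def visibility_graph(series):
          # Natural visibility: connect a and b if no intermediate sample blocks the line of sight.
          n = len(series)
          G = nx.Graph()
          G.add_nodes_from(range(n))
          for a in range(n):
              for b in range(a + 1, n):
                  ya, yb = series[a], series[b]
                  visible = all(
                      series[c] < ya + (yb - ya) * (c - a) / (b - a)
                      for c in range(a + 1, b)
                  )
                  if visible:
                      G.add_edge(a, b)
          return G

      rng = np.random.default_rng(0)
      G = visibility_graph(rng.random(200))       # a random series maps to a random-like graph
      print("mean degree:", np.mean([d for _, d in G.degree()]))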

  20. Identifying common components across biological network graphs using a bipartite data model.

    PubMed

    Baker, Ej; Culpepper, C; Philips, C; Bubier, J; Langston, M; Chesler, Ej

    2014-01-01

    The GeneWeaver bipartite data model provides an efficient means to evaluate shared molecular components from sets derived across diverse species, disease states and biological processes. In order to adapt this model for examining related molecular components and biological networks, such as pathway or gene network data, we have developed a means to leverage the bipartite data structure to extract and analyze shared edges. Using the Pathway Commons database we demonstrate the ability to rapidly identify shared connected components among a diverse set of pathways. In addition, we illustrate how results from maximal bipartite discovery can be decomposed into hierarchical relationships, allowing shared pathway components to be mapped through various parent-child relationships to aid visualization and discovery of emergent, kernel-driven relationships. Interrogating common relationships among biological networks and conventional GeneWeaver gene lists will increase functional specificity and reliability of the shared biological components. This approach enables self-organization of biological processes through shared biological networks.

  1. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.

  2. Randomized shortest-path problems: two related models.

    PubMed

    Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh

    2009-08-01

    This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost (nothing particular up to now; you could use a standard shortest-path algorithm). Suppose, however, that you want to avoid pure deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, which will be called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy, minimizing the expected routing cost, are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by
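
    In the sum-over-paths picture, entropy-regularized routing leads to a Boltzmann-form randomized policy that can be computed with a log-sum-exp ("soft") Bellman iteration. The toy sketch below illustrates that idea on a hypothetical four-node graph; the temperature T stands in for the exploration level, and the graph, costs, and iteration count are arbitrary. It follows the spirit of the letter rather than its exact formulation.

      import numpy as np

      # Toy directed graph: cost[i][j] is the cost of traversing edge i -> j.
      cost = {
          0: {1: 1.0, 2: 4.0},
          1: {2: 1.0, 3: 5.0},
          2: {3: 1.0},
          3: {},                        # destination node
      }
      dest, T = 3, 0.5                  # higher T = more randomness in the routing policy

      # Soft (log-sum-exp) Bellman iteration: V[i] plays the role of a free energy.
      V = {i: (0.0 if i == dest else 1e3) for i in cost}
      for _ in range(200):
          for i in cost:
              if i == dest or not cost[i]:
                  continue
              V[i] = -T * np.log(sum(np.exp(-(c + V[j]) / T) for j, c in cost[i].items()))

      # Randomized routing policy: Boltzmann distribution over outgoing edges.
      policy = {
          i: {j: float(np.exp(-(c + V[j] - V[i]) / T)) for j, c in cost[i].items()}
          for i in cost if cost[i]
      }
      print(V)        # as T -> 0 these approach shortest-path distances to the destination
      print(policy)   # each node's transition probabilities sum to 1 by construction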

  3. Random Boolean network models and the yeast transcriptional network

    NASA Astrophysics Data System (ADS)

    Kauffman, Stuart; Peterson, Carsten; Samuelsson, Björn; Troein, Carl

    2003-12-01

    The recently measured yeast transcriptional network is analyzed in terms of simplified Boolean network models, with the aim of determining feasible rule structures, given the requirement of stable solutions of the generated Boolean networks. We find that for ensembles of generated models, those with canalyzing Boolean rules are remarkably stable, whereas those with random Boolean rules are only marginally stable. Furthermore, substantial parts of the generated networks are frozen, in the sense that they reach the same state regardless of initial state. Thus, our ensemble approach suggests that the yeast network shows highly ordered dynamics.
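
    A small ensemble-style sketch of the comparison described above: random Boolean networks with K inputs per node, updated either with fully random truth tables or with canalyzing rules (one input value alone fixes the output), and scored by the fraction of "frozen" nodes late in the run. Network size, K, run length, and the frozen-node criterion are illustrative choices, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      N, K, T = 100, 2, 200                     # nodes, inputs per node, time steps
      inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])

      def random_rule():
          # Arbitrary truth table over the 2**K input combinations.
          return rng.integers(0, 2, size=2 ** K)

      def canalyzing_rule():
          # One canalyzing input value forces the output; other entries stay random.
          table = rng.integers(0, 2, size=2 ** K)
          can_input, can_value, can_output = rng.integers(K), rng.integers(2), rng.integers(2)
          for idx in range(2 ** K):
              if ((idx >> can_input) & 1) == can_value:
                  table[idx] = can_output
          return table

      def frozen_fraction(rule_maker):
          rules = [rule_maker() for _ in range(N)]
          state = rng.integers(0, 2, size=N)
          history = []
          for _ in range(T):
              idx = [sum(int(state[inputs[i][b]]) << b for b in range(K)) for i in range(N)]
              state = np.array([rules[i][idx[i]] for i in range(N)])
              history.append(state.copy())
          tail = np.array(history[T // 2:])     # nodes constant over the last half are "frozen"
          return float(np.mean(tail.std(axis=0) == 0))

      print("frozen fraction, random rules:    ", frozen_fraction(random_rule))
      print("frozen fraction, canalyzing rules:", frozen_fraction(canalyzing_rule))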

  4. Experimental quantum annealing: case study involving the graph isomorphism problem

    PubMed Central

    Zick, Kenneth M.; Shehab, Omar; French, Matthew

    2015-01-01

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N^2 to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers. PMID:26053973

  5. Contact Graph Routing

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle's destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm. The scalability of CGR to very large networks remains a research topic.
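
    A toy sketch of the core idea: with a shared contact plan, an earliest-arrival route can be computed per bundle directly from the scheduled contacts instead of from routing tables. The contact plan, node names, and the simplifications (no queueing, no contact capacity, transmission allowed any time a contact is open) are illustrative assumptions, not the flight implementation.

      import heapq

      # Hypothetical contact plan: (from_node, to_node, start, end, one_way_light_time).
      contacts = [
          ('A', 'B', 0, 100, 5),
          ('B', 'C', 40, 90, 10),
          ('A', 'C', 200, 300, 5),
          ('C', 'D', 60, 400, 2),
      ]

      def earliest_arrival(source, dest, t0):
          # Dijkstra-like search over the contact graph: a bundle may use a contact
          # only before it closes, waits for it to open, and pays the light time.
          best = {source: t0}
          heap = [(t0, source)]
          while heap:
              t, node = heapq.heappop(heap)
              if node == dest:
                  return t
              for frm, to, start, end, owlt in contacts:
                  if frm != node or t > end:
                      continue
                  arrival = max(t, start) + owlt
                  if arrival < best.get(to, float('inf')):
                      best[to] = arrival
                      heapq.heappush(heap, (arrival, to))
          return None

      print(earliest_arrival('A', 'D', 0))   # 62: wait for B->C at t=40, then C->D at t=60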

  6. Implementation aspects of Graph Neural Networks

    NASA Astrophysics Data System (ADS)

    Barcz, A.; Szymański, Z.; Jankowski, S.

    2013-10-01

    This article summarises the results of implementation of a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model, capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function that is a contraction map, which is assured by imposing a penalty on model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions that were made during the GNN implementation process.

  7. Interval process model and non-random vibration analysis

    NASA Astrophysics Data System (ADS)

    Jiang, C.; Ni, B. Y.; Liu, N. Y.; Han, X.; Liu, J.

    2016-07-01

    This paper develops an interval process model for time-varying or dynamic uncertainty analysis when information of the uncertain parameter is inadequate. By using the interval process model to describe a time-varying uncertain parameter, only its upper and lower bounds are required at each time point rather than its precise probability distribution, which is quite different from the traditional stochastic process model. A correlation function is defined for quantification of correlation between the uncertain-but-bounded variables at different times, and a matrix-decomposition-based method is presented to transform the original dependent interval process into an independent one for convenience of subsequent uncertainty analysis. More importantly, based on the interval process model, a non-random vibration analysis method is proposed for response computation of structures subjected to time-varying uncertain external excitations or loads. The structural dynamic responses thus can be derived in the form of upper and lower bounds, providing an important guidance for practical safety analysis and reliability design of structures. Finally, two numerical examples and one engineering application are investigated to demonstrate the feasibility of the interval process model and corresponding non-random vibration analysis method.

  8. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  9. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  10. A stochastic model of randomly accelerated walkers for human mobility

    PubMed Central

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-01-01

    Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility. PMID:27573984
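
    A deliberately simplified simulation sketch of the mechanism described above: trip durations are drawn from an exponential distribution, and during a trip the speed changes through acceleration kicks arriving at random (Poisson) times. All parameter values and the reflection of the speed at zero are illustrative assumptions rather than the calibrated model of the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      n_trips = 50_000
      mean_travel_time = 20.0      # minutes (hypothetical)
      kick_rate = 0.5              # acceleration kicks per minute (hypothetical)
      kick_scale = 0.3             # speed change per kick, km/min (hypothetical)

      distances = np.empty(n_trips)
      for n in range(n_trips):
          T = rng.exponential(mean_travel_time)          # exponentially distributed travel time
          t, v, d = 0.0, abs(rng.normal(0.0, kick_scale)), 0.0
          while t < T:
              dt = min(rng.exponential(1.0 / kick_rate), T - t)
              d += v * dt                                # constant speed between kicks
              v = abs(v + rng.normal(0.0, kick_scale))   # acceleration kick, reflected at zero
              t += dt
          distances[n] = d

      # The resulting distance distribution is short-tailed, although its tail on a
      # log-log plot can be mistaken for a truncated power law.
      print("mean distance:", distances.mean(), " 99th percentile:", np.quantile(distances, 0.99))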

  11. Methods of visualizing graphs

    DOEpatents

    Wong, Pak C.; Mackey, Patrick S.; Perrine, Kenneth A.; Foote, Harlan P.; Thomas, James J.

    2008-12-23

    Methods for visualizing a graph by automatically drawing elements of the graph as labels are disclosed. In one embodiment, the method comprises receiving node information and edge information from an input device and/or communication interface, constructing a graph layout based at least in part on that information, wherein the edges are automatically drawn as labels, and displaying the graph on a display device according to the graph layout. In some embodiments, the nodes are automatically drawn as labels instead of, or in addition to, the label-edges.

  12. Graph Estimation From Multi-Attribute Data

    PubMed Central

    Kolar, Mladen; Liu, Han; Xing, Eric P.

    2014-01-01

    Undirected graphical models are important in a number of modern applications that involve exploring or exploiting dependency structures underlying the data. For example, they are often used to explore complex systems where connections between entities are not well understood, such as in functional brain networks or genetic networks. Existing methods for estimating structure of undirected graphical models focus on scenarios where each node represents a scalar random variable, such as a binary neural activation state or a continuous mRNA abundance measurement, even though in many real world problems, nodes can represent multivariate variables with much richer meanings, such as whole images, text documents, or multi-view feature vectors. In this paper, we propose a new principled framework for estimating the structure of undirected graphical models from such multivariate (or multi-attribute) nodal data. The structure of a graph is inferred through estimation of non-zero partial canonical correlation between nodes. Under a Gaussian model, this strategy is equivalent to estimating conditional independencies between random vectors represented by the nodes and it generalizes the classical problem of covariance selection (Dempster, 1972). We relate the problem of estimating non-zero partial canonical correlations to maximizing a penalized Gaussian likelihood objective and develop a method that efficiently maximizes this objective. Extensive simulation studies demonstrate the effectiveness of the method under various conditions. We provide illustrative applications to uncovering gene regulatory networks from gene and protein profiles, and uncovering brain connectivity graph from positron emission tomography data. Finally, we provide sufficient conditions under which the true graphical structure can be recovered correctly. PMID:25620892

  13. The linear Ising model and its analytic continuation, random walk

    NASA Astrophysics Data System (ADS)

    Lavenda, B. H.

    2004-02-01

    A generalization of Gauss's principle is used to derive the error laws corresponding to Types II and VII distributions in Pearson's classification scheme. Student's r-p.d.f. (Type II) governs the distribution of the internal energy of a uniform, linear chain, Ising model, while the analytic continuation of the uniform exchange energy converts it into a Student t-density (Type VII) for the position of a random walk in a single spatial dimension. Higher-dimensional spaces, corresponding to larger degrees of freedom and generalizations to multidimensional Student r- and t-densities, are obtained by considering independent and identically distributed random variables, having rotationally invariant densities, whose entropies are additive and generating functions are multiplicative.

  14. Sensitivity analysis of random shell-model interactions

    NASA Astrophysics Data System (ADS)

    Krastev, Plamen; Johnson, Calvin

    2010-02-01

    The input to the configuration-interaction shell model includes many dozens or even hundreds of independent two-body matrix elements. Previous studies have shown that when fitting to experimental low-lying spectra, the greatest sensitivity is to only a few linear combinations of matrix elements. Following Brown and Richter [1], here we consider general two-body interactions in the 1s-0d shell and find that the low-lying spectra are also only sensitive to a few linear combinations of two-body matrix elements. We find, in particular, that the ground-state energies for both the random and the non-random (here given by the USDB) interactions are dominated by similar matrix elements, which we try to interpret in terms of monopole and contact interactions, while the excitation energies have a completely different character. [1] B. Alex Brown and W. A. Richter, Phys. Rev. C 74, 034315 (2006)

  15. Thermodynamical Limit for Correlated Gaussian Random Energy Models

    NASA Astrophysics Data System (ADS)

    Contucci, P.; Esposti, M. Degli; Giardinà, C.; Graffi, S.

    Let {E_Σ(N)}, Σ ∈ Σ_N, be a family of |Σ_N| = 2^N centered unit Gaussian random variables defined by the covariance matrix C_N with elements c_N(Σ,τ) := Av(E_Σ(N) E_τ(N)), and consider the corresponding random Hamiltonian. Then the quenched thermodynamical limit exists if, for every decomposition N = N_1 + N_2 and all pairs (Σ,τ) ∈ Σ_N × Σ_N, the covariances satisfy c_N(Σ,τ) ≤ (N_1/N) c_{N_1}(π_1(Σ),π_1(τ)) + (N_2/N) c_{N_2}(π_2(Σ),π_2(τ)), where π_k(Σ), k = 1, 2, are the projections of Σ ∈ Σ_N into Σ_{N_k}. The condition is explicitly verified for the Sherrington-Kirkpatrick, the even p-spin, the Derrida REM and the Derrida-Gardner GREM models.

  16. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
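
    The sketch below is a toy illustration of the post-classification step only: a binary Potts-type Markov random field whose energy is minimized exactly by an s-t minimum cut, here built with networkx on a synthetic "log-ratio" image. The class means, noise level, smoothness weight beta, and image are invented for the example; the paper instead estimates the unary terms with the method of logarithmic cumulants and iterates the cut.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)

      # Synthetic change image: a bright changed square on a dark background.
      H, W = 24, 24
      img = rng.normal(0.0, 0.4, size=(H, W))
      img[8:16, 8:16] += 2.0

      def nll(x, mu, sigma):                       # hypothetical Gaussian class models
          return 0.5 * ((x - mu) / sigma) ** 2

      D0 = nll(img, 0.0, 0.4)                      # cost of labelling a pixel "no change"
      D1 = nll(img, 2.0, 0.4)                      # cost of labelling a pixel "change"
      beta = 1.5                                   # Potts smoothness weight

      G = nx.DiGraph()
      for r in range(H):
          for c in range(W):
              p = (r, c)
              G.add_edge('s', p, capacity=float(D1[r, c]))   # paid if p ends up "change"
              G.add_edge(p, 't', capacity=float(D0[r, c]))   # paid if p ends up "no change"
              for dr, dc in ((0, 1), (1, 0)):                # 4-neighbourhood smoothness
                  if r + dr < H and c + dc < W:
                      q = (r + dr, c + dc)
                      G.add_edge(p, q, capacity=beta)
                      G.add_edge(q, p, capacity=beta)

      cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
      labels = np.zeros((H, W), dtype=int)
      for p in sink_side - {'t'}:
          labels[p] = 1                            # sink-side pixels take the "change" label

      print("changed pixels found:", int(labels.sum()), "of", 8 * 8)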

  17. Graph theoretical analysis, insilico modeling, design, and synthesis of compounds containing benzimidazole skeleton as antidepressant agents.

    PubMed

    Theivendren, Panneerselvam; Subramanian, Arumugam; Murugan, Indhumathy; Joshi, Shrinivas D; More, Uttam A

    2016-10-31

    In this study, a drug target was identified using the KEGG database and network analysis through Cytoscape software. A designed series of novel benzimidazoles was taken along with the reference standard Flibanserin for in silico modeling. The novel 4-(1H-benzo[d]imidazol-2-yl)-N-(substituted phenyl)-4-oxobutanamide (3a-j) analogs were synthesized and evaluated for their antidepressant activity. Reaction of 4-(1H-benzo[d]imidazol-2-yl)-4-oxobutanoic acid (1) with 4-(1H-benzo[d]imidazol-2-yl)-4-oxobutanoyl chloride (2) furnished the novel 4-(1H-benzo[d]imidazol-2-yl)-N-(substituted phenyl)-4-oxobutanamide (3a-j). All the newly synthesized compounds were characterized by IR, 1H-NMR, and mass spectral analysis. The antidepressant activities of the synthesized derivatives were compared with the standard drug clomipramine at a dose level of 20 mg/kg. Among the derivatives tested, most of the compounds were found to have potent activity against depression. The highest level of activity was shown by compounds 3d, 3e, and 3i, which significantly reduced the duration of immobility time at the dose level of 50 mg/kg.

  18. Random-forcing model of the mesoscale oceanic eddies

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel S.

    2005-04-01

    The role of mesoscale oceanic eddies in driving large-scale currents is studied in an eddy-resolving midlatitude double-gyre ocean model. The reference solution is decomposed into large-scale and eddy components in a way which is dynamically consistent with a non-eddy-resolving ocean model. That is, the non-eddy-resolving solution driven by this eddy-forcing history, calculated on the basis of this decomposition, correctly approximates the original flow. The main effect of the eddy forcing on the large-scale flow is to enhance the eastward-jet extension of the subtropical western boundary current. This is an anti-diffusive process, which cannot be represented in terms of turbulent diffusion. It is shown that the eddy-forcing history can be approximated as a space-time correlated, random-forcing process in such a way that the non-eddy-resolving solution correctly approximates the reference solution. Thus, the random-forcing model can potentially replace the diffusion model, which is commonly used to parameterize eddy effects on the large-scale currents. The eddy-forcing statistics are treated as spatially inhomogeneous but stationary, and the dynamical roles of space-time correlations and spatial inhomogeneities are systematically explored. The integral correlation time, oscillations of the space correlations, and inhomogeneity of the variance are found to be particularly important for the flow response.

  19. Connectivity properties of the random-cluster model

    NASA Astrophysics Data System (ADS)

    Weigel, Martin; Metin Elci, Eren; Fytas, Nikolaos G.

    2016-02-01

    We investigate the connectivity properties of the random-cluster model mediated by bridge bonds that, if removed, lead to the generation of new connected components. We study numerically the density of bridges and the fragmentation kernel, i.e., the relative sizes of the generated fragments, and find that these quantities follow a scaling description. The corresponding scaling exponents are related to well known equilibrium critical exponents of the model. Using the Russo-Margulis formalism, we derive an exact relation between the expected density of bridges and the number of active edges. The same approach allows us to study the fluctuations in the numbers of bridges, thereby uncovering a new singularity in the random-cluster model as q < 4 cos^2(π/√3) in two dimensions. For numerical simulations of the model directly in the language of individual bonds, known as Sweeny's algorithm, the prevalence of bridges and the scaling of the sizes of clusters connected by bridges and candidate-bridges play a pivotal role. We discuss several different implementations of the necessary connectivity algorithms and assess their relative performance.

  20. Auditory model: effects on learning under blocked and random practice schedules.

    PubMed

    Han, Dong-Wook; Shea, Charles H

    2008-12-01

    An experiment was conducted to determine the impact of an auditory model on blocked, random, and mixed practice schedules of three five-segment timing sequences (relative time constant). We were interested in whether or not the auditory model differentially affected the learning of relative and absolute timing under blocked and random practice. Participants (N = 80) were randomly assigned to one of eight practice conditions, which differed in practice schedule (blocked-blocked, blocked-random, random-blocked, random-random) and auditory model (no model, model). The results indicated that the auditory model enhanced relative timing performance on the delayed retention test regardless of the practice schedule, but it did not influence the learning of absolute timing. Blocked-blocked and blocked-random practice conditions resulted in enhanced relative timing retention performance relative to random-blocked and random-random practice schedules. Random-random and blocked-random practice schedules resulted in better absolute timing than blocked-blocked or random-blocked practice, regardless of the presence or absence of an auditory model during acquisition. Thus, considering both relative and absolute timing, the blocked-random practice condition resulted in overall learning superior to the other practice schedules. The results also suggest that an auditory model produces an added effect on learning relative timing regardless of the practice schedule, but it does not influence the learning of absolute timing.

  1. Modeling Temporal Variation in Social Network: An Evolutionary Web Graph Approach

    NASA Astrophysics Data System (ADS)

    Mitra, Susanta; Bagchi, Aditya

    A social network is a social structure between actors (individuals, organizations or other social entities) and indicates the ways in which they are connected through various social relationships like friendships, kinships, and professional or academic ties. Usually, a social network represents a social community, like a club and its members or a city and its citizens, or a research group communicating over the Internet. In the seventies, Leinhardt [1] first proposed the idea of representing a social community by a digraph. Later, this idea became popular among other research workers like network designers, web-service application developers and e-learning modelers, and gave rise to a rapid proliferation of research work in the area of social network analysis. Some of the notable structural properties of a social network are connectedness between actors, reachability between a source and a target actor, reciprocity or pair-wise connection between actors with bi-directional links, centrality of actors (the important actors having high degree or more connections) and finally the division of actors into sub-structures or cliques or strongly-connected components. The cycles present in a social network may even be nested [2, 3]. The formal definition of these structural properties will be provided in Sect. 8.2.1. The division of actors into cliques or sub-groups can be a very important factor for understanding a social structure, particularly the degree of cohesiveness in a community. The number, size, and connections among the sub-groups in a network are useful in understanding how the network, as a whole, is likely to behave.

  2. Graph theory for analyzing pair-wise data: application to geophysical model parameters estimated from interferometric synthetic aperture radar data at Okmok volcano, Alaska

    NASA Astrophysics Data System (ADS)

    Reinisch, Elena C.; Cardiff, Michael; Feigl, Kurt L.

    2017-01-01

    Graph theory is useful for analyzing time-dependent model parameters estimated from interferometric synthetic aperture radar (InSAR) data in the temporal domain. Plotting acquisition dates (epochs) as vertices and pair-wise interferometric combinations as edges defines an incidence graph. The edge-vertex incidence matrix and the normalized edge Laplacian matrix are factors in the covariance matrix for the pair-wise data. Using empirical measures of residual scatter in the pair-wise observations, we estimate the relative variance at each epoch by inverting the covariance of the pair-wise data. We evaluate the rank deficiency of the corresponding least-squares problem via the edge-vertex incidence matrix. We implement our method in a MATLAB software package called GraphTreeTA available on GitHub (https://github.com/feigl/gipht). We apply temporal adjustment to the data set described in Lu et al. (Geophys Res Solid Earth 110, 2005) at Okmok volcano, Alaska, which erupted most recently in 1997 and 2008. The data set contains 44 differential volumetric changes and uncertainties estimated from interferograms between 1997 and 2004. Estimates show that approximately half of the magma volume lost during the 1997 eruption was recovered by the summer of 2003. Between June 2002 and September 2003, the estimated rate of volumetric increase is (6.2 ± 0.6) × 10^6 m^3/year . Our preferred model provides a reasonable fit that is compatible with viscoelastic relaxation in the five years following the 1997 eruption. Although we demonstrate the approach using volumetric rates of change, our formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, such as range change, range gradient, or atmospheric delay.
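
    A small numerical sketch of the temporal adjustment that the incidence-graph formulation leads to: each interferometric pair contributes one row of the edge-vertex incidence matrix, and epoch values are recovered by least squares after fixing the datum. The epochs, pair list, noise level, and "true" values are invented for illustration and are not the Okmok data set.

      import numpy as np

      rng = np.random.default_rng(0)

      # Acquisition epochs (vertices) and interferometric pairs (edges).
      epochs = [2000.0, 2001.0, 2002.5, 2004.0]
      pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

      # Hypothetical "true" epoch values and noisy pair-wise differences.
      x_true = np.array([0.0, 1.2, 2.0, 2.3])
      d = np.array([x_true[j] - x_true[i] for i, j in pairs]) + rng.normal(0, 0.05, len(pairs))

      # Edge-vertex incidence matrix: one row per pair, -1 at the earlier epoch, +1 at the later.
      A = np.zeros((len(pairs), len(epochs)))
      for k, (i, j) in enumerate(pairs):
          A[k, i], A[k, j] = -1.0, 1.0

      # The incidence matrix is rank deficient (differences fix values only up to a
      # constant), so pin the first epoch to zero and solve for the remaining ones.
      x_hat = np.zeros(len(epochs))
      x_hat[1:] = np.linalg.lstsq(A[:, 1:], d, rcond=None)[0]

      print("estimated epoch values:", x_hat)
      print("true values (shifted so x0 = 0):", x_true - x_true[0])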

  3. An adaptive grid for graph-based segmentation in retinal OCT

    PubMed Central

    Lang, Andrew; Carass, Aaron; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.

    2016-01-01

    Graph-based methods for retinal layer segmentation have proven to be popular due to their efficiency and accuracy. These methods build a graph with nodes at each voxel location and use edges connecting nodes to encode the hard constraints of each layer’s thickness and smoothness. In this work, we explore deforming the regular voxel grid to allow adjacent vertices in the graph to more closely follow the natural curvature of the retina. This deformed grid is constructed by fixing node locations based on a regression model of each layer’s thickness relative to the overall retina thickness, thus we generate a subject specific grid. Graph vertices are not at voxel locations, which allows for control over the resolution that the graph represents. By incorporating soft constraints between adjacent nodes, segmentation on this grid will favor smoothly varying surfaces consistent with the shape of the retina. Our final segmentation method then follows our previous work. Boundary probabilities are estimated using a random forest classifier followed by an optimal graph search algorithm on the new adaptive grid to produce a final segmentation. Our method is shown to produce a more consistent segmentation with an overall accuracy of 3.38 μm across all boundaries. PMID:27773959

  4. a Bidirectional Reflectance Model for Non-Random Canopies.

    NASA Astrophysics Data System (ADS)

    Welles, Jonathan Mark

    The general array model (GAR) is extended to calculate bidirectional reflectance (reflectance as a function of angle of view and angle of illumination) of a plant stand. The new model (BIGAR) defines the plant canopy as one or more foliage-containing ellipsoids arranged in any desired pattern. Foliage is assumed randomly distributed within each ellipsoid, with a specified distribution of inclination angles and random azimuthal orientation distribution. A method of specifying sub-ellipsoids that contain foliage of varying properties is discussed. Foliage is assumed to scatter radiation in a Lambertian fashion. The soil bidirectional reflectance is modelled separately as a boundary condition. The reflectance of any given grid point within the plant stand is calculated from the incident radiation (direct beam, diffuse sky, and diffuse scattered from the soil and foliage) and a view weighting factor that is based upon how much of the view is occupied by that particular grid point. Integrating this over a large number of grid locations provides a prediction of the bidirectional reflectance. Model predictions are compared with measurements in corn and soybean canopies at three stages of growth. The model does quite well in predicting the general shape and dynamics of the measured bidirectional reflectance factors, and rms errors are typically 10% to 15% (relative) of the integrated reflectance value. The effect of rows is evident in both the measurements and the model in the early part of the growing season. The presence of tassels in the corn may be the cause of unpredicted row effects later in the season. Predicted nadir reflectances are accurate for soybean, but are low for full cover corn. The presence of specular reflection causes the model to slightly underpredict reflectances looking toward the sun at large solar zenith angles.

  5. Graph theory enables drug repurposing--how a mathematical model can drive the discovery of hidden mechanisms of action.

    PubMed

    Gramatica, Ruggero; Di Matteo, T; Giorgetti, Stefano; Barbiani, Massimo; Bevec, Dorian; Aste, Tomaso

    2014-01-01

    We introduce a methodology to efficiently exploit natural-language expressed biomedical knowledge for repurposing existing drugs towards diseases for which they were not initially intended. Leveraging developments in Computational Linguistics and Graph Theory, a methodology is defined to build a graph representation of knowledge, which is automatically analysed to discover hidden relations between any drug and any disease: these relations are specific paths among the biomedical entities of the graph, representing possible Modes of Action for any given pharmacological compound. We propose a measure for the likeliness of these paths based on a stochastic process on the graph. This measure depends on the abundance of indirect paths between a peptide and a disease, rather than solely on the strength of the shortest path connecting them. We provide real-world examples, showing how the method successfully retrieves known pathophysiological Modes of Action and finds new ones by meaningfully selecting and aggregating contributions from known bio-molecular interactions. Applications of this methodology are presented, and prove the efficacy of the method for selecting drugs as treatment options for rare diseases.

  6. Graph Theory Enables Drug Repurposing – How a Mathematical Model Can Drive the Discovery of Hidden Mechanisms of Action

    PubMed Central

    Gramatica, Ruggero; Di Matteo, T.; Giorgetti, Stefano; Barbiani, Massimo; Bevec, Dorian; Aste, Tomaso

    2014-01-01

    We introduce a methodology to efficiently exploit natural-language expressed biomedical knowledge for repurposing existing drugs towards diseases for which they were not initially intended. Leveraging developments in Computational Linguistics and Graph Theory, a methodology is defined to build a graph representation of knowledge, which is automatically analysed to discover hidden relations between any drug and any disease: these relations are specific paths among the biomedical entities of the graph, representing possible Modes of Action for any given pharmacological compound. We propose a measure for the likeliness of these paths based on a stochastic process on the graph. This measure depends on the abundance of indirect paths between a peptide and a disease, rather than solely on the strength of the shortest path connecting them. We provide real-world examples, showing how the method successfully retrieves known pathophysiological Modes of Action and finds new ones by meaningfully selecting and aggregating contributions from known bio-molecular interactions. Applications of this methodology are presented, and prove the efficacy of the method for selecting drugs as treatment options for rare diseases. PMID:24416311

  7. Modeling the Relationships between Test-Taking Strategies and Test Performance on a Graph-Writing Task: Implications for EAP

    ERIC Educational Resources Information Center

    Yang, Hui-Chun

    2012-01-01

    With the increasing use of integrated tasks in assessing writing, more and more research studies have been conducted to examine the construct validity of such tasks. Previous studies have largely focused on reading-writing tasks, while relatively little is known about graph-writing tasks. This study examines second language (L2) writers'…

  8. Critical behavior of the Ising model on random fractals.

    PubMed

    Monceau, Pascal

    2011-11-01

    We study the critical behavior of the Ising model in the case of quenched disorder constrained by fractality on random Sierpinski fractals with a Hausdorff dimension d_f ≈ 1.8928. This is a first attempt to study a situation between the borderline cases of deterministic self-similarity and quenched randomness. Intensive Monte Carlo simulations were carried out. Scaling corrections are much weaker than in the deterministic cases, so that our results enable us to ensure that finite-size scaling holds, and that the critical behavior is described by a new universality class. The hyperscaling relation is compatible with an effective dimension equal to the Hausdorff one; moreover, the two eigenvalue exponents of the renormalization flows are shown to be different from the ones calculated from ε expansions, and from the ones obtained for fourfold symmetric deterministic fractals. Although the space dimensionality is not integer, the lack of self-averaging properties exhibits some features very close to those of a random fixed point associated with a relevant disorder.

  9. The 'shape' of phylogenies under simple random speciation models

    NASA Astrophysics Data System (ADS)

    Steel, Mike; McKenzie, Andy

    We describe some discrete structural properties of evolutionary trees generated under simple null models of speciation, such as the Yule model. These models have been used as priors in Bayesian approaches to phylogenetic analysis, and also to test hypotheses concerning the speciation process. Here we describe new results for four properties of trees generated under such models. Firstly, for a rooted tree generated by the Yule model we describe the probability distribution on the depth (number of edges from the root) of the most recent common ancestor of a random subset of k species. Secondly, for trees generated under the Yule and uniform models, we describe the induced distribution they generate on the number C_n of cherries in the tree, where a cherry is a pair of leaves each of which is adjacent to a common ancestor. Next we show that, for trees generated under the Yule model, the approximate position of the root can be estimated from the associated unrooted tree, even for trees with a large number of leaves. Finally, we analyse a biologically motivated extension of the Yule model and describe its distribution on tree shapes when speciation occurs in rapid bursts.
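
    A short sketch of the kind of calculation the abstract summarizes for cherries: simulate tree shapes under the Yule model (each split is applied to a uniformly chosen extant leaf) and count cherries, i.e. internal nodes both of whose children are leaves. The number of leaves, replicate count, and seed are arbitrary; under the Yule model the expected number of cherries is close to n/3.

      import random

      def yule_tree(n_leaves, rng):
          # Start from a root with two leaf children; repeatedly split a random leaf.
          children = {0: [1, 2]}
          leaves, next_id = [1, 2], 3
          while len(leaves) < n_leaves:
              leaf = leaves.pop(rng.randrange(len(leaves)))
              children[leaf] = [next_id, next_id + 1]
              leaves += [next_id, next_id + 1]
              next_id += 2
          return children, set(leaves)

      def count_cherries(children, leaves):
          # A cherry is an internal node whose two children are both leaves.
          return sum(all(c in leaves for c in kids) for kids in children.values())

      rng = random.Random(0)
      n = 200
      counts = [count_cherries(*yule_tree(n, rng)) for _ in range(2000)]
      print("mean number of cherries:", sum(counts) / len(counts), " vs n/3 =", n / 3)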

  10. Zero temperature landscape of the random sine-Gordon model

    SciTech Connect

    Sanchez, A.; Bishop, A.R.; Cai, D.

    1997-04-01

    We present a preliminary summary of the zero temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed by using lattices of moderate size, as the behavior of the model turns out to be independent of the size above a certain length (≈ 128 x 128 lattices). Subsequently, we show that the behavior of the height difference correlation function is of (log r)^2 type up to a certain correlation length (ξ ≈ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of very many (contradictory) analyses.

  11. Random-effects models for serial observations with binary response

    SciTech Connect

    Stiratelli, R.; Laird, N.; Ware, J.H.

    1984-12-01

    This paper presents a general mixed model for the analysis of serial dichotomous responses provided by a panel of study participants. Each subject's serial responses are assumed to arise from a logistic model, but with regression coefficients that vary between subjects. The logistic regression parameters are assumed to be normally distributed in the population. Inference is based upon maximum likelihood estimation of fixed effects and variance components, and empirical Bayes estimation of random effects. Exact solutions are analytically and computationally infeasible, but an approximation based on the mode of the posterior distribution of the random parameters is proposed, and is implemented by means of the EM algorithm. This approximate method is compared with a simpler two-step method proposed by Korn and Whittemore, using data from a panel study of asthmatics originally described in that paper. One advantage of the estimation strategy described here is the ability to use all of the data, including that from subjects with insufficient data to permit fitting of a separate logistic regression model, as required by the Korn and Whittemore method. However, the new method is computationally intensive.

  12. Bouchaud-Mézard model on a random network

    NASA Astrophysics Data System (ADS)

    Ichinomiya, Takashi

    2012-09-01

    We studied the Bouchaud-Mézard (BM) model, which was introduced to explain Pareto's law in a real economy, on a random network. Using “adiabatic and independent” assumptions, we analytically obtained the stationary probability distribution function of wealth. The results show that wealth condensation, indicated by the divergence of the variance of wealth, occurs at a larger J than that obtained by the mean-field theory, where J represents the strength of interaction between agents. We compared our results with numerical simulation results and found that they were in good agreement.
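
    A rough Euler-Maruyama sketch of Bouchaud-Mézard-type wealth dynamics on a random network: each agent's wealth is driven by multiplicative noise and by linear exchange with its neighbors. The network choice (Erdős-Rényi), the parameter values, time step, and normalization are illustrative assumptions rather than the analysis in the paper.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)

      N, J, sigma = 500, 0.1, 0.2                 # agents, exchange strength, volatility
      dt, steps = 0.01, 5000

      G = nx.gnp_random_graph(N, 8.0 / N, seed=1) # random interaction network
      A = nx.to_numpy_array(G)
      k = A.sum(axis=1)

      w = np.ones(N)
      for _ in range(steps):
          exchange = J * (A @ w - k * w)          # J * sum_j A_ij (w_j - w_i)
          noise = sigma * w * rng.normal(0.0, np.sqrt(dt), size=N)
          w = np.maximum(w + exchange * dt + noise, 1e-12)
          w *= N / w.sum()                        # work with wealth normalized to mean 1

      # A large (or diverging) variance of normalized wealth signals condensation.
      print("variance of normalized wealth:", w.var())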

  13. Critical Interfaces in the Random-Bond Potts Model

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jesper L.; Le Doussal, Pierre; Picco, Marco; Santachiara, Raoul; Wiese, Kay Jörg

    2009-02-01

    We study geometrical properties of interfaces in the random-temperature q-state Potts model as an example of a conformal field theory weakly perturbed by quenched disorder. Using conformal perturbation theory in q-2 we compute the fractal dimension of Fortuin-Kasteleyn (FK) domain walls. We also compute it numerically, both via the Wolff cluster algorithm for q=3 and via transfer-matrix evaluations. We further obtain numerical results for the fractal dimension of spin-cluster interfaces for q=3. These are found to be numerically consistent with the duality κ_spin κ_FK = 16 as expressed in putative SLE parameters.

  14. Critical interfaces in the random-bond Potts model.

    PubMed

    Jacobsen, Jesper L; Le Doussal, Pierre; Picco, Marco; Santachiara, Raoul; Wiese, Kay Jörg

    2009-02-20

    We study geometrical properties of interfaces in the random-temperature q-state Potts model as an example of a conformal field theory weakly perturbed by quenched disorder. Using conformal perturbation theory in q-2 we compute the fractal dimension of Fortuin-Kasteleyn (FK) domain walls. We also compute it numerically, both via the Wolff cluster algorithm for q=3 and via transfer-matrix evaluations. We further obtain numerical results for the fractal dimension of spin-cluster interfaces for q=3. These are found to be numerically consistent with the duality kappa_spin kappa_FK = 16 as expressed in putative SLE parameters.

  15. Mechanisms of evolution of avalanches in regular graphs.

    PubMed

    Handford, Thomas P; Pérez-Reche, Francisco J; Taraskin, Sergei N

    2013-06-01

    A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.
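
    The branching-process viewpoint invoked above can be illustrated with a toy Galton-Watson simulation in which each flipped spin triggers a Binomial(q-1, p) number of further flips; this is our own illustration of the technique, not the paper's mapping or its generating-function calculation, and the parameter values are made up.

```python
import random

def avalanche(q=4, p=0.25, rng=None, max_active=10**4):
    """Galton-Watson toy avalanche: each flipped spin triggers a
    Binomial(q-1, p) number of further flips.  Returns (duration in
    generations, whether the avalanche 'ran away' past max_active)."""
    rng = rng or random.Random()
    active, generations = 1, 0
    while 0 < active < max_active:
        active = sum(sum(rng.random() < p for _ in range(q - 1))
                     for _ in range(active))
        generations += 1
    return generations, active >= max_active

if __name__ == "__main__":
    rng = random.Random(0)
    q = 4
    for p in (0.20, 1.0 / (q - 1), 0.45):        # sub-, near-, super-critical
        runs = [avalanche(q, p, rng) for _ in range(500)]
        mean_duration = sum(d for d, _ in runs) / len(runs)
        runaway = sum(r for _, r in runs) / len(runs)
        print(f"p={p:.3f}  mean duration={mean_duration:7.1f}  "
              f"runaway fraction={runaway:.2f}")
```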

  16. Generative Graph Prototypes from Information Theory.

    PubMed

    Han, Lin; Wilson, Richard C; Hancock, Edwin R

    2015-10-01

    In this paper we present a method for constructing a generative prototype for a set of graphs by adopting a minimum description length approach. The method is posed in terms of learning a generative supergraph model from which the new samples can be obtained by an appropriate sampling mechanism. We commence by constructing a probability distribution for the occurrence of nodes and edges over the supergraph. We encode the complexity of the supergraph using an approximate Von Neumann entropy. A variant of the EM algorithm is developed to minimize the description length criterion in which the structure of the supergraph and the node correspondences between the sample graphs and the supergraph are treated as missing data. To generate new graphs, we assume that the nodes and edges of graphs arise under independent Bernoulli distributions and sample new graphs according to their node and edge occurrence probabilities. Empirical evaluations on real-world databases demonstrate the practical utility of the proposed algorithm and show the effectiveness of the generative model for the tasks of graph classification, graph clustering and generating new sample graphs.
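
    The sampling step described above, independent Bernoulli node and edge occurrences, is easy to sketch; the supergraph probabilities below are invented for illustration and are not a learned prototype.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_graph(node_prob, edge_prob):
    """Sample a graph from a 'supergraph' prototype: node i appears with
    probability node_prob[i]; edge (i, j) appears with probability
    edge_prob[i, j], conditional on both endpoints appearing."""
    n = len(node_prob)
    nodes = np.flatnonzero(rng.random(n) < node_prob)
    edges = [(int(i), int(j))
             for k, i in enumerate(nodes) for j in nodes[k + 1:]
             if rng.random() < edge_prob[i, j]]
    return nodes, edges

if __name__ == "__main__":
    n = 6
    node_prob = np.full(n, 0.8)               # illustrative values only
    edge_prob = np.full((n, n), 0.3)
    edge_prob[:3, :3] = 0.9                   # a denser "community"
    nodes, edges = sample_graph(node_prob, edge_prob)
    print("nodes:", nodes.tolist())
    print("edges:", edges)
```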

  17. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  18. Graph500 in OpenSHMEM

    SciTech Connect

    D'Azevedo, Ed F; Imam, Neena

    2015-01-01

    This document describes the effort to implement the Graph 500 benchmark using OpenSHMEM, based on the MPI-2 one-sided version. The Graph 500 benchmark performs a breadth-first search in parallel on a large randomly generated undirected graph and can be implemented using basic MPI-1 and MPI-2 one-sided communication. Graph 500 requires atomic bit-wise operations on unsigned long integers, but neither atomic bit-wise operations nor support for unsigned long are available in OpenSHMEM. The needed bit-wise atomic operations and unsigned long support are therefore emulated using atomic conditional swap (CSWAP) on signed long integers. Preliminary results comparing the OpenSHMEM and MPI-2 one-sided implementations on a Silicon Graphics Incorporated (SGI) cluster and the Cray XK7 are presented.

  19. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  20. Rigorously testing multialternative decision field theory against random utility models.

    PubMed

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions.
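
    For readers unfamiliar with the random utility baseline used in the comparison, here is a minimal sketch of multinomial logit choice probabilities (MDFT itself is a sequential sampling process and is not implemented here); the attribute values and weights are made up.

```python
import numpy as np

def logit_choice_probabilities(attributes, weights):
    """Multinomial logit: utility is a weighted sum of attributes plus
    i.i.d. extreme-value noise, giving softmax choice probabilities."""
    utilities = attributes @ weights
    expu = np.exp(utilities - utilities.max())   # numerically stable softmax
    return expu / expu.sum()

if __name__ == "__main__":
    # three consumer products described on two attributes (made-up values)
    attributes = np.array([[0.9, 0.2],
                           [0.5, 0.6],
                           [0.1, 0.9]])
    weights = np.array([1.0, 1.5])
    print(logit_choice_probabilities(attributes, weights).round(3))
```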

  1. Measuring Graph Comprehension, Critique, and Construction in Science

    ERIC Educational Resources Information Center

    Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.

    2016-01-01

    Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed…

  2. Community detection in directed acyclic graphs

    NASA Astrophysics Data System (ADS)

    Speidel, Leo; Takaguchi, Taro; Masuda, Naoki

    2015-08-01

    Some temporal networks, most notably citation networks, are naturally represented as directed acyclic graphs (DAGs). To detect communities in DAGs, we propose a modularity for DAGs by defining an appropriate null model (i.e., randomized network) respecting the order of nodes. We implement a spectral method to approximately maximize the proposed modularity measure and test the method on citation networks and other DAGs. We find that the attained values of the modularity for DAGs are similar for partitions that we obtain by maximizing the proposed modularity (designed for DAGs), the modularity for undirected networks and that for general directed networks. In other words, if we neglect the order imposed on nodes (and the direction of links) in a given DAG and maximize the conventional modularity measure, the obtained partition is close to the optimal one in the sense of the modularity for DAGs. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
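
    For reference, the standard Newman-Girvan modularity that the DAG variant generalizes can be computed directly from its definition, Q = (1/2m) Σ_ij [A_ij - k_i k_j/(2m)] δ(c_i, c_j); the sketch below is ours and does not reproduce the paper's order-respecting null model.

```python
import numpy as np

def modularity(adj, communities):
    """Newman-Girvan modularity of a partition of an undirected graph.

    adj: symmetric 0/1 adjacency matrix; communities: label per node.
    """
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                      # degrees
    two_m = k.sum()                          # 2m = sum of degrees
    same = np.equal.outer(communities, communities)
    return float(((adj - np.outer(k, k) / two_m) * same).sum() / two_m)

if __name__ == "__main__":
    # two triangles joined by one edge (a toy graph with obvious communities)
    adj = np.zeros((6, 6), int)
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        adj[i, j] = adj[j, i] = 1
    print(modularity(adj, [0, 0, 0, 1, 1, 1]))   # about 0.357
```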

  3. Expert interpretation of bar and line graphs: the role of graphicacy in reducing the effect of graph format.

    PubMed

    Peebles, David; Ali, Nadia

    2015-01-01

    The distinction between informational and computational equivalence of representations, first articulated by Larkin and Simon (1987), has been a fundamental principle in the analysis of diagrammatic reasoning and has been supported empirically on numerous occasions. We present an experiment that investigates this principle in relation to the performance of expert graph users of 2 × 2 "interaction" bar and line graphs. The study sought to determine whether expert interpretation is affected by graph format in the same way that novice interpretation is. The findings revealed that, unlike novices, and contrary to the assumptions of several graph comprehension models, experts' performance was the same for both graph formats, with their interpretation of bar graphs being no worse than that for line graphs. We discuss the implications of the study for guidelines for presenting such data and for models of expert graph comprehension.

  4. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

    The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predicting variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches: the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from the county of Hillsborough, Florida, the results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationship, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest value of R_d^2 as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and in some cases provides less accurate predictions; and (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to location, it is desirable to first use the geographical component to explore explicitly spatial aspects of the crash data (the spatial heterogeneity, or spatially structured varying relationships) and then to address the unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR proves more appropriate for regional crash modeling, as it outperforms the global models in capturing the spatial heterogeneity present in the modeled relationship and, compared with the non-spatial model, is capable of accounting for the spatial correlation in crash data.

  5. Modeling multivariate survival data by a semiparametric random effects proportional odds model.

    PubMed

    Lam, K F; Lee, Y W; Leung, T L

    2002-06-01

    In this article, the focus is on the analysis of multivariate survival time data with various types of dependence structures. Examples of multivariate survival data include clustered data and repeated measurements from the same subject, such as the interrecurrence times of cancer tumors. A random effect semiparametric proportional odds model is proposed as an alternative to the proportional hazards model. The distribution of the random effects is assumed to be multivariate normal and the random effect is assumed to act additively to the baseline log-odds function. This class of models, which includes the usual shared random effects model, the additive variance components model, and the dynamic random effects model as special cases, is highly flexible and is capable of modeling a wide range of multivariate survival data. A unified estimation procedure is proposed to estimate the regression and dependence parameters simultaneously by means of a marginal-likelihood approach. Unlike the fully parametric case, the regression parameter estimate is not sensitive to the choice of correlation structure of the random effects. The marginal likelihood is approximated by the Monte Carlo method. Simulation studies are carried out to investigate the performance of the proposed method. The proposed method is applied to two well-known data sets, including clustered data and recurrent event times data.

  6. A random interacting network model for complex networks

    PubMed Central

    Goswami, Bedartha; Shekatkar, Snehal M.; Rheinwalt, Aljoscha; Ambika, G.; Kurths, Jürgen

    2015-01-01

    We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms: degree-based preferential node selection and degree-assortative link placement are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems. PMID:26657032
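
    A rough sketch of the two mechanisms highlighted above, degree-based preferential node selection and degree-assortative link placement, is given below; the fitness and linkage rules are our own simplified stand-ins, not the authors' exact specification.

```python
import random
import networkx as nx

rng = random.Random(0)

def couple_networks(G1, G2, n_interlinks=50, beta=1.0):
    """Toy two-step coupling of two networks: (i) pick one node from each
    network with probability proportional to its degree (preferential
    selection); (ii) accept the inter-network link with a probability that
    is high when the two nodes have similar relative degree ranks
    (degree-assortative placement)."""
    def pick(G):
        nodes, degs = zip(*G.degree())
        return rng.choices(nodes, weights=degs, k=1)[0]

    def relative_rank(G, v):
        degs = sorted(d for _, d in G.degree())      # recomputed for brevity
        return degs.index(G.degree(v)) / (len(degs) - 1)

    interlinks = []
    while len(interlinks) < n_interlinks:
        u, v = pick(G1), pick(G2)
        similarity = 1.0 - abs(relative_rank(G1, u) - relative_rank(G2, v))
        if rng.random() < similarity ** beta:
            interlinks.append((u, v))
    return interlinks

if __name__ == "__main__":
    G1 = nx.barabasi_albert_graph(200, 2, seed=1)
    G2 = nx.barabasi_albert_graph(150, 3, seed=2)
    links = couple_networks(G1, G2)
    print(len(links), "inter-network links, e.g.", links[:3])
```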

  7. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
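
    The dynamics in question are simple to simulate. The sketch below performs Ulam-type binary collisions, in which two randomly chosen agents pool their energy and split it at a uniformly random fraction, and tracks a histogram-based entropy as a rough proxy for the distribution entropy treated in the proof; all choices here are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def histogram_entropy(energies, bins=50):
    """Shannon entropy of the binned energy distribution (a simple proxy
    for the distribution entropy discussed above)."""
    counts, _ = np.histogram(energies, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

def random_exchange_sweep(energies, n_collisions):
    """Ulam-type binary collisions: two agents pool their energy and split
    it again at a uniformly random fraction."""
    n = len(energies)
    for _ in range(n_collisions):
        i, j = rng.choice(n, size=2, replace=False)
        total = energies[i] + energies[j]
        u = rng.random()
        energies[i], energies[j] = u * total, (1 - u) * total

if __name__ == "__main__":
    energies = np.full(1000, 1.0)            # start far from equilibrium
    for sweep in range(6):
        print(f"sweep {sweep}: entropy = {histogram_entropy(energies):.3f}")
        random_exchange_sweep(energies, n_collisions=len(energies))
```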

  8. Topologies on directed graphs

    NASA Technical Reports Server (NTRS)

    Lieberman, R. N.

    1972-01-01

    Given a directed graph, a natural topology is defined and relationships between standard topological properties and graph-theoretical concepts are studied. In particular, the properties of connectivity and separatedness are investigated. A metric is introduced which is shown to be related to separatedness. The topological notions of continuity and homeomorphism are also examined. A class of maps is studied which preserve both graph and topological properties. Applications involving strong maps and contractions are also presented.

  9. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with an exponential decay that is nowhere zero in an infinite domain, while the level-set method, a front-tracking technique, generates a sharp function that is non-zero only inside a compact domain. However, these two approaches can in fact be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature and they are extremely important in wildland fire propagation. Consequently, the fire front acquires a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key, characterising role that is typical of the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread caused by hot-air pre-heating and by ember landing, and the fire overcoming a fire-break zone, a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the formula for the rate of spread that accounts for the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept, and it…

  10. Patterns in randomly evolving networks: Idiotypic networks

    NASA Astrophysics Data System (ADS)

    Brede, Markus; Behn, Ulrich

    2003-03-01

    We present a model for the evolution of networks of occupied sites on undirected regular graphs. At every iteration step, in a parallel update, I randomly chosen empty sites are occupied and occupied sites whose occupied-neighbor degree lies outside a given interval (t_l, t_u) are set empty. Depending on the influx I and the values of the lower and upper thresholds of the occupied-neighbor degree, different kinds of behavior can be observed. In certain regimes stable long-living patterns appear. We distinguish two types of patterns: static patterns arising on graphs with low connectivity and dynamic patterns found on high-connectivity graphs. As I increases, patterns become unstable and transitions between almost stable patterns, interrupted by disordered phases, occur. For still larger I the lifetime of occupied sites becomes very small and network structures are dominated by randomness. We develop methods to analyze the nature and dynamics of these network patterns, give a statistical description of defects and fluctuations around them, and elucidate the transitions between different patterns. The results and methods presented can be applied to a variety of problems in different fields and to a broad class of graphs. Aiming chiefly at the modeling of functional networks of interacting antibodies and B cells of the immune system (idiotypic networks), we focus on a class of graphs constructed from bit chains. The biological relevance of the patterns and possible operational modes of idiotypic networks are discussed.
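
    The update rule quoted above is concrete enough to sketch directly; the graph (a random regular graph rather than the paper's bit-chain graphs) and the parameter values below are our own choices.

```python
import random
import networkx as nx

rng = random.Random(0)

def evolve(G, influx, t_lower, t_upper, steps=200):
    """Parallel-update dynamics of occupied sites: each step, `influx`
    random empty sites become occupied, then every occupied site whose
    number of occupied neighbours falls outside (t_lower, t_upper) is
    emptied."""
    occupied = set()
    for _ in range(steps):
        empty = [v for v in G if v not in occupied]
        occupied |= set(rng.sample(empty, min(influx, len(empty))))
        # evaluate all neighbour counts before deleting (parallel update)
        to_remove = {v for v in occupied
                     if not t_lower < sum(u in occupied for u in G[v]) < t_upper}
        occupied -= to_remove
    return occupied

if __name__ == "__main__":
    # a random regular graph as a stand-in for the paper's bit-chain graphs
    G = nx.random_regular_graph(d=6, n=500, seed=1)
    occupied = evolve(G, influx=20, t_lower=0, t_upper=5)
    print(len(occupied), "occupied sites after 200 steps")
```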

  11. Recognition of Probe Ptolemaic Graphs

    NASA Astrophysics Data System (ADS)

    Chang, Maw-Shang; Hung, Ling-Ju

    Let 𝒢 denote a graph class. An undirected graph G is called a probe 𝒢 graph if one can make G a graph in 𝒢 by adding edges between vertices in some independent set of G. By definition, the class 𝒢 is a subclass of the probe 𝒢 graphs. Ptolemaic graphs are chordal and induced-gem free. They form a subclass of both chordal graphs and distance-hereditary graphs. Many problems that are NP-hard on chordal graphs can be solved in polynomial time on ptolemaic graphs. We propose an O(nm)-time algorithm to recognize probe ptolemaic graphs, where n and m are the numbers of vertices and edges of the input graph, respectively.

  12. Graph Generator Survey

    SciTech Connect

    Lothian, Josh; Powers, Sarah S; Sullivan, Blair D; Baker, Matthew B; Schrock, Jonathan; Poole, Stephen W

    2013-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is part of that effort, focusing on graph generation. A previously developed benchmark, SystemBurn, allowed the emulation of different application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report examines existing synthetic graph generator implementations in preparation for further study of the properties of their generated synthetic graphs.

  13. Online dynamic graph drawing.

    PubMed

    Frishman, Yaniv; Tal, Ayellet

    2008-01-01

    This paper presents an algorithm for drawing a sequence of graphs online. The algorithm strives to maintain the global structure of the graph and thus the user's mental map, while allowing arbitrary modifications between consecutive layouts. The algorithm works online and uses various execution culling methods in order to reduce the layout time and handle large dynamic graphs. Techniques for representing graphs on the GPU allow a speedup by a factor of up to 17 compared to the CPU implementation. The scalability of the algorithm across GPU generations is demonstrated. Applications of the algorithm to the visualization of discussion threads in Internet sites and to the visualization of social networks are provided.

  14. A Random Network Model of Electrical Conduction in Hydrous Rock

    NASA Astrophysics Data System (ADS)

    Fujita, K.; Seki, M.; Katsura, T.; Ichiki, M.

    2011-12-01

    To evaluate the variation in conductivity of hydrous rock during dehydration, it is essential to understand the mechanism of the electrical conduction network in rock. Several recent attempts have been made to characterize this mechanism in hydrous rock; however, the realistic conduction mechanism within crustal rock and minerals is unknown and the relevant theories have not been successful. The aim of our study is to quantify the electrical conduction network in rock and/or mineral samples. We developed a cell-type lattice network model to evaluate the electrical conduction mechanism of fluid-mineral interaction. Using this cell-type lattice model, we simulated the various electrical paths and their connectivity in a rock and/or mineral sample. First, we assumed a network model consisting of 100 by 100 elementary cells in a matrix configuration, with current input and output layers placed at the edges of the lattice. Second, we randomly generated and placed the conductive and resistive cells using the Mersenne Twister scheme. Third, we applied a current to this model and performed a large number of realizations for each mineral distribution pattern to represent realistic conduction networks. Considering fractal dimensions, our model has been compared with images from Electron Probe Micro Analysis. To evaluate the distribution pattern of conductive and resistive cells quantitatively, we determined fractal dimensions by the box-counting method. Assessing the bulk conductivity change as a function of the conductor ratio in the hydrous rock, the model has been examined successfully against both simulated and experimental data.
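
    The box-counting estimate mentioned above can be sketched as follows for a random binary (conductive/resistive) cell pattern; the lattice size, conductor fraction, and box sizes are invented for illustration and this is not the authors' analysis of EPMA images.

```python
import numpy as np

rng = np.random.default_rng(3)

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2D binary mask:
    count boxes of side s containing at least one occupied cell, then fit
    log N(s) against log(1/s)."""
    counts = []
    n = mask.shape[0]
    for s in box_sizes:
        trimmed = mask[:n - n % s, :n - n % s]
        blocks = trimmed.reshape(n // s, s, n // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    # 100 x 100 lattice with a 30% fraction of conductive cells (made up)
    mask = rng.random((100, 100)) < 0.30
    print("estimated box-counting dimension:",
          round(float(box_counting_dimension(mask)), 2))
```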

  15. Potts-model formulation of the random resistor network

    NASA Astrophysics Data System (ADS)

    Harris, A. B.; Lubensky, T. C.

    1987-05-01

    The randomly diluted resistor network is formulated in terms of an n-replicated s-state Potts model with a spin-spin coupling constant J in the limit when first n, then s, and finally 1/J go to zero. This limit is discussed and, to leading order in 1/J, the generalized susceptibility is shown to reproduce the results of the accompanying paper where the resistor network is treated using the xy model. This Potts Hamiltonian is converted into a field theory by the usual Hubbard-Stratonovich transformation, and thereby a renormalization-group treatment is developed to obtain the corrections to the critical exponents to first order in ε=6-d, where d is the spatial dimensionality. The recursion relations are shown to be the same as for the xy model. Their detailed analysis (given in the accompanying paper) gives the resistance crossover exponent as φ_1=1+ε/42 and determines the critical exponent t for the conductivity of the randomly diluted resistor network at concentrations p just above the percolation threshold: t=(d-2)ν+φ_1, where ν is the critical exponent for the correlation length at the percolation threshold. These results correct previously accepted results giving φ_1=1 to all orders in ε. The new result for φ_1 removes the paradox associated with the numerical result that t>1 for d=2, and also shows that the Alexander-Orbach conjecture, while numerically quite accurate, is not exact, since it disagrees with the ε expansion.
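
    As a quick numerical reading of the quoted formula (our arithmetic only): for d = 2 the first-order ε-expansion gives t = φ_1 = 1 + 4/42 ≈ 1.095 > 1, which is the sense in which the paradox mentioned above is removed.

```python
def conductivity_exponent(d, nu):
    """First-order epsilon-expansion estimate quoted above:
    t = (d - 2) * nu + phi_1, with phi_1 = 1 + epsilon/42 and epsilon = 6 - d."""
    epsilon = 6 - d
    phi_1 = 1 + epsilon / 42
    return (d - 2) * nu + phi_1

if __name__ == "__main__":
    # d = 2: the (d - 2) * nu term vanishes, so t = 1 + 4/42 ~ 1.095 > 1,
    # consistent with the numerical observation that t > 1 in two dimensions.
    print(round(conductivity_exponent(d=2, nu=4 / 3), 3))
```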

  16. Graphs, matrices, and the GraphBLAS: Seven good reasons

    DOE PAGES

    Kepner, Jeremy; Bader, David; Buluç, Aydın; ...

    2015-01-01

    The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istcbigdata.org/GraphBlas) is being developed to bring the potential of matrix based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with analysis of graphs.

  17. Graphs, matrices, and the GraphBLAS: Seven good reasons

    SciTech Connect

    Kepner, Jeremy; Bader, David; Buluç, Aydın; Gilbert, John; Mattson, Timothy; Meyerhenke, Henning

    2015-01-01

    The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istcbigdata.org/GraphBlas) is being developed to bring the potential of matrix based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with analysis of graphs.
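
    To make the matrix-based viewpoint concrete, here is a minimal breadth-first search written as repeated sparse matrix-vector products over a boolean-like semiring, which is the core idea the GraphBLAS formalizes; this sketch uses SciPy rather than any GraphBLAS implementation and is not code from the paper.

```python
import numpy as np
import scipy.sparse as sp

def bfs_levels(adj, source):
    """Breadth-first search via sparse matrix-vector products.

    adj: symmetric CSR adjacency matrix; returns an array of BFS levels
    (-1 for unreachable vertices).
    """
    n = adj.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        # one "SpMV over the boolean semiring": neighbours of the frontier
        reached = (adj @ frontier.astype(np.int8)) > 0
        frontier = reached & (levels == -1)
        level += 1
    return levels

if __name__ == "__main__":
    # a small path graph 0-1-2-3 plus an isolated vertex 4
    rows, cols = [0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]
    adj = sp.csr_matrix((np.ones(len(rows), dtype=np.int8), (rows, cols)),
                        shape=(5, 5))
    print(bfs_levels(adj, source=0))    # [ 0  1  2  3 -1]
```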

  18. Exactly constructing model of quantum mechanics with random environment

    SciTech Connect

    Gevorkyan, A. S.

    2010-02-15

    Dissipation and decoherence, interaction with the random media, continuous measurements and many other complicated problems of open quantum systems are a result of interaction of quantum system with the random environment. These problems mathematically are described in terms of complex probabilistic processes (CPP). Note that CPP satisfies the stochastic differential equation (SDE) of Langevin-Schroedinger(L-Sch)type, and is defined on the extended space R{sup 1} - R{sub {l_brace}{gamma}{r_brace}}, where R{sup 1} and R{sub {l_brace}{gamma}{r_brace}} are the Euclidean and the functional spaces, correspondingly. For simplicity, the model of 1D quantum harmonic oscillator (QHO) with the stochastic environment is considered. On the basis of orthogonal CPP, the method of stochastic density matrix (SDM) is developed. By S DM method, the thermodynamical potentials, such as the nonequilibrium entropy and the energy of the 'ground state' are constructed in a closed form. The expressions for uncertain relations and Wigner function depending on interaction's constant between 1D QHO and the environment are obtained.

  19. An exactly solvable model of random site-specific recombinations

    PubMed Central

    Wei, Yi; Koulakov, Alexei A.

    2017-01-01

    Cre-lox and other systems are used as genetic tools to control site-specific recombination (SSR) events in genomic DNA. If multiple recombination sites are organized in a compact cluster within the same genome, a series of random recombination events may generate substantial cell specific genomic diversity. This diversity is used, for example, to distinguish neurons in the brain of the same multicellular mosaic organism, within the brainbow approach to neuronal connectome. In this paper we study an exactly solvable statistical model for SSR operating on a cluster of recombination sites. We consider two types of recombination events: inversions and excisions. Both of these events are available in the Cre-lox system. We derive three properties of the sequences generated by multiple recombination events. First, we describe the set of sequences that can in principle be generated by multiple inversions operating on the given initial sequence. We call this description the ergodicity theorem. On the basis of this description we calculate the number of sequences that can be generated from an initial sequence. This number of sequences is experimentally testable. Second, we demonstrate that after a large number of random inversions every sequence that can be generated is generated with equal probability. Lastly, we derive the equations for the probability to find a sequence as a function of time in the limit when excisions are much less frequent than inversions, such as in shufflon sequences. PMID:23151958
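
    The reachable-set question discussed above can be explored by brute force for a handful of segments. In the sketch below (ours, with lox-site placement simplified so that any contiguous block may invert), an inversion reverses a block and flips the orientation of every element in it, and breadth-first search enumerates every sequence reachable from a starting configuration.

```python
from collections import deque
from itertools import combinations

def inversions(seq):
    """All sequences reachable from `seq` by one inversion: a contiguous
    block is reversed and the orientation of each element in it flips.
    Elements are (label, orientation) pairs with orientation +1 or -1."""
    out = []
    n = len(seq)
    for i, j in combinations(range(n + 1), 2):          # block seq[i:j]
        block = tuple((lab, -ori) for lab, ori in reversed(seq[i:j]))
        out.append(seq[:i] + block + seq[j:])
    return out

def reachable(seq):
    """Breadth-first enumeration of every sequence reachable by repeated
    inversions (the set whose size the paper characterizes analytically)."""
    seen, queue = {seq}, deque([seq])
    while queue:
        current = queue.popleft()
        for nxt in inversions(current):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

if __name__ == "__main__":
    start = tuple((lab, +1) for lab in "ABC")           # three oriented segments
    print(len(reachable(start)), "distinct sequences reachable from", start)
```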

  20. Hedonic travel cost and random utility models of recreation

    SciTech Connect

    Pendleton, L.; Mendelsohn, R.; Davis, E.W.

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.

  1. Graph Theory and the High School Student.

    ERIC Educational Resources Information Center

    Chartrand, Gary; Wall, Curtiss E.

    1980-01-01

    Graph theory is presented as a tool to instruct high school mathematics students. A variety of real world problems can be modeled which help students recognize the importance and difficulty of applying mathematics. (MP)

  2. Partitioning and modularity of graphs with arbitrary degree distribution

    NASA Astrophysics Data System (ADS)

    Reichardt, Jörg; Bornholdt, Stefan

    2007-07-01

    We solve the graph bipartitioning problem in dense graphs with arbitrary degree distribution using the replica method. We find the cut size to scale universally with ⟨√k⟩. In contrast, earlier results studying the problem in graphs with a Poissonian degree distribution had found a scaling with √⟨k⟩ [Fu and Anderson, J. Phys. A 19, 1605 (1986)]. Our results also generalize to the problem of q-partitioning. They can be used to find the expected modularity Q [Newman and Girvan, Phys. Rev. E 69, 026113 (2004)] of random graphs and allow for the assessment of the statistical significance of the output of community detection algorithms.

  3. A Graph Search Heuristic for Shortest Distance Paths

    SciTech Connect

    Chow, E

    2005-03-24

    This paper presents a heuristic for guiding A* search for finding the shortest distance path between two vertices in a connected, undirected, and explicitly stored graph. The heuristic requires a small amount of data to be stored at each vertex. The heuristic has application to quickly detecting relationships between two vertices in a large information or knowledge network. We compare the performance of this heuristic with breadth-first search on graphs with various topological properties. The results show that one or more orders of magnitude improvement in the number of vertices expanded is possible for large graphs, including Poisson random graphs.
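
    The general idea of storing a small amount of per-vertex data to lower-bound the remaining distance can be illustrated with a landmark (triangle-inequality) heuristic for A*; this is a standard construction chosen for illustration and is not necessarily the heuristic proposed in the paper.

```python
import heapq
import random
import networkx as nx

def landmark_distances(G, landmarks):
    """Precompute hop distances from a few landmark vertices.  By the
    triangle inequality, h(v) = max_L |d(L, v) - d(L, t)| never
    overestimates d(v, t), so A* with this heuristic stays exact."""
    return [nx.single_source_shortest_path_length(G, L) for L in landmarks]

def astar_distance(G, source, target, dists):
    h = lambda v: max(abs(d[v] - d[target]) for d in dists)
    heap = [(h(source), 0, source)]
    best = {source: 0}
    expanded = 0
    while heap:
        f, g, v = heapq.heappop(heap)
        if v == target:
            return g, expanded
        if g > best.get(v, float("inf")):
            continue                       # stale heap entry
        expanded += 1
        for u in G[v]:
            if g + 1 < best.get(u, float("inf")):
                best[u] = g + 1
                heapq.heappush(heap, (g + 1 + h(u), g + 1, u))
    return None, expanded

if __name__ == "__main__":
    random.seed(0)
    G = nx.gnp_random_graph(2000, 0.002, seed=0)        # Poisson random graph
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    nodes = list(G)
    dists = landmark_distances(G, random.sample(nodes, 3))
    d, expanded = astar_distance(G, nodes[0], nodes[-1], dists)
    print(f"shortest distance {d}, vertices expanded {expanded}")
```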

  4. Efficient Ab initio Modeling of Random Multicomponent Alloys.

    PubMed

    Jiang, Chao; Uberuaga, Blas P

    2016-03-11

    We present in this Letter a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multicomponent alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high-entropy alloy chemistries. The SSOS method developed here can be broadly useful for the rapid computational design of multicomponent materials, especially those with a large number of alloying elements, a challenging problem for other approaches.

  5. Phase Transitions and Equilibrium Measures in Random Matrix Models

    NASA Astrophysics Data System (ADS)

    Martínez-Finkelshtein, A.; Orive, R.; Rakhmanov, E. A.

    2015-02-01

    The paper is devoted to a study of phase transitions in the Hermitian random matrix models with a polynomial potential. In an alternative equivalent language, we study families of equilibrium measures on the real line in a polynomial external field. The total mass of the measure is considered as the main parameter, which may be interpreted also either as temperature or time. Our main tools are differentiation formulas with respect to the parameters of the problem, and a representation of the equilibrium potential in terms of a hyperelliptic integral. Using this combination we introduce and investigate a dynamical system (system of ODEs) describing the evolution of families of equilibrium measures. On this basis we are able to systematically derive a number of new results on phase transitions, such as the local behavior of the system at all kinds of phase transitions, as well as to review a number of known ones.

  6. Local random potentials of high differentiability to model the Landscape

    SciTech Connect

    Battefeld, T.; Modi, C.

    2015-03-09

    We generate random functions locally via a novel generalization of Dyson Brownian motion, such that the functions are in a desired differentiability class C^k, while ensuring that the Hessian is a member of the Gaussian orthogonal ensemble (other ensembles might be chosen if desired). Potentials in such higher differentiability classes (k≥2) are required/desirable to model string theoretical landscapes, for instance to compute cosmological perturbations (e.g., k=2 for the power spectrum) or to search for minima (e.g., suitable de Sitter vacua for our universe). Since potentials are created locally, numerical studies become feasible even if the dimension of field space is large (D∼100). In addition to the theoretical prescription, we provide some numerical examples to highlight properties of such potentials; concrete cosmological applications will be discussed in companion publications.

  7. [Critique of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.

  8. Continuous Time Group Discovery in Dynamic Graphs

    SciTech Connect

    Miller, K; Eliassi-Rad, T

    2010-11-04

    With the rise in availability and importance of graphs and networks, it has become increasingly important to have good models to describe their behavior. While much work has focused on modeling static graphs, we focus on group discovery in dynamic graphs. We adapt a dynamic extension of Latent Dirichlet Allocation to this task and demonstrate good performance on two datasets. Modeling relational data has become increasingly important in recent years. Much work has focused on static graphs, that is, fixed graphs at a single point in time. Here we focus on the problem of modeling dynamic (i.e., time-evolving) graphs. We propose a scalable Bayesian approach for community discovery in dynamic graphs. Our approach is based on extensions of Latent Dirichlet Allocation (LDA). LDA is a latent variable model for topic modeling in text corpora. It was extended to deal with topic changes in discrete time and later in continuous time. These models were referred to as the discrete Dynamic Topic Model (dDTM) and the continuous Dynamic Topic Model (cDTM), respectively. When adapting these models to graphs, we take our inspiration from LDA-G and SSN-LDA, applications of LDA to static graphs that have been shown to effectively factor out community structure to explain link patterns in graphs. In this paper, we demonstrate how to adapt and apply the cDTM to the task of finding communities in dynamic networks. We use link prediction to measure the quality of the discovered community structure and apply it to two different relational datasets: DBLP author-keyword and CAIDA autonomous systems relationships. We also discuss a parallel implementation of this approach using Hadoop. In Section 2, we review LDA and LDA-G. In Section 3, we review the cDTM and introduce cDTM-G, its adaptation to modeling dynamic graphs. We discuss inference for the cDTM-G and details of our parallel implementation in Section 4 and present its performance on two datasets in Section 5 before concluding in…

  9. ACTIVITIES: Graphs and Games

    ERIC Educational Resources Information Center

    Hirsch, Christian R.

    1975-01-01

    Using a set of worksheets, students will discover and apply Euler's formula regarding connected planar graphs and play and analyze the game of Sprouts. One sheet leads to the discovery of Euler's formula; another concerns traversability of a graph; another gives an example and a game involving these ideas. (Author/KM)

  10. Real World Graph Connectivity

    ERIC Educational Resources Information Center

    Lind, Joy; Narayan, Darren

    2009-01-01

    We present the topic of graph connectivity along with a famous theorem of Menger in the real-world setting of the national computer network infrastructure of "National LambdaRail". We include a set of exercises where students reinforce their understanding of graph connectivity by analysing the "National LambdaRail" network. Finally, we give…

  11. Reflections on "The Graph"

    ERIC Educational Resources Information Center

    Petrosino, Anthony

    2012-01-01

    This article responds to arguments by Skidmore and Thompson (this issue of "Educational Researcher") that a graph published more than 10 years ago was erroneously reproduced and "gratuitously damaged" perceptions of the quality of education research. After describing the purpose of the original graph, the author counters assertions that the graph…

  12. Walking Out Graphs

    ERIC Educational Resources Information Center

    Shen, Ji

    2009-01-01

    In the Walking Out Graphs Lesson described here, students experience several types of representations used to describe motion, including words, sentences, equations, graphs, data tables, and actions. The most important theme of this lesson is that students have to understand the consistency among these representations and form the habit of…

  13. Graphing from Everyday Experience.

    ERIC Educational Resources Information Center

    Carraher, David; Schliemann, Analucia; Nemirousky, Ricardo

    1995-01-01

    Discusses the importance of teaching grounded in the everyday experiences and concerns of the learners. Studies how people with limited school experience can understand graphs and concludes that individuals with limited academic education can clarify the role of everyday experiences in learning about graphs. (ASK)

  14. Exploring Graphs: WYSIWYG.

    ERIC Educational Resources Information Center

    Johnson, Millie

    1997-01-01

    Graphs from media sources and questions developed from them can be used in the middle school mathematics classroom. Graphs depict storage temperature on a milk carton; air pressure measurements on a package of shock absorbers; sleep-wake patterns of an infant; a dog's breathing patterns; and the angle, velocity, and radius of a leaning bicyclist…

  15. Making "Photo" Graphs

    ERIC Educational Resources Information Center

    Doto, Julianne; Golbeck, Susan

    2007-01-01

    Collecting data and analyzing the results of experiments is difficult for children. The authors found a surprising way to help their third graders make graphs and draw conclusions from their data: digital photographs. The pictures bridged the gap between an abstract graph and the plants it represented. With the support of the photos, students…

  16. Communication Graph Generator for Parallel Programs

    SciTech Connect

    2014-04-08

    Graphator is a collection of relatively simple sequential programs that generate communication graphs/matrices for commonly occurring patterns in parallel programs. Currently, there is support for five communication patterns: two-dimensional 4-point stencil, four-dimensional 8-point stencil, all-to-alls over sub-communicators, random near-neighbor communication, and near-neighbor communication.

  17. Evolution in random fitness landscapes: the infinite sites model

    NASA Astrophysics Data System (ADS)

    Park, Su-Chan; Krug, Joachim

    2008-04-01

    We consider the evolution of an asexually reproducing population in an uncorrelated random fitness landscape in the limit of infinite genome size, which implies that each mutation generates a new fitness value drawn from a probability distribution g(w). This is the finite population version of Kingman's house of cards model (Kingman 1978 J. Appl. Probab. 15 1). In contrast to Kingman's work, the focus here is on unbounded distributions g(w) which lead to an indefinite growth of the population fitness. The model is solved analytically in the limit of infinite population size N → ∞ and simulated numerically for finite N. When the genome-wide mutation probability U is small, the long-time behavior of the model reduces to a point process of fixation events, which is referred to as a diluted record process (DRP). The DRP is similar to the standard record process except that a new record candidate (a number that exceeds all previous entries in the sequence) is accepted only with a certain probability that depends on the values of the current record and the candidate. We develop a systematic analytic approximation scheme for the DRP. At finite U the fitness frequency distribution of the population decomposes into a stationary part due to mutations and a traveling wave component due to selection, which is shown to imply a reduction of the mean fitness by a factor of 1-U compared to the U → 0 limit.

  18. Random walk models of worker sorting in ant colonies.

    PubMed

    Sendova-Franks, Ana B; Van Lent, Jan

    2002-07-21

    Sorting can be an important mechanism for the transfer of information from one level of biological organization to another. Here we study the algorithm underlying worker sorting in Leptothorax ant colonies. Worker sorting is related to task allocation and therefore to the adaptive advantages associated with an efficient system for the division of labour in ant colonies. We considered four spatially explicit individual-based models founded on a two-dimensional correlated random walk. Our aim was to establish whether sorting at the level of the worker population could occur with minimal assumptions about the behavioural algorithm of individual workers. The behaviour of an individual worker in the models could be summarized by the rule "move if you can, turn always". We assume that the turning angle of a worker is individually specific and negatively dependent on the magnitude of an internal parameter μ which could be regarded as a measure of individual experience or task specialization. All four models attained a level of worker sortedness that was compatible with results from experiments on Leptothorax ant colonies. We found that the presence of a sorting pivot, such as the nest wall or an attraction force towards the centre of the worker population, was crucial for sorting. We make a distinction between such pivots and templates and discuss the biological implications of their difference.

  19. GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics

    SciTech Connect

    Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith; Nagarkar, Soonil; Ravi, Santosh; Raghavendra, Cauligi; Prasanna, Viktor

    2014-08-25

    Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that allows for scalable execution on distributed systems naturally. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.

  20. Dynamic Uncertain Causality Graph for Knowledge Representation and Probabilistic Reasoning: Directed Cyclic Graph and Joint Probability Distribution.

    PubMed

    Zhang, Qin

    2015-07-01

    Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology.