Science.gov

Sample records for random graph model

  1. A random graph model of density thresholds in swarming cells.

    PubMed

    Jena, Siddhartha G

    2016-03-01

    Swarming behaviour is a type of bacterial motility that has been found to depend on reaching a local density threshold of cells. Understanding how cell-to-cell interactions develop and how an assembly of cells reaches collective motility is therefore increasingly important. Additionally, populations of cells and organisms have been modelled through graphs to draw insightful conclusions about population dynamics on a spatial level. In the present study, we make use of analogous random graph structures to model the formation of large chain subgraphs, representing interactions between multiple cells, as a random graph Markov process. Using numerical simulations and analytical results on how quickly paths of certain lengths are reached in a random graph process, we propose metrics for intercellular interaction dynamics at the swarm layer that may be evaluated experimentally. PMID:26893102
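    As an illustration of the kind of random graph Markov process the abstract describes, the following sketch adds uniformly random edges to an empty graph one at a time and records when a simple path of a given length first appears (a toy of our own, not the authors' implementation; the function name and the DFS path test are assumptions):

```python
import random

def first_path_time(n, k, seed=0):
    """Add uniformly random edges one at a time to an empty graph on n
    nodes; return how many edges were added when a simple path with k
    edges first appears. Illustrative sketch only."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(edges)

    def has_path(length):
        # DFS from every start node, counting edges traversed
        def dfs(v, visited, depth):
            if depth == length:
                return True
            return any(dfs(w, visited | {w}, depth + 1)
                       for w in adj[v] if w not in visited)
        return any(dfs(v, {v}, 0) for v in range(n))

    for t, (u, v) in enumerate(edges, start=1):
        adj[u].add(v)
        adj[v].add(u)
        if has_path(k):
            return t
    return None
```

    Averaging `first_path_time` over seeds gives the kind of "time until chains of length k form" statistic the abstract proposes as an experimentally comparable metric.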

  2. Next nearest neighbour Ising models on random graphs

    NASA Astrophysics Data System (ADS)

    Raymond, Jack; Wong, K. Y. Michael

    2012-09-01

    This paper develops results for the next nearest neighbour Ising model on random graphs. Besides being an essential ingredient in classic models for frustrated systems, second-neighbour interactions arise naturally in several applications, such as the colour diversity problem and graphical games. We demonstrate ensembles of random graphs, including regular connectivity graphs, that have a periodic variation of free energy with either the ratio of nearest to next nearest couplings or the mean number of nearest neighbours. When the coupling ratio is an integer, paramagnetic phases can be found at zero temperature. This is shown to be related to the locked or unlocked nature of the interactions. For anti-ferromagnetic couplings, spin glass phases are demonstrated at low temperature. The interaction structure is formulated as a factor graph, and the solution on a tree is derived. The replica symmetric and energetic one-step replica symmetry breaking solutions are developed using the cavity method. Within these frameworks we calculate the phase diagram and demonstrate the existence of dynamical transitions at zero temperature for cases of anti-ferromagnetic coupling on regular and inhomogeneous random graphs.

  3. Bayesian Models of Graphs, Arrays and Other Exchangeable Random Structures.

    PubMed

    Orbanz, Peter; Roy, Daniel M

    2015-02-01

    The natural habitat of most Bayesian methods is data represented by exchangeable sequences of observations, for which de Finetti's theorem provides the theoretical foundation. Dirichlet process clustering, Gaussian process regression, and many other parametric and nonparametric Bayesian models fall within the remit of this framework, but many problems arising in modern data analysis do not. This article provides an introduction to Bayesian models of graphs, matrices, and other data that can be modeled by random structures. We describe results in probability theory that generalize de Finetti's theorem to such data and discuss their relevance to nonparametric Bayesian modeling. With the basic ideas in place, we survey example models available in the literature; applications of such models include collaborative filtering, link prediction, and graph and network analysis. We also highlight connections to recent developments in graph theory and probability, and sketch the more general mathematical foundation of Bayesian methods for other types of data beyond sequences and arrays. PMID:26353253

  4. Equitable random graphs

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.; Martin, Travis

    2014-11-01

    Random graph models have played a dominant role in the theoretical study of networked systems. The Poisson random graph of Erdős and Rényi, in particular, as well as the so-called configuration model, have served as the starting point for numerous calculations. In this paper we describe another large class of random graph models, which we call equitable random graphs and which are flexible enough to represent networks with diverse degree distributions and many nontrivial types of structure, including community structure, bipartite structure, degree correlations, stratification, and others. Yet they are exactly solvable for a wide range of properties in the limit of large graph size, including percolation properties, complete spectral density, and the behavior of homogeneous dynamical systems, such as coupled oscillators or epidemic models.

  5. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS

    PubMed Central

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2015-01-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910

  6. Local dependence in random graph models: characterization, properties and statistical inference

    PubMed Central

    Schweinberger, Michael; Handcock, Mark S.

    2015-01-01

    Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real-world networks with 'ground truth'. PMID:26560142

  7. Quenched Central Limit Theorems for the Ising Model on Random Graphs

    NASA Astrophysics Data System (ADS)

    Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa

    2015-09-01

    The main goal of the paper is to prove central limit theorems for the magnetization rescaled by √N for the Ising model on random graphs with N vertices. Both random quenched and averaged quenched measures are considered. We work in the uniqueness regime β > 0 and B ≠ 0, or 0 < β < β_c and B = 0, where β is the inverse temperature, β_c is the critical inverse temperature and B is the external magnetic field. In the random quenched setting our results apply to general tree-like random graphs (as introduced by Dembo and Montanari and further studied by Dommers and the first and third authors), and our proof follows that of Ellis. For the averaged quenched setting, we specialize to two particular random graph models, namely the 2-regular configuration model and the configuration model with degrees 1 and 2. In these cases our proofs are based on explicit computations relying on the solution of the one-dimensional Ising model.

  8. Statistical Inference for Valued-Edge Networks: The Generalized Exponential Random Graph Model

    PubMed Central

    Desmarais, Bruce A.; Cranmer, Skyler J.

    2012-01-01

    Across the sciences, the statistical analysis of networks is central to the production of knowledge on relational phenomena. Because of their ability to model the structural generation of networks based on both endogenous and exogenous factors, exponential random graph models are a ubiquitous means of analysis. However, they are limited by an inability to model networks with valued edges. We address this problem by introducing a class of generalized exponential random graph models capable of modeling networks whose edges have continuous values (bounded or unbounded), thus greatly expanding the scope of networks applied researchers can subject to statistical analysis. PMID:22276151

  9. Synchronizability of random rectangular graphs

    SciTech Connect

    Estrada, Ernesto; Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded in hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, it becomes harder to synchronize. The synchronization process of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.

  10. Random broadcast on random geometric graphs

    SciTech Connect

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability: (i) the RGG is connected, or (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
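    The push protocol defined in the abstract is straightforward to simulate. The sketch below (our own naming, restricted to the source's connected component) counts the number of rounds until every reachable node is informed:

```python
import random

def push_broadcast_time(adj, source=0, seed=0):
    """Rounds of the classic push protocol on an adjacency-list graph:
    each informed node picks a uniformly random neighbour and informs
    it; stop when the source's whole component is informed."""
    rng = random.Random(seed)
    # find the connected component containing the source
    component, stack = {source}, [source]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in component:
                component.add(w)
                stack.append(w)
    informed, rounds = {source}, 0
    while informed != component:
        newly = set()
        for v in informed:
            if adj[v]:
                newly.add(rng.choice(adj[v]))
        informed |= newly
        rounds += 1
    return rounds
```

    Since the informed set can at most double each round, the returned count is always at least log2 of the component size, consistent with the Θ(diam(G)) bound being the interesting regime on geometric graphs.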

  11. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler*

    PubMed Central

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2014-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency. PMID:24653788

  12. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  13. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
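    For context, a naive quadratic sampler for the inhomogeneous kernel-graph model is easy to write; the paper's contribution is performing the same sampling in roughly O(n(log n)²) time, which this reference sketch (our own naming, with the kernel passed as a plain function) does not attempt:

```python
import random

def sample_kernel_graph(n, kernel, seed=0):
    """Naive O(n^2) sampler for an inhomogeneous random graph: vertex i
    gets a type x_i uniform on (0,1), and edge {i,j} appears
    independently with probability min(1, kernel(x_i, x_j) / n).
    Reference implementation of the model only, not the paper's
    fast algorithm."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, kernel(x[i], x[j]) / n):
                edges.append((i, j))
    return edges
```

    With a constant kernel κ(a, b) = c this reduces to an Erdős-Rényi graph with edge probability c/n, i.e. expected degree roughly c.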

  14. Multi-body quenched disordered XY and p-clock models on random graphs

    NASA Astrophysics Data System (ADS)

    Marruzzo, Alessia; Leuzzi, Luca

    2016-03-01

    The XY model with four-body quenched disordered interactions and its discrete p-clock proxy are studied on bipartite random graphs by means of the cavity method. The phase diagrams are determined from the ordered case to the spin-glass case. Dynamic, spinodal, and thermodynamic transition lines are identified by analyzing the free energy, complexity, and tree reconstruction functions as temperature and disorder are changed. The convergence of the p-clock model to the XY model is studied down to temperatures low enough to determine all relevant transition points for different node connectivities.

  15. Adjusting for Network Size and Composition Effects in Exponential-Family Random Graph Models.

    PubMed

    Krivitsky, Pavel N; Handcock, Mark S; Morris, Martina

    2011-07-01

    Exponential-family random graph models (ERGMs) provide a principled way to model and simulate features common in human social networks, such as propensities for homophily and friend-of-a-friend triad closure. We show that, without adjustment, ERGMs preserve density as network size increases. Density invariance is often not appropriate for social networks. We suggest a simple modification based on an offset which instead preserves the mean degree and accommodates changes in network composition asymptotically. We demonstrate that this approach allows ERGMs to be applied to the important situation of egocentrically sampled data. We analyze data from the National Health and Social Life Survey (NHSLS). PMID:21691424
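    In the simplest Bernoulli (Erdős-Rényi-style) special case, the offset the abstract describes amounts to shifting the edge log-odds by −log n, which pins the mean degree rather than the density as the network grows. A minimal numerical sketch of that effect (function name and parameterization are our own, not the authors' code):

```python
import math

def mean_degree(theta, n):
    """Expected degree in a Bernoulli graph whose edge log-odds are
    theta - log(n): p = logistic(theta - log n), mean degree (n-1)*p.
    As n grows this tends to exp(theta), a constant, whereas without
    the -log(n) offset the *density* p would stay constant instead."""
    p = 1.0 / (1.0 + math.exp(-(theta - math.log(n))))
    return (n - 1) * p
```

    For example, with theta = 1 the mean degree approaches e ≈ 2.718 for any sufficiently large n, so networks of different sizes remain comparable.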

  16. Adjusting for Network Size and Composition Effects in Exponential-Family Random Graph Models

    PubMed Central

    Krivitsky, Pavel N.; Handcock, Mark S.; Morris, Martina

    2011-01-01

    Exponential-family random graph models (ERGMs) provide a principled way to model and simulate features common in human social networks, such as propensities for homophily and friend-of-a-friend triad closure. We show that, without adjustment, ERGMs preserve density as network size increases. Density invariance is often not appropriate for social networks. We suggest a simple modification based on an offset which instead preserves the mean degree and accommodates changes in network composition asymptotically. We demonstrate that this approach allows ERGMs to be applied to the important situation of egocentrically sampled data. We analyze data from the National Health and Social Life Survey (NHSLS). PMID:21691424

  17. Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs

    NASA Astrophysics Data System (ADS)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa

    2016-11-01

    We study the critical behavior of inhomogeneous versions of the Curie-Weiss model, where the coupling constant J_ij(β) for the edge ij on the complete graph is given by J_ij(β) = β w_i w_j / Σ_{k∈[N]} w_k. We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises (with inverse temperature β replaced by sinh(β)) from the annealed Ising model on the generalized random graph. We assume that the vertex weights (w_i)_{i∈[N]} are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, the critical behavior is the same as in the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights diverges, and the weights satisfy an asymptotic power law with exponent τ ∈ (3,5), the critical exponents depend sensitively on τ. In addition, at criticality, the total spin S_N satisfies that S_N / N^{(τ-2)/(τ-1)} converges in law to a limiting random variable whose distribution we explicitly characterize.

  18. Random Graphs Associated to Some Discrete and Continuous Time Preferential Attachment Models

    NASA Astrophysics Data System (ADS)

    Pachon, Angelica; Polito, Federico; Sacerdote, Laura

    2016-03-01

    We give a common description of the Simon, Barabási-Albert, II-PA and Price growth models by introducing suitable random graph processes with preferential attachment mechanisms. Through the II-PA model, we prove the conditions under which the asymptotic degree distribution of the Barabási-Albert model coincides with the asymptotic in-degree distribution of the Simon model. Furthermore, we show that when the number of vertices in the Simon model (with parameter α) goes to infinity, a portion of them behave as a Yule model with parameters (λ, β) = (1-α, 1), and through this relation we explain why the asymptotic properties of a random vertex in the Simon model coincide with the asymptotic properties of a random genus in the Yule model. As a by-product of our analysis, we prove the explicit expression of the in-degree distribution for the II-PA model, given without proof in Newman (Contemp Phys 46:323-351, 2005). References to traditional and recent applications of these models are also discussed.
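    For reference, the Barabási-Albert preferential attachment mechanism discussed above can be sketched in a few lines (a standard textbook construction, not the paper's II-PA model; the names are our own):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Barabási-Albert preferential attachment: start from a clique on
    m+1 nodes; each new node attaches m edges to distinct existing
    nodes chosen with probability proportional to degree, implemented
    by sampling uniformly from a list of edge endpoints."""
    rng = random.Random(seed)
    targets = []            # each node appears deg(node) times
    edges = []
    for u in range(m + 1):  # seed clique
        for v in range(u + 1, m + 1):
            edges.append((u, v))
            targets += [u, v]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:          # m distinct degree-biased picks
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges
```

    The resulting graph has C(m+1, 2) + (n - m - 1)·m edges and, for large n, the familiar power-law in-degree tail.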

  19. Role Analysis in Networks using Mixtures of Exponential Random Graph Models

    PubMed Central

    Salter-Townshend, Michael; Murphy, Thomas Brendan

    2014-01-01

    A novel and flexible framework for investigating the roles of actors within a network is introduced. Particular interest is in roles as defined by local network connectivity patterns, identified using the ego-networks extracted from the network. A mixture of Exponential-family Random Graph Models is developed for these ego-networks in order to cluster the nodes into roles. We refer to this model as the ego-ERGM. An Expectation-Maximization algorithm is developed to infer the unobserved cluster assignments and to estimate the mixture model parameters using a maximum pseudo-likelihood approximation. The flexibility and utility of the method are demonstrated on examples of simulated and real networks. PMID:26101465

  20. Random graph theory and neuropercolation for modeling brain oscillations at criticality.

    PubMed

    Kozma, Robert; Puljic, Marko

    2015-04-01

    Mathematical approaches are reviewed to interpret intermittent singular space-time dynamics observed in brain imaging experiments. The following aspects of brain dynamics are considered: nonlinear dynamics (chaos), phase transitions, and criticality. Probabilistic cellular automata and random graph models are described, which develop equations for the probability distributions of macroscopic state variables as an alternative to differential equations. The introduced modular neuropercolation model is motivated by the multilayer structure and dynamical properties of the cortex, and it describes critical brain oscillations, including background activity, narrow-band oscillations in excitatory-inhibitory populations, and broadband oscillations in the cortex. Input-induced and spontaneous transitions between states with large-scale synchrony and without synchrony exhibit brief episodes with long-range spatial correlations as observed in experiments.

  1. Robustness of random graphs based on graph spectra

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Barahona, Mauricio; Tan, Yue-jin; Deng, Hong-zhong

    2012-12-01

    It has been recently proposed that the robustness of complex networks can be efficiently characterized through the natural connectivity, a spectral property of the graph which corresponds to the average Estrada index. The natural connectivity corresponds to an average eigenvalue calculated from the graph spectrum and can also be interpreted as the Helmholtz free energy of the network. In this article, we explore the use of this index to characterize the robustness of Erdős-Rényi (ER) random graphs, random regular graphs, and regular ring lattices. We show both analytically and numerically that the natural connectivity of ER random graphs increases linearly with the average degree. It is also shown that ER random graphs are more robust than the corresponding random regular graphs with the same number of vertices and edges. However, the relative robustness of ER random graphs and regular ring lattices depends on the average degree and graph size: there is a critical graph size above which regular ring lattices are more robust than random graphs. We use our analytical results to derive this critical graph size as a function of the average degree.
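    The natural connectivity used above is ln(EE/N), where EE = Σ_i exp(λ_i) is the Estrada index of the adjacency spectrum. A small pure-Python sketch (our own, computing EE through the matrix-power series Σ_k tr(A^k)/k! rather than an eigensolver):

```python
import math

def natural_connectivity(A, terms=60):
    """Natural connectivity ln(EE/N), with the Estrada index
    EE = sum_i exp(lambda_i) computed via EE = sum_k trace(A^k)/k!.
    Pure-Python matrix powers; fine for small dense matrices."""
    n = len(A)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    ee, fact = float(n), 1.0            # k = 0 term: trace(I)/0! = n
    for k in range(1, terms):
        P = [[sum(P[i][m] * A[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]         # P = A^k
        fact *= k
        ee += sum(P[i][i] for i in range(n)) / fact
    return math.log(ee / n)
```

    For the triangle K3 (eigenvalues 2, -1, -1) this returns ln((e² + 2e⁻¹)/3), which can serve as a quick sanity check of the series truncation.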

  2. Advances in Exponential Random Graph (p*) Models Applied to a Large Social Network

    PubMed Central

    Goodreau, Steven M.

    2007-01-01

    Recent advances in statistical network analysis based on the family of exponential random graph (ERG) models have greatly improved our ability to conduct inference on dependence in large social networks (Snijders 2002, Pattison and Robins 2002, Handcock 2002, Handcock 2003, Snijders et al. 2006, Hunter et al. 2005, Goodreau et al. 2005, previous papers this issue). This paper applies advances in both model parameterizations and computational algorithms to an examination of the structure observed in an adolescent friendship network of 1,681 actors from the National Longitudinal Study of Adolescent Health (AddHealth). ERG models of social network structure are fit using the R package statnet, and their adequacy assessed through comparison of model predictions with the observed data for higher-order network statistics. For this friendship network, the commonly used model of Markov dependence leads to the problems of degeneracy discussed by Handcock (2002, 2003). On the other hand, model parameterizations introduced by Snijders et al (2006) and Hunter and Handcock (2006) avoid degeneracy and provide reasonable fit to the data. Degree-only models did a poor job of capturing observed network structure; those that did best included terms both for heterogeneous mixing on exogenous attributes (grade and self-reported race) as well as endogenous clustering. Networks simulated from this model were largely consistent with the observed network on multiple higher-order network statistics, including the number of triangles, the size of the largest component, the overall reachability, the distribution of geodesic distances, the degree distribution, and the shared partner distribution. The ability to fit such models to large datasets and to make inference about the underlying processes generating the network represents a major advance in the field of statistical network analysis. PMID:18449326

  3. Limitations of individual causal models, causal graphs, and ignorability assumptions, as illustrated by random confounding and design unfaithfulness.

    PubMed

    Greenland, Sander; Mansournia, Mohammad Ali

    2015-10-01

    We describe how ordinary interpretations of causal models and causal graphs fail to capture important distinctions among ignorable allocation mechanisms for subject selection or allocation. We illustrate these limitations in the case of random confounding and designs that prevent such confounding. In many experimental designs individual treatment allocations are dependent, and explicit population models are needed to show this dependency. In particular, certain designs impose unfaithful covariate-treatment distributions to prevent random confounding, yet ordinary causal graphs cannot discriminate between these unconfounded designs and confounded studies. Causal models for populations are better suited for displaying these phenomena than are individual-level models, because they allow representation of allocation dependencies as well as outcome dependencies across individuals. Nonetheless, even with this extension, ordinary graphical models still fail to capture distinctions between hypothetical superpopulations (sampling distributions) and observed populations (actual distributions), although potential-outcome models can be adapted to show these distinctions and their consequences.

  4. Random graphs containing arbitrary distributions of subgraphs

    NASA Astrophysics Data System (ADS)

    Karrer, Brian; Newman, M. E. J.

    2010-12-01

    Traditional random graph models of networks generate networks that are locally treelike, meaning that all local neighborhoods take the form of trees. In this respect such models are highly unrealistic, most real networks having strongly nontreelike neighborhoods that contain short loops, cliques, or other biconnected subgraphs. In this paper we propose and analyze a class of random graph models that incorporates general subgraphs, allowing for nontreelike neighborhoods while still remaining solvable for many fundamental network properties. Among other things we give solutions for the size of the giant component, the position of the phase transition at which the giant component appears, and percolation properties for both site and bond percolation on networks generated by the model.

  5. Component Evolution in General Random Intersection Graphs

    NASA Astrophysics Data System (ADS)

    Bradonjić, Milan; Hagberg, Aric; Hengartner, Nicolas W.; Percus, Allon G.

    Random intersection graphs (RIGs) are an important random structure with algorithmic applications in social networks, epidemic networks, blog readership, and wireless sensor networks. RIGs can be interpreted as a model for large randomly formed non-metric data sets. We analyze the component evolution in general RIGs, giving conditions on the existence and uniqueness of the giant component. Our techniques generalize existing methods for analysis of component evolution: we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts of the study of component evolution in Erdős-Rényi graphs. The major challenge comes from the underlying structure of RIGs, which involves both a set of nodes and a set of attributes, with different probabilities associated with each attribute.

  6. Index statistical properties of sparse random graphs

    NASA Astrophysics Data System (ADS)

    Metz, F. L.; Stariolo, Daniel A.

    2015-10-01

    Using the replica method, we develop an analytical approach to compute the characteristic function for the probability P_N(K, λ) that a large N×N adjacency matrix of a sparse random graph has K eigenvalues below a threshold λ. The method allows one to determine, in principle, all moments of P_N(K, λ), from which the typical sample-to-sample fluctuations can be fully characterized. For random graph models with localized eigenvectors, we show that the index variance scales linearly with N ≫ 1 for |λ| > 0, with a model-dependent prefactor that can be exactly calculated. Explicit results are discussed for Erdős-Rényi and regular random graphs, both exhibiting a prefactor with a nonmonotonic behavior as a function of λ. These results contrast with rotationally invariant random matrices, where the index variance scales only as ln N, with a universal prefactor that is independent of λ. Numerical diagonalization results confirm the exactness of our approach and, in addition, strongly support the Gaussian nature of the index fluctuations.

  7. Consensus dynamics on random rectangular graphs

    NASA Astrophysics Data System (ADS)

    Estrada, Ernesto; Sheerin, Matthew

    2016-06-01

    A random rectangular graph (RRG) is a generalization of the random geometric graph (RGG) in which the nodes are embedded into a rectangle with side lengths a and b = 1/a, instead of on the unit square [0,1]². Two nodes are then connected if and only if they are separated by a Euclidean distance smaller than or equal to a certain threshold radius r. When a = 1 the RRG is identical to the RGG. Here we apply the consensus dynamics model to the RRG. Our main result is a lower bound for the time of consensus, i.e., the time at which the network reaches a global consensus state. To prove this result we first need to find an upper bound for the algebraic connectivity of the RRG, i.e., the second smallest eigenvalue of the combinatorial Laplacian of the graph. This bound is based on a tight lower bound found for the graph diameter. Our results prove that as the rectangle in which the nodes are embedded becomes more elongated, the RRG becomes a 'large-world', i.e., the diameter grows to infinity, and a poorly connected graph, i.e., the algebraic connectivity decays to zero. The main consequence of these findings is the proof that the time of consensus in RRGs grows to infinity as the rectangle becomes more elongated. In closing, consensus dynamics in RRGs strongly depend on the geometric characteristics of the embedding space, and reaching the consensus state becomes more difficult as the rectangle is more elongated.
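    A minimal end-to-end sketch of the setting above: sample an RRG on [0, a] × [0, 1/a] and time a discrete consensus iteration until the node values agree (the step-size choice, stopping rule, and names are our own, purely illustrative):

```python
import math
import random

def rrg_consensus_time(n, a, r, eps=1e-3, max_iter=100_000, seed=0):
    """Sample n nodes uniformly on the rectangle [0,a] x [0,1/a],
    connect pairs at Euclidean distance <= r, then iterate the
    standard consensus map x_i <- x_i + h * sum_{j~i} (x_j - x_i)
    and return the step count when max(x) - min(x) < eps
    (None if isolated nodes / no convergence within max_iter)."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, a), rng.uniform(0, 1 / a)) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    max_deg = max(len(nb) for nb in adj)
    if max_deg == 0:
        return None
    h = 1.0 / (2 * max_deg)          # step size keeping the map stable
    x = [rng.random() for _ in range(n)]
    for t in range(1, max_iter + 1):
        x = [xi + h * sum(x[j] - xi for j in adj[i])
             for i, xi in enumerate(x)]
        if max(x) - min(x) < eps:
            return t
    return None
```

    Sweeping a upward at fixed n and r reproduces the qualitative effect the abstract proves: more elongated rectangles give larger diameters, smaller algebraic connectivity, and longer consensus times.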

  8. Random geometric graphs with general connection functions.

    PubMed

    Dettmann, Carl P; Georgiou, Orestis

    2016-03-01

    In the original (1961) Gilbert model of random geometric graphs, nodes are placed according to a Poisson point process, and links are formed between those within a fixed range. Motivated by wireless ad hoc networks, "soft" or "probabilistic" connection models have recently been introduced, involving a "connection function" H(r) that gives the probability that two nodes at distance r are directly linked. In many applications (not only wireless networks), it is desirable that the graph is connected; that is, every node is linked to every other node in a multihop fashion. Here the connection probability of a dense network in a convex domain in two or three dimensions is expressed in terms of contributions from boundary components for a very general class of connection functions. It turns out that only a few quantities such as moments of the connection function appear. Good agreement is found with special cases from previous studies and with numerical simulations. PMID:27078372
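    A soft random geometric graph with one concrete (assumed, Rayleigh-type) connection function H(r) = exp(-(r/r0)^2) can be sampled and checked for multihop connectivity as follows; the function names and parameter values are illustrative, not from the paper:

    ```python
    import math, random

    def soft_rgg(n, r0, rng):
        """Nodes uniform in the unit square; link i-j with prob H(r) = exp(-(r/r0)^2)."""
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        adj = [set() for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r = math.dist(pts[i], pts[j])
                if rng.random() < math.exp(-(r / r0) ** 2):
                    adj[i].add(j); adj[j].add(i)
        return adj

    def is_connected(adj):
        """Is every node linked to every other node in a multihop fashion?"""
        seen, stack = {0}, [0]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        return len(seen) == len(adj)

    rng = random.Random(2)
    p_conn = sum(is_connected(soft_rgg(60, 0.35, rng)) for _ in range(30)) / 30
    ```

    Averaging `is_connected` over many samples estimates the connection probability that the paper computes analytically from boundary contributions.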

  10. Area law for random graph states

    NASA Astrophysics Data System (ADS)

    Collins, Benoît; Nechita, Ion; Życzkowski, Karol

    2013-08-01

    Random pure states of multi-partite quantum systems, associated with arbitrary graphs, are investigated. Each vertex of the graph represents a generic interaction between subsystems, described by a random unitary matrix distributed according to the Haar measure, while each edge of the graph represents a bipartite, maximally entangled state. For any splitting of the graph into two parts we consider the corresponding partition of the quantum system and compute the average entropy of entanglement. First, in the special case where the partition does not cross any vertex of the graph, we show that the area law is satisfied exactly. In the general case, we show that the entropy of entanglement obeys an area law on average, this time with a correction term that depends on the topologies of the graph and of the partition. The results obtained are applied to the problem of distribution of quantum entanglement in a quantum network with prescribed topology.

  11. Replica methods for loopy sparse random graphs

    NASA Astrophysics Data System (ADS)

    Coolen, ACC

    2016-03-01

    I report on the development of a novel statistical mechanical formalism for the analysis of random graphs with many short loops, and of processes on such graphs. The graphs are defined via maximum entropy ensembles, in which both the degrees (via hard constraints) and the adjacency matrix spectrum (via a soft constraint) are prescribed. The sum over graphs can be done analytically, using a replica formalism with complex replica dimensions. All known results for tree-like graphs are recovered in a suitable limit. For loopy graphs, the emerging theory has an appealing and intuitive structure, suggests how message passing algorithms should be adapted, and indicates the structure of theories describing spin systems on loopy architectures. However, the formalism is still largely untested, and may require further adjustment and refinement. This paper is dedicated to the memory of our colleague and friend Jun-Ichi Inoue, with whom the author has had the great pleasure and privilege of collaborating.

  12. A Detailed Investigation into Near Degenerate Exponential Random Graphs

    NASA Astrophysics Data System (ADS)

    Yin, Mei

    2016-07-01

    The exponential family of random graphs has been a topic of continued research interest. Despite their relative simplicity, these models capture a variety of interesting features displayed by large-scale networks and allow us to better understand how phases transition between one another as tuning parameters vary. As the parameters cross certain lines, the model asymptotically transitions from a very sparse graph to a very dense graph, completely skipping all intermediate structures. We delve deeper into this near degenerate tendency and give an explicit characterization of the asymptotic graph structure as a function of the parameters.

  13. Component evolution in general random intersection graphs

    SciTech Connect

    Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick; Percus, Allon G

    2010-01-01

    We analyze component evolution in general random intersection graphs (RIGs) and give conditions for the existence and uniqueness of the giant component. Our techniques generalize the existing methods for analysis of component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdős-Rényi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step of the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
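    A minimal generator for the binomial random intersection graph G(n, m, p) discussed above, in which two nodes are adjacent exactly when their attribute sets intersect (a sketch; the parameter values are arbitrary):

    ```python
    import random
    from itertools import combinations

    def random_intersection_graph(n, m, p, rng):
        """Binomial RIG: each node picks each of m attributes independently w.p. p;
        two nodes are adjacent iff their attribute sets intersect."""
        attrs = [{a for a in range(m) if rng.random() < p} for _ in range(n)]
        edges = {(i, j) for i, j in combinations(range(n), 2) if attrs[i] & attrs[j]}
        return attrs, edges

    rng = random.Random(3)
    attrs, edges = random_intersection_graph(50, 20, 0.1, rng)
    ```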

  14. Searching for nodes in random graphs.

    PubMed

    Lancaster, David

    2011-11-01

    We consider the problem of searching for a node on a labeled random graph according to a greedy algorithm that selects a route to the desired node using metric information on the graph. Motivated by peer-to-peer networks two types of random graph are proposed with properties particularly amenable to this kind of algorithm. We derive equations for the probability that the search is successful and also study the number of hops required, finding both numerical and analytic evidence of a transition as the number of links is varied.

  15. Network Statistical Models for Language Learning Contexts: Exponential Random Graph Models and Willingness to Communicate

    ERIC Educational Resources Information Center

    Gallagher, H. Colin; Robins, Garry

    2015-01-01

    As part of the shift within second language acquisition (SLA) research toward complex systems thinking, researchers have called for investigations of social network structure. One strand of social network analysis yet to receive attention in SLA is network statistical models, whereby networks are explained in terms of smaller substructures of…

  16. A Simulation Study Comparing Epidemic Dynamics on Exponential Random Graph and Edge-Triangle Configuration Type Contact Network Models

    PubMed Central

    Rolls, David A.; Wang, Peng; McBryde, Emma; Pattison, Philippa; Robins, Garry

    2015-01-01

    We compare two broad types of empirically grounded random network models in terms of their abilities to capture both network features and simulated Susceptible-Infected-Recovered (SIR) epidemic dynamics. The types of network models are exponential random graph models (ERGMs) and extensions of the configuration model. We use three kinds of empirical contact networks, chosen to provide both variety and realistic patterns of human contact: a highly clustered network, a bipartite network and a snowball sampled network of a “hidden population”. In the case of the snowball sampled network we present a novel method for fitting an edge-triangle model. In our results, ERGMs consistently capture clustering as well or better than configuration-type models, but the latter models better capture the node degree distribution. Despite the additional computational requirements to fit ERGMs to empirical networks, the use of ERGMs provides only a slight improvement in the ability of the models to recreate epidemic features of the empirical network in simulated SIR epidemics. Generally, SIR epidemic results from using configuration-type models fall between those from a random network model (i.e., an Erdős-Rényi model) and an ERGM. The addition of subgraphs of size four to edge-triangle type models does improve agreement with the empirical network for smaller densities in clustered networks. Additional subgraphs do not make a noticeable difference in our example, although we would expect the ability to model cliques to be helpful for contact networks exhibiting household structure. PMID:26555701
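    The SIR dynamics used in such comparisons can be sketched as a discrete-time simulation on an arbitrary contact network; this is a generic SIR scheme with assumed parameters, not the authors' exact implementation:

    ```python
    import random

    def sir_on_network(adj, beta, gamma, seed_node, rng):
        """Discrete-time SIR on a contact network {node: neighbours}.
        Each step, infectious nodes infect susceptible neighbours w.p. beta,
        then recover w.p. gamma; returns the final epidemic size (recovered)."""
        state = {v: 'S' for v in adj}
        state[seed_node] = 'I'
        while any(s == 'I' for s in state.values()):
            for v in [u for u in adj if state[u] == 'I']:
                for u in adj[v]:
                    if state[u] == 'S' and rng.random() < beta:
                        state[u] = 'I'   # newly infected transmit from next step
                if rng.random() < gamma:
                    state[v] = 'R'
        return sum(s == 'R' for s in state.values())

    # example contact network: a ring of 100 individuals
    ring = {i: [(i - 1) % 100, (i + 1) % 100] for i in range(100)}
    final_size = sir_on_network(ring, 0.9, 0.2, 0, random.Random(4))
    ```

    Swapping `ring` for networks drawn from an ERGM, a configuration-type model, or an Erdős-Rényi model reproduces the kind of comparison made in the paper.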

  17. Quantum graphs and random-matrix theory

    NASA Astrophysics Data System (ADS)

    Pluhař, Z.; Weidenmüller, H. A.

    2015-07-01

    For simple connected graphs with incommensurate bond lengths and with unitary symmetry we prove the Bohigas-Giannoni-Schmit (BGS) conjecture in its most general form. Using supersymmetry and taking the limit of infinite graph size, we show that the generating function for every (P,Q) correlation function for both closed and open graphs coincides with the corresponding expression of random-matrix theory. We show that the classical Perron-Frobenius operator is bistochastic and possesses a single eigenvalue +1. In the quantum case that implies the existence of a zero (or massless) mode of the effective action. That mode causes universal fluctuation properties. Avoiding the saddle-point approximation we show that for graphs that are classically mixing (i.e. for which the spectrum of the classical Perron-Frobenius operator possesses a finite gap) and that do not carry a special class of bound states, the zero mode dominates in the limit of infinite graph size.

  18. Clique percolation in random graphs

    NASA Astrophysics Data System (ADS)

    Li, Ming; Deng, Youjin; Wang, Bing-Hong

    2015-10-01

    As a generalization of classical percolation, clique percolation focuses on the connection of cliques in a graph, where two k-cliques are connected if they share at least l vertices. We develop a theoretical approach to clique percolation on random graphs, which gives not only the exact solutions of the critical point, but also the corresponding order parameter. Based on this, we prove theoretically that the fraction ψ of cliques in the giant clique cluster always makes a continuous phase transition, as in classical percolation. However, the fraction ϕ of vertices in the giant clique cluster for l > 1 makes a step-function-like discontinuous phase transition in the thermodynamic limit and a continuous phase transition for l = 1. More interestingly, our analysis shows that at the critical point, the order parameter ϕc for l > 1 is neither 0 nor 1, but a constant depending on k and l. All these theoretical findings are in agreement with the simulation results, which give theoretical support and clarification for previous simulation studies of clique percolation.
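    The clique percolation rule (two k-cliques connected when they share at least l vertices) can be implemented directly for small graphs with a union-find pass over all k-cliques; a brute-force sketch, exponential in graph size and meant only to illustrate the definition:

    ```python
    from itertools import combinations

    def k_cliques(adj, k):
        """All k-cliques of a graph given as {node: set(neighbours)}."""
        return [c for c in combinations(sorted(adj), k)
                if all(v in adj[u] for u, v in combinations(c, 2))]

    def clique_clusters(adj, k, l):
        """Union k-cliques that share at least l vertices (clique percolation)."""
        cliques = k_cliques(adj, k)
        parent = list(range(len(cliques)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]; x = parent[x]
            return x
        for i, j in combinations(range(len(cliques)), 2):
            if len(set(cliques[i]) & set(cliques[j])) >= l:
                parent[find(i)] = find(j)
        clusters = {}
        for i, c in enumerate(cliques):
            clusters.setdefault(find(i), set()).update(c)
        return list(clusters.values())

    # two triangles sharing the edge {1, 2} merge into one cluster for k=3, l=2
    adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
    clusters = clique_clusters(adj, 3, 2)
    ```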

  19. Scale-invariant geometric random graphs.

    PubMed

    Xie, Zheng; Rogers, Tim

    2016-03-01

    We introduce and analyze a class of growing geometric random graphs that are invariant under rescaling of space and time. Directed connections between nodes are drawn according to influence zones that depend on node position in space and time, mimicking the heterogeneity and increased specialization found in growing networks. Through calculations and numerical simulations we explore the consequences of scale invariance for geometric random graphs generated this way. Our analysis reveals a dichotomy between scale-free and Poisson distributions of in- and out-degree, the existence of a random number of hub nodes, high clustering, and unusual percolation behavior. These properties are similar to those of empirically observed web graphs. PMID:27078369

  20. Random walk on lattices: Graph-theoretic approach to simulating long-range diffusion-attachment growth models

    NASA Astrophysics Data System (ADS)

    Limkumnerd, Surachate

    2014-03-01

    Interest in thin-film fabrication for industrial applications has driven both the theoretical and computational aspects of modeling its growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models by repeatedly applying the local diffusion rules for each hop to simulate a large diffusion length can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
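    The key linear-algebra step, obtaining where a walker ends up after a possibly infinite number of hops, amounts to the fundamental matrix N = (I - Q)^{-1} of an absorbing Markov chain. A small worked example (a biased walk on a short 1-D lattice, not the paper's growth model):

    ```python
    import numpy as np

    # Biased walk on sites 0..4 of a 1-D lattice; sites 0 and 4 absorb (attachment).
    # Q: hop probabilities among transient sites 1, 2, 3; R: transient -> absorbing.
    p = 0.6                                   # local diffusion bias to the right
    Q = np.array([[0.0,     p, 0.0],
                  [1 - p, 0.0,   p],
                  [0.0, 1 - p, 0.0]])
    R = np.array([[1 - p, 0.0],
                  [0.0,   0.0],
                  [0.0,     p]])
    N = np.linalg.inv(np.eye(3) - Q)          # fundamental matrix: mean visit counts
    B = N @ R                                 # absorption probabilities, t -> infinity
    ```

    Row i of B gives the probability that a walker started at transient site i is eventually absorbed at each boundary; with bias p = 0.6, absorption to the right is favored, matching the gambler's-ruin value 9/13 from the middle site.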

  1. Topic Model for Graph Mining.

    PubMed

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng

    2015-12-01

    Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that existing methods cannot adequately discover the topics hidden in graph-structured data, which can be beneficial for both the unsupervised and supervised learning of graphs. Although topic models have proved to be very successful in discovering latent topics, the standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms latent Dirichlet allocation on classification when the unveiled topics of the two models are used to represent graphs.

  2. Efficient broadcast on random geometric graphs

    SciTech Connect

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias; Sauerwald, Thomas

    2009-01-01

    A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of a RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
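    The randomized push broadcast analyzed here is straightforward to simulate; a sketch with arbitrary graph size and radius (illustrative, not the authors' code):

    ```python
    import math, random

    def rgg(n, r, rng):
        """Random geometric graph on the unit square with radius r (adjacency lists)."""
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        return [[j for j in range(n) if j != i and math.dist(pts[i], pts[j]) <= r]
                for i in range(n)]

    def push_broadcast(adj, start, rng):
        """Rounds needed for push broadcast to inform start's whole component."""
        comp, stack = {start}, [start]          # first find the component of start
        while stack:
            for v in adj[stack.pop()]:
                if v not in comp:
                    comp.add(v); stack.append(v)
        informed, rounds = {start}, 0
        while informed != comp:
            for v in list(informed):            # every informed node pushes once
                informed.add(rng.choice(adj[v]))
            rounds += 1
        return rounds

    rng = random.Random(5)
    rounds = push_broadcast(rgg(150, 0.2, rng), 0, rng)
    ```

    Averaging `rounds` over many samples and radii lets one check the O(√n/r) scaling empirically.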

  3. General and exact approach to percolation on random graphs

    NASA Astrophysics Data System (ADS)

    Allard, Antoine; Hébert-Dufresne, Laurent; Young, Jean-Gabriel; Dubé, Louis J.

    2015-12-01

    We present a comprehensive and versatile theoretical framework to study site and bond percolation on clustered and correlated random graphs. Our contribution can be summarized in three main points. (i) We introduce a set of iterative equations that solve the exact distribution of the size and composition of components in finite-size quenched or random multitype graphs. (ii) We define a very general random graph ensemble that encompasses most of the models published to this day and also makes it possible to model structural properties not yet included in a theoretical framework. Site and bond percolation on this ensemble is solved exactly in the infinite-size limit using probability generating functions [i.e., the percolation threshold, the size, and the composition of the giant (extensive) and small components]. Several examples and applications are also provided. (iii) Our approach can be adapted to model interdependent graphs—whose most striking feature is the emergence of an extensive component via a discontinuous phase transition—in an equally general fashion. We show how a graph can successively undergo a continuous then a discontinuous phase transition, and preliminary results suggest that clustering increases the amplitude of the discontinuity at the transition.

  4. Random graphs with arbitrary degree distributions and their applications

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.; Strogatz, S. H.; Watts, D. J.

    2001-08-01

    Recent work on the structure of social networks and the internet has focused attention on graphs with distributions of vertex degree that are significantly different from the Poisson degree distributions that have been widely studied in the past. In this paper we develop in detail the theory of random graphs with arbitrary degree distributions. In addition to simple undirected, unipartite graphs, we examine the properties of directed and bipartite graphs. Among other results, we derive exact expressions for the position of the phase transition at which a giant component first forms, the mean component size, the size of the giant component if there is one, the mean number of vertices a certain distance away from a randomly chosen vertex, and the average vertex-vertex distance within a graph. We apply our theory to some real-world graphs, including the world-wide web and collaboration graphs of scientists and Fortune 1000 company directors. We demonstrate that in some cases random graphs with appropriate distributions of vertex degree predict with surprising accuracy the behavior of the real world, while in others there is a measurable discrepancy between theory and reality, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
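    One concrete output of the generating-function formalism is the giant-component criterion Σ_k k(k - 2) p_k > 0 (the Molloy-Reed condition). The following sketch checks it numerically for a Poisson degree distribution, where the sum reduces to c² - c and changes sign at the classical threshold c = 1:

    ```python
    import math

    def poisson_pmf(k, c):
        return math.exp(-c) * c**k / math.factorial(k)

    def molloy_reed_sum(c, kmax=100):
        """Sum_k k (k - 2) p_k for Poisson(c) degrees; a giant component exists
        iff this is positive (here it equals c^2 - c, so the threshold is c = 1)."""
        return sum(k * (k - 2) * poisson_pmf(k, c) for k in range(kmax))
    ```

    The same numerical sum applies to any degree distribution p_k, which is what makes the formalism useful for the power-law and empirical distributions studied in the paper.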

  5. Random graph states, maximal flow and Fuss-Catalan distributions

    NASA Astrophysics Data System (ADS)

    Collins, Benoît; Nechita, Ion; Życzkowski, Karol

    2010-07-01

    For any graph consisting of k vertices and m edges we construct an ensemble of random pure quantum states which describe a system composed of 2m subsystems. Each edge of the graph represents a bipartite, maximally entangled state. Each vertex represents a random unitary matrix generated according to the Haar measure, which describes the coupling between subsystems. Dividing all subsystems into two parts, one may study entanglement with respect to this partition. A general technique to derive an expression for the average entanglement entropy of random pure states associated with a given graph is presented. Our technique relies on Weingarten calculus and flow problems. We analyze the statistical properties of spectra of such random density matrices and show for which cases they are described by the free Poissonian (Marchenko-Pastur) distribution. We derive a discrete family of generalized, Fuss-Catalan distributions and explicitly construct graphs which lead to ensembles of random states characterized by these novel distributions of eigenvalues.

  6. A program generating homogeneous random graphs with given weights

    NASA Astrophysics Data System (ADS)

    Bogacz, L.; Burda, Z.; Janke, W.; Waclaw, B.

    2005-12-01

    We present a program package to generate homogeneous random graphs with probabilities prescribed by the user. The statistical weight of a labeled graph α is given in the form W(α) = ∏_{i=1}^{N} p(q_i), where p(q) is an arbitrary user function and q_i are the degrees of the graph nodes. The program can be used to generate two types of graphs (simple graphs and pseudo-graphs) from three types of ensembles (micro-canonical, canonical and grand-canonical). Program summary: Title of the program: GraphGen. Catalogue identifier: ADWL. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWL. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed and others on which it has been tested: PC, Alpha workstation. Operating systems under which the program has been tested: Linux, Unix, MS Windows XP. Programming language used: C. Memory required to execute with typical data: 300 k words for a graph with 1000 nodes and up to 50 000 links. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized: No. No. of lines in distributed program, including test data, etc.: 2253. No. of bytes in distributed program, including test data, etc.: 14 330. Distribution format: tar.gz. Keywords: Random graphs, complex networks, Markov process, Monte Carlo method. Nature of the problem: The program generates random graphs. The probabilities of graph occurrence are proportional to their statistical weight, dependent on node degrees defined by arbitrary distributions. Method of solution: The starting graph is taken arbitrarily and then a sequence of graphs is generated. Each graph is obtained from the previous one by means of a simple modification. The probability of accepting or rejecting the new graph results from a detailed balance condition realized as a Metropolis algorithm. When the length of the generated Markov chain increases, the probabilities of graph occurrence approach the stationary distribution given by
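    The Metropolis step described under "Method of solution" can be sketched for the grand-canonical ensemble as follows; this is a simplified illustration of the same detailed-balance idea, not the GraphGen code, and the weight function is an arbitrary example:

    ```python
    import math, random

    def metropolis_step(adj, n, p, rng):
        """Propose toggling a uniformly random node pair; accept with the
        Metropolis ratio for the weight W = prod_i p(q_i), q_i = degree of node i."""
        i, j = rng.sample(range(n), 2)
        qi, qj = len(adj[i]), len(adj[j])
        if j in adj[i]:   # propose deleting the edge
            ratio = (p(qi - 1) * p(qj - 1)) / (p(qi) * p(qj))
            if rng.random() < min(1.0, ratio):
                adj[i].remove(j); adj[j].remove(i)
        else:             # propose inserting the edge
            ratio = (p(qi + 1) * p(qj + 1)) / (p(qi) * p(qj))
            if rng.random() < min(1.0, ratio):
                adj[i].add(j); adj[j].add(i)

    n = 30
    adj = [set() for _ in range(n)]
    rng = random.Random(6)
    weight = lambda q: math.exp(-0.5 * q)   # example p(q): penalizes high degrees
    for _ in range(20000):
        metropolis_step(adj, n, weight, rng)
    mean_degree = sum(len(a) for a in adj) / n
    ```

    Because the proposal (pick an unordered pair, toggle the edge) is symmetric, accepting with min(1, W_new/W_old) satisfies detailed balance, so long chains sample the stationary weight W(α).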

  7. Motifs in triadic random graphs based on Steiner triple systems

    NASA Astrophysics Data System (ADS)

    Winkler, Marco; Reichardt, Jörg

    2013-08-01

    Conventionally, pairwise relationships between nodes are considered to be the fundamental building blocks of complex networks. However, over the last decade, the overabundance of certain subnetwork patterns, i.e., the so-called motifs, has attracted much attention. It has been hypothesized that these motifs, instead of links, serve as the building blocks of network structures. Although the relation between a network's topology and the general properties of the system, such as its function, its robustness against perturbations, or its efficiency in spreading information, is the central theme of network science, there is still a lack of sound generative models needed for testing the functional role of subgraph motifs. Our work aims to overcome this limitation. We employ the framework of exponential random graph models (ERGMs) to define models based on triadic substructures. The fact that only a small portion of triads can actually be set independently poses a challenge for the formulation of such models. To overcome this obstacle, we use Steiner triple systems (STSs). These are partitions of sets of nodes into pair-disjoint triads, which thus can be specified independently. Combining the concepts of ERGMs and STSs, we suggest generative models capable of generating ensembles of networks with nontrivial triadic Z-score profiles. Further, we discover inevitable correlations between the abundance of triad patterns, which occur solely for statistical reasons and need to be taken into account when discussing the functional implications of motif statistics. Moreover, we calculate the degree distributions of our triadic random graphs analytically.

  8. Parameter Tuning Patterns for Random Graph Coloring with Quantum Annealing

    PubMed Central

    Titiloye, Olawale; Crispin, Alan

    2012-01-01

    Quantum annealing is a combinatorial optimization technique inspired by quantum mechanics. Here we show that a spin model for the k-coloring of large dense random graphs can be field tuned so that its acceptance ratio diverges during Monte Carlo quantum annealing, until a ground state is reached. We also find that simulations exhibiting such a diverging acceptance ratio are generally more effective than those tuned to the more conventional pattern of a declining and/or stagnating acceptance ratio. This observation facilitates the discovery of solutions to several well-known benchmark k-coloring instances, some of which have been open for almost two decades. PMID:23166818

  9. The Lexical Restructuring Hypothesis and Graph Theoretic Analyses of Networks Based on Random Lexicons

    ERIC Educational Resources Information Center

    Gruenenfelder, Thomas M.; Pisoni, David B.

    2009-01-01

    Purpose: The mental lexicon of words used for spoken word recognition has been modeled as a complex network or graph. Do the characteristics of that graph reflect processes involved in its growth (M. S. Vitevitch, 2008) or simply the phonetic overlap between similar-sounding words? Method: Three pseudolexicons were generated by randomly selecting…

  10. Graph modeling systems and methods

    DOEpatents

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
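    For an undirected graph, a vertex is a critical point of failure exactly when it is an articulation point; the classic DFS low-link algorithm finds all of them in linear time. A generic sketch (not the patented system's implementation):

    ```python
    def articulation_points(adj):
        """Vertices whose removal disconnects the graph (critical points of
        failure), via the DFS low-link algorithm. adj is a list of neighbour lists."""
        n = len(adj)
        disc, low = [-1] * n, [0] * n
        points, timer = set(), [0]
        def dfs(u, parent):
            disc[u] = low[u] = timer[0]; timer[0] += 1
            children = 0
            for v in adj[u]:
                if disc[v] == -1:
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    if parent != -1 and low[v] >= disc[u]:
                        points.add(u)        # u separates v's subtree
                elif v != parent:
                    low[u] = min(low[u], disc[v])
            if parent == -1 and children > 1:
                points.add(u)                # a root with two DFS subtrees
        for s in range(n):
            if disc[s] == -1:
                dfs(s, -1)
        return points

    # a "bowtie": two triangles joined at vertex 2 -- vertex 2 is the cut vertex
    bowtie = [[1, 2], [0, 2], [0, 1, 3, 4], [2, 4], [2, 3]]
    ```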

  11. Existence of the Harmonic Measure for Random Walks on Graphs and in Random Environments

    NASA Astrophysics Data System (ADS)

    Boivin, Daniel; Rau, Clément

    2013-01-01

    We give a sufficient condition for the existence of the harmonic measure from infinity of transient random walks on weighted graphs. In particular, this condition is verified by the random conductance model on ℤ^d, d ≥ 3, when the conductances are i.i.d. and the bonds with positive conductance percolate. The harmonic measure from infinity also exists for random walks on supercritical clusters of ℤ^2. This is proved using results of Barlow (Ann. Probab. 32:3024-3084, 2004) and Barlow and Hambly (Electron. J. Probab. 14(1):1-27, 2009).

  12. An internet graph model based on trade-off optimization

    NASA Astrophysics Data System (ADS)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.

  13. Intergroup networks as random threshold graphs

    NASA Astrophysics Data System (ADS)

    Saha, Sudipta; Ganguly, Niloy; Mukherjee, Animesh; Krueger, Tyll

    2014-04-01

    Similar-minded people tend to form social groups. Due to pluralistic homophily as well as a sort of heterophily, people also participate in a wide variety of groups. Thus, these groups generally overlap with each other; an overlap between two groups can be characterized by the number of common members. These common members can play a crucial role in the transmission of information between the groups. As a step towards understanding the information dissemination, we perceive the system as a pruned intergroup network and show that it maps to a very basic graph theoretic concept known as a threshold graph. We analyze several structural properties of this network such as degree distribution, largest component size, edge density, and local clustering coefficient. We compare the theoretical predictions with the results obtained from several online social networks (LiveJournal, Flickr, YouTube) and find a good match.
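    A threshold graph as used here is simple to construct: assign each node a weight and link two nodes when their weights sum past a threshold. A small sketch with arbitrary parameters; one hallmark property, degrees monotone in the weights, follows immediately:

    ```python
    import random

    def threshold_graph(weights, theta):
        """Random threshold graph: edge {i, j} iff w_i + w_j >= theta."""
        n = len(weights)
        return {(i, j) for i in range(n) for j in range(i + 1, n)
                if weights[i] + weights[j] >= theta}

    rng = random.Random(7)
    w = [rng.random() for _ in range(40)]   # e.g. group overlaps as node weights
    edges = threshold_graph(w, 1.0)
    ```

    Because a heavier node satisfies the threshold with every neighbour a lighter node does, neighbourhoods are nested and the degree sequence is ordered by weight, which is what makes degree distribution and component size analytically tractable in this model.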

  14. Opinion Dynamics and Influencing on Random Geometric Graphs

    PubMed Central

    Zhang, Weituo; Lim, Chjan C.; Korniss, G.; Szymanski, Boleslaw K.

    2014-01-01

    We investigate the two-word Naming Game on two-dimensional random geometric graphs. Studying this model advances our understanding of the spatial distribution and propagation of opinions in social dynamics. A main feature of this model is the spontaneous emergence of spatial structures called opinion domains which are geographic regions with clear boundaries within which all individuals share the same opinion. We provide the mean-field equation for the underlying dynamics and discuss several properties of the equation such as the stationary solutions and two-time-scale separation. For the evolution of the opinion domains we find that the opinion domain boundary propagates at a speed proportional to its curvature. Finally we investigate the impact of committed agents on opinion domains and find the scaling of consensus time. PMID:24993655

  15. Some features of the spread of epidemics and information on a random graph

    PubMed Central

    Durrett, Rick

    2010-01-01

    Random graphs are useful models of social and technological networks. To date, most of the research in this area has concerned geometric properties of the graphs. Here we focus on processes taking place on the network. In particular we are interested in how their behavior on networks differs from that in homogeneously mixing populations or on regular lattices of the type commonly used in ecological models. PMID:20167800

  16. The peculiar phase structure of random graph bisection

    SciTech Connect

    Percus, Allon G; Istrate, Gabriel; Goncalves, Bruno T; Sumi, Robert Z

    2008-01-01

    The mincut graph bisection problem involves partitioning the n vertices of a graph into disjoint subsets, each containing exactly n/2 vertices, while minimizing the number of 'cut' edges with an endpoint in each subset. When considered over sparse random graphs, the phase structure of the graph bisection problem displays certain familiar properties, but also some surprises. It is known that when the mean degree is below the critical value of 2 log 2, the cutsize is zero with high probability. We study how the minimum cutsize increases with mean degree above this critical threshold, finding a new analytical upper bound that improves considerably upon previous bounds. Combined with recent results on expander graphs, our bound suggests the unusual scenario that random graph bisection is replica symmetric up to and beyond the critical threshold, with a replica symmetry breaking transition possibly taking place above the threshold. An intriguing algorithmic consequence is that although the problem is NP-hard, we can find near-optimal cutsizes (whose ratio to the optimal value approaches 1 asymptotically) in polynomial time for typical instances near the phase transition.
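To make "near-optimal cutsizes in polynomial time" concrete, here is a generic swap-based local search for balanced bisection of a sparse random graph. This is a toy Kernighan-Lin-style heuristic of our own, not the algorithm analyzed in the paper.

```python
import random

def local_search_bisection(n, p, seed=0):
    """Toy mincut bisection heuristic on G(n, p/n): start from a balanced
    random split, then repeatedly swap any cross-partition vertex pair
    whose swap strictly lowers the cutsize."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p / n:
                adj[u].add(v)
                adj[v].add(u)
    side = [0] * (n // 2) + [1] * (n // 2)
    rng.shuffle(side)

    def gain(u):
        # cut edges removed minus cut edges created if u switched sides
        ext = sum(1 for w in adj[u] if side[w] != side[u])
        return ext - (len(adj[u]) - ext)

    improved = True
    while improved:
        improved = False
        for u in range(n):
            for v in range(u + 1, n):
                if side[u] != side[v] and gain(u) + gain(v) - 2 * (v in adj[u]) > 0:
                    side[u], side[v] = side[v], side[u]
                    improved = True

    return sum(1 for u in range(n) for v in adj[u] if u < v and side[u] != side[v])

cut = local_search_bisection(100, 3.0)  # mean degree 3 > 2 log 2, so a positive cutsize is typical
```

Each accepted swap strictly lowers the integer cutsize, so the loop terminates; balance is preserved because only cross-partition pairs are swapped.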

  17. Muller's ratchet in random graphs and scale-free networks

    NASA Astrophysics Data System (ADS)

    Campos, Paulo R. A.; Combadão, Jaime; Dionisio, Francisco; Gordo, Isabel

    2006-10-01

Muller’s ratchet is an evolutionary process that has been implicated in the extinction of asexual species, the evolution of mitochondria, the degeneration of the Y chromosome, the evolution of sex and recombination and the evolution of microbes. Here we study the speed of Muller’s ratchet in a population subdivided into many small subpopulations connected by migration, and distributed on a network. We compare the speed of the ratchet in two distinct types of topologies: scale-free networks and random graphs. The difference between the topologies is noticeable when the average connectivity of the network and the migration rate are large. In this situation we observe that the ratchet clicks faster in scale-free networks than in random graphs. Thus, contrary to intuition, scale-free networks are more prone to loss of genetic information than random graphs. On the other hand, we show that scale-free networks are more robust to random extinction than random graphs. Since these complex networks have been shown to describe real-life systems well, our results open a framework for studying the evolution of microbes and disease epidemics.

  18. Zero-one law for random subgraphs of some distance graphs with vertices in Z^n

    NASA Astrophysics Data System (ADS)

    Popova, S. N.

    2016-03-01

    The zero-one law for the model of random distance graphs with vertices in Z^n is studied. Sufficient conditions for a sequence of random distance graphs to obey the zero-one law are derived, as well as conditions under which it contains a subsequence obeying the zero-one law. Bibliography: 20 titles.

  19. Nonergodic Phases in Strongly Disordered Random Regular Graphs

    NASA Astrophysics Data System (ADS)

    Altshuler, B. L.; Cuevas, E.; Ioffe, L. B.; Kravtsov, V. E.

    2016-10-01

We combine numerical diagonalization with semianalytical calculations to prove the existence of the intermediate nonergodic but delocalized phase in the Anderson model on disordered hierarchical lattices. We suggest a new generalized population dynamics that is able to detect the violation of ergodicity of the delocalized states within the Abou-Chakra, Anderson, and Thouless recursive scheme. This result is supplemented by statistics of random wave functions extracted from exact diagonalization of the Anderson model on an ensemble of disordered random regular graphs (RRG) of N sites with connectivity K = 2. By extrapolation of the results of both approaches to N → ∞ we obtain the fractal dimensions D1(W) and D2(W), as well as the population dynamics exponent D(W), with accuracy sufficient to claim that they are nontrivial in a broad interval of disorder strength W_E < W < W_c. The analysis reveals a singularity in the D1,2(W) dependencies, which provides clear evidence for a first-order transition between the two delocalized phases on RRG at W_E ≈ 10.0. We discuss the implications of these results for quantum and classical nonintegrable and many-body systems.

  20. Circular coloring of random graphs: statistical physics investigation

    NASA Astrophysics Data System (ADS)

    Schmidt, Christian; Guenther, Nils-Eric; Zdeborová, Lenka

    2016-08-01

Circular coloring is a constraint satisfaction problem where colors are assigned to nodes in a graph in such a way that every pair of connected nodes has two consecutive colors (the first color being consecutive to the last). We study circular coloring of random graphs using the cavity method. We identify two very interesting properties of this problem. For sufficiently many colors and sufficiently low temperature there is a spontaneous breaking of the circular symmetry between colors and a phase transition towards a ferromagnet-like phase. Our second main result concerns 5-circular coloring of random 3-regular graphs. While this case is found to be colorable, we conclude that the description via one-step replica symmetry breaking is not sufficient. We observe that simulated annealing is very efficient at finding proper colorings in this case. The 5-circular coloring of 3-regular random graphs thus provides the first known example of a problem where the ground state energy is known to be exactly zero, yet the space of solutions probably requires a full-step replica symmetry breaking treatment.

  1. Generalized Random Sequential Adsorption on Erdős-Rényi Random Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2016-09-01

    We investigate random sequential adsorption (RSA) on a random graph via the following greedy algorithm: Order the n vertices at random, and sequentially declare each vertex either active or frozen, depending on some local rule in terms of the state of the neighboring vertices. The classical RSA rule declares a vertex active if none of its neighbors is, in which case the set of active nodes forms an independent set of the graph. We generalize this nearest-neighbor blocking rule in three ways and apply it to the Erdős-Rényi random graph. We consider these generalizations in the large-graph limit n→ ∞ and characterize the jamming constant, the limiting proportion of active vertices in the maximal greedy set.
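The classical nearest-neighbour blocking rule is easy to simulate. The sketch below is our own illustrative code (the graph size and mean degree are arbitrary), estimating the jamming constant for classical RSA on an Erdős-Rényi graph.

```python
import random

def rsa_jamming_fraction(n, p, seed=0):
    """Classical RSA on an Erdos-Renyi graph G(n, p): scan the vertices
    in a uniformly random order and activate a vertex iff none of its
    neighbours is already active.  The active set is then a maximal
    independent set; the return value estimates the jamming constant."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    order = list(range(n))
    rng.shuffle(order)
    active = set()
    for v in order:
        if not (adj[v] & active):
            active.add(v)
    return len(active) / n

# mean degree c = 4 on n = 500 vertices
frac = rsa_jamming_fraction(500, 4 / 500)
```

For mean degree c, known results on greedy independent sets in sparse Erdős-Rényi graphs suggest this fraction approaches ln(1 + c)/c, roughly 0.40 at c = 4.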

  2. Birds of a Feather, or Friend of a Friend? Using Exponential Random Graph Models to Investigate Adolescent Social Networks

    PubMed Central

    GOODREAU, STEVEN M.; KITTS, JAMES A.; MORRIS, MARTINA

    2009-01-01

    In this article, we use newly developed statistical methods to examine the generative processes that give rise to widespread patterns in friendship networks. The methods incorporate both traditional demographic measures on individuals (age, sex, and race) and network measures for structural processes operating on individual, dyadic, and triadic levels. We apply the methods to adolescent friendship networks in 59 U.S. schools from the National Longitudinal Survey of Adolescent Health (Add Health). We model friendship formation as a selection process constrained by individuals’ sociality (propensity to make friends), selective mixing in dyads (friendships within race, grade, or sex categories are differentially likely relative to cross-category friendships), and closure in triads (a friend’s friends are more likely to become friends), given local population composition. Blacks are generally the most cohesive racial category, although when whites are in the minority, they display stronger selective mixing than do blacks when blacks are in the minority. Hispanics exhibit disassortative selective mixing under certain circumstances; in other cases, they exhibit assortative mixing but lack the higher-order cohesion common in other groups. Grade levels are always highly cohesive, while females form triangles more than males. We conclude with a discussion of how network analysis may contribute to our understanding of sociodemographic structure and the processes that create it. PMID:19348111

  3. Random geometric graph description of connectedness percolation in rod systems

    NASA Astrophysics Data System (ADS)

    Chatterjee, Avik P.; Grimaldi, Claudio

    2015-09-01

    The problem of continuum percolation in dispersions of rods is reformulated in terms of weighted random geometric graphs. Nodes (or sites or vertices) in the graph represent spatial locations occupied by the centers of the rods. The probability that an edge (or link) connects any randomly selected pair of nodes depends upon the rod volume fraction as well as the distribution over their sizes and shapes, and also upon quantities that characterize their state of dispersion (such as the orientational distribution function). We employ the observation that contributions from closed loops of connected rods are negligible in the limit of large aspect ratios to obtain percolation thresholds that are fully equivalent to those calculated within the second-virial approximation of the connectedness Ornstein-Zernike equation. Our formulation can account for effects due to interactions between the rods, and many-body features can be partially addressed by suitable choices for the edge probabilities.

  4. Horizontal visibility graphs: exact results for random time series.

    PubMed

    Luque, B; Lacasa, L; Ballesteros, F; Luque, J

    2009-10-01

The visibility algorithm has been recently introduced as a mapping between time series and complex networks. This procedure allows us to apply methods of complex network theory for characterizing time series. In this work we present the horizontal visibility algorithm, a geometrically simpler and analytically solvable version of our former algorithm, focusing on the mapping of random series (series of independent identically distributed random variables). After presenting some properties of the algorithm, we present exact results on the topological properties of graphs associated with random series, namely, the degree distribution, the clustering coefficient, and the mean path length. We show that the horizontal visibility algorithm stands as a simple method to discriminate randomness in time series, since any random series maps to a graph with an exponential degree distribution of the shape P(k) = (1/3)(2/3)^(k-2), independent of the probability distribution from which the series was generated. Accordingly, visibility graphs with other P(k) are related to nonrandom series. Numerical simulations confirm the accuracy of the theorems for finite series. In a second part, we show that the method is able to distinguish chaotic series from independent and identically distributed (i.i.d.) series, studying the following situations: (i) noise-free low-dimensional chaotic series, (ii) low-dimensional noisy chaotic series, even in the presence of large amounts of noise, and (iii) high-dimensional chaotic series (coupled map lattice), without the need for additional techniques such as surrogate data or noise reduction methods. Finally, heuristic arguments are given to explain the topological properties of chaotic series, and several sequences that are conjectured to be random are analyzed.
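The horizontal visibility rule is simple enough to state in code. The sketch below is our own implementation, not the authors'; it computes HVG degrees and checks the mean degree of an i.i.d. series, which the exponential law P(k) = (1/3)(2/3)^(k-2) predicts to approach 4.

```python
import random

def horizontal_visibility_degrees(series):
    """Degrees of the horizontal visibility graph of a time series:
    i and j (i < j) are linked iff every value strictly between them
    is lower than both series[i] and series[j]."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        m = float('-inf')          # running max of the values between i and j
        for j in range(i + 1, n):
            if m < series[i] and m < series[j]:
                deg[i] += 1
                deg[j] += 1
            m = max(m, series[j])
            if series[j] >= series[i]:
                break              # everything further right is blocked
    return deg

# for an i.i.d. series the law P(k) = (1/3)(2/3)^(k-2) gives mean degree 4
random.seed(2)
deg = horizontal_visibility_degrees([random.random() for _ in range(5000)])
mean_deg = sum(deg) / len(deg)
```

The early `break` exploits the fact that once a value at least as high as series[i] appears, no later point can be horizontally visible from i.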

  5. Measuring Edge Importance: A Quantitative Analysis of the Stochastic Shielding Approximation for Random Processes on Graphs

    PubMed Central

    2014-01-01

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin–Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán’s approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process. PMID:24742077

  6. Computational Graph Theoretical Model of the Zebrafish Sensorimotor Pathway

    NASA Astrophysics Data System (ADS)

    Peterson, Joshua M.; Stobb, Michael; Mazzag, Bori; Gahtan, Ethan

    2011-11-01

Mapping the detailed connectivity patterns of neural circuits is a central goal of neuroscience and has been the focus of extensive current research [4, 3]. The best quantitative approach to analyze the acquired data is still unclear, but graph theory has been used with success [3, 1]. We present a graph theoretical model with vertices and edges representing neurons and synaptic connections, respectively. Our system is the zebrafish posterior lateral line sensorimotor pathway. The goal of our analysis is to elucidate mechanisms of information processing in this neural pathway by comparing the mathematical properties of its graph to those of other, previously described graphs. We create a zebrafish model based on currently known anatomical data. The degree distributions and small-world measures of this model are compared to small-world, random and 3-compartment random graphs of the same size (with over 2500 nodes and 160,000 connections). We find that the zebrafish graph shows small-worldness similar to other neural networks and does not have a scale-free distribution of connections.

  7. Unimodular lattice triangulations as small-world and scale-free random graphs

    NASA Astrophysics Data System (ADS)

    Krüger, B.; Schmidt, E. M.; Mecke, K.

    2015-02-01

Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.

  8. Generic criticality of community structure in random graphs

    NASA Astrophysics Data System (ADS)

    Lipowski, Adam; Lipowska, Dorota

    2014-09-01

We examine the community structure in random graphs of size n and link probability p/n determined with the Newman greedy optimization of modularity. Calculations show that for p < 1 communities are nearly identical with clusters. For p = 1 the average size of a community, s_av, and the size of the giant community, s_g, show a power-law increase, s_av ~ n^α′ and s_g ~ n^α. From numerical results we estimate α′ ≈ 0.26(1) and α ≈ 0.50(1), and using the probability distribution of community sizes we suggest that α′ = α/2 should hold. For p > 1 the community structure remains critical: (i) s_av and s_g have a power-law increase with α′ ≈ α < 1, and (ii) the probability distribution of community sizes is very broad and nearly flat for all sizes up to s_g. For large p the modularity Q decays as Q ~ p^(-0.55), which is intermediate between some previous estimates. To check the validity of the results, we also determine the community structure using another method, namely a nongreedy optimization of modularity. Tests with some benchmark networks show that this method outperforms the greedy version. For random graphs, however, the characteristics of the community structure determined using both greedy and nongreedy optimizations are, within small statistical fluctuations, the same.
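The p < 1 statement that communities coincide with clusters can be set against the classical component structure of G(n, p/n). The stdlib-only sketch below (our own illustration, with arbitrary sizes) contrasts the subcritical and supercritical regimes.

```python
import random
from collections import deque

def largest_cluster(n, p, seed=0):
    """Size of the largest connected component of G(n, p/n), by BFS."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p / n:
                adj[u].append(v)
                adj[v].append(u)
    seen = [False] * n
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        queue, size = deque([s]), 1
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    size += 1
                    queue.append(v)
        best = max(best, size)
    return best

small = largest_cluster(1000, 0.5)   # subcritical: components stay O(log n)
giant = largest_cluster(1000, 2.0)   # supercritical: a giant component emerges
```

For p = 2 the giant component holds the fraction x solving x = 1 - e^(-2x), about 0.80 of the vertices.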

  9. Random subgraphs of Cayley graphs over P-groups

    SciTech Connect

    Reidys, C.M.

    1997-09-01

In this paper the author studies the largest component of random induced subgraphs of Cayley graphs X_n over a certain class of p-groups P_n. Here P_n consists of p-groups G_n that have the properties: (i) G_n/Φ(G_n) ≅ F_p^n, where Φ(G_n) is the Frattini subgroup, and (ii) |G_n| ≤ n^(Kn), K > 0. The author then takes minimal Cayley graphs X_n = Γ(G_n, S′_n), where S′_n = S_n ∪ S_n^(-1) and S_n is a minimal G_n-generating set. The random induced subgraphs Γ_n of X_n are produced by selecting G_n-elements with independent probability λ_n. The subject of this paper is the analysis of the largest component of random induced subgraphs Γ_n < X_n. The author's main result is that there exists a positive constant c > 0 such that for λ_n = c ln(|S′_n|)/|S′_n| the largest component of random subgraphs of X_n contains almost all vertices.

  10. Network Models in Class C on Arbitrary Graphs

    NASA Astrophysics Data System (ADS)

    Cardy, John

    2005-08-01

    We consider network models of quantum localisation in which a particle with a two-component wave function propagates through the nodes and along the edges of an arbitrary directed graph, subject to a random SU(2) rotation on each edge it traverses. The propagation through each node is specified by an arbitrary but fixed S-matrix. Such networks model localisation problems in class C of the classification of Altland and Zirnbauer [1], and, on suitable graphs, they model the spin quantum Hall transition. We extend the analyses of Gruzberg, Ludwig and Read [5] and of Beamond, Cardy and Chalker [2] to show that, on an arbitrary graph, the mean density of states and the mean conductance may be calculated in terms of observables of a classical history-dependent random walk on the same graph. The transition weights for this process are explicitly related to the elements of the S-matrices. They are correctly normalised but, on graphs with nodes of degree greater than 4, not necessarily non-negative (and therefore interpretable as probabilities) unless a sufficient number of them happen to vanish. Our methods use a supersymmetric path integral formulation of the problem which is completely finite and rigorous.

  11. Large Deviation Function for the Number of Eigenvalues of Sparse Random Graphs Inside an Interval.

    PubMed

    Metz, Fernando L; Pérez Castillo, Isaac

    2016-09-01

We present a general method to obtain the exact rate function Ψ_{[a,b]}(k) controlling the large deviation probability Prob[I_{N}[a,b]=kN]≍e^{-NΨ_{[a,b]}(k)} that an N×N sparse random matrix has I_{N}[a,b]=kN eigenvalues inside the interval [a,b]. The method is applied to study the eigenvalue statistics in two distinct examples: (i) the shifted index number of eigenvalues for an ensemble of Erdős-Rényi graphs and (ii) the number of eigenvalues within a bounded region of the spectrum for the Anderson model on regular random graphs. A salient feature of the rate function in both cases is that, unlike rotationally invariant random matrices, it is asymmetric with respect to its minimum. The asymmetric character depends on the disorder in a way that is compatible with the distinct eigenvalue statistics corresponding to localized and delocalized eigenstates. The results also show that the level compressibility κ_{2}/κ_{1} for the Anderson model on a regular graph satisfies 0<κ_{2}/κ_{1}<1 in the bulk regime, in contrast with the behavior found in Gaussian random matrices. Our theoretical findings are thoroughly compared to numerical diagonalization in both cases, showing reasonably good agreement. PMID:27636476

  13. Evolution of tag-based cooperation on Erdős-Rényi random graphs

    NASA Astrophysics Data System (ADS)

    Lima, F. W. S.; Hadzibeganovic, Tarik; Stauffer, Dietrich

    2014-12-01

    Here, we study an agent-based model of the evolution of tag-mediated cooperation on Erdős-Rényi random graphs. In our model, agents with heritable phenotypic traits play pairwise Prisoner's Dilemma-like games and follow one of the four possible strategies: Ethnocentric, altruistic, egoistic and cosmopolitan. Ethnocentric and cosmopolitan strategies are conditional, i.e. their selection depends upon the shared phenotypic similarity among interacting agents. The remaining two strategies are always unconditional, meaning that egoists always defect while altruists always cooperate. Our simulations revealed that ethnocentrism can win in both early and later evolutionary stages on directed random graphs when reproduction of artificial agents was asexual; however, under the sexual mode of reproduction on a directed random graph, we found that altruists dominate initially for a rather short period of time, whereas ethnocentrics and egoists suppress other strategists and compete for dominance in the intermediate and later evolutionary stages. Among our results, we also find surprisingly regular oscillations which are not damped in the course of time even after half a million Monte Carlo steps. Unlike most previous studies, our findings highlight conditions under which ethnocentrism is less stable or suppressed by other competing strategies.

  14. A weak zero-one law for sequences of random distance graphs

    SciTech Connect

    Zhukovskii, Maksim E

    2012-07-31

We study zero-one laws for properties of random distance graphs. Properties written in a first-order language are considered. For p(N) such that pN^α → ∞ and (1-p)N^α → ∞ as N → ∞ for any α > 0, we succeed in refuting the law. In this connection, we consider a weak zero-one j-law. For this law, we obtain results for random distance graphs which are similar to the assertions concerning the classical zero-one law for random graphs. Bibliography: 18 titles.

  15. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    SciTech Connect

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-07-01

Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  17. GENERAL: Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Salimi, S.; Jafarizadeh, M. A.

    2009-06-01

In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on a direct product of graphs is obtained by multiplying the probabilities on the corresponding subgraphs, which makes the method useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also find that in the classical case the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.

  18. A Mathematical Analysis of the R-MAT Random Graph Generator

    SciTech Connect

    Groer, Christopher S; Sullivan, Blair D; Poole, Stephen W

    2011-01-01

    The R-MAT graph generator introduced by Chakrabarti, Faloutsos, and Zhan offers a simple, fast method for generating very large directed graphs. These properties have made it a popular choice as a method of generating graphs for objects of study in a variety of disciplines, from social network analysis to high performance computing. We analyze the graphs generated by R-MAT and model the generator in terms of occupancy problems in order to prove results about the degree distributions of these graphs. We prove that the limiting degree distributions can be expressed as a mixture of normal distributions, contradicting the widely held belief that R-MAT degree distributions exhibit the power law or scale free distribution observed in many real world graphs. Additionally, this paper offers an efficient computational technique for computing the exact degree distribution, as well as concise expressions for a number of properties of R-MAT graphs.
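The R-MAT procedure itself is short. The sketch below is our own paraphrase of the recursive quadrant scheme, using commonly quoted example probabilities rather than the authors' code.

```python
import random

def rmat_edges(scale, n_edges, probs=(0.57, 0.19, 0.19, 0.05), seed=0):
    """R-MAT sketch: each edge is dropped into the adjacency matrix by
    recursively picking one of its four quadrants with probabilities
    (a, b, c, d) until a single cell (u, v) remains."""
    rng = random.Random(seed)
    a, b, c, _ = probs
    n = 2 ** scale
    edges = []
    for _ in range(n_edges):
        u = v = 0
        half = n // 2
        while half >= 1:
            r = rng.random()
            if r < a:
                pass                       # top-left quadrant
            elif r < a + b:
                v += half                  # top-right
            elif r < a + b + c:
                u += half                  # bottom-left
            else:
                u += half                  # bottom-right
                v += half
            half //= 2
        edges.append((u, v))
    return edges

edges = rmat_edges(scale=10, n_edges=5000)   # directed graph on 1024 vertices
```

The skew of (a, b, c, d) concentrates edges on low-index vertices; the paper's point is that the resulting degree distribution is a mixture of normals rather than a power law.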

  19. Quantum decomposition of random walk on Cayley graph of finite group

    NASA Astrophysics Data System (ADS)

    Kang, Yuanbao

    2016-09-01

In this paper, a quantum decomposition (QD) of a random walk on the Cayley graph of a finite group is introduced. It covers two cases: the QD of a quantum random walk operator (QRWO) and the QD of a quantum random walk state (QRWS). Using these findings, I obtain some applications of interest in the study of quantum random walks (QRWs), highlighting the role played by the QRWO and QRWS.

  20. Random Forest classification based on star graph topological indices for antioxidant proteins.

    PubMed

    Fernández-Blanco, Enrique; Aguiar-Pulido, Vanessa; Munteanu, Cristian Robert; Dorado, Julian

    2013-01-21

Aging and quality of life are important research topics nowadays in areas such as the life sciences, chemistry and pharmacology. People live longer and thus want to spend that extra time with a better quality of life. In this regard, there exists a tiny subset of molecules in nature, named antioxidant proteins, that may influence the aging process. However, testing every single protein in order to identify its properties is quite expensive and inefficient. For this reason, this work proposes a model in which the primary structure of the protein is represented using complex network graphs, which can be used to reduce the number of proteins to be tested for antioxidant biological activity. The graph obtained as a representation helps describe the complex system using topological indices. More specifically, in this work, Randić's Star Networks have been used, together with the associated indices calculated with the S2SNet tool. In order to simulate the existing proportion of antioxidant proteins in nature, a dataset containing 1999 proteins, of which 324 are antioxidant proteins, was created. Using this data as input, Star Graph Topological Indices were calculated with the S2SNet tool. These indices were then used as input to several classification techniques. Among the techniques utilised, the Random Forest showed the best performance, achieving a score of 94% correctly classified instances. Although the target class (antioxidant proteins) represents a tiny subset of the dataset, the proposed model is able to achieve 81.8% correctly classified instances for this class, with a precision of 81.3%.

  1. A formal definition of data flow graph models

    NASA Technical Reports Server (NTRS)

    Kavi, Krishna M.; Buckles, Bill P.; Bhat, U. Narayan

    1986-01-01

    In this paper, a new model for parallel computations and parallel computer systems that is based on data flow principles is presented. Uninterpreted data flow graphs can be used to model computer systems including data driven and parallel processors. A data flow graph is defined to be a bipartite graph with actors and links as the two vertex classes. Actors can be considered similar to transitions in Petri nets, and links similar to places. The nondeterministic nature of uninterpreted data flow graphs necessitates the derivation of liveness conditions.
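The bipartite structure described above can be sketched directly: a toy data flow graph class (all names are illustrative, not from the paper) that keeps actors and links as separate vertex classes and rejects edges that would break bipartiteness.

```python
# Sketch of a data flow graph as a bipartite graph: actors play the
# role of Petri-net transitions, links the role of places. Edges may
# only join an actor to a link.

class DataFlowGraph:
    def __init__(self):
        self.actors, self.links, self.edges = set(), set(), []

    def add_edge(self, u, v):
        # enforce bipartiteness: every edge connects an actor and a link
        if not ((u in self.actors and v in self.links) or
                (u in self.links and v in self.actors)):
            raise ValueError("edge must connect an actor to a link")
        self.edges.append((u, v))

g = DataFlowGraph()
g.actors |= {"multiply", "add"}
g.links |= {"x", "y", "sum"}
g.add_edge("x", "multiply")     # data flows from link x into actor multiply
g.add_edge("multiply", "y")
g.add_edge("y", "add")
g.add_edge("add", "sum")
print(len(g.edges))  # 4
```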

  2. Reducing Redundancies in Reconfigurable Antenna Structures Using Graph Models

    SciTech Connect

    Costantine, Joseph; al-Saffar, Sinan; Christodoulou, Christos G.; Abdallah, Chaouki T.

    2010-04-23

    Many reconfigurable antennas have redundant components in their structures. In this paper we present an approach for reducing redundancies in reconfigurable antenna structures using graph models. We study reconfigurable antennas, which are grouped, categorized and modeled according to a set of proposed graph rules. Several examples are presented and discussed to demonstrate the validity of this new technique.

  3. Absolutely continuous spectrum implies ballistic transport for quantum particles in a random potential on tree graphs

    SciTech Connect

    Aizenman, Michael; Warzel, Simone

    2012-09-15

    We discuss the dynamical implications of the recent proof that, for a quantum particle in a random potential on a regular tree graph, absolutely continuous (ac) spectrum occurs non-perturbatively through rare fluctuation-enabled resonances. The main result is spelled out in the title.

  4. Interpreting Unfamiliar Graphs: A Generative, Activity Theoretic Model

    ERIC Educational Resources Information Center

    Roth, Wolff-Michael; Lee, Yew Jin

    2004-01-01

    Research on graphing presents its results as if knowing and understanding were something stored in people's minds independent of the situation that they find themselves in. Thus, there are no models that situate interview responses to graphing tasks. How, then, we question, are the interview texts produced? How do respondents begin and end…

  5. Using graph approach for managing connectivity in integrative landscape modelling

    NASA Astrophysics Data System (ADS)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

    In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (modification of connections or elements). This graph approach allows greater genericity in landscape representation and management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. The graph representation can be managed (i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format, or (ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). Open
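The two connection types described above can be sketched with a small class (all names here are illustrative, not OpenFLUID's actual API): hierarchical parent-child links alongside lateral up/downstream links between spatial units.

```python
# Sketch of spatial units carrying two kinds of graph connections:
# parent-child (hierarchy) and up/downstream (flux routing).

class SpatialUnit:
    def __init__(self, name):
        self.name = name
        self.children = []     # parent-child connections
        self.downstream = []   # up/downstream connections

catchment = SpatialUnit("catchment")
field, ditch = SpatialUnit("field"), SpatialUnit("ditch")
catchment.children += [field, ditch]   # both units belong to the catchment
field.downstream.append(ditch)         # runoff from the field enters the ditch
print([u.name for u in catchment.children], field.downstream[0].name)
```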

  6. Emergence of the giant weak component in directed random graphs with arbitrary degree distributions

    NASA Astrophysics Data System (ADS)

    Kryven, Ivan

    2016-07-01

    The weak component generalizes the idea of connected components to directed graphs. In this paper, an exact criterion for the existence of the giant weak component is derived for directed graphs with arbitrary bivariate degree distributions. In addition, we consider a random process for evolving directed graphs with bounded degrees. The bounds are not the same for different vertices but satisfy a predefined distribution. The analytic expression obtained for the evolving degree distribution is then combined with the weak-component criterion to obtain the exact time of the phase transition. The phase-transition time is obtained as a function of the distribution that bounds the degrees. Remarkably, when viewed from the step-polymerization formalism, the new results yield Flory-Stockmayer gelation theory and generalize it to a broader scope.
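The notion can be illustrated by simulation (this is not the paper's analytic criterion, just a numerical sketch): the weak components of a directed graph are the components obtained when edge directions are ignored, and for a uniformly random directed graph a giant weak component appears once the mean total degree exceeds 1.

```python
import random
from collections import Counter

def giant_weak_component_fraction(n, m, seed=0):
    """Fraction of nodes in the largest weak component of a random
    directed graph with n nodes and m uniformly random directed edges."""
    rng = random.Random(seed)
    parent = list(range(n))          # union-find over the undirected skeleton

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for _ in range(m):
        u, v = rng.randrange(n), rng.randrange(n)   # directed edge u -> v
        ru, rv = find(u), find(v)                   # direction ignored
        if ru != rv:
            parent[ru] = rv

    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

# mean total degree 4: well above the threshold, a giant component exists
print(giant_weak_component_fraction(10000, 20000))
```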

  7. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and each level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that, after renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such a limit law is stationary, in the sense that gluing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, yields a random metric with the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has a continuous positive density on R+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  8. Vulnerability of networks: Fractional percolation on random graphs

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2014-01-01

    We present a theoretical framework for understanding nonbinary, nonindependent percolation on networks with general degree distributions. The model incorporates a partially functional (PF) state of nodes so that both intensity and extensity of error are characterized. Two connected nodes in a PF state cannot sustain the load and therefore break their link. We give exact solutions for the percolation threshold, the fraction of giant cluster, and the mean size of small clusters. The robustness-fragility transition point for scale-free networks with a degree distribution pk∝k-α is identified to be α =3. The analysis reveals that scale-free networks are vulnerable to targeted attack at hubs: a more complete picture of their Achilles' heel turns out to be not only the hubs themselves but also the edges linking them together.

  9. Product disassembly scheduling using graph models

    NASA Astrophysics Data System (ADS)

    Puente Mendez, Santiago; Torres Medina, Fernando; Pomares Baeza, Jorge

    2002-02-01

    The disassembly problem is a current issue for industrial companies, and governments of different countries promote research in this field. This paper first gives a brief state of the art in disassembly planning, then presents a solution for the disassembly problem of industrial products. It uses a combination of direct and indirect graph representations of the product: all components that have a physical entity are considered vertices of the graph, and edges represent the relationships between vertices. There are three types of edges: the first corresponds to accessibility and fastener restrictions; the second to direct relations between components without fasteners; and the last to contact relationships, which represent an indifferent choice among vertices. Based on this representation, the paper presents a method to find the best sequence in which to disassemble a component, taking into account the cost of disassembling each component, the cost of changing tools between each pair of vertices, and the different possible disassembly sequences. The method consists of minimizing a function defined on the graph domain. In the last part of the paper, the method is tested on the disassembly of a remote control. It gives a solution to the problem; if several solutions with the same cost exist, it gives all of them, and any one of these disassembly sequences can be used to reach the target component.
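The cost-minimisation idea can be sketched on a toy product (component names, costs, and the brute-force search are all illustrative, not the paper's actual method): each candidate disassembly sequence is scored by per-component removal costs plus tool-change costs between consecutive steps, and the cheapest sequence wins.

```python
import itertools

# Hypothetical removal costs and tool assignments for a toy product.
removal_cost = {"cover": 1, "battery": 2, "board": 3}
tool = {"cover": "pry", "battery": "pry", "board": "screwdriver"}
TOOL_CHANGE = 2   # cost of swapping tools between consecutive steps

def sequence_cost(seq):
    cost = sum(removal_cost[c] for c in seq)
    cost += sum(TOOL_CHANGE for a, b in zip(seq, seq[1:]) if tool[a] != tool[b])
    return cost

# exhaustive search is fine for a toy example; the paper minimizes a
# function over the graph domain instead
best = min(itertools.permutations(removal_cost), key=sequence_cost)
print(best, sequence_cost(best))
```

Grouping same-tool components together (the two "pry" parts) avoids one tool change, which is why the optimal sequence keeps the board at an end.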

  10. A graph theory practice on transformed image: a random image steganography.

    PubMed

    Thanikaiselvan, V; Arulmozhivarman, P; Subashanthini, S; Amirtharajan, Rengarajan

    2013-01-01

    The modern information age is enriched with advanced network communication expertise but unfortunately at the same time encounters countless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an effort has been made to combine graceful graphs with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through IWT, followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum of 44 dB peak signal-to-noise ratio (PSNR) for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value, high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attacks through graph-theoretic random selection of coefficients.

  11. A Graph Theory Practice on Transformed Image: A Random Image Steganography

    PubMed Central

    Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan

    2013-01-01

    The modern information age is enriched with advanced network communication expertise but unfortunately at the same time encounters countless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an effort has been made to combine graceful graphs with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through IWT, followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum of 44 dB peak signal-to-noise ratio (PSNR) for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value, high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attacks through graph-theoretic random selection of coefficients. PMID:24453857

  12. Voter model on the two-clique graph

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki

    2014-07-01

    I examine the mean consensus time (i.e., exit time) of the voter model in the so-called two-clique graph. The two-clique graph is composed of two cliques interconnected by some links and considered as a toy model of networks with community structure or multilayer networks. I analytically show that, as the number of interclique links per node is varied, the mean consensus time experiences a crossover between a fast consensus regime [i.e., O (N)] and a slow consensus regime [i.e., O (N2)], where N is the number of nodes. The fast regime is consistent with the result for homogeneous well-mixed graphs such as the complete graph. The slow regime appears only when the entire network has O (1) interclique links. The present results suggest that the effect of community structure on the consensus time of the voter model is fairly limited.
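The model above can be simulated directly (a numerical sketch with illustrative parameters, not the paper's analytic derivation): two cliques joined by a single interclique link, with each update copying a random neighbour's opinion until all opinions agree.

```python
import random

def two_clique(n):
    """Two complete graphs of n nodes each, joined by one interclique link."""
    nbrs = {v: set() for v in range(2 * n)}
    for base in (0, n):
        for i in range(base, base + n):
            for j in range(i + 1, base + n):
                nbrs[i].add(j); nbrs[j].add(i)
    nbrs[0].add(n); nbrs[n].add(0)   # the single interclique link
    return nbrs

def consensus_time(nbrs, rng):
    """Number of voter-model updates until all opinions agree."""
    opinion = {v: rng.randint(0, 1) for v in nbrs}
    t = 0
    while len(set(opinion.values())) > 1:
        v = rng.choice(list(nbrs))                    # pick a random node...
        opinion[v] = opinion[rng.choice(sorted(nbrs[v]))]  # ...copy a neighbour
        t += 1
    return t

rng = random.Random(1)
print(consensus_time(two_clique(10), rng))
```

With only O(1) interclique links, averaging over many runs should show the slow O(N^2) consensus regime described in the abstract.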

  13. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing

    PubMed Central

    2015-01-01

    We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a unique framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separate regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length. PMID:26710102

  14. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.

    PubMed

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio

    2015-01-01

    We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a unique framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separate regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length. PMID:26710102

  15. Corona graphs as a model of small-world networks

    NASA Astrophysics Data System (ADS)

    Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi

    2015-11-01

    We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as Kirchhoff index. Furthermore, we study the spectra for the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all the quantities of the recursive corona graphs, which are similar to those observed in real-life networks.
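A small sketch of one way to build such graphs recursively (an assumption on my part: the standard corona with the complete graph K_q, where each step gives every existing vertex q new neighbours that also form a clique among themselves; the paper's exact construction may differ).

```python
def corona_step(edges, n, q):
    """One corona iteration: attach a fresh K_q to every existing vertex."""
    new_edges = list(edges)
    for v in range(n):
        fresh = list(range(n + v * q, n + (v + 1) * q))
        new_edges += [(v, u) for u in fresh]                 # attach copy to v
        new_edges += [(a, b) for i, a in enumerate(fresh)
                      for b in fresh[i + 1:]]                # clique among fresh
    return new_edges, n * (1 + q)

edges, n = [(0, 1), (1, 2), (0, 2)], 3   # seed graph: a triangle
for _ in range(2):
    edges, n = corona_step(edges, n, q=2)
print(n)   # order grows geometrically: 3 * (1+q)^k
```

The geometric growth of the order with the iteration number is what makes closed-form expressions for quantities like the average path length tractable.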

  16. O(N) Random Tensor Models

    NASA Astrophysics Data System (ADS)

    Carrozza, Sylvain; Tanasa, Adrian

    2016-11-01

    We define in this paper a class of three-index tensor models, endowed with O(N)^{⊗3} invariance (N being the size of the tensor). This allows one to generate, via the usual QFT perturbative expansion, a class of Feynman tensor graphs which is strictly larger than the class of Feynman graphs of both the multi-orientable model (and hence of the colored model) and the U(N) invariant models. We first exhibit the existence of a large N expansion for such a model with general interactions. We then focus on the quartic model and identify the leading and next-to-leading order (NLO) graphs of the large N expansion. Finally, we prove the existence of a critical regime and compute the critical exponents, both at leading order and at NLO. This is achieved through the use of various analytic combinatorics techniques.

  17. A study of physician collaborations through social network and exponential random graph

    PubMed Central

    2013-01-01

    Background Physician collaboration, which evolves among physicians during the course of providing healthcare services to hospitalised patients, has been seen as crucial to effective patient outcomes in healthcare organisations and hospitals. This study aims to explore physician collaborations using measures of social network analysis (SNA) and the exponential random graph (ERG) model. Methods Based on the underlying assumption that collaborations evolve among physicians when they visit a common hospitalised patient, this study first proposes an approach to map the collaboration network among physicians from the details of their visits to patients. This paper terms this network the physician collaboration network (PCN). Second, the SNA measures of degree centralisation, betweenness centralisation and density are used to examine the impact of SNA measures on hospitalisation cost and readmission rate. As a control variable, the impact of patient age on the relation between network measures (i.e. degree centralisation, betweenness centralisation and density) and hospital outcome variables (i.e. hospitalisation cost and readmission rate) is also explored. Finally, ERG models are developed to identify micro-level structural properties of (i) high-cost versus low-cost PCNs; and (ii) high-readmission-rate versus low-readmission-rate PCNs. An electronic health insurance claim dataset of a very large Australian health insurance organisation is utilised to construct and explore PCNs in this study. Results It is revealed that the density of a PCN is positively correlated with hospitalisation cost and readmission rate. In contrast, betweenness centralisation is found to be negatively correlated with hospitalisation cost and readmission rate. Degree centralisation shows a negative correlation with readmission rate, but does not show any correlation with hospitalisation cost. Patient age does not have any impact on the relation of SNA measures with hospitalisation cost and hospital readmission rate.
The
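The two network measures the study correlates with hospital outcomes can be computed by hand on a toy collaboration network (the edge list here is illustrative, not the study's data): density is 2m / n(n-1), and Freeman degree centralisation is the normalised sum of deviations from the maximum degree.

```python
from collections import Counter

# Toy physician collaboration network: an edge means two physicians
# visited a common patient (hypothetical data).
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]

deg = Counter()
for u, v in edges:
    deg[u] += 1; deg[v] += 1
nodes = set(deg)
n, m = len(nodes), len(edges)

density = 2 * m / (n * (n - 1))                       # fraction of possible ties
dmax = max(deg.values())
centralisation = sum(dmax - deg[v] for v in nodes) / ((n - 1) * (n - 2))
print(f"density={density:.2f} degree centralisation={centralisation:.2f}")
```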

  18. Using resource graphs to model learning in physics.

    NASA Astrophysics Data System (ADS)

    Wittmann, Michael

    2007-04-01

    Physics education researchers have many valuable ways of describing student reasoning while learning physics. One can describe the correct physics and look at specific student difficulties, for example, though that doesn't quite address the issue of how the latter develops into the former. A recent model (building on work by A.A. diSessa and D. Hammer) is to use resource graphs, which are networks of connected, small-scale ideas that describe reasoning about a specific physics topic in a specific physics context. We can compare resource graphs before and after instruction to represent conceptual changes that occur during learning. The representation describes several well documented forms of conceptual change and suggests others. I will apply the resource graphs representation to describe reasoning about energy loss in quantum tunneling. I will end the talk with a brief discussion (in the context of Newton's Laws) of how a resource perspective affects our instructional choices.

  19. An approach to multiscale modelling with graph grammars

    PubMed Central

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-01-01

    Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929

  20. GraphCrunch 2: Software tool for network modeling, alignment and clustering

    PubMed Central

    2011-01-01

    Background Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. Results We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarities far larger than any other existing tool. Finally, Graph

  1. Discovery and Expansion of Gene Modules by Seeking Isolated Groups in a Random Graph Process

    PubMed Central

    Brumm, Jochen; Conibear, Elizabeth; Wasserman, Wyeth W.; Bryan, Jennifer

    2008-01-01

    Background A central problem in systems biology research is the identification and extension of biological modules–groups of genes or proteins participating in a common cellular process or physical complex. As a result, there is a persistent need for practical, principled methods to infer the modular organization of genes from genome-scale data. Results We introduce a novel approach for the identification of modules based on the persistence of isolated gene groups within an evolving graph process. First, the underlying genomic data is summarized in the form of ranked gene–gene relationships, thereby accommodating studies that quantify the relevant biological relationship directly or indirectly. Then, the observed gene–gene relationship ranks are viewed as the outcome of a random graph process and candidate modules are given by the identifiable subgraphs that arise during this process. An isolation index is computed for each module, which quantifies the statistical significance of its survival time. Conclusions The Miso (module isolation) method predicts gene modules from genomic data and the associated isolation index provides a module-specific measure of confidence. Improving on existing alternatives, such as graph clustering and the global pruning of dendrograms, this index offers two intuitively appealing features: (1) the score is module-specific; and (2) different choices of threshold correlate logically with the resulting performance, i.e. a stringent cutoff yields high quality predictions, but low sensitivity. Through the analysis of yeast phenotype data, the Miso method is shown to outperform existing alternatives, in terms of the specificity and sensitivity of its predictions. PMID:18843375
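The evolving-graph idea can be sketched in a few lines (data and bookkeeping are illustrative, not the Miso implementation): gene-gene relationships enter one at a time in rank order, candidate modules are the connected groups that form, and a module's "survival" is how many steps it stays isolated before gaining an outside connection.

```python
# Ranked gene-gene relationships, strongest first (hypothetical data).
ranked_pairs = [("g1", "g2"), ("g3", "g4"), ("g1", "g3"),
                ("g5", "g6"), ("g2", "g4")]

comp = {}          # node -> frozenset of its current component
birth = {}         # component -> step at which it formed
survival = {}      # component -> number of steps it stayed isolated

for step, (u, v) in enumerate(ranked_pairs, 1):
    cu = comp.get(u, frozenset([u]))
    cv = comp.get(v, frozenset([v]))
    if cu != cv:
        for c in (cu, cv):
            if c in birth:                 # a module just lost its isolation
                survival[c] = step - birth[c]
        merged = cu | cv
        birth[merged] = step
        for node in merged:
            comp[node] = merged

# long-surviving isolated groups are the strongest module candidates
print(sorted(survival.items(), key=lambda kv: -kv[1]))
```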

  2. Random Walk and Graph Cut for Co-Segmentation of Lung Tumor on PET-CT Images.

    PubMed

    Ju, Wei; Xiang, Dehui; Zhang, Bin; Wang, Lirong; Kopriva, Ivica; Chen, Xinjian

    2015-12-01

    Accurate lung tumor delineation plays an important role in radiotherapy treatment planning. Since the lung tumor has poor boundaries in positron emission tomography (PET) images and low contrast in computed tomography (CT) images, segmentation of the tumor in PET and CT images is a challenging task. In this paper, we effectively integrate the two modalities by making full use of the superior contrast of PET images and the superior spatial resolution of CT images. Random walk and graph cut methods are integrated to solve the segmentation problem, in which random walk is utilized as an initialization tool to provide object seeds for graph cut segmentation on the PET and CT images. The co-segmentation problem is formulated as an energy minimization problem which is solved by the max-flow/min-cut method. A graph, including two sub-graphs and a special link, is constructed, in which one sub-graph is for PET and the other for CT, and the special link encodes a context term which penalizes differences between the tumor segmentations on the two modalities. To fully utilize the characteristics of PET and CT images, a novel energy representation is devised. For PET, a downhill cost and a 3D derivative cost are proposed. For CT, a shape penalty cost is integrated into the energy function, which helps to constrain the tumor region during segmentation. We validate our algorithm on a data set consisting of 18 PET-CT images. The experimental results indicate that the proposed method is superior to graph cut using PET or CT alone, and is more accurate than the random walk method, the random walk co-segmentation method, and the non-improved graph cut method.
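The max-flow/min-cut machinery behind graph-cut segmentation can be sketched on a tiny source-sink network (the capacities here are illustrative unary/pairwise costs, not the paper's energy terms), using the classic Edmonds-Karp augmenting-path algorithm; the resulting minimum cut is what separates "object" from "background" nodes.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a nested-dict capacity graph (mutates cap)."""
    flow = 0
    while True:
        prev = {s: None}                 # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in prev:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return flow                  # no augmenting path: flow is maximal
        path, v = [], t                  # recover the path, find bottleneck
        while prev[v] is not None:
            path.append((prev[v], v)); v = prev[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                # push flow, update residual capacities
            cap[u][v] -= push
            cap[v].setdefault(u, 0)
            cap[v][u] += push
        flow += push

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2},
       "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))  # 5
```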

  3. Model validation of simple-graph representations of metabolism

    PubMed Central

    Holme, Petter

    2009-01-01

    The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
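The two best-performing representations named above can be built from a toy reaction list in a few lines (the reactions are illustrative): a substrate-product network links each substrate to each product of a reaction, while a substance network links all substances participating in a reaction.

```python
# Hypothetical reaction list: (substrates, products) pairs.
reactions = [({"A", "B"}, {"C"}),      # A + B -> C
             ({"C"}, {"D", "E"})]      # C -> D + E

# substrate-product network: directed edges substrate -> product
substrate_product = {(s, p) for subs, prods in reactions
                     for s in subs for p in prods}

# substance network: undirected edges among all participants of a reaction
substance = {frozenset((u, v))
             for subs, prods in reactions
             for u in subs | prods for v in subs | prods if u != v}

print(sorted(substrate_product))
print(len(substance))
```

The substance network is denser (it links co-substrates like A and B), which is one source of the differences in how well each reduction preserves modular structure.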

  4. Random Item IRT Models

    ERIC Educational Resources Information Center

    De Boeck, Paul

    2008-01-01

    It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used, and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…

  5. Centrifuge Rotor Models: A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach

    NASA Technical Reports Server (NTRS)

    Granda, Jose J.; Ramakrishnan, Jayant; Nguyen, Louis H.

    2006-01-01

    A viewgraph presentation on centrifuge rotor models with a comparison using Euler-Lagrange and bond graph methods is shown. The topics include: 1) Objectives; 2) Modeling Approach Comparisons; 3) Model Structures; and 4) Application.

  6. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
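    The earliest-arrival-time routing described above can be sketched as a Dijkstra search over a contact plan: arrival time is monotonically nondecreasing along a path, which is what makes the standard label-setting algorithm applicable. This is an illustrative reconstruction, not the JPL/MACHETE implementation; the contact tuples and delays are assumed.

```python
import heapq

def earliest_arrival(contacts, source, dest, t0=0.0):
    """Dijkstra over a contact plan with earliest arrival time as the
    (monotonically nondecreasing) path metric.
    contacts: list of (frm, to, t_start, t_end, delay) tuples."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for frm, to, start, end, delay in contacts:
            if frm != node or t > end:
                continue  # contact unusable: wrong node or window already closed
            arrival = max(t, start) + delay  # wait for the window, then transmit
            if arrival < best.get(to, float("inf")):
                best[to] = arrival
                heapq.heappush(pq, (arrival, to))
    return None  # destination unreachable under this contact plan

plan = [("A", "B", 0, 10, 1),
        ("B", "C", 20, 30, 1),
        ("A", "C", 50, 60, 1)]
print(earliest_arrival(plan, "A", "C"))  # → 21 (via B, waiting for the B-C window)
```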

  7. Random diffusion model.

    PubMed

    Mazenko, Gene F

    2008-09-01

    We study the random diffusion model. This is a continuum model for a conserved scalar density field φ driven by diffusive dynamics. The interesting feature of the dynamics is that the bare diffusion coefficient D is density dependent. In the simplest case, D = D̄ + D_1 δφ, where D̄ is the constant average diffusion constant. In the case where the driving effective Hamiltonian is quadratic, the model can be treated using perturbation theory in terms of the single nonlinear coupling D_1. We develop perturbation theory to fourth order in D_1. There are two ways of analyzing this perturbation theory. In one approach, developed by Kawasaki, at one-loop order one finds mode-coupling theory with an ergodic-nonergodic transition. An alternative, more direct interpretation at one-loop order leads to a slowing down as the nonlinear coupling increases. Eventually one hits a critical coupling where the time decay becomes algebraic. Near this critical coupling a weak peak develops at a wave number well above the peak at q=0 associated with the conservation law. The width of this peak in Fourier space decreases with time and can be identified with a characteristic kinetic length which grows with a power law in time. For stronger coupling the system becomes metastable and then unstable. At two-loop order it is shown that the ergodic-nonergodic transition is not supported. It is demonstrated that the critical properties of the direct approach survive going to higher order in perturbation theory.

  8. Some generalisations of linear-graph modelling for dynamic systems

    NASA Astrophysics Data System (ADS)

    de Silva, Clarence W.; Pourazadi, Shahram

    2013-11-01

    Proper modelling of a dynamic system can benefit analysis, simulation, design, evaluation and control of the system. The linear-graph (LG) approach is suitable for modelling lumped-parameter dynamic systems. By using the concepts of graph trees, it provides a graphical representation of the system, with a direct correspondence to the physical component topology. This paper systematically extends the application of LGs to multi-domain (mixed-domain or multi-physics) dynamic systems by presenting a unified way to represent different domains - mechanical, electrical, thermal and fluid. Preservation of the structural correspondence across domains is a particular advantage of LGs when modelling mixed-domain systems. The generalisation of Thevenin and Norton equivalent circuits to mixed-domain systems, using LGs, is presented. The structure of an LG model may follow a specific pattern. Vector LGs are introduced to take advantage of such patterns, giving a general LG representation for them. Through these vector LGs, the model representation becomes simpler and rather compact, both topologically and parametrically. A new single LG element is defined to facilitate the modelling of distributed-parameter (DP) systems. Examples are presented using multi-domain systems (a motion-control system and a flow-controlled pump), a multi-body mechanical system (robot manipulator) and DP systems (structural rods) to illustrate the application and advantages of the methodologies developed in the paper.

  9. Disease gene identification by using graph kernels and Markov random fields.

    PubMed

    Chen, BoLin; Li, Min; Wang, JianXin; Wu, Fang-Xiang

    2014-11-01

    Genes associated with similar diseases are often functionally related. This principle is largely supported by many biological data sources, such as disease phenotype similarities, protein complexes, protein-protein interactions, pathways and gene expression profiles. Integrating multiple types of biological data is an effective method to identify disease genes for many genetic diseases. To capture the gene-disease associations based on biological networks, a kernel-based MRF method is proposed by combining graph kernels and the Markov random field (MRF) method. In the proposed method, three kinds of kernels are employed to describe the overall relationships of vertices in five biological networks, respectively, and a novel weighted MRF method is developed to integrate those data. In addition, an improved Gibbs sampling procedure and a novel parameter estimation method are proposed to generate predictions from the kernel-based MRF method. Numerical experiments are carried out by integrating known gene-disease associations, protein complexes, protein-protein interactions, pathways and gene expression profiles. The proposed kernel-based MRF method is evaluated by the leave-one-out cross validation paradigm, achieving an AUC score of 0.771 when integrating all those biological data in our experiments, which indicates that our proposed method is very promising compared with many existing methods.

  10. Safety models incorporating graph theory based transit indicators.

    PubMed

    Quintero, Liliana; Sayed, Tarek; Wahba, Mohamed M

    2013-01-01

    There is a considerable need for tools to enable the evaluation of the safety of transit networks at the planning stage. One interesting approach for the planning of public transportation systems is the study of networks. Network techniques involve the analysis of systems by viewing them as a graph composed of a set of vertices (nodes) and edges (links). Once the transport system is visualized as a graph, various network properties can be evaluated based on the relationships between the network elements. Several indicators can be calculated including connectivity, coverage, directness and complexity, among others. The main objective of this study is to investigate the relationship between network-based transit indicators and safety. The study develops macro-level collision prediction models that explicitly incorporate transit physical and operational elements and transit network indicators as explanatory variables. Several macro-level (zonal) collision prediction models were developed using a generalized linear regression technique, assuming a negative binomial error structure. The models were grouped into four main themes: transit infrastructure, transit network topology, transit route design, and transit performance and operations. The safety models showed that collisions were significantly associated with transit network properties such as: connectivity, coverage, overlapping degree and the Local Index of Transit Availability. As well, the models showed a significant relationship between collisions and some transit physical and operational attributes such as the number of routes, frequency of routes, bus density, length of bus and 3+ priority lanes.

  11. Exact two-point resistance, and the simple random walk on the complete graph minus N edges

    SciTech Connect

    Chair, Noureddine

    2012-12-15

    An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and the resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. - Highlights: • We obtain exact formulas for the two-point resistance of the complete graph minus N edges. • We obtain also the total effective resistance of this graph. • We modified Schwatt's formula on trigonometrical power sums to suit our computations. • We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. • The first passage and mean first passage times of the random walks have exact expressions.
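    The two-point resistance discussed here can be checked numerically from the graph Laplacian: ground one node, solve L v = e_a − e_b, and read off R_ab = v_a − v_b. The sketch below uses plain Gaussian elimination (not the paper's analytical Bejaia/Pisa-number formulas) and recovers the known value 2/n for a pair of vertices in the complete graph K_n.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def two_point_resistance(n, edges, a, b):
    """Effective resistance between a and b: ground node n-1 and solve L v = e_a - e_b."""
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:                      # unit resistors on every edge
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    idx = list(range(n - 1))                # delete grounded node's row/column
    Lr = [[L[i][j] for j in idx] for i in idx]
    rhs = [0.0] * (n - 1)
    rhs[a] = 1.0                            # inject unit current at a,
    if b < n - 1:
        rhs[b] = -1.0                       # extract it at b
    v = solve(Lr, rhs)
    vb = v[b] if b < n - 1 else 0.0
    return v[a] - vb

# complete graph K4: known two-point resistance 2/n = 0.5 between any pair
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(two_point_resistance(4, K4, 0, 1))  # → 0.5
```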

  12. Random sequential renormalization and agglomerative percolation in networks: Application to Erdös-Rényi and scale-free graphs

    NASA Astrophysics Data System (ADS)

    Bizhani, Golnoosh; Grassberger, Peter; Paczuski, Maya

    2011-12-01

    We study the statistical behavior under random sequential renormalization (RSR) of several network models including Erdös-Rényi (ER) graphs, scale-free networks, and an annealed model related to ER graphs. In RSR the network is locally coarse grained by choosing at each renormalization step a node at random and joining it to all its neighbors. Compared to previous (quasi-)parallel renormalization methods [Song et al., Nature 433, 392 (2005)], RSR allows a more fine-grained analysis of the renormalization group (RG) flow and unravels new features that were not discussed in the previous analyses. In particular, we find that all networks exhibit a second-order transition in their RG flow. This phase transition is associated with the emergence of a giant hub and can be viewed as a new variant of percolation, called agglomerative percolation. We claim that this transition exists also in previous graph renormalization schemes and explains some of the scaling behavior seen there. For critical trees it happens as N/N0→0 in the limit of large systems (where N0 is the initial size of the graph and N its size at a given RSR step). In contrast, it happens at finite N/N0 in sparse ER graphs and in the annealed model, while it happens for N/N0→1 on scale-free networks. Critical exponents seem to depend on the type of the graph but not on the average degree and obey usual scaling relations for percolation phenomena. For the annealed model they agree with the exponents obtained from a mean-field theory. At late times, the networks exhibit a starlike structure in agreement with the results of Radicchi et al. [Phys. Rev. Lett. 101, 148701 (2008)]. While degree distributions are of main interest when regarding the scheme as network renormalization, mass distributions (which are more relevant when considering “supernodes” as clusters) are much easier to study using the fast Newman-Ziff algorithm for
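    A single RSR step, as described above (choose a node at random and join it with all of its neighbors into one supernode), can be sketched on an adjacency-set representation. The Erdős-Rényi parameters below are invented for illustration; the graph size can only shrink or stay the same at each step.

```python
import random

def rsr_step(adj):
    """One random sequential renormalization step: pick a random node and
    merge it together with all of its neighbors into a single supernode."""
    v = random.choice(list(adj))
    block = {v} | adj[v]                # the node plus its neighborhood
    outside = set()
    for u in block:
        outside |= adj[u] - block       # external edges survive the merge
    new_adj = {}
    for u in adj:
        if u in block:
            continue
        # nodes outside the block keep their external edges and gain a
        # link to the supernode (reusing label v) if they touched the block
        new_adj[u] = (adj[u] - block) | ({v} if adj[u] & block else set())
    new_adj[v] = outside
    return new_adj

# Erdos-Renyi graph G(n, p)
random.seed(1)
n, p = 30, 0.1
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j); adj[j].add(i)

sizes = [len(adj)]
for _ in range(10):
    adj = rsr_step(adj)
    sizes.append(len(adj))
print(sizes)  # non-increasing sequence of network sizes N under RSR
```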

  13. Spectral correlations of individual quantum graphs.

    PubMed

    Gnutzmann, Sven; Altland, Alexander

    2005-11-01

    We investigate the spectral properties of chaotic quantum graphs. We demonstrate that the energy-average over the spectrum of individual graphs can be traded for the functional average over a supersymmetric nonlinear σ-model action. This proves that spectral correlations of individual quantum graphs behave according to the predictions of Wigner-Dyson random matrix theory. We explore the stability of the universal random matrix behavior with regard to perturbations, and discuss the crossover between different types of symmetries.

  14. Spectral correlations of individual quantum graphs

    SciTech Connect

    Gnutzmann, Sven; Altland, Alexander

    2005-11-01

    We investigate the spectral properties of chaotic quantum graphs. We demonstrate that the energy-average over the spectrum of individual graphs can be traded for the functional average over a supersymmetric nonlinear σ-model action. This proves that spectral correlations of individual quantum graphs behave according to the predictions of Wigner-Dyson random matrix theory. We explore the stability of the universal random matrix behavior with regard to perturbations, and discuss the crossover between different types of symmetries.

  15. Dimer-monomer model on the Towers of Hanoi graphs

    NASA Astrophysics Data System (ADS)

    Chen, Hanlin; Wu, Renfang; Huang, Guihua; Deng, Hanyuan

    2015-07-01

    The number of dimer-monomers (matchings) of a graph G is an important graph parameter in statistical physics. Following recent research, we study the asymptotic behavior of the number of dimer-monomers m(G) on the Towers of Hanoi graphs and another variation of the Sierpiński graphs which is similar to the Towers of Hanoi graphs, and derive the recursion relations for the numbers of dimer-monomers. Upper and lower bounds for the entropy per site, defined as μG = limv(G)→∞(lnm(G)/v(G)), where v(G) is the number of vertices in a graph G, on these Sierpiński graphs are derived in terms of the numbers at a certain stage. As the difference between these bounds converges quickly to zero as the calculated stage increases, the numerical value of the entropy can be evaluated with more than a hundred significant figures accuracy.
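    The dimer-monomer count m(G) is the number of matchings of G (including the empty matching), and it satisfies the deletion recursion m(G) = m(G − v) + Σ_{u∈N(v)} m(G − v − u): either vertex v stays a monomer or it carries a dimer to one neighbor. A minimal sketch, checked against the cycle C4, whose matching count is the Lucas number L4 = 7:

```python
def count_matchings(adj):
    """Number of dimer-monomer configurations (all matchings, including the
    empty one) via the recursion m(G) = m(G - v) + sum_u m(G - v - u)."""
    adj = {u: set(vs) for u, vs in adj.items()}
    def remove(a, drop):
        return {u: vs - drop for u, vs in a.items() if u not in drop}
    def rec(a):
        if not a:
            return 1                         # empty graph: one (empty) matching
        v = next(iter(a))
        total = rec(remove(a, {v}))          # v stays a monomer
        for u in a[v]:
            total += rec(remove(a, {v, u}))  # dimer placed on edge (v, u)
        return total
    return rec(adj)

# cycle C4: 1 empty + 4 single-edge + 2 perfect matchings = 7
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(count_matchings(c4))  # → 7
```

    The entropy per site quoted in the abstract is then ln m(G)/v(G) evaluated along a sequence of growing graphs; exhaustive recursion like this only works for small stages, which is why the paper derives stage-to-stage recursion relations instead.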

  16. Sequence design in lattice models by graph theoretical methods

    NASA Astrophysics Data System (ADS)

    Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.

    2001-01-01

    A general strategy has been developed based on graph theoretical methods, for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices for the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights for the vertices on a lattice site in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, in identifying the target conformation as native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).

  17. Identification of human protein complexes from local sub-graphs of protein-protein interaction network based on random forest with topological structure features.

    PubMed

    Li, Zhan-Chao; Lai, Yan-Hua; Chen, Li-Li; Zhou, Xuan; Dai, Zong; Zou, Xiao-Yong

    2012-03-01

    In the post-genomic era, one of the most important and challenging tasks is to identify protein complexes and further elucidate their molecular mechanisms in specific biological processes. Previous computational approaches usually identify protein complexes from the protein interaction network based on dense sub-graphs and incomplete a priori information. Additionally, these computational approaches pay little attention to the biological properties of proteins, and there is no common evaluation metric for assessing performance. So, it is necessary to construct a novel method for identifying protein complexes and elucidating the function of protein complexes. In this study, a novel approach is proposed to identify protein complexes using random forest and topological structure. Each protein complex is represented by a graph of interactions, where a descriptor of the protein primary structure is used to characterize the biological properties of the protein, and each vertex is weighted by the descriptor. Topological structure features are developed and used to characterize protein complexes. The random forest algorithm is utilized to build the prediction model and identify protein complexes from local sub-graphs instead of dense sub-graphs. As a demonstration, the proposed approach is applied to protein interaction data in human, and satisfactory results are obtained, with an accuracy of 80.24%, sensitivity of 81.94%, specificity of 80.07%, and Matthew's correlation coefficient of 0.4087 in a 10-fold cross-validation test. Some new protein complexes are identified, and analysis based on Gene Ontology shows that the complexes are likely to be true complexes and play important roles in the pathogenesis of some diseases. PCI-RFTS, a corresponding executable program for protein complexes identification, can be acquired freely on request from the authors.

  18. Semi-Markov Graph Dynamics

    PubMed Central

    Raberto, Marco; Rapallo, Fabio; Scalas, Enrico

    2011-01-01

    In this paper, we outline a model of graph (or network) dynamics based on two ingredients. The first ingredient is a Markov chain on the space of possible graphs. The second ingredient is a semi-Markov counting process of renewal type. The model consists in subordinating the Markov chain to the semi-Markov counting process. In simple words, this means that the chain transitions occur at random time instants called epochs. The model is quite rich and its possible connections with algebraic geometry are briefly discussed. Moreover, for the sake of simplicity, we focus on the space of undirected graphs with a fixed number of nodes. However, in an example, we present an interbank market model where it is meaningful to use directed graphs or even weighted graphs. PMID:21887245

  19. Random pinning glass model.

    PubMed

    Karmakar, Smarajit; Parisi, Giorgio

    2013-02-19

    Glass transition, in which viscosity of liquids increases dramatically upon decrease of temperature without any major change in structural properties, remains one of the most challenging problems in condensed matter physics despite tremendous research efforts in past decades. On the other hand, disordered freezing of spins in magnetic materials with decreasing temperature, the so-called "spin glass transition," is understood relatively better. A previously found similarity between some spin glass models and the structural glasses inspired development of theories of structural glasses based on the scenario of spin glass transition. This scenario, although it looks very appealing, is still far from being well established. One of the main differences between standard spin systems and molecular systems is the absence of quenched disorder and the presence of translational invariance: it often is assumed that this difference is not relevant, but this conjecture still needs to be established. The quantities, which are well-defined and characterized for spin models, are not easily calculable for molecular glasses because of the lack of quenched disorder that breaks the translational invariance in the system. Thus the characterization of the similarity between spin and the structural glass transition remains an elusive subject. In this study, we introduced a model structural glass with built-in quenched disorder that alleviates this main difference between the spin and molecular glasses, thereby helping us compare these two systems: the possibility of producing a good thermalization at rather low temperatures is one of the advantages of this model. PMID:23382186

  20. Randomized Item Response Theory Models

    ERIC Educational Resources Information Center

    Fox, Jean-Paul

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers on sensitive questions. A new method is developed to measure latent variables using the RR technique because direct questioning leads to biased results. Within the RR technique is the probability of the true response modeled by an item response theory (IRT) model. The RR…

  1. A Poisson model for random multigraphs

    PubMed Central

    Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth

    2010-01-01

    Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature-curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
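    In this model the edge count X_ij is Poisson with mean p_i p_j, so at the MLE each node's total edge count satisfies d_i = p_i Σ_{j≠i} p_j. The paper describes a fast MLE algorithm; the damped fixed-point iteration below is only my own sketch of one way to solve these stationarity equations, and the 4-node multigraph is invented for illustration.

```python
def fit_propensities(counts, iters=300, damp=0.5):
    """Damped fixed-point iteration for node propensities p_i in the model
    X_ij ~ Poisson(p_i * p_j). At the MLE, d_i = p_i * sum_{j != i} p_j,
    where d_i is the total multi-edge count at node i."""
    n = len(counts)
    deg = [sum(row) for row in counts]
    p = [max(d, 1e-9) ** 0.5 for d in deg]   # rough starting point
    for _ in range(iters):
        s = sum(p)
        # stationarity condition rearranged: p_i = d_i / (s - p_i), damped
        p = [(1 - damp) * p[i] + damp * deg[i] / (s - p[i]) for i in range(n)]
    return p

# multigraph edge counts between 4 nodes (symmetric, zero diagonal)
counts = [[0, 2, 1, 0],
          [2, 0, 3, 1],
          [1, 3, 0, 1],
          [0, 1, 1, 0]]
p = fit_propensities(counts)
s = sum(p)
# at convergence the fitted expected degree p_i * (s - p_i) matches d_i
residuals = [abs(p[i] * (s - p[i]) - sum(counts[i])) for i in range(4)]
print(max(residuals))
```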

  2. Random modelling of contagious diseases.

    PubMed

    Demongeot, J; Hansen, O; Hessami, H; Jannot, A S; Mintsa, J; Rachdi, M; Taramasco, C

    2013-03-01

    Modelling contagious diseases needs to include mechanistic knowledge about contacts between hosts and pathogens that is as specific as possible, e.g., by incorporating in the model information about the social networks through which the disease spreads. The unknown part concerning the contact mechanism can be modelled using a stochastic approach. For that purpose, we revisit SIR models, first by introducing a microscopic stochastic version of the contacts between individuals of different populations (namely Susceptible, Infective and Recovering), then by adding a random perturbation in the vicinity of the endemic fixed point of the SIR model, and eventually by introducing definitions of various types of random social networks. As an example of application to contagious diseases we consider HIV, and we show that a micro-simulation of individual-based modelling (IBM) type can reproduce the current stable incidence of the HIV epidemic in a population of HIV-positive men having sex with men (MSM). PMID:23525763
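    A microscopic stochastic SIR of the kind described above can be sketched as an event-driven (Gillespie-type) simulation: each event is a single infection or recovery with an exponentially distributed waiting time. The rates and population size below are illustrative, and the social-network structure is omitted (well-mixed contacts).

```python
import random

def gillespie_sir(n, i0, beta, gamma, seed=0):
    """Event-driven stochastic SIR: one infection or recovery per event,
    with exponentially distributed inter-event times."""
    rng = random.Random(seed)
    s, i, r, t = n - i0, i0, 0, 0.0
    history = [(t, s, i, r)]
    while i > 0:
        rate_inf = beta * s * i / n        # mass-action infection rate
        rate_rec = gamma * i               # recovery rate
        total = rate_inf + rate_rec
        t += rng.expovariate(total)        # time to next event
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1            # infection event
        else:
            i, r = i - 1, r + 1            # recovery event
        history.append((t, s, i, r))
    return history

hist = gillespie_sir(n=200, i0=5, beta=0.3, gamma=0.1)
print(hist[-1])  # epidemic over: I == 0, and S + R == 200
```

    An IBM variant of the same idea replaces the mass-action rate with per-edge infection rates on an explicit contact graph.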

  3. O(N) Random Tensor Models

    NASA Astrophysics Data System (ADS)

    Carrozza, Sylvain; Tanasa, Adrian

    2016-08-01

    We define in this paper a class of three-index tensor models, endowed with O(N)^{⊗3} invariance (N being the size of the tensor). This allows one to generate, via the usual QFT perturbative expansion, a class of Feynman tensor graphs which is strictly larger than the class of Feynman graphs of both the multi-orientable model (and hence of the colored model) and the U(N) invariant models. We first exhibit the existence of a large N expansion for such a model with general interactions. We then focus on the quartic model and identify the leading and next-to-leading order (NLO) graphs of the large N expansion. Finally, we prove the existence of a critical regime and compute the critical exponents, both at leading order and at NLO. This is achieved through the use of various analytic combinatorics techniques.

  4. Coloring geographical threshold graphs

    SciTech Connect

    Bradonjic, Milan; Percus, Allon; Muller, Tobias

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ ln ln n / ln n = 1 + o(1). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.
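    A GTG instance can be generated and colored greedily as follows. The threshold rule used here (additive weights with a power-law distance kernel, (w_u + w_v) r^(-α) ≥ θ) and all parameter values are assumptions for illustration; the paper's algorithm and analysis are more refined than a plain greedy pass.

```python
import random, math

def geographical_threshold_graph(n, theta, alpha=2.0, seed=0):
    """GTG sketch: nodes at random positions in the unit square with
    exponentially distributed weights; edge (u, v) iff
    (w_u + w_v) * r^(-alpha) >= theta."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    w = [rng.expovariate(1.0) for _ in range(n)]
    adj = {u: set() for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            r = math.dist(pos[u], pos[v])
            if r > 0 and (w[u] + w[v]) * r ** (-alpha) >= theta:
                adj[u].add(v); adj[v].add(u)
    return adj

def greedy_coloring(adj):
    """Assign each node the smallest color unused by its already-colored
    neighbors, visiting high-degree nodes first."""
    color = {}
    for u in sorted(adj, key=lambda x: -len(adj[x])):
        taken = {color[v] for v in adj[u] if v in color}
        c = 0
        while c in taken:
            c += 1
        color[u] = c
    return color

adj = geographical_threshold_graph(60, theta=30.0)
color = greedy_coloring(adj)
ok = all(color[u] != color[v] for u in adj for v in adj[u])
print(ok, 1 + max(color.values()))  # proper coloring flag, number of colors used
```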

  5. A componential model of human interaction with graphs: 1. Linear regression modeling

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.

  6. Random trinomial tree models and vanilla options

    NASA Astrophysics Data System (ADS)

    Ganikhodjaev, Nasir; Bayram, Kamola

    2013-09-01

    In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m). We call the triple (u, d, m) an environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
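    The random-environment idea can be sketched with a Monte Carlo toy: draw one environment (U_n, D_n, M_n = 1) per level, then average a European call payoff over price paths. The uniform environment distributions and the equal 1/3 move probabilities below are assumptions for illustration, not the paper's calibration or a risk-neutral measure.

```python
import random

def price_call_random_trinomial(s0, strike, n_levels, n_paths=20000, seed=0):
    """Monte Carlo sketch of a random trinomial model: each level n has its
    own randomly drawn environment (U_n, D_n, M_n = 1), and the price moves
    up, down, or stays with probability 1/3 each (toy measure)."""
    rng = random.Random(seed)
    # one random environment shared by all simulated paths
    env = [(1.0 + rng.uniform(0.05, 0.2),   # U_n > 1
            1.0 - rng.uniform(0.05, 0.2))   # 0 < D_n < 1
           for _ in range(n_levels)]
    total = 0.0
    for _ in range(n_paths):
        s = s0
        for u, d in env:
            move = rng.randrange(3)
            s *= u if move == 0 else d if move == 1 else 1.0  # M_n = 1
        total += max(s - strike, 0.0)       # call payoff at maturity
    return total / n_paths

price = price_call_random_trinomial(100.0, 100.0, n_levels=10)
print(price)  # nonnegative estimate of the expected payoff
```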

  7. Automated Modeling and Simulation Using the Bond Graph Method for the Aerospace Industry

    NASA Technical Reports Server (NTRS)

    Granda, Jose J.; Montgomery, Raymond C.

    2003-01-01

    Bond graph modeling was originally developed in the late 1950s by the late Prof. Henry M. Paynter of M.I.T. Prof. Paynter acted well before his time, as the main advantages of his creation, other than the modeling insight that it provides and the ability to deal effectively with mechatronics, came to fruition only with the recent advent of modern computer technology and the tools derived as a result of it, including symbolic manipulation, MATLAB, SIMULINK and the Computer Aided Modeling Program (CAMPG). Thus, only recently have these tools been available, allowing one to fully utilize the advantages that the bond graph method has to offer. The purpose of this paper is to help fill the knowledge void concerning the use of bond graphs in the aerospace industry. The paper first presents simple examples to serve as a tutorial on bond graphs for those not familiar with the technique. The reader is given the basic understanding needed to appreciate the applications that follow. After that, several aerospace applications are developed, such as modeling of an arresting system for aircraft carrier landings, suspension models used for landing gears, and multibody dynamics. The paper also presents an update on NASA's progress in modeling the International Space Station (ISS) using bond graph techniques, and an advanced actuation system utilizing shape memory alloys. The latter covers the mechatronics advantages of the bond graph method, with applications that simultaneously involve mechanical, hydraulic, thermal, and electrical subsystem modeling.

  8. A Comparison of Video Modeling, Text-Based Instruction, and No Instruction for Creating Multiple Baseline Graphs in Microsoft Excel

    ERIC Educational Resources Information Center

    Tyner, Bryan C.; Fienup, Daniel M.

    2015-01-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance.…

  9. A comparison of video modeling, text-based instruction, and no instruction for creating multiple baseline graphs in Microsoft Excel.

    PubMed

    Tyner, Bryan C; Fienup, Daniel M

    2015-09-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed.

  10. Universal Quantum Graphs

    NASA Astrophysics Data System (ADS)

    Pluhař, Z.; Weidenmüller, H. A.

    2014-04-01

    For time-reversal invariant graphs we prove the Bohigas-Giannoni-Schmit conjecture in its most general form: For graphs that are mixing in the classical limit, all spectral correlation functions coincide with those of the Gaussian orthogonal ensemble of random matrices. For open graphs, we derive the analogous identities for all S-matrix correlation functions.

  11. Evolutionary Games of Multiplayer Cooperation on Graphs.

    PubMed

    Peña, Jorge; Wu, Bin; Arranz, Jordi; Traulsen, Arne

    2016-08-01

    There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has usually been tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946

  13. Evolutionary Games of Multiplayer Cooperation on Graphs

    PubMed Central

    Arranz, Jordi; Traulsen, Arne

    2016-01-01

    There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has usually been tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946

  14. a Graph Based Model for the Detection of Tidal Channels Using Marked Point Processes

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2015-08-01

    In this paper we propose a new method for the automatic extraction of tidal channels in digital terrain models (DTM) using a sampling approach based on marked point processes. In our model, the tidal channel system is represented by an undirected, acyclic graph. The graph is iteratively generated and fitted to the data using stochastic optimization based on a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler and simulated annealing. The nodes of the graph represent junction points of the channel system, and the edges represent straight line segments of a certain width between them. In each sampling step, the current configuration of nodes and edges is modified. The changes are accepted or rejected depending on the probability density function for the configuration, which evaluates the conformity of the current status with a pre-defined model for tidal channels. In this model we favour high DTM gradient magnitudes at the edge borders and penalize a graph configuration consisting of non-connected components, overlapping segments, and edges with atypical intersection angles. We present the method of our graph-based model and show results for lidar data, which serve as a proof of concept of our approach.
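    The accept/reject decision at the core of such an RJMCMC sampler with simulated annealing can be sketched with a generic Metropolis rule. This is an illustrative stand-in, not the authors' implementation; the score and temperature values below are arbitrary.

    ```python
    import math
    import random

    def metropolis_accept(current_score, proposed_score, temperature, rng=random):
        """Generic Metropolis acceptance rule used with simulated annealing:
        improvements are always accepted; worse proposals are accepted with
        probability exp((proposed - current) / temperature)."""
        if proposed_score >= current_score:
            return True
        return rng.random() < math.exp((proposed_score - current_score) / temperature)

    # As the temperature is lowered, worse proposals are accepted less often,
    # which lets the sampler settle into a high-probability configuration.
    rng = random.Random(0)
    hot = sum(metropolis_accept(0.0, -1.0, 10.0, rng) for _ in range(1000))
    cold = sum(metropolis_accept(0.0, -1.0, 0.1, rng) for _ in range(1000))
    ```

    In the reversible-jump setting, the acceptance ratio additionally includes proposal and dimension-matching terms for moves that add or remove nodes and edges.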

  15. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  16. Using graph theory to describe and model chromosome aberrations.

    PubMed

    Sachs, Rainer K; Arsuaga, Javier; Vázquez, Mariel; Hlatky, Lynn; Hahnfeldt, Philip

    2002-11-01

    A comprehensive description of chromosome aberrations is introduced that is suitable for all cytogenetic protocols (e.g. solid staining, banding, FISH, mFISH, SKY, bar coding) and for mathematical analyses. "Aberration multigraphs" systematically characterize and interrelate three basic aberration elements: (1) the initial configuration of chromosome breaks; (2) the exchange process, whose cycle structure helps to describe aberration complexity; and (3) the final configuration of rearranged chromosomes, which determines the observed pattern but may contain cryptic misrejoinings in addition. New aberration classification methods and a far-reaching generalization of mPAINT descriptors, applicable to any protocol, emerge. The difficult problem of trying to infer actual exchange processes from cytogenetically observed final patterns is analyzed using computer algorithms, adaptations of known theorems on cubic graphs, and some new graph-theoretical constructs. Results include the following: (1) For a painting protocol, unambiguously inferring the occurrence of a high-order cycle requires a corresponding number of different colors; (2) cycle structure can be computed by a simple trick directly from mPAINT descriptors if the initial configuration has no more than one break per homologue pair; and (3) higher-order cycles are more frequent than the obligate cycle structure specifies. Aberration multigraphs are a powerful new way to describe, classify and quantitatively analyze radiation-induced chromosome aberrations. They pinpoint (but do not eliminate) the problem that, with present cytogenetic techniques, one observed pattern corresponds to many possible initial configurations and exchange processes. PMID:12385633

  17. An Interactive Teaching System for Bond Graph Modeling and Simulation in Bioengineering

    ERIC Educational Resources Information Center

    Roman, Monica; Popescu, Dorin; Selisteanu, Dan

    2013-01-01

    The objective of the present work was to implement a teaching system useful in modeling and simulation of biotechnological processes. The interactive system is based on applications developed using 20-sim modeling and simulation software environment. A procedure for the simulation of bioprocesses modeled by bond graphs is proposed and simulators…

  18. Bond Graph Modeling and Validation of an Energy Regenerative System for Emulsion Pump Tests

    PubMed Central

    Li, Yilei; Zhu, Zhencai; Chen, Guoan

    2014-01-01

    Test systems for emulsion pumps currently face serious challenges due to their huge energy consumption and waste. To address this energy issue, a novel energy regenerative system (ERS) for emulsion pump tests is briefly introduced first. Modeling such an ERS spanning multiple energy domains needs a unified and systematic approach, and bond graph modeling is well suited for this task. The bond graph model of this ERS is developed by first considering the separate components before assembling them together, and so is the state-space equation. Both numerical simulation and experiments are carried out to validate the bond graph model of this ERS. Moreover, the simulation and experimental results show that this ERS not only satisfies the test requirements, but also could save at least 25% of energy consumption compared to the original test system, demonstrating that it is a promising method of energy regeneration for emulsion pump tests. PMID:24967428

  19. Earthquake sequencing: chimera states with Kuramoto model dynamics on directed graphs

    NASA Astrophysics Data System (ADS)

    Vasudevan, K.; Cavers, M.; Ware, A.

    2015-09-01

    Earthquake sequencing studies allow us to investigate empirical relationships among spatio-temporal parameters describing the complexity of earthquake properties. We have recently studied the relevance of Markov chain models to draw information from global earthquake catalogues. In these studies, we considered directed graphs as graph theoretic representations of the Markov chain model and analyzed their properties. Here, we look at earthquake sequencing itself as a directed graph. In general, earthquakes are occurrences resulting from significant stress interactions among faults. As a result, stress-field fluctuations evolve continuously. We propose that they are akin to the dynamics of the collective behavior of weakly coupled non-linear oscillators. Since mapping of global stress-field fluctuations in real time at all scales is an impossible task, we consider an earthquake zone as a proxy for a collection of weakly coupled oscillators, the dynamics of which would be appropriate for the ubiquitous Kuramoto model. In the present work, we apply the Kuramoto model with phase lag to the non-linear dynamics on a directed graph of a sequence of earthquakes. For directed graphs with certain properties, the Kuramoto model yields synchronization, and inclusion of non-local effects evokes the occurrence of chimera states or the co-existence of synchronous and asynchronous behavior of oscillators. In this paper, we show how we build the directed graphs derived from global seismicity data. Then, we present conditions under which chimera states could occur and, subsequently, point out the role of the Kuramoto model in understanding the evolution of synchronous and asynchronous regions. We surmise that one implication of the emergence of chimera states will lead to investigation of the present and other mathematical models in detail to generate global chimera-state maps similar to global seismicity maps for earthquake forecasting studies.
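    A minimal sketch of the Kuramoto model with phase lag on a directed graph, assuming explicit Euler integration and a fully connected toy graph in place of the earthquake-derived directed graphs used in the paper; all parameter values are illustrative.

    ```python
    import math
    import random

    def kuramoto_step(theta, omega, adj, coupling, alpha, dt):
        """One explicit-Euler step of the Kuramoto model with phase lag `alpha`
        on a directed graph: adj[i] lists the nodes whose phases drive node i."""
        n = len(theta)
        new_theta = []
        for i in range(n):
            drive = sum(math.sin(theta[j] - theta[i] - alpha) for j in adj[i])
            new_theta.append(theta[i] + dt * (omega[i] + coupling * drive / max(len(adj[i]), 1)))
        return new_theta

    def order_parameter(theta):
        """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
        n = len(theta)
        return math.hypot(sum(math.cos(t) for t in theta) / n,
                          sum(math.sin(t) for t in theta) / n)

    # Identical oscillators on a fully connected directed graph with no phase
    # lag synchronize from random initial phases.
    rng = random.Random(1)
    n = 8
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    adj = [[j for j in range(n) if j != i] for i in range(n)]
    omega = [0.0] * n
    for _ in range(5000):
        theta = kuramoto_step(theta, omega, adj, coupling=1.0, alpha=0.0, dt=0.05)
    r = order_parameter(theta)
    ```

    Chimera states arise when coupling is non-local and lagged, so that the order parameter stays high on one part of the graph while remaining low on another.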

  20. Hierarchical graphs for better annotations of rule-based models of biochemical systems

    SciTech Connect

    Hu, Bin; Hlavacek, William

    2009-01-01

    In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.

  1. Connections between the Sznajd model with general confidence rules and graph theory.

    PubMed

    Timpanaro, André M; Prado, Carmen P C

    2012-10-01

    The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
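    The abstract ties the existence of mean-field fixed points to strongly connected confidence graphs. Strong connectivity of a directed graph can be checked with a standard two-sweep reachability test, sketched here independently of the paper's code:

    ```python
    def reachable(adj, start):
        """All vertices reachable from `start` in a directed graph given as
        an adjacency dict {vertex: [successors]}."""
        seen, stack = {start}, [start]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    def strongly_connected(adj):
        """A directed graph is strongly connected iff all vertices are reachable
        from some vertex v in both the graph and its reverse."""
        vertices = set(adj)
        if not vertices:
            return True
        v = next(iter(vertices))
        reverse = {u: [] for u in adj}
        for u, succs in adj.items():
            for w in succs:
                reverse[w].append(u)
        return reachable(adj, v) == vertices and reachable(reverse, v) == vertices

    # A directed 3-cycle is strongly connected; a directed chain is not.
    ring = strongly_connected({0: [1], 1: [2], 2: [0]})
    chain = strongly_connected({0: [1], 1: [2], 2: []})
    ```

    The stability question in the abstract maps to a harder problem, finding maximal independent sets, which is combinatorial rather than a simple traversal.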

  2. Connections between the Sznajd model with general confidence rules and graph theory

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2012-10-01

    The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).

  3. A graph isomorphism algorithm using signatures computed via quantum walk search model

    NASA Astrophysics Data System (ADS)

    Wang, Huiquan; Wu, Junjie; Yang, Xuejun; Yi, Xun

    2015-03-01

    In this paper, we propose a new algorithm based on a quantum walk search model to distinguish strongly similar graphs. Our algorithm computes a signature for each graph via the quantum walk search model and uses signatures to distinguish non-isomorphic graphs. Our method is less complex than those of previous works. In addition, our algorithm can be extended by raising the signature levels. The higher the level adopted, the stronger the distinguishing ability and the higher the complexity of the algorithm. Our algorithm was tested with standard benchmarks from four databases. We note that the weakest signature, at level 1, can distinguish all similar graphs, with a time complexity of O(N^3.5), which outperforms the previous best work except when it comes to strongly regular graphs (SRGs). Once the signature is raised to level 3, all SRGs tested can be distinguished successfully. In this case, the time complexity is O(N^5.5), also better than the previous best work.

  4. Non-equilibrium critical properties of the Ising model on product graphs

    NASA Astrophysics Data System (ADS)

    Burioni, Raffaella; Corberi, Federico; Vezzani, Alessandro

    2010-12-01

    We study numerically the non-equilibrium critical properties of the Ising model defined on direct products of graphs, obtained from factor graphs without phase transition (Tc = 0). On this class of product graphs, the Ising model features a finite temperature phase transition, and we find a pattern of scaling behaviors analogous to the one known on regular lattices: observables take a scaling form in terms of a function L(t) of time, with the meaning of a growing length inside which a coherent fractal structure, the critical state, is progressively formed. Computing universal quantities, such as the critical exponents and the limiting fluctuation-dissipation ratio X∞, allows us to comment on the possibility to extend universality concepts to the critical behavior on inhomogeneous substrates.

  5. Ezekiel graphs

    SciTech Connect

    Simmons, G.J.

    1991-01-01

    In spite of the old adage that "no finite sequence of symbols is random," there are many instances in which it is desirable to quantify how "random" a finite sequence is. Pseudorandom number generators and cryptographic key generators typically expand a short, randomly chosen seed sequence into a much longer sequence which should appear random to anyone ignorant of the seed. Unique initiating signals chosen to minimize the likelihood of an accidental initiation of an important action should be "random" to lessen the chance of their natural occurrence, etc. Consequently, numerous tests for the randomness of finite sequences have been proposed. John Milnor argued that if a binary sequence is random then the fraction of 1's, r_1, should be very nearly 1/2 in it and in all of what he called its derivatives. Since every sequence has a unique derivative, this defines a natural family of digraphs, G_n, on 2^n vertices in which vertices are labeled with n-bit binary sequences and an edge is directed from the vertex labeled with sequence A to the vertex labeled with sequence B if B is the derivative of A. Each component of G_n is eventually cyclic. This paper is concerned with a special case in which the sequences in a cycle are all cyclic shifts of a single sequence, hence the name Ezekiel graphs. Surprisingly, there are Ezekiel graphs for which r_1 is as close to 1/2 as is numerically possible, i.e., that satisfy Milnor's test for randomness as closely as it can be satisfied, even though the sequences are about as far from random as is conceivable. In this paper the existence and properties of Ezekiel sequences are investigated from an algebraic standpoint.
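    The derivative map and the digraph G_n can be sketched as follows, assuming the common cyclic-XOR reading of Milnor's derivative (the abstract does not spell out the convention, so this is an assumption):

    ```python
    from itertools import product

    def derivative(bits):
        """Derivative of a binary sequence: XOR of cyclically adjacent bits,
        so an n-bit sequence maps to an n-bit sequence (assumed convention)."""
        n = len(bits)
        return tuple(bits[i] ^ bits[(i + 1) % n] for i in range(n))

    def derivative_digraph(n):
        """The digraph G_n on 2^n vertices: an edge from A to derivative(A)."""
        return {a: derivative(a) for a in product((0, 1), repeat=n)}

    # Every vertex has out-degree one, so iterating the derivative from any
    # start must revisit a state: each component is eventually cyclic.
    g = derivative_digraph(4)
    seen, a = [], (1, 0, 1, 1)
    while a not in seen:
        seen.append(a)
        a = g[a]
    ```

    Milnor's test then asks whether the fraction of 1's stays near 1/2 along every such derivative chain.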

  6. Analysis of Business Connections Utilizing Theory of Topology of Random Graphs

    NASA Astrophysics Data System (ADS)

    Trelewicz, Jennifer Q.; Volovich, Igor V.

    2006-03-01

    A business ecosystem is a system that describes interactions between organizations. In this paper, we build a theoretical framework that defines a model which can be used to analyze the business ecosystem. The basic concepts within the framework are organizations, business connections, and market, all of which are defined in the paper. Many researchers analyze the performance and structure of business using the workflow of the business. Our work in business connections answers a different set of questions, concerning the monetary value in the business ecosystem, rather than the task-interaction view that is provided by workflow analysis. We apply methods for analysis of the topology of complex networks, characterized by the concepts of small path length, clustering, and scale-free degree distributions. To model the dynamics of the business ecosystem we analyze the notion of the state of an organization at a given instant of time. We point out that the notion of state in this case is fundamentally different from the concept of state of the system which is used in classical or quantum physics. To describe the state of the organization at a given time, one has to know the probability of payments to contracts, which in fact depends on the future behavior of the agents on the market. Therefore methods of p-adic analysis are appropriate to explore such behavior. Microeconomic and macroeconomic factors are indivisible, and moreover the actual state of the organization depends on the future. In this framework some simple models are analyzed in detail. Company strategy can be influenced by analysis of models, which can provide a probabilistic understanding of the market, giving degrees of predictability.

  7. Sparsified-dynamics modeling of discrete point vortices with graph theory

    NASA Astrophysics Data System (ADS)

    Taira, Kunihiko; Nair, Aditya

    2014-11-01

    We utilize graph theory to derive a sparsified interaction-based model that captures unsteady point vortex dynamics. The present model builds upon the Biot-Savart law and keeps the number of vortices (graph nodes) intact and reduces the number of inter-vortex interactions (graph edges). We achieve this reduction in vortex interactions by spectral sparsification of graphs. This approach drastically reduces the computational cost to predict the dynamical behavior, sharing characteristics of reduced-order models. Sparse vortex dynamics are illustrated through an example of point vortex clusters interacting amongst themselves. We track the centroids of the individual vortex clusters to evaluate the error in bulk motion of the point vortices in the sparsified setup. To further improve the accuracy in predicting the nonlinear behavior of the vortices, resparsification strategies are employed for the sparsified interaction-based models. The model retains the nonlinearity of the interaction and also conserves the invariants of discrete vortex dynamics; namely the Hamiltonian, linear impulse, and angular impulse as well as circulation. Work supported by US Army Research Office (W911NF-14-1-0386) and US Air Force Office of Scientific Research (YIP: FA9550-13-1-0183).
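    A rough sketch of interaction-sparsified point-vortex dynamics: Biot-Savart induction is summed only over a retained edge list. Here the edge list is simply given, standing in for the spectral sparsification the model actually uses; this is not the authors' implementation.

    ```python
    import math

    def induced_velocities(pos, gamma, edges):
        """2D point-vortex (Biot-Savart) velocities, summing only over the
        retained interaction edges (i, j)."""
        vel = [[0.0, 0.0] for _ in pos]
        for i, j in edges:
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            r2 = dx * dx + dy * dy
            # vortex j induces a velocity on i, and vice versa
            vel[i][0] -= gamma[j] * dy / (2 * math.pi * r2)
            vel[i][1] += gamma[j] * dx / (2 * math.pi * r2)
            vel[j][0] += gamma[i] * dy / (2 * math.pi * r2)
            vel[j][1] -= gamma[i] * dx / (2 * math.pi * r2)
        return vel

    # Two co-rotating vortices orbit their centroid, which stays fixed because
    # pairwise interactions conserve linear impulse.
    pos = [[1.0, 0.0], [-1.0, 0.0]]
    gamma = [1.0, 1.0]
    for _ in range(1000):
        vel = induced_velocities(pos, gamma, [(0, 1)])
        for k in range(2):
            pos[k][0] += 0.01 * vel[k][0]
            pos[k][1] += 0.01 * vel[k][1]
    cx = (pos[0][0] + pos[1][0]) / 2
    cy = (pos[0][1] + pos[1][1]) / 2
    ```

    Dropping edges reduces the O(n^2) interaction cost; the paper's contribution is choosing which edges to keep so that the bulk dynamics and invariants are preserved.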

  8. Bim-Gis Integrated Geospatial Information Model Using Semantic Web and Rdf Graphs

    NASA Astrophysics Data System (ADS)

    Hor, A.-H.; Jadidi, A.; Sohn, G.

    2016-06-01

    In recent years, 3D virtual indoor/outdoor urban modelling becomes a key spatial information framework for many civil and engineering applications such as evacuation planning, emergency and facility management. For accomplishing such sophisticate decision tasks, there is a large demands for building multi-scale and multi-sourced 3D urban models. Currently, Building Information Model (BIM) and Geographical Information Systems (GIS) are broadly used as the modelling sources. However, data sharing and exchanging information between two modelling domains is still a huge challenge; while the syntactic or semantic approaches do not fully provide exchanging of rich semantic and geometric information of BIM into GIS or vice-versa. This paper proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resources Description Framework (RDF) graphs. The novelty of the proposed solution comes from the benefits of integrating BIM and GIS technologies into one unified model, so-called Integrated Geospatial Information Model (IGIM). The proposed approach consists of three main modules: BIM-RDF and GIS-RDF graphs construction, integrating of two RDF graphs, and query of information through IGIM-RDF graph using SPARQL. The IGIM generates queries from both the BIM and GIS RDF graphs resulting a semantically integrated model with entities representing both BIM classes and GIS feature objects with respect to the target-client application. The linkage between BIM-RDF and GIS-RDF is achieved through SPARQL endpoints and defined by a query using set of datasets and entity classes with complementary properties, relationships and geometries. To validate the proposed approach and its performance, a case study was also tested using IGIM system design.

  9. Classification of EEG Single Trial Microstates Using Local Global Graphs and Discrete Hidden Markov Models.

    PubMed

    Michalopoulos, Kostas; Zervakis, Michalis; Deiber, Marie-Pierre; Bourbakis, Nikolaos

    2016-09-01

    We present a novel synergistic methodology for the spatio-temporal analysis of single Electroencephalogram (EEG) trials. This methodology is based on the synergy of the Local Global Graph (LG graph), which characterizes the structural features of the EEG topography as a global descriptor for robust comparison of dominant topographies (microstates), and Hidden Markov Models (HMM), which model the topographic sequence in a unique way. In particular, the LG graph descriptor defines similarity and distance measures that can be successfully used for the difficult comparison of the extracted LG graphs in the presence of noise. In addition, hidden states represent periods of stationary distribution of topographies that constitute the equivalent of the microstates in the model. The transitions between the different microstates and the formed syntactic patterns can reveal differences in the processing of the input stimulus between different pathologies. We train the HMM model to learn the transitions between the different microstates and express the syntactic patterns that appear in the single trials in a compact and efficient way. We applied this methodology to single trials from normal subjects and patients with Progressive Mild Cognitive Impairment (PMCI) to discriminate these two groups. The classification results show that this approach is capable of efficiently discriminating between control and Progressive MCI single trials. Results indicate that HMMs provide physiologically meaningful results that can be used in the syntactic analysis of Event Related Potentials. PMID:27255799
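    The HMM machinery such a method relies on can be illustrated with the standard scaled forward algorithm for a discrete HMM. This is generic textbook code with toy parameters, not the authors' model; observations stand in for microstate labels.

    ```python
    import math

    def forward_loglik(obs, start, trans, emit):
        """Log-likelihood of a discrete observation sequence under an HMM,
        computed with the scaled forward algorithm. start[s], trans[s][t],
        and emit[s][o] are probabilities."""
        n = len(start)
        alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
        scale = sum(alpha)
        loglik = math.log(scale)
        alpha = [a / scale for a in alpha]
        for o in obs[1:]:
            alpha = [emit[s][o] * sum(alpha[t] * trans[t][s] for t in range(n))
                     for s in range(n)]
            scale = sum(alpha)
            loglik += math.log(scale)
            alpha = [a / scale for a in alpha]
        return loglik

    # Toy two-state model with sticky transitions: a hypothetical sequence of
    # microstate labels scored under the model.
    seq = [0, 0, 1, 1, 1]
    ll = forward_loglik(seq, [0.5, 0.5],
                        [[0.9, 0.1], [0.1, 0.9]],
                        [[0.8, 0.2], [0.2, 0.8]])
    ```

    Classifying a trial then amounts to comparing such log-likelihoods under HMMs trained on each group.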

  10. Graphing Reality

    NASA Astrophysics Data System (ADS)

    Beeken, Paul

    2014-11-01

    Graphing is an essential skill that forms the foundation of any physical science.1 Understanding the relationships between measurements ultimately determines which modeling equations are successful in predicting observations.2 Over the years, science and math teachers have approached teaching this skill with a variety of techniques. For secondary school instruction, the job of graphing skills falls heavily on physics teachers. By virtue of the nature of the topics we cover, it is our mission to develop this skill to the fine art that it is.

  11. Graphing Reality

    ERIC Educational Resources Information Center

    Beeken, Paul

    2014-01-01

    Graphing is an essential skill that forms the foundation of any physical science. Understanding the relationships between measurements ultimately determines which modeling equations are successful in predicting observations. Over the years, science and math teachers have approached teaching this skill with a variety of techniques. For secondary…

  12. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  13. A Spectral Graph Regression Model for Learning Brain Connectivity of Alzheimer’s Disease

    PubMed Central

    Hu, Chenhui; Cheng, Lin; Sepulcre, Jorge; Johnson, Keith A.; Fakhri, Georges E.; Lu, Yue M.; Li, Quanzheng

    2015-01-01

Understanding network features of brain pathology is essential to reveal the underpinnings of neurodegenerative diseases. In this paper, we introduce a novel graph regression model (GRM) for learning the structural brain connectivity of Alzheimer's disease (AD) measured by amyloid-β deposits. The proposed GRM regards 11C-labeled Pittsburgh Compound-B (PiB) positron emission tomography (PET) imaging data as smooth signals defined on an unknown graph. This graph is then estimated through an optimization framework, which fits the graph to the data with an adjustable level of uniformity of the connection weights. Under the assumed data model, results based on simulated data illustrate that our approach can accurately reconstruct the underlying network, often better than the reconstructions obtained by either sample correlation or ℓ1-regularized partial correlation estimation. Evaluations performed on PiB-PET imaging data of 30 AD and 40 elderly normal control (NC) subjects demonstrate that the connectivity patterns revealed by the GRM are easy to interpret and consistent with known pathology. Moreover, the hubs of the reconstructed networks match the cortical hubs given by functional MRI. The discriminative network features, including both global connectivity measurements and degree statistics of specific nodes, discovered from the AD and NC amyloid-β networks provide new potential biomarkers for preclinical and clinical AD. PMID:26024224
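The "smooth signals on an unknown graph" assumption can be illustrated with a hand-rolled stand-in for the paper's optimization: build Gaussian-kernel weights from signal distances and check smoothness through the Laplacian quadratic form tr(XᵀLX), which is small exactly when strongly connected nodes carry similar signals. The kernel construction below is a generic sketch, not the GRM objective.

```python
import numpy as np

def kernel_graph(X, sigma=1.0):
    """Weight nodes i, j by exp(-||x_i - x_j||^2 / sigma^2).

    X: (N, T) array, one signal row per node. A Gaussian-kernel baseline
    standing in for the paper's optimization-based graph fit.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / sigma**2)
    np.fill_diagonal(W, 0.0)
    return W

def smoothness(X, W):
    """Laplacian quadratic form tr(X^T L X): small when connected nodes agree."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(X.T @ L @ X)
```

For symmetric W this equals the weighted sum of squared signal differences over edges, which is why fitting W to minimize it (with regularization) recovers a graph on which the data vary smoothly.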

  14. Random energy model at complex temperatures

    PubMed

    Saakian

    2000-06-01

The complete phase diagram of the random energy model is obtained for complex temperatures using the method proposed by Derrida. We find the density of zeroes of the statistical sum (partition function). The method is then applied to the generalized random energy model. This allows us to propose an analytical method for investigating zeroes of the statistical sum for finite-dimensional systems. PMID:11088286
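For orientation, the real-temperature free energy of the random energy model takes the standard form below (Derrida's Gaussian normalization; conventions vary by author, and this is textbook background rather than a result of the abstract). Zeros of Z in the complex-temperature plane accumulate where competing branches of the free energy exchange dominance.

```latex
% Random Energy Model: 2^N configurations with i.i.d. Gaussian energies
Z(\beta) = \sum_{i=1}^{2^N} e^{-\beta E_i},
\qquad E_i \sim \mathcal{N}\!\big(0, \tfrac{1}{2} N J^2\big)\ \text{i.i.d.}

% Free energy per spin, with a freezing transition at beta_c
f(\beta) =
\begin{cases}
 -\dfrac{\ln 2}{\beta} - \dfrac{\beta J^2}{4}, & \beta \le \beta_c, \\[6pt]
 -J\sqrt{\ln 2}, & \beta \ge \beta_c,
\end{cases}
\qquad
\beta_c = \frac{2\sqrt{\ln 2}}{J}.
```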

  15. The role of reliability graph models in assuring dependable operation of complex hardware/software systems

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Davis, Gloria J.; Pedar, A.

    1991-01-01

    The complexity of computer systems currently being designed for critical applications in the scientific, commercial, and military arenas requires the development of new techniques for utilizing models of system behavior in order to assure 'ultra-dependability'. The complexity of these systems, such as Space Station Freedom and the Air Traffic Control System, stems from their highly integrated designs containing both hardware and software as critical components. Reliability graph models, such as fault trees and digraphs, are used frequently to model hardware systems. Their applicability for software systems has also been demonstrated for software safety analysis and the analysis of software fault tolerance. This paper discusses further uses of graph models in the design and implementation of fault management systems for safety critical applications.

  16. Random Effects Diagonal Metric Multidimensional Scaling Models.

    ERIC Educational Resources Information Center

    Clarkson, Douglas B.; Gonzalez, Richard

    2001-01-01

    Defines a random effects diagonal metric multidimensional scaling model, gives its computational algorithms, describes researchers' experiences with these algorithms, and provides an illustration of the use of the model and algorithms. (Author/SLD)

  17. Using a high-dimensional graph of semantic space to model relationships among words

    PubMed Central

    Jackson, Alice F.; Bolger, Donald J.

    2014-01-01

The GOLD model (Graph Of Language Distribution) is a network model constructed based on co-occurrence in a large corpus of natural language that may be used to explore what information may be present in a graph-structured model of language, and what information may be extracted through theoretically-driven algorithms as well as standard graph analysis methods. The present study will employ GOLD to examine two types of relationship between words: semantic similarity and associative relatedness. Semantic similarity refers to the degree of overlap in meaning between words, while associative relatedness refers to the degree to which two words occur in the same schematic context. It is expected that a graph-structured model of language constructed based on co-occurrence should easily capture associative relatedness, because this type of relationship is thought to be present directly in lexical co-occurrence. However, it is hypothesized that semantic similarity may be extracted from the intersection of the set of first-order connections, because two words that are semantically similar may occupy similar thematic or syntactic roles across contexts and thus would co-occur lexically with the same set of nodes. Two versions of the GOLD model that differed in terms of the co-occurrence window, bigGOLD at the paragraph level and smallGOLD at the adjacent-word level, were directly compared to the performance of a well-established distributional model, Latent Semantic Analysis (LSA). The superior performance of the GOLD models (big and small) suggests that a single acquisition and storage mechanism, namely co-occurrence, can account for associative and conceptual relationships between words and is more psychologically plausible than models using singular value decomposition (SVD). PMID:24860525
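The distinction drawn above, associative relatedness as a direct co-occurrence edge versus semantic similarity as overlap of first-order neighbor sets, can be sketched with a toy co-occurrence graph. The function names and the Jaccard measure are illustrative, not the GOLD implementation.

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=2):
    """Undirected weighted co-occurrence graph: edge weight counts how often
    two words appear within `window` positions of each other."""
    w = defaultdict(float)
    for tokens in sentences:
        for i, a in enumerate(tokens):
            for b in tokens[i + 1:i + 1 + window]:
                if a != b:
                    w[frozenset((a, b))] += 1.0
    return w

def neighbors(graph, word):
    return {next(iter(e - {word})) for e in graph if word in e}

def second_order_similarity(graph, a, b):
    """Jaccard overlap of neighbor sets: a proxy for semantic similarity,
    while the direct edge weight proxies associative relatedness."""
    na, nb = neighbors(graph, a) - {b}, neighbors(graph, b) - {a}
    union = na | nb
    return len(na & nb) / len(union) if union else 0.0
```

Here "cat" and "dog" never co-occur (no associative edge), yet share all their neighbors, so their second-order similarity is maximal, mirroring the hypothesis in the abstract.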

  18. Parametric models for samples of random functions

    SciTech Connect

    Grigoriu, M.

    2015-09-15

A new class of parametric models, referred to as sample parametric models, is developed for random elements; these models match samples, rather than the first two moments and/or other global properties, of the target elements. The models can be used to characterize, e.g., material properties at small scale, in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of the target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to solve stochastic equations efficiently.
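A minimal sketch of the idea, new random draws confined to the span of observed samples via an eigen-decomposition of the sample covariance (a KL-type basis), is shown below. This is a generic construction under simplifying Gaussian-coefficient assumptions, not the paper's full class of models.

```python
import numpy as np

def sample_model(samples, rng, n_new=5):
    """Generate new random-field samples in the span of observed ones.

    samples: (m, n) array of m realizations of an n-dimensional random vector.
    New draws combine the leading eigenvectors of the sample covariance
    (a Karhunen-Loeve-type basis) with independent Gaussian coefficients,
    so every generated sample lives in the space spanned by the data.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    cov = centered.T @ centered / max(len(samples) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)
    keep = vals > 1e-12                      # retain nonzero modes only
    std = np.sqrt(vals[keep])
    z = rng.standard_normal((n_new, keep.sum()))
    return mean + (z * std) @ vecs[:, keep].T
```

Because the basis is truncated to the data's modes, generated fields inherit the spatial structure of the observed microstructures by construction.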

  19. International Space Station Centrifuge Rotor Models A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach

    NASA Technical Reports Server (NTRS)

    Nguyen, Louis H.; Ramakrishnan, Jayant; Granda, Jose J.

    2006-01-01

    The assembly and operation of the International Space Station (ISS) require extensive testing and engineering analysis to verify that the Space Station system of systems would work together without any adverse interactions. Since the dynamic behavior of an entire Space Station cannot be tested on earth, math models of the Space Station structures and mechanical systems have to be built and integrated in computer simulations and analysis tools to analyze and predict what will happen in space. The ISS Centrifuge Rotor (CR) is one of many mechanical systems that need to be modeled and analyzed to verify the ISS integrated system performance on-orbit. This study investigates using Bond Graph modeling techniques as quick and simplified ways to generate models of the ISS Centrifuge Rotor. This paper outlines the steps used to generate simple and more complex models of the CR using Bond Graph Computer Aided Modeling Program with Graphical Input (CAMP-G). Comparisons of the Bond Graph CR models with those derived from Euler-Lagrange equations in MATLAB and those developed using multibody dynamic simulation at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are presented to demonstrate the usefulness of the Bond Graph modeling approach for aeronautics and space applications.

  20. A bond graph approach to modeling the anuran vocal production system.

    PubMed

    Kime, Nicole M; Ryan, Michael J; Wilson, Preston S

    2013-06-01

    Air-driven vocal production systems such as those found in mammals, birds, and anurans (frogs and toads) combine pneumatic and mechanical elements in species-specific ways to produce a diversity of communication signals. This study uses bond graphs to model a generalized anuran vocal production system. Bond graphs allow an incremental approach to modeling dynamic physical systems involving different domains. Anurans provide an example of how signal diversity results from variation in the structure and behavior of vocal system elements. This paper first proposes a bond graph model of the integrated anuran vocal system as a framework for future study. It then presents a simulated submodel of the anuran sound source that produces sustained oscillations in vocal fold displacement and air flow through the larynx. The modeling approach illustrated here should prove of general applicability to other biological sound production systems, and will allow researchers to study the biomechanics of vocal production as well as the functional congruence and evolution of groups of traits within integrated vocal systems.

  1. A random rule model of surface growth

    NASA Astrophysics Data System (ADS)

    Mello, Bernardo A.

    2015-02-01

Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. The randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
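The sequential-scan idea can be sketched as follows: the scan order is deterministic while the rule applied at each site is drawn at random. The specific deposit/relax rule below is invented for illustration and is not the etching model's actual rule.

```python
import random

def grow(h, steps, rng):
    """Sequential-scan growth: sweep every substrate site in order and apply
    a randomly chosen rule at each site (deposit here, or drop onto the
    lowest of the site and its neighbors). A schematic stand-in for the
    modified etching model, on a 1D periodic substrate."""
    n = len(h)
    for _ in range(steps):
        for i in range(n):            # sequential, not random, site selection
            if rng.random() < 0.5:
                h[i] += 1
            else:
                cand = [i, (i - 1) % n, (i + 1) % n]
                j = min(cand, key=lambda k: h[k])
                h[j] += 1
    return h

def width(h):
    """RMS interface width around the mean height."""
    m = sum(h) / len(h)
    return (sum((x - m) ** 2 for x in h) / len(h)) ** 0.5
```

Scaling studies of such models track width(h) versus time and substrate size; the sequential scan also makes the inner loop cache-friendly, as the abstract notes.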

  2. Human connectome module pattern detection using a new multi-graph MinMax cut model.

    PubMed

    De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng

    2014-01-01

Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph min-max cut model to detect the consistent network modules from the brain connectivity networks of all studied subjects. A new multi-graph MinMax cut model is introduced to solve this challenging computational neuroscience problem, and an efficient optimization algorithm is derived. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, which potentially correspond to specific brain functions shared by all subjects. We validate our method by analyzing the weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.

  3. Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.

    ERIC Educational Resources Information Center

    Alston, Richard M.; Chi, Wan Fu

    1989-01-01

    Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…

  4. Formal modeling of Gene Ontology annotation predictions based on factor graphs

    NASA Astrophysics Data System (ADS)

    Spetale, Flavio; Murillo, Javier; Tapia, Elizabeth; Arce, Débora; Ponce, Sergio; Bulacio, Pilar

    2016-04-01

    Gene Ontology (GO) is a hierarchical vocabulary for gene product annotation. Its synergy with machine learning classification methods has been widely used for the prediction of protein functions. Current classification methods rely on heuristic solutions to check the consistency with some aspects of the underlying GO structure. In this work we formalize the GO is-a relationship through predicate logic. Moreover, an ontology model based on Forney Factor Graph (FFG) is shown on a general fragment of Cellular Component GO.

  5. Modeling and mitigating noise in graph and manifold representations of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Jin, Can; Bachmann, Charles M.

    2015-05-01

Over the past decade, manifold and graph representations of hyperspectral imagery (HSI) have been explored widely in HSI applications. There are a large number of data-driven approaches to deriving manifold coordinate representations, including Isometric Mapping (ISOMAP),1 Local Linear Embedding (LLE),2 Laplacian Eigenmaps (LE),3 Diffusion Kernels (DK),4 and many related methods. Improvements to specific algorithms have been developed to ease the computational burden or otherwise improve algorithm performance. For example, the best way to estimate the size of the locally linear neighborhoods used in graph construction has been addressed,6 as well as the best method of linking the manifold representation with classifiers in applications. However, the problem of how to model and mitigate noise in manifold representations of hyperspectral imagery has not been well studied and remains a challenge for graph and manifold representations of hyperspectral imagery and their applications. It is relatively easy to apply standard linear methods to remove noise from the data in advance of further processing; however, these approaches by and large treat the noise model in a global sense, using statistics derived from the entire data set and applying the results globally over the data set. Graph and manifold representations by their nature attempt to find an intrinsic representation of the local data structure, so it is natural to ask how one can best represent the noise model in a local sense. In this paper, we explore approaches to modeling and mitigating noise at a local level, using manifold coordinates of local spectral subsets. The issue of landmark selection in the current landmark ISOMAP algorithm5 is addressed, and a workflow is proposed to make use of manifold coordinates of local spectral subsets to make optimal landmark selections and minimize the effect of local noise.

  6. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. In order to do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling: a parametric polynomial equation that stands in for the tool. A surrogate model must be informed by data from the tool with enough points to represent the solution space for the chosen number of variables with an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space to maximize information gained on the design space while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work which would take an inordinate amount of time via the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work, the authors discuss issues encountered with the method and their solutions.
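The surrogate-modeling step described above can be illustrated with a generic second-order response-surface fit via least squares. The quadratic basis and function below sketch the general technique under simple assumptions; they are not the surrogate form actually used with POST/multiPOST.

```python
import numpy as np

def fit_quadratic_surrogate(X, y):
    """Least-squares fit of a full second-order polynomial surrogate.

    X: (n, d) design points (e.g. from a space-filling DOE), y: (n,) responses.
    Returns a predictor callable over new (k, d) design points.
    """
    def features(X):
        cols = [np.ones(len(X))]
        d = X.shape[1]
        for i in range(d):
            cols.append(X[:, i])                 # linear terms
        for i in range(d):
            for j in range(i, d):
                cols.append(X[:, i] * X[:, j])   # squares and interactions
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    return lambda Xnew: features(np.atleast_2d(Xnew)) @ beta
```

With d design variables the basis has 1 + d + d(d+1)/2 terms, which is what drives the DOE point count mentioned in the abstract.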

  7. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information. PMID:26978821

  8. The Random-Effect DINA Model

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…

  9. Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs.

    PubMed

    Shang, Yilun

    2015-01-01

    Large-scale time-evolving networks have been generated by many natural and technological applications, posing challenges for computation and modeling. Thus, it is of theoretical and practical significance to probe mathematical tools tailored for evolving networks. In this paper, on top of the dynamic Estrada index, we study the dynamic Laplacian Estrada index and the dynamic normalized Laplacian Estrada index of evolving graphs. Using linear algebra techniques, we established general upper and lower bounds for these graph-spectrum-based invariants through a couple of intuitive graph-theoretic measures, including the number of vertices or edges. Synthetic random evolving small-world networks are employed to show the relevance of the proposed dynamic Estrada indices. It is found that neither the static snapshot graphs nor the aggregated graph can approximate the evolving graph itself, indicating the fundamental difference between the static and dynamic Estrada indices. PMID:25822506
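For reference, the static Laplacian Estrada index of a single snapshot graph is the sum of e^μ over the eigenvalues μ of the graph Laplacian, which is straightforward to compute; the dynamic indices studied in the paper extend this invariant to evolving graphs.

```python
import numpy as np

def laplacian_estrada(A):
    """Laplacian Estrada index: sum of e^{mu_i} over the eigenvalues of
    L = D - A, where A is a symmetric 0/1 adjacency matrix."""
    L = np.diag(A.sum(axis=1)) - A
    return np.exp(np.linalg.eigvalsh(L)).sum()
```

For a single edge (K2) the Laplacian eigenvalues are 0 and 2, so the index is 1 + e², a handy sanity check.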

  10. Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs.

    PubMed

    Shang, Yilun

    2015-01-01

    Large-scale time-evolving networks have been generated by many natural and technological applications, posing challenges for computation and modeling. Thus, it is of theoretical and practical significance to probe mathematical tools tailored for evolving networks. In this paper, on top of the dynamic Estrada index, we study the dynamic Laplacian Estrada index and the dynamic normalized Laplacian Estrada index of evolving graphs. Using linear algebra techniques, we established general upper and lower bounds for these graph-spectrum-based invariants through a couple of intuitive graph-theoretic measures, including the number of vertices or edges. Synthetic random evolving small-world networks are employed to show the relevance of the proposed dynamic Estrada indices. It is found that neither the static snapshot graphs nor the aggregated graph can approximate the evolving graph itself, indicating the fundamental difference between the static and dynamic Estrada indices.

  11. Graph Library

    2007-06-12

GraphLib is a support library used by other tools to create, manipulate, store, and export graphs. It provides a simple interface to specify arbitrary directed and undirected graphs by adding nodes and edges. Each node and edge can be associated with a set of attributes describing size, color, and shape. Once created, graphs can be manipulated using a set of graph analysis algorithms, including merge, prune, and path coloring operations. GraphLib also has the ability to export graphs into various open formats such as DOT and GML.

  12. A geometric graph model for citation networks of exponentially growing scientific papers

    NASA Astrophysics Data System (ADS)

    Xie, Zheng; Ouyang, Zhenzheng; Liu, Qi; Li, Jianping

    2016-08-01

In citation networks, the content relativity of papers is a precondition for engendering citations, which is hard to model with a topological graph. A geometric graph is proposed to predict some features of citation networks with exponentially growing numbers of papers. It addresses the precondition by using the coordinates of nodes to model the research content of papers, and the geometric distances between nodes to model the diversity of research content between papers. Citations between modeled papers are drawn according to a geometric rule, which addresses the precondition as well as some other factors engendering citations, namely the academic influence of papers, the aging of that influence, and incomplete copying of references. Instead of cumulative advantage of degree, the model illustrates that the scale-free property of the modeled networks arises from the inhomogeneous academic influence of the modeled papers. The model can also reproduce some other statistical features of citation networks, e.g. in- and out-assortativities, which shows that the model provides a suitable tool for understanding some aspects of citation networks through geometry.
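A toy version of such a geometric citation rule can be sketched as follows; the parameter names, the one-dimensional "content circle", and the exact citation condition are all illustrative inventions, not the paper's calibration.

```python
import math
import random

def geometric_citation_graph(n, rng, base_radius=0.3, decay=0.05):
    """Toy geometric citation model: paper t has a 'content' coordinate on
    the unit circle and a random academic influence; it cites an earlier
    paper s when their content distance falls inside s's influence radius,
    which shrinks as the older paper ages."""
    coords, influence, edges = [], [], []
    for t in range(n):
        coords.append(rng.random())              # research content in [0, 1)
        influence.append(0.5 + rng.random())     # inhomogeneous influence
        for s in range(t):
            d = abs(coords[t] - coords[s])
            d = min(d, 1.0 - d)                  # distance on the circle
            radius = base_radius * influence[s] * math.exp(-decay * (t - s))
            if d < radius:
                edges.append((t, s))             # newer paper t cites s
    return coords, influence, edges
```

In this sketch, in-degree heterogeneity comes from the influence values rather than from preferential attachment, echoing the abstract's point about scale-free behavior without cumulative advantage.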

  13. Modeling spatial decisions with graph theory: logging roads and forest fragmentation in the Brazilian Amazon.

    PubMed

    Walker, Robert; Arima, Eugenio; Messina, Joe; Soares-Filho, Britaldo; Perz, Stephen; Vergara, Dante; Sales, Marcio; Pereira, Ritaumaria; Castro, Williams

    2013-01-01

This article addresses the spatial decision-making of loggers and its implications for forest fragmentation in the Amazon basin. It provides a behavioral explanation for fragmentation by modeling how loggers build road networks, typically abandoned upon removal of hardwoods. Logging road networks provide access to land, and the settlers who take advantage of them clear fields and pastures that accentuate their spatial signatures. In shaping agricultural activities, these networks organize emergent patterns of forest fragmentation, even though the loggers move elsewhere. The goal of the article is to explicate how loggers shape their road networks, in order to theoretically explain an important type of forest fragmentation found in the Amazon basin, particularly in Brazil. This is accomplished by adapting graph theory to represent the spatial decision-making of loggers, and by implementing computational algorithms that build graphs interpretable as logging road networks. The economic behavior of loggers is conceptualized as a profit maximization problem, and translated into spatial decision-making by establishing a formal correspondence between mathematical graphs and road networks. New computational approaches, adapted from operations research, are used to construct graphs and simulate spatial decision-making as a function of discount rates, land tenure, and topographic constraints. The algorithms employed bracket a range of behavioral settings appropriate for areas of terras devolutas, public lands that have not been set aside for environmental protection, indigenous peoples, or colonization. The simulation target sites are located in or near the so-called Terra do Meio, once a major logging frontier in the lower Amazon Basin. Simulation networks are compared to empirical ones identified by remote sensing and then used to draw inferences about factors influencing the spatial behavior of loggers. Results overall suggest that Amazonia's logging road networks induce more
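The article's road-network construction rests on operations-research algorithms; as a much simpler illustrative proxy (not the authors' method), a single road segment can be routed through a terrain-cost grid with Dijkstra's algorithm. The grid, costs, and function name here are hypothetical.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra on a 2D cost grid: a minimal proxy for routing one logging
    road segment through terrain, where cost[r][c] is the difficulty of
    entering cell (r, c). Returns (path, total cost)."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                     # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

A full road network would repeat such routing for many harvest sites under profit and tenure constraints, which is where the article's graph-theoretic formulation comes in.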

  14. Random-effects models for longitudinal data

    SciTech Connect

    Laird, N.M.; Ware, J.H.

    1982-12-01

    Models for the analysis of longitudinal data must recognize the relationship between serial observations on the same unit. Multivariate models with general covariance structure are often difficult to apply to highly unbalanced data, whereas two-stage random-effects models can be used easily. In two-stage models, the probability distributions for the response vectors of different individuals belong to a single family, but some random-effects parameters vary across individuals, with a distribution specified at the second stage. A general family of models is discussed, which includes both growth models and repeated-measures models as special cases. A unified approach to fitting these models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM algorithm, is discussed. Two examples are taken from a current epidemiological study of the health effects of air pollution.
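The two-stage structure described above can be written in the standard Laird-Ware form (conventional mixed-model notation, summarized from the general literature rather than quoted from the abstract):

```latex
% Stage 1: response of individual i, with n_i serial observations
y_i = X_i \beta + Z_i b_i + \varepsilon_i , \qquad i = 1, \dots, m,

% Stage 2: random effects vary across individuals
b_i \sim \mathcal{N}(0, D), \qquad
\varepsilon_i \sim \mathcal{N}(0, \sigma^2 I_{n_i}), \qquad
b_i \perp \varepsilon_i ,

% implied marginal law used in empirical Bayes / ML fitting via EM
y_i \sim \mathcal{N}\!\big(X_i \beta,\; Z_i D Z_i^{\top} + \sigma^2 I_{n_i}\big).
```

Treating the unobserved b_i as missing data is what makes the EM algorithm a natural fitting device, as the abstract indicates.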

  15. Graph model for calculating the properties of saturated monoalcohols based on the additivity of energy terms

    NASA Astrophysics Data System (ADS)

    Grebeshkov, V. V.; Smolyakov, V. M.

    2012-05-01

A 16-constant additive scheme for calculating the physicochemical properties of the saturated monoalcohols CH4O-C9H20O was derived by decomposing the triangular numbers of the Pascal triangle, based on the similarity of subgraphs in the molecular graphs (MGs) of the homologous series of these alcohols. It was shown, using this scheme for the calculation of properties of saturated monoalcohols as an example, that each coefficient of the scheme (in other words, the number of ways to impose a chain of a definite length i1, i2, … on a molecular graph) is the result of the decomposition of the triangular numbers of the Pascal triangle. A linear dependence was found within the adopted classification of structural elements. The sixteen parameters of the scheme were recorded as linear combinations of 17 parameters. The enthalpies of vaporization at 298 K of the saturated monoalcohols CH4O-C9H20O, for which there were no experimental data, were calculated. It was shown that the parameters are not chosen randomly when using the given procedure for constructing an additive scheme by decomposing the triangular numbers of the Pascal triangle.

  16. Guidelines for a graph-theoretic implementation of structural equation modeling

    USGS Publications Warehouse

    Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William

    2012-01-01

    Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. 
The guidelines presented provide for

  17. Random-diluted triangular plaquette model: Study of phase transitions in a kinetically constrained model

    NASA Astrophysics Data System (ADS)

    Franz, Silvio; Gradenigo, Giacomo; Spigler, Stefano

    2016-03-01

    We study how the thermodynamic properties of the triangular plaquette model (TPM) are influenced by the addition of extra interactions. The thermodynamics of the original TPM is trivial, while its dynamics is glassy, as usual in kinetically constrained models. As soon as we generalize the model to include additional interactions, a thermodynamic phase transition appears in the system. The additional interactions we consider are either short ranged, forming a regular lattice in the plane, or long ranged of the small-world kind. In the case of long-range interactions we call the new model the random-diluted TPM. We provide arguments that the model so modified should undergo a thermodynamic phase transition, and that in the long-range case this is a glass transition of the "random first-order" kind. Finally, we give support to our conjectures studying the finite-temperature phase diagram of the random-diluted TPM in the Bethe approximation. This corresponds to the exact calculation on the random regular graph, where free energy and configurational entropy can be computed by means of the cavity equations.

  18. Random-diluted triangular plaquette model: Study of phase transitions in a kinetically constrained model.

    PubMed

    Franz, Silvio; Gradenigo, Giacomo; Spigler, Stefano

    2016-03-01

    We study how the thermodynamic properties of the triangular plaquette model (TPM) are influenced by the addition of extra interactions. The thermodynamics of the original TPM is trivial, while its dynamics is glassy, as usual in kinetically constrained models. As soon as we generalize the model to include additional interactions, a thermodynamic phase transition appears in the system. The additional interactions we consider are either short ranged, forming a regular lattice in the plane, or long ranged of the small-world kind. In the case of long-range interactions we call the new model the random-diluted TPM. We provide arguments that the model so modified should undergo a thermodynamic phase transition, and that in the long-range case this is a glass transition of the "random first-order" kind. Finally, we give support to our conjectures studying the finite-temperature phase diagram of the random-diluted TPM in the Bethe approximation. This corresponds to the exact calculation on the random regular graph, where free energy and configurational entropy can be computed by means of the cavity equations. PMID:27078408

  19. Parabolic Anderson Model in a Dynamic Random Environment: Random Conductances

    NASA Astrophysics Data System (ADS)

    Erhard, D.; den Hollander, F.; Maillard, G.

    2016-06-01

    The parabolic Anderson model is defined as the partial differential equation ∂u(x,t)/∂t = κΔu(x,t) + ξ(x,t)u(x,t), x ∈ ℤ^d, t ≥ 0, where κ ∈ [0, ∞) is the diffusion constant, Δ is the discrete Laplacian, and ξ is a dynamic random environment that drives the equation. The initial condition u(x,0) = u_0(x), x ∈ ℤ^d, is typically taken to be non-negative and bounded. The solution of the parabolic Anderson equation describes the evolution of a field of particles performing independent simple random walks with binary branching: particles jump at rate 2dκ, split into two at rate ξ ∨ 0, and die at rate (-ξ) ∨ 0. In earlier work we looked at the Lyapunov exponents λ_p(κ) = lim_{t→∞} (1/t) log E([u(0,t)]^p)^{1/p}, p ∈ ℕ, and λ_0(κ) = lim_{t→∞} (1/t) log u(0,t). For the former we derived quantitative results on the κ-dependence for four choices of ξ: space-time white noise, independent simple random walks, the exclusion process and the voter model. For the latter we obtained qualitative results under certain space-time mixing conditions on ξ. In the present paper we investigate what happens when κΔ is replaced by Δ^𝓚, where 𝓚 = {𝓚(x,y) : x, y ∈ ℤ^d, x ~ y} is a collection of random conductances between neighbouring sites replacing the constant conductances κ in the homogeneous model. We show that the associated annealed Lyapunov exponents λ_p(𝓚), p ∈ ℕ, are given by the formula λ_p(𝓚) = sup{λ_p(κ) : κ ∈ Supp(𝓚)}, where, for a fixed realisation of 𝓚, Supp(𝓚) is the set of values taken by the 𝓚-field. We also show that for the associated quenched Lyapunov exponent λ_0(𝓚) this formula only provides a lower bound, and we conjecture that an upper bound holds when Supp(𝓚) is replaced by its convex hull. Our proof is valid for three classes of reversible ξ, and for all 𝓚
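
    The Lyapunov-exponent definitions above can be made concrete with a toy discretization. Below is a minimal sketch, assuming a *static* ±1 random potential in place of the dynamic environments treated in the paper, and a periodic one-dimensional lattice; all parameter values are illustrative only.

```python
import math
import random

def simulate_pam(n=50, kappa=0.5, t_max=5.0, dt=0.01, seed=1):
    """Explicit-Euler scheme for du/dt = kappa * Lap(u) + xi * u on the cycle
    Z/nZ, starting from u(x, 0) = 1.  Here xi is a static +/-1 random
    potential -- a toy stand-in for the dynamic environments in the paper."""
    rng = random.Random(seed)
    xi = [rng.choice([-1.0, 1.0]) for _ in range(n)]
    u = [1.0] * n
    for _ in range(int(t_max / dt)):
        lap = [u[(x - 1) % n] - 2.0 * u[x] + u[(x + 1) % n] for x in range(n)]
        u = [u[x] + dt * (kappa * lap[x] + xi[x] * u[x]) for x in range(n)]
    return u

u = simulate_pam()
# (1/t) * log u(0, t) is the finite-time analogue of the quenched exponent
lyap0_estimate = math.log(u[0]) / 5.0
```

    Sites sitting in a favourable (ξ = +1) region grow roughly exponentially, which is the intermittency effect the Lyapunov exponents quantify.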

  20. Synchronization in the random-field Kuramoto model on complex networks.

    PubMed

    Lopes, M A; Lopes, E M; Yoon, S; Mendes, J F F; Goltsev, A V

    2016-07-01

    We study the impact of random pinning fields on the emergence of synchrony in the Kuramoto model on complete graphs and uncorrelated random complex networks. We consider random fields with uniformly distributed directions and homogeneous and heterogeneous (Gaussian) field magnitude distribution. In our analysis, we apply the Ott-Antonsen method and the annealed-network approximation to find the critical behavior of the order parameter. In the case of homogeneous fields, we find a tricritical point above which a second-order phase transition gives place to a first-order phase transition when the network is either fully connected or scale-free with the degree exponent γ>5. Interestingly, for scale-free networks with 2<γ≤5, the phase transition is of second-order at any field magnitude, except for degree distributions with γ=3 when the transition is of infinite order at K_{c}=0 independent of the random fields. Contrary to the Ising model, even strong Gaussian random fields do not suppress the second-order phase transition in both complete graphs and scale-free networks, although the fields increase the critical coupling for γ>3. Our simulations support these analytical results. PMID:27575149

  1. Synchronization in the random-field Kuramoto model on complex networks

    NASA Astrophysics Data System (ADS)

    Lopes, M. A.; Lopes, E. M.; Yoon, S.; Mendes, J. F. F.; Goltsev, A. V.

    2016-07-01

    We study the impact of random pinning fields on the emergence of synchrony in the Kuramoto model on complete graphs and uncorrelated random complex networks. We consider random fields with uniformly distributed directions and homogeneous and heterogeneous (Gaussian) field magnitude distribution. In our analysis, we apply the Ott-Antonsen method and the annealed-network approximation to find the critical behavior of the order parameter. In the case of homogeneous fields, we find a tricritical point above which a second-order phase transition gives place to a first-order phase transition when the network is either fully connected or scale-free with the degree exponent γ > 5. Interestingly, for scale-free networks with 2 < γ ≤ 5, the phase transition is of second order at any field magnitude, except for degree distributions with γ = 3 when the transition is of infinite order at K_c = 0 independent of the random fields. Contrary to the Ising model, even strong Gaussian random fields do not suppress the second-order phase transition in both complete graphs and scale-free networks, although the fields increase the critical coupling for γ > 3. Our simulations support these analytical results.
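
    The complete-graph case described in this abstract is easy to simulate directly. A minimal sketch of the Kuramoto model with homogeneous-magnitude random pinning fields on a fully connected network follows; the coupling values and field magnitude are illustrative, not taken from the paper.

```python
import cmath
import math
import random

def kuramoto_order(K, h, n=200, t_max=20.0, dt=0.05, seed=2):
    """Euler-integrate the Kuramoto model on a complete graph with pinning
    fields of common magnitude h and uniformly random directions phi_i, then
    return the order parameter r = |<exp(i*theta)>| at the final time."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(n)]            # natural frequencies
    phi = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]  # field directions
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(int(t_max / dt)):
        z = sum(cmath.exp(1j * t) for t in theta) / n          # mean field r*e^{i*psi}
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (omega[i] + K * r * math.sin(psi - t)
                           + h * math.sin(phi[i] - t))
                 for i, t in enumerate(theta)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

r_weak = kuramoto_order(K=0.5, h=0.2)    # below the critical coupling
r_strong = kuramoto_order(K=4.0, h=0.2)  # well above it
```

    The mean-field reduction (K/N)Σ sin(θ_j − θ_i) = K r sin(ψ − θ_i) keeps the update O(n) per step.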

  2. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1988-08-01

    Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
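
    The resampling procedure referred to above is the standard nonparametric bootstrap. A generic sketch follows, with the sample mean standing in for the geostatistical functionals discussed in the abstract (the intrinsic-random-function machinery itself is beyond a few lines).

```python
import random
import statistics

def bootstrap_se(data, estimator, n_boot=2000, seed=0):
    """Nonparametric bootstrap: resample the data with replacement and return
    the standard deviation of the estimator across resamples."""
    rng = random.Random(seed)
    n = len(data)
    values = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        values.append(estimator(resample))
    return statistics.stdev(values)

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(100)]
se = bootstrap_se(data, statistics.mean)   # should be near 1/sqrt(100) = 0.1
```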

  3. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1987-01-01

    The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the ''bootstrap'' in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their ''kriging variance,'' provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.

  4. What is a complex graph?

    NASA Astrophysics Data System (ADS)

    Kim, Jongkwang; Wilhelm, Thomas

    2008-04-01

    Many papers published in recent years show that real-world graphs G(n,m) ( n nodes, m edges) are more or less “complex” in the sense that different topological features deviate from random graphs. Here we narrow the definition of graph complexity and argue that a complex graph contains many different subgraphs. We present different measures that quantify this complexity, for instance C1e, the relative number of non-isomorphic one-edge-deleted subgraphs (i.e. DECK size). However, because these different subgraph measures are computationally demanding, we also study simpler complexity measures focussing on slightly different aspects of graph complexity. We consider heuristically defined “product measures”, the products of two quantities which are zero in the extreme cases of a path and clique, and “entropy measures” quantifying the diversity of different topological features. The previously defined network/graph complexity measures Medium Articulation and Offdiagonal complexity ( OdC) belong to these two classes. We study OdC measures in some detail and compare them with our new measures. For all measures, the most complex graph G has a medium number of edges, between the edge numbers of the minimum connected graph (m = n-1) and the maximum connected graph (m = n(n-1)/2). All graph complexity measures are characterized with the help of different example graphs. For all measures the corresponding time complexity is given. Finally, we discuss the complexity of 33 real-world graphs of different biological, social and economic systems with the six computationally most simple measures (including OdC). The complexities of the real graphs are compared with average complexities of two different random graph versions: complete random graphs (just fixed n,m) and rewired graphs with fixed node degrees.
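
    The C1e measure counts non-isomorphic one-edge-deleted subgraphs, which (as the abstract notes) is only feasible by brute force for small graphs. A sketch of the unnormalized count follows; the paper's measure is the relative number, and the isomorphism test here is a naive all-permutations check.

```python
from itertools import permutations

def canonical(n, edges):
    """Canonical form of a small labelled graph: the lexicographically
    smallest sorted edge set over all vertex relabellings (brute force)."""
    es = [tuple(sorted(e)) for e in edges]
    best = None
    for perm in permutations(range(n)):
        relab = tuple(sorted(tuple(sorted((perm[u], perm[v]))) for u, v in es))
        if best is None or relab < best:
            best = relab
    return best

def deck_size(n, edges):
    """Number of non-isomorphic one-edge-deleted subgraphs."""
    forms = {canonical(n, [e for e in edges if e != drop]) for drop in edges}
    return len(forms)

path = [(0, 1), (1, 2), (2, 3)]           # deletions give 2 distinct subgraphs
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]  # every deletion gives the same path
```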

  5. Mathematic Modeling of Complex Hydraulic Machinery Systems When Evaluating Reliability Using Graph Theory

    NASA Astrophysics Data System (ADS)

    Zemenkova, M. Yu; Shipovalov, A. N.; Zemenkov, Yu D.

    2016-04-01

    Hydraulic machines are the main technological equipment in pipeline transport of hydrocarbons. Oil transportation mainly relies on centrifugal pumps designed to work in the “pumping station-pipeline” system. A standard pumping station consists of several pumps and complex hydraulic piping. The authors have developed a set of models and algorithms, based on reliability theory, for calculating the system reliability of pumps. As an example, one of the estimation methods, applying graph theory, is considered.

  6. Phase-locked patterns of the Kuramoto model on 3-regular graphs

    NASA Astrophysics Data System (ADS)

    DeVille, Lee; Ermentrout, Bard

    2016-09-01

    We consider the existence of non-synchronized fixed points to the Kuramoto model defined on sparse networks: specifically, networks where each vertex has degree exactly three. We show that "most" such networks support multiple attracting phase-locked solutions that are not synchronized and study the depth and width of the basins of attraction of these phase-locked solutions. We also show that it is common in "large enough" graphs to find phase-locked solutions where one or more of the links have angle difference greater than π/2.

  7. Reaction spreading on graphs

    NASA Astrophysics Data System (ADS)

    Burioni, Raffaella; Chibbaro, Sergio; Vergni, Davide; Vulpiani, Angelo

    2012-11-01

    We study reaction-diffusion processes on graphs through an extension of the standard reaction-diffusion equation starting from first principles. We focus on reaction spreading, i.e., on the time evolution of the reaction product M(t). At variance with pure diffusive processes, characterized by the spectral dimension d_s, the important quantity for reaction spreading is found to be the connectivity dimension d_l. Numerical data, in agreement with analytical estimates based on the features of n independent random walkers on the graph, show that M(t) ~ t^{d_l}. In the case of Erdős-Rényi random graphs, the reaction product is characterized by an exponential growth M(t) ~ e^{αt} with α proportional to ln⟨k⟩, where ⟨k⟩ is the average degree of the graph.
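
    The exponential regime on Erdős-Rényi graphs can be illustrated with a toy "fast reaction" caricature in which the product invades all neighbours of reacted nodes at each step; the early-time growth rate then tracks ln⟨k⟩. This is a sketch of the spreading geometry only, not the paper's full reaction-diffusion dynamics.

```python
import math
import random

def erdos_renyi(n, p, seed=3):
    """Generate an Erdos-Renyi random graph G(n, p) as an adjacency list."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def reacted_counts(adj, source, steps):
    """M(t): cumulative number of 'reacted' nodes after t synchronous steps,
    where the reaction converts every neighbour of a reacted node (BFS shells)."""
    reacted, frontier = {source}, {source}
    counts = [1]
    for _ in range(steps):
        frontier = {v for u in frontier for v in adj[u]} - reacted
        reacted |= frontier
        counts.append(len(reacted))
    return counts

n, mean_deg = 2000, 8.0
adj = erdos_renyi(n, mean_deg / (n - 1))
source = next(u for u in range(n) if adj[u])   # pick a non-isolated node
M = reacted_counts(adj, source, steps=3)
rate = math.log(M[2] / M[1])   # early-time exponential rate, roughly ln<k>
```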

  8. Quantum walks on quotient graphs

    SciTech Connect

    Krovi, Hari; Brun, Todd A.

    2007-06-15

    A discrete-time quantum walk on a graph Γ is the repeated application of a unitary evolution operator to a Hilbert space corresponding to the graph. If this unitary evolution operator has an associated group of symmetries, then for certain initial states the walk will be confined to a subspace of the original Hilbert space. Symmetries of the original graph, given by its automorphism group, can be inherited by the evolution operator. We show that a quantum walk confined to the subspace corresponding to this symmetry group can be seen as a different quantum walk on a smaller quotient graph. We give an explicit construction of the quotient graph for any subgroup H of the automorphism group and illustrate it with examples. The automorphisms of the quotient graph which are inherited from the original graph are the original automorphism group modulo the subgroup H used to construct it. The quotient graph is constructed by removing the symmetries of the subgroup H from the original graph. We then analyze the behavior of hitting times on quotient graphs. Hitting time is the average time it takes a walk to reach a given final vertex from a given initial vertex. It has been shown in earlier work [Phys. Rev. A 74, 042334 (2006)] that the hitting time for certain initial states of a quantum walk can be infinite, in contrast to classical random walks. We give a condition which determines whether the quotient graph has infinite hitting times given that they exist in the original graph. We apply this condition for the examples discussed and determine which quotient graphs have infinite hitting times. All known examples of quantum walks with hitting times which are short compared to classical random walks correspond to systems with quotient graphs much smaller than the original graph; we conjecture that the existence of a small quotient graph with finite hitting times is necessary for a walk to exhibit a quantum speedup.

  9. Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids.

    PubMed

    José, Marco V; Morgado, Eberto R; Guimarães, Romeu Cardoso; Zamudio, Gabriel S; de Farías, Sávio Torres; Bobadilla, Juan R; Sosa, Daniela

    2014-01-01

    Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state. PMID:25370377

  10. Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids

    PubMed Central

    José, Marco V.; Morgado, Eberto R.; Guimarães, Romeu Cardoso; Zamudio, Gabriel S.; de Farías, Sávio Torres; Bobadilla, Juan R.; Sosa, Daniela

    2014-01-01

    Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state. PMID:25370377

  11. Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids.

    PubMed

    José, Marco V; Morgado, Eberto R; Guimarães, Romeu Cardoso; Zamudio, Gabriel S; de Farías, Sávio Torres; Bobadilla, Juan R; Sosa, Daniela

    2014-08-11

    Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state.

  12. Spectral fluctuations of quantum graphs

    NASA Astrophysics Data System (ADS)

    Pluhař, Z.; Weidenmüller, H. A.

    2014-10-01

    We prove the Bohigas-Giannoni-Schmit conjecture in its most general form for completely connected simple graphs with incommensurate bond lengths. We show that for graphs that are classically mixing (i.e., graphs for which the spectrum of the classical Perron-Frobenius operator possesses a finite gap), the generating functions for all (P,Q) correlation functions for both closed and open graphs coincide (in the limit of infinite graph size) with the corresponding expressions of random-matrix theory, both for orthogonal and for unitary symmetry.

  13. Spectral fluctuations of quantum graphs

    SciTech Connect

    Pluhař, Z.; Weidenmüller, H. A.

    2014-10-15

    We prove the Bohigas-Giannoni-Schmit conjecture in its most general form for completely connected simple graphs with incommensurate bond lengths. We show that for graphs that are classically mixing (i.e., graphs for which the spectrum of the classical Perron-Frobenius operator possesses a finite gap), the generating functions for all (P,Q) correlation functions for both closed and open graphs coincide (in the limit of infinite graph size) with the corresponding expressions of random-matrix theory, both for orthogonal and for unitary symmetry.

  14. On the mixing time of geographical threshold graphs

    SciTech Connect

    Bradonjic, Milan

    2009-01-01

    In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
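
    A GTG instance is straightforward to sample. The sketch below assumes the common multiplicative interaction form (w_i + w_j)·d(i,j)^(-α) ≥ θ with exponential weights; other threshold functions appear in the literature, and the parameter values here are illustrative only.

```python
import math
import random

def gtg(n=300, theta=50.0, alpha=2.0, seed=4):
    """Sample a geographical threshold graph: n nodes uniform in the unit
    square with Exp(1) weights w_i; connect i and j whenever
    (w_i + w_j) * d(i, j)**(-alpha) >= theta."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    w = [rng.expovariate(1.0) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if d > 0.0 and (w[i] + w[j]) * d ** (-alpha) >= theta:
                adj[i].append(j)
                adj[j].append(i)
    return adj

adj = gtg()
mean_degree = sum(len(nbrs) for nbrs in adj) / len(adj)
```

    Raising θ sparsifies the graph toward the connectivity threshold the abstract studies; heavier-tailed weight distributions produce hubs that plain RGGs lack.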

  15. The effects of node exclusion on the centrality measures in graph models of interacting economic agents

    NASA Astrophysics Data System (ADS)

    Caetano, Marco Antonio Leonel; Yoneyama, Takashi

    2015-07-01

    This work concerns the study of the effects felt by a network as a whole when a specific node is perturbed. Many real-world systems can be described by network models in which the interactions of the various agents can be represented as an edge of a graph. With a graph model in hand, it is possible to evaluate the effect of deleting some of its edges on the architecture and values of nodes of the network. Eventually a node may end up isolated from the rest of the network, and an interesting problem is to have a quantitative measure of the impact of such an event. For instance, in the field of finance, network models are very popular and the proposed methodology allows one to carry out "what if" tests in terms of weakening the links between the economic agents, represented as nodes. The two main concepts employed in the proposed methodology are (i) the vibrational IC-Information Centrality, which can provide a measure of the relative importance of a particular node in a network, and (ii) autocatalytic networks, which can indicate the evolutionary trends of the network. Although these concepts were originally proposed in the context of other fields of knowledge, they were also found to be useful in analyzing financial networks. In order to illustrate the applicability of the proposed methodology, a case study using actual data comprising stock market indices of 12 countries is presented.

  16. Two-Stage Modelling Of Random Phenomena

    NASA Astrophysics Data System (ADS)

    Barańska, Anna

    2015-12-01

    The main objective of this publication is to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, illustrated by modelling the real estate market for valuation purposes and by estimating model parameters of the vertical displacements of foundations. The first stage of the algorithm is the selection of a suitable form of the function model. In classical algorithms based on function modelling, the prediction of the dependent variable is the value obtained directly from the model; the better the model reflects the relationship between the independent variables and their effect on the dependent variable, the more reliable that value is. The proposed algorithm adjusts the value obtained from the model with a random correction determined from the model residuals of those cases which, in a separate analysis, were found most similar to the object for which the dependent variable is modelled. The effect of the developed quantitative procedures for calculating the corrections, and of the qualitative methods for assessing similarity, on the final prediction and its accuracy was examined by statistical methods, mainly appropriate parametric tests of significance. The algorithm is designed to bring the modelled value of the dependent variable close to its real value while, at the same time, having it "smoothed out" by a well-fitted modelling function.
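
    The two-stage idea (a model value plus a residual-based correction from similar cases) can be sketched with a one-variable regression. Similarity here is a naive nearest-in-x rule, a hypothetical stand-in for the qualitative similarity assessment the abstract describes; the data are made up for illustration.

```python
def ols_fit(xs, ys):
    """Simple-regression least squares: returns intercept a and slope b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def two_stage_predict(xs, ys, x_new, k=3):
    """Stage 1: value from the fitted function.  Stage 2: add the mean
    residual of the k training cases most similar to x_new."""
    a, b = ols_fit(xs, ys)
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x_new))[:k]
    correction = sum(residuals[i] for i in nearest) / k
    return a + b * x_new + correction

xs = [1, 2, 3, 4, 5, 6]
ys = [1.1, 2.3, 2.8, 4.2, 4.9, 6.3]   # roughly y = x with noise
pred = two_stage_predict(xs, ys, 3.5)
```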

  17. A combined crystal plasticity and graph-based vertex model of dynamic recrystallization at large deformations

    NASA Astrophysics Data System (ADS)

    Mellbin, Y.; Hallberg, H.; Ristinmaa, M.

    2015-06-01

    A mesoscale model of microstructure evolution is formulated in the present work by combining a crystal plasticity model with a graph-based vertex algorithm. This provides a versatile formulation capable of capturing finite-strain deformations, development of texture and microstructure evolution through recrystallization. The crystal plasticity model is employed in a finite element setting and allows tracing of stored energy build-up in the polycrystal microstructure and concurrent reorientation of the crystal lattices in the grains. This influences the progression of recrystallization, as nucleation occurs at sites with sufficient stored energy and since the grain boundary mobility and energy are allowed to vary with crystallographic misorientation across the boundaries. The proposed graph-based vertex model describes the topological changes to the grain microstructure and keeps track of the grain inter-connectivity. Through homogenization, the macroscopic material response is also obtained. By the proposed modeling approach, grain structure evolution at large deformations as well as texture development are captured. This is in contrast to most other models of recrystallization, which are usually limited by assumptions on one or the other of these factors. In simulation examples, the model is shown in the present study to capture the salient features of dynamic recrystallization, including the effects of varying initial grain size and strain rate on the transitions between single-peak and multiple-peak oscillating flow stress behavior. Also the development of recrystallization texture and the influence of different assumptions on the orientation of recrystallization nuclei are investigated. Further, recrystallization kinetics are discussed and compared to classical JMAK theory. To promote computational efficiency, the polycrystal plasticity algorithm is parallelized through a GPU implementation that was recently proposed by the authors.

  18. Mining and Indexing Graph Databases

    ERIC Educational Resources Information Center

    Yuan, Dayu

    2013-01-01

    Graphs are widely used to model structures and relationships of objects in various scientific and commercial fields. Chemical molecules, proteins, malware system-call dependencies and three-dimensional mechanical parts are all modeled as graphs. In this dissertation, we propose to mine and index those graph data to enable fast and scalable search.…

  19. Random diffusion model with structure corrections

    NASA Astrophysics Data System (ADS)

    McCowan, David D.; Mazenko, Gene F.

    2010-05-01

    The random diffusion model is a continuum model for a conserved scalar density field ϕ driven by diffusive dynamics where the bare diffusion coefficient is density dependent. We generalize the model from one with a sharp wave-number cutoff to one with a more natural large wave-number cutoff. We investigate whether the features seen previously—namely, a slowing down of the system and the development of a prepeak in the dynamic structure factor at a wave number below the first structure peak—survive in this model. A method for extracting information about a hidden prepeak in experimental data is presented.

  20. Using Specialized Graph Paper.

    ERIC Educational Resources Information Center

    James, C.

    1988-01-01

    Discusses the use of logarithm and reciprocal graphs in the college physics classroom. Provides examples, such as electrical conductivity, reliability function in the Weibull model, and the Clausius-Clapeyron equation for latent heat of vaporization. Shows graphs with weighting of points. (YP)

  1. Conformational transitions in random heteropolymer models

    NASA Astrophysics Data System (ADS)

    Blavatska, Viktoria; Janke, Wolfhard

    2014-01-01

    We study the conformational properties of heteropolymers containing two types of monomers A and B, modeled as self-attracting self-avoiding random walks on a regular lattice. Such a model can describe in particular the sequences of hydrophobic and hydrophilic residues in proteins [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] and polyampholytes with oppositely charged groups [Y. Kantor and M. Kardar, Europhys. Lett. 28, 169 (1994)]. Treating the sequences of the two types of monomers as quenched random variables, we provide a systematic analysis of possible generalizations of this model. To this end we apply the pruned-enriched Rosenbluth chain-growth algorithm, which allows us to obtain the phase diagrams of extended and compact states coexistence as a function of both the temperature and the fraction of A and B monomers along the heteropolymer chain.
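
    The chain-growth backbone of the pruned-enriched Rosenbluth method can be sketched without the pruning/enrichment steps or the AB interaction energies (both omitted here for brevity): grow a self-avoiding walk one lattice step at a time, choosing uniformly among unoccupied neighbours.

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_saw(length, rng):
    """Chain growth of a self-avoiding walk on the square lattice: at each
    step choose uniformly among unoccupied neighbours; return None if the
    walk traps itself (attrition)."""
    walk = [(0, 0)]
    occupied = {(0, 0)}
    for _ in range(length):
        x, y = walk[-1]
        free = [(x + dx, y + dy) for dx, dy in MOVES
                if (x + dx, y + dy) not in occupied]
        if not free:
            return None
        step = rng.choice(free)
        walk.append(step)
        occupied.add(step)
    return walk

rng = random.Random(7)
walks = [w for w in (grow_saw(30, rng) for _ in range(200)) if w is not None]
r2_mean = sum((w[-1][0] - w[0][0]) ** 2 + (w[-1][1] - w[0][1]) ** 2
              for w in walks) / len(walks)   # mean squared end-to-end distance
```

    A quenched AB sequence would enter through an energy assigned to contacts between occupied sites, reweighting the grown chains as in the full PERM algorithm.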

  2. Combining computational models, semantic annotations and simulation experiments in a graph database

    PubMed Central

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863

  3. The infinite hidden Markov random field model.

    PubMed

    Chatzis, Sotirios P; Tsechpenakis, Gabriel

    2010-06-01

    Hidden Markov random field (HMRF) models are widely used for image segmentation, as they appear naturally in problems where a spatially constrained clustering scheme is asked for. A major limitation of HMRF models concerns the automatic selection of the proper number of their states, i.e., the number of region clusters derived by the image segmentation procedure. Existing methods, including likelihood- or entropy-based criteria, and reversible Markov chain Monte Carlo methods, usually tend to yield noisy model size estimates while imposing heavy computational requirements. Recently, Dirichlet process (DP, infinite) mixture models have emerged as a cornerstone of nonparametric Bayesian statistics and as promising candidates for clustering applications where the number of clusters is unknown a priori; infinite mixture models based on the original DP or spatially constrained variants of it have been applied in unsupervised image segmentation applications showing promising results. Under this motivation, to resolve the aforementioned issues of HMRF models, in this paper, we introduce a nonparametric Bayesian formulation for the HMRF model, the infinite HMRF model, formulated on the basis of a joint Dirichlet process mixture (DPM) and Markov random field (MRF) construction. We derive an efficient variational Bayesian inference algorithm for the proposed model, and we experimentally demonstrate its advantages over competing methodologies.
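
    The Dirichlet-process clustering prior underlying such infinite mixtures can be illustrated by the Chinese restaurant process, in which the number of clusters is not fixed in advance. This sketches only the DP mixture ingredient, not the MRF coupling or the variational inference of the paper.

```python
import random

def crp(n, alpha, seed=5):
    """Chinese restaurant process: seat n customers sequentially; customer i
    starts a new table with probability alpha / (i + alpha), otherwise joins
    an existing table with probability proportional to its size.  Returns
    the table (cluster) sizes."""
    rng = random.Random(seed)
    tables = []
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for t, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[t] += 1
                break
        else:
            tables.append(1)   # r fell in the alpha-sized slice: new cluster
    return tables

sizes = crp(500, alpha=2.0)    # typically around alpha * log(n) clusters
```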

  4. Chain Graph Models to Elicit the Structure of a Bayesian Network

    PubMed Central

    Stefanini, Federico M.

    2014-01-01

    Bayesian networks are possibly the most successful graphical models to build decision support systems. Building the structure of large networks is still a challenging task, but Bayesian methods are particularly suited to exploit experts' degree of belief in a quantitative way while learning the network structure from data. In this paper details are provided about how to build a prior distribution on the space of network structures by eliciting a chain graph model on structural reference features. Several structural features expected to be often useful during the elicitation are described. The statistical background needed to effectively use this approach is summarized, and some potential pitfalls are illustrated. Finally, a few seminal contributions from the literature are reformulated in terms of structural features. PMID:24688427

  5. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results with different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results with different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533
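    The two headline quantities in these records are related by simple arithmetic: SFER is observed errors per unit fluence, and MTTF follows from SFER and the expected particle flux. A sketch with hypothetical beam-test numbers (the papers' raw counts and on-orbit fluxes are not given here):

```python
def system_functional_error_rate(error_count, fluence):
    """SFER in errors per (particle/cm^2): observed errors over total fluence."""
    return error_count / fluence

def mean_time_to_failure(sfer, flux):
    """MTTF in hours, given a particle flux in particles/cm^2 per hour."""
    return 1.0 / (sfer * flux)

# Hypothetical beam-test numbers, not the papers' measured data.
sfer = system_functional_error_rate(error_count=42, fluence=4.2e4)
mttf = mean_time_to_failure(sfer, flux=9.0)
```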

  7. Modeling stereopsis via Markov random field.

    PubMed

    Ming, Yansheng; Hu, Zhanyi

    2010-08-01

    Markov random field (MRF) and belief propagation have given birth to stereo vision algorithms with top performance. This article explores their biological plausibility. First, an MRF model guided by physiological and psychophysical facts was designed. Typically an MRF-based stereo vision algorithm employs a likelihood function that reflects the local similarity of two regions and a potential function that models the continuity constraint. In our model, the likelihood function is constructed on the basis of the disparity energy model because complex cells are considered as front-end disparity encoders in the visual pathway. Our likelihood function is also relevant to several psychological findings. The potential function in our model is constrained by the psychological finding that the strength of the cooperative interaction minimizing relative disparity decreases as the separation between stimuli increases. Our model is tested on three kinds of stereo images. In simulations on images with repetitive patterns, we demonstrate that our model could account for the human depth percepts that were previously explained by the second-order mechanism. In simulations on random dot stereograms and natural scene images, we demonstrate that false matches introduced by the disparity energy model can be reliably removed using our model. A comparison with the coarse-to-fine model shows that our model is able to compute the absolute disparity of small objects with larger relative disparity. We also relate our model to several physiological findings. The hypothesized neurons of the model are selective for absolute disparity and have facilitative extra-receptive fields; such neurons are plentiful in the visual cortex. In conclusion, we think that stereopsis can be implemented by neural networks resembling an MRF.

  8. Optimized Graph Search Using Multi-Level Graph Clustering

    NASA Astrophysics Data System (ADS)

    Kala, Rahul; Shukla, Anupam; Tiwari, Ritu

    Graphs find a variety of uses in numerous domains, especially because of their capability to model common problems. The social networking graphs used for social network analysis, a feature offered by various social networking sites, are one example. Graphs can also be used by search engines to carry out search operations and provide results. Various searching algorithms have been developed for searching in graphs. In this paper we propose that the entire network graph be clustered. The larger graphs are clustered to make smaller graphs, which can again be clustered to further reduce the size of the graph. The search is performed on the smallest graph to identify the general path, which may then be built up to actual nodes by working on the individual clusters involved. Since many searches are carried out on the same graph, clustering may be done once and the data reused for multiple searches over time. Only if the graph changes considerably do we re-cluster it.
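    The two-level idea can be sketched as: search the coarse graph of clusters for the general path, then refine within the clusters on that path. A minimal BFS-based sketch, assuming the cluster assignment is given (the paper's clustering algorithm is not reproduced):

```python
from collections import deque

def bfs_path(adj, start, goal, allowed=None):
    """Shortest path by BFS; `allowed` optionally restricts the node set."""
    if allowed is not None and (start not in allowed or goal not in allowed):
        return None
    prev, queue = {start: None}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in prev and (allowed is None or nxt in allowed):
                prev[nxt] = node
                queue.append(nxt)
    return None

def clustered_search(adj, cluster_of, start, goal):
    """Find the general path on the coarse cluster graph, then refine inside it."""
    coarse = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            if cluster_of[u] != cluster_of[v]:
                coarse.setdefault(cluster_of[u], set()).add(cluster_of[v])
    general = bfs_path(coarse, cluster_of[start], cluster_of[goal])
    if general is None:
        return None
    allowed = {n for n in adj if cluster_of[n] in set(general)}
    return bfs_path(adj, start, goal, allowed)

# Toy graph: cluster A = {1, 2}, cluster B = {3, 4}.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
cluster_of = {1: "A", 2: "A", 3: "B", 4: "B"}
route = clustered_search(adj, cluster_of, 1, 4)
```

    The coarse BFS prunes the refinement search to the clusters on the general path, which is where the speedup over naive whole-graph search comes from.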

  9. Graph ensemble boosting for imbalanced noisy graph stream classification.

    PubMed

    Pan, Shirui; Wu, Jia; Zhu, Xingquan; Zhang, Chengqi

    2015-05-01

    Many applications involve stream data with structural dependency, graph representations, and continuously increasing volumes. For these applications, it is very common that their class distributions are imbalanced, with minority (or positive) samples being only a small portion of the population, which imposes significant challenges for learning models to accurately identify minority samples. This problem is further complicated by the presence of noise, because noisy samples are similar to minority samples and any treatment for the class imbalance may falsely focus on the noise and result in deterioration of accuracy. In this paper, we propose a classification model to tackle imbalanced graph streams with noise. Our method, graph ensemble boosting, employs an ensemble-based framework to partition the graph stream into chunks, each containing a number of noisy graphs with imbalanced class distributions. For each individual chunk, we propose a boosting algorithm to combine discriminative subgraph pattern selection and model learning as a unified framework for graph classification. To tackle concept drifting in graph streams, an instance-level weighting mechanism is used to dynamically adjust the instance weight, through which the boosting framework can emphasize difficult graph samples. The classifiers built from different graph chunks form an ensemble for graph stream classification. Experiments on real-life imbalanced graph streams demonstrate clear benefits of our boosting design for handling imbalanced noisy graph streams.

  10. Temporal Representation in Semantic Graphs

    SciTech Connect

    Levandoski, J J; Abdulla, G M

    2007-08-07

    A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning temporal evolution of concepts and relationships are not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.

  11. Discriminatively Trained And-Or Graph Models for Object Shape Detection.

    PubMed

    Lin, Liang; Wang, Xiaolong; Yang, Wei; Lai, Jian-Huang

    2015-05-01

    In this paper, we investigate a novel reconfigurable part-based model, namely the And-Or graph model, to recognize object shapes in images. Our proposed model consists of four layers: leaf-nodes at the bottom are local classifiers for detecting contour fragments; or-nodes above the leaf-nodes function as switches to activate their child leaf-nodes, making the model reconfigurable during inference; and-nodes in a higher layer capture holistic shape deformations; and one root-node on the top, which is also an or-node, activates one of its child and-nodes to deal with large global variations (e.g. different poses and views). We propose a novel structural optimization algorithm to discriminatively train the And-Or model from weakly annotated data. This algorithm iteratively determines the model structures (e.g. the nodes and their layouts) along with the parameter learning. On several challenging datasets, our model demonstrates its effectiveness in performing robust shape-based object detection against background clutter and outperforms other state-of-the-art approaches. We also release a new shape database with annotations, which includes more than 1500 challenging shape instances, for recognition and detection. PMID:26353321

  12. Directed acyclic graph-based technology mapping of genetic circuit models.

    PubMed

    Roehner, Nicholas; Myers, Chris J

    2014-08-15

    As engineering foundations such as standards and abstraction begin to mature within synthetic biology, it is vital that genetic design automation (GDA) tools be developed to enable synthetic biologists to automatically select standardized DNA components from a library to meet the behavioral specification for a genetic circuit. To this end, we have developed a genetic technology mapping algorithm that builds on the directed acyclic graph (DAG) based mapping techniques originally used to select parts for digital electronic circuit designs, and implemented it in our GDA tool, iBioSim. It is among the first genetic technology mapping algorithms to adapt techniques from electronic circuit design, in particular the use of a cost function to guide the search for an optimal solution, and is perhaps the one that makes the greatest use of standards for describing genetic function and structure to represent design specifications and component libraries. This paper demonstrates the use of our algorithm to map the specifications for three different genetic circuits against four randomly generated libraries of increasing size to evaluate its performance against both exhaustive search and greedy variants for finding optimal and near-optimal solutions.
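    The cost-guided covering idea borrowed from electronic design automation can be sketched on a toy expression tree. The library, gate names, and costs below are hypothetical stand-ins, not iBioSim's genetic part library:

```python
from functools import lru_cache

# Hypothetical part library: operator -> candidate gates with costs.
LIBRARY = {
    "NOT": [("inverter", 1.0)],
    "AND": [("and2", 3.0), ("nand2_inv", 2.5)],
    "OR":  [("or2", 3.0)],
}

@lru_cache(maxsize=None)
def map_cost(node):
    """Minimum-cost cover: each operator node picks its cheapest library gate;
    memoization means shared (DAG) subcircuits are costed only once."""
    if isinstance(node, str):            # primary input, free
        return 0.0
    op, *children = node
    gate_cost = min(cost for _gate, cost in LIBRARY[op])
    return gate_cost + sum(map_cost(c) for c in children)

circuit = ("OR", ("AND", "a", "b"), ("NOT", "c"))
total = map_cost(circuit)
```

    A real mapper matches multi-node library patterns against the DAG rather than one gate per operator, but the cost-function-driven search structure is the same.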

  13. Campaign graphs

    SciTech Connect

    Simmons, G.J.

    1988-01-01

    We define a class of geometrical constructions in the plane in which each (unextended) line lies on (precisely) k points, and every point is an endpoint of (precisely) one line. We will refer to any construction satisfying these conditions as a campaign graph, or as a k-campaign graph if the value of k isn't clear from the context. A k-campaign graph, G, is said to be critical if no subgraph of G is also a k-campaign graph. 11 figs.

  14. Variable effort fishing models in random environments.

    PubMed

    Braumann, C A

    1999-03-01

    We study the growth of populations in a random environment subjected to variable effort fishing policies. The models used are stochastic differential equations and the environmental fluctuations may either affect an intrinsic growth parameter or be of the additive noise type. Density-dependent natural growth and fishing policies are of very general form so that our results will be model independent. We obtain conditions on the fishing policies for non-extinction and for non-fixation at the carrying capacity that are very similar to the conditions obtained for the corresponding deterministic model. We also obtain conditions for the existence of stationary distributions (as well as expressions for such distributions) very similar to conditions for the existence of an equilibrium in the corresponding deterministic model. The results obtained provide minimal requirements for the choice of a wise density-dependent fishing policy.
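    A model of this family can be simulated with the Euler-Maruyama scheme. Below is a sketch for a logistic growth SDE with catch rate qEN and multiplicative environmental noise; the parameter values are illustrative, not taken from the paper:

```python
import math
import random

def simulate_harvested_logistic(n0, r, K, q, effort, sigma, dt, steps, rng):
    """Euler-Maruyama path of dN = [r N (1 - N/K) - q E N] dt + sigma N dW."""
    n, path = n0, [n0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        drift = r * n * (1.0 - n / K) - q * effort * n
        n = max(n + drift * dt + sigma * n * dw, 0.0)  # keep population nonnegative
        path.append(n)
    return path

rng = random.Random(1)
path = simulate_harvested_logistic(
    n0=0.5, r=1.0, K=1.0, q=0.2, effort=1.0, sigma=0.1,
    dt=0.01, steps=1000, rng=rng)
```

    With these illustrative values the deterministic equilibrium is N* = K(1 - qE/r) = 0.8, so the noisy path should fluctuate around that level rather than drift to extinction or fixation at K.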

  15. Global dynamic modeling of electro-hydraulic 3-UPS/S parallel stabilized platform by bond graph

    NASA Astrophysics Data System (ADS)

    Zhang, Lijie; Guo, Fei; Li, Yongquan; Lu, Wenjuan

    2016-08-01

    Dynamic modeling of a parallel manipulator (PM) is an important issue. A complete PM system is actually composed of multiple physical domains, and as PMs are widely used in various fields, modeling the global dynamics of the PM system becomes increasingly important. Global dynamic modeling, however, has so far received little attention. A unified modeling approach for the multi-energy-domain PM system is proposed based on bond graphs, and a global dynamic model of the 3-UPS/S parallel stabilized platform involving mechanical and electro-hydraulic elements is built. Firstly, the screw bond graph theory is improved based on the screw theory, the modular joint model is modeled, and the normalized dynamic model of the mechanism is established. Secondly, combined with the electro-hydraulic servo system model built by traditional bond graph, the global dynamic model of the system is obtained, from which the motion, force and power of any element can be obtained directly. Lastly, experiments and simulations of the driving forces, pressure and flow are performed, and the results show that the theoretical driving forces are in accord with the experimental ones, and the pressure and flow of the first limb and the third limb are symmetric with each other. The results are reasonable and verify the correctness and effectiveness of the model and the method. The proposed dynamic modeling method provides a reference for modeling other multi-energy-domain systems that contain complex PMs.

  16. Agent-based simulation of building evacuation using a grid graph-based model

    NASA Astrophysics Data System (ADS)

    Tan, L.; Lin, H.; Hu, M.; Che, W.

    2014-02-01

    Shifting from macroscopic to microscopic models, the agent-based approach has been widely used to model crowd evacuation as more attention is paid to individualized behaviour. Since indoor evacuation behaviour is closely related to the spatial features of the building, effective representation of indoor space is essential for the simulation of building evacuation. The traditional cell-based representation has limitations in reflecting spatial structure and is not suitable for topology analysis. Aiming at incorporating the powerful topology analysis functions of GIS to facilitate agent-based simulation of building evacuation, we used a grid graph-based model in this study to represent the indoor space. Such a model allows us to establish an evacuation network at a micro level. Potential escape routes from each node can thus be analysed through GIS network analysis functions, considering both the spatial structure and route capacity. This better supports agent-based modelling of evacuees' behaviour, including route choice and local movements. As a case study, we conducted a simulation of emergency evacuation from the second floor of an office building using Agent Analyst as the simulation platform. The results demonstrate the feasibility of the proposed method, as well as the potential of GIS in visualizing and analysing simulation results.
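    Route analysis on such a grid graph reduces to shortest-path search. A minimal BFS sketch on a toy floor plan (the study's GIS-based network analysis also weighs route capacity, which is omitted here):

```python
from collections import deque

def evacuation_route(grid, start, exits):
    """BFS on a grid graph: 0 = walkable cell, 1 = wall. Returns a shortest
    route from `start` to the nearest exit, or None if no exit is reachable."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell in exits:
            route = []
            while cell is not None:
                route.append(cell)
                cell = prev[cell]
            return route[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# Toy floor plan: a wall forces the agent to detour around the right side.
floor = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
route = evacuation_route(floor, start=(0, 0), exits={(2, 0)})
```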

  17. A Systematic Composite Service Design Modeling Method Using Graph-Based Theory

    PubMed Central

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    Composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate its complexity and reusability. This also guides future research on design quality measurement, such as using the ComSDM method to measure the quality of composite service designs in service-oriented software systems. PMID:25928358

  18. A systematic composite service design modeling method using graph-based theory.

    PubMed

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    Composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate its complexity and reusability. This also guides future research on design quality measurement, such as using the ComSDM method to measure the quality of composite service designs in service-oriented software systems.

  19. Model reduction for stochastic CaMKII reaction kinetics in synapses by graph-constrained correlation dynamics

    PubMed Central

    Johnson, Todd; Bartol, Tom; Sejnowski, Terrence; Mjolsness, Eric

    2015-01-01

    A stochastic reaction network model of Ca²⁺ dynamics in synapses (Pepke et al., PLoS Comput. Biol. 6, e1000675) is expressed and simulated using rule-based reaction modeling notation in dynamical grammars and in MCell. The model tracks the response of calmodulin and CaMKII to calcium influx in synapses. Data from numerically intensive simulations are used to train a reduced model that, out of sample, correctly predicts the evolution of interaction parameters characterizing the instantaneous probability distribution over molecular states in the much larger fine-scale models. The novel model reduction method, ‘graph-constrained correlation dynamics’, requires a graph of plausible state variables and interactions as input. It parametrically optimizes a set of constant coefficients appearing in differential equations governing the time-varying interaction parameters that determine all correlations between variables in the reduced model at any time slice. PMID:26086598

  20. Neurally and ocularly informed graph-based models for searching 3D environments

    NASA Astrophysics Data System (ADS)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
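    The semi-supervised step — spreading a few inferred labels through a similarity graph — can be sketched with a simple iterative label-propagation scheme. The graph, seed scores, and clamping rule below are illustrative, not the paper's computer-vision graph model:

```python
def propagate_labels(adj, seeds, iterations=50):
    """Semi-supervised label propagation: seed scores in [0, 1] spread to
    neighbours; seed nodes are clamped to their given score each sweep."""
    score = {n: seeds.get(n, 0.5) for n in adj}
    for _ in range(iterations):
        nxt = {}
        for node, nbrs in adj.items():
            if node in seeds:
                nxt[node] = seeds[node]       # clamp labeled nodes
            else:
                nxt[node] = sum(score[v] for v in nbrs) / len(nbrs)
        score = nxt
    return score

# Toy "visual similarity" graph: object b sits between an object the subject
# implicitly liked (a) and one they did not (c).
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
scores = propagate_labels(adj, seeds={"a": 1.0, "c": 0.0})
```

    Unlabeled nodes end up with scores interpolated from their labeled neighbours, which is the mechanism that lets a handful of implicit labels rank all visually similar, unseen objects.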

  1. Graphing Predictions

    ERIC Educational Resources Information Center

    Connery, Keely Flynn

    2007-01-01

    Graphing predictions is especially important in classes where relationships between variables need to be explored and derived. In this article, the author describes how his students sketch the graphs of their predictions before they begin their investigations on two laboratory activities: Distance Versus Time Cart Race Lab and Resistance; and…

  2. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  3. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
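    The pipeline in both records — transition matrix, steady-state distribution, entropy — can be sketched directly. The 3-site matrix below is hypothetical, not the autopsy-derived data:

```python
import math

def stationary_distribution(P, iterations=2000):
    """Power-iterate a row-stochastic transition matrix to its steady state."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical 3-site progression model (primary, site 1, site 2).
P = [[0.1, 0.6, 0.3],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
pi = stationary_distribution(P)
H = entropy(pi)
```

    A higher entropy of the steady-state distribution means metastatic mass is spread more evenly across sites, which is the sense in which the paper ranks cancers from high- to low-entropy.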

  4. Robust deformable and occluded object tracking with dynamic graph.

    PubMed

    Cai, Zhaowei; Wen, Longyin; Lei, Zhen; Vasconcelos, Nuno; Li, Stan Z

    2014-12-01

    While some effort has been devoted to handling deformation and occlusion in visual tracking, they remain major challenges. In this paper, a dynamic graph-based tracker (DGT) is proposed to address these two challenges in a unified framework. In the dynamic target graph, nodes are the target local parts encoding appearance information, and edges are the interactions between nodes encoding inner geometric structure information. This graph representation provides much more information for tracking in the presence of deformation and occlusion. Target tracking is then formulated as tracking this dynamic undirected graph, which is also a matching problem between the target graph and the candidate graph. The local parts within the candidate graph are separated from the background with a Markov random field, and spectral clustering is used to solve the graph matching. The final target state is determined through a weighted voting procedure according to the reliability of part correspondence, and refined with recourse to a foreground/background segmentation. An effective online updating mechanism is proposed to update the model, allowing DGT to robustly adapt to variations of target structure. Experimental results show improved performance over several state-of-the-art trackers in various challenging scenarios.

  5. Topological structure of dictionary graphs

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk; Krzemiński, Mark

    2009-09-01

    We investigate the topological structure of the subgraphs of dictionary graphs constructed from WordNet and Moby thesaurus data. In the process of learning a foreign language, the learner knows only a subset of all words of the language, corresponding to a subgraph of a dictionary graph. When this subgraph grows with time, its topological properties change. We introduce the notion of the pseudocore and argue that the growth of the vocabulary roughly follows decreasing pseudocore numbers—that is, one first learns words with a high pseudocore number followed by smaller pseudocores. We also propose an alternative strategy for vocabulary growth, involving decreasing core numbers as opposed to pseudocore numbers. We find that as the core or pseudocore grows in size, the clustering coefficient first decreases, then reaches a minimum and starts increasing again. The minimum occurs when the vocabulary reaches a size between 10³ and 10⁴. A simple model exhibiting similar behavior is proposed. The model is based on a generalized geometric random graph. Possible implications for language learning are discussed.
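    A plain geometric random graph (a simple stand-in for the generalized model in the paper) already exhibits the high clustering at issue; a sketch that builds one and measures the mean local clustering coefficient:

```python
import math
import random

def geometric_random_graph(n, radius, rng):
    """Nodes are uniform points in the unit square; edges join pairs of
    points closer than `radius`."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def mean_clustering(adj):
    """Average local clustering coefficient over nodes of degree >= 2."""
    coeffs = []
    for node, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        coeffs.append(2.0 * links / (len(nbrs) * (len(nbrs) - 1)))
    return sum(coeffs) / len(coeffs) if coeffs else 0.0

adj = geometric_random_graph(n=200, radius=0.15, rng=random.Random(2))
cc = mean_clustering(adj)
```

    Because neighbours of a node are geometrically close to each other, such graphs are strongly clustered, unlike Erdős–Rényi graphs of the same density.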

  6. A bond graph model for the sample extraction/injection system of a microsized gas chromatographic instrument

    NASA Astrophysics Data System (ADS)

    Lin, Jie; Wang, Wanjun; Murphy, Michael C.; Overton, Edward

    1996-09-01

    A bond graph model of the sample extraction/injection system of a prototype portable gas chromatographic instrument has been developed. In addition to performing the same functions as current portable gas chromatographs (GCs), the new generation of GC instruments is designed to perform extraction of analytes from liquid and solid samples. The prototype instrument achieves these improvements by taking advantage of microfabrication technologies and microprocessor control in the design. A novel sample extraction/injection module is essential to the improved performance of the portable instrument, which will include microfabricated components such as inlets, interface chips, fluid channels, control valves, optimal heater/sensor combinations, and multiport connectors. In order to achieve the desired analytical performance, all of the major components are heated to 250 °C during different stages of a sample analysis. Predicting the performance of the system in this operating regime requires the modeling and analysis of system behavior in two interacting energy domains, fluid and thermal. This article represents the first effort to understand the dynamic behavior of the thermofluid aspect of micro-GC instruments and one of the first attempts to apply the widely-used bond graph technique to modeling and analysis of microsized thermofluid systems. Simulation results using the bond graph model closely match available experimental data, with differences typically less than 10%. This demonstrates that fluid dynamic theory for macroscale systems, and the bond graph method based on it, can be readily applied to microscale systems with these dimensions. The bond graph method can be a useful computer-aided design tool for the development of a new generation of truly integrated micro-GC instruments and sensors fabricated with micromachining technology.

  7. A random effects epidemic-type aftershock sequence model

    PubMed Central

    Lin, Feng-Chang

    2013-01-01

    We consider an extension of the temporal epidemic-type aftershock sequence (ETAS) model with random effects as a special case of a well-known doubly stochastic self-exciting point process. The new model arises from a deterministic function that is randomly scaled by a nonnegative random variable, which is unobservable but assumed to follow either a positive stable or a one-parameter gamma distribution with unit mean. Both random effects models are of interest, although the one-parameter gamma random effects model is more popular when modeling associated survival times. Our estimation is based on the maximum likelihood approach with marginalized intensity. The methods are shown to perform well in simulation experiments. When applied to an earthquake sequence on the east coast of Taiwan, the extended model with positive stable random effects provides a better model fit, compared to the original ETAS model and the extended model with one-parameter gamma random effects. PMID:24039322

  8. A random effects epidemic-type aftershock sequence model.

    PubMed

    Lin, Feng-Chang

    2011-04-01

    We consider an extension of the temporal epidemic-type aftershock sequence (ETAS) model with random effects as a special case of a well-known doubly stochastic self-exciting point process. The new model arises from a deterministic function that is randomly scaled by a nonnegative random variable, which is unobservable but assumed to follow either a positive stable or a one-parameter gamma distribution with unit mean. Both random effects models are of interest, although the one-parameter gamma random effects model is more popular when modeling associated survival times. Our estimation is based on the maximum likelihood approach with marginalized intensity. The methods are shown to perform well in simulation experiments. When applied to an earthquake sequence on the east coast of Taiwan, the extended model with positive stable random effects provides a better model fit, compared to the original ETAS model and the extended model with one-parameter gamma random effects.
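    The conditional intensity of a temporal ETAS model, scaled by a multiplicative random effect, can be sketched directly. Parameter values and the toy catalog below are illustrative, not the Taiwan data or fitted estimates:

```python
import math

def etas_intensity(t, history, mu, K, alpha, c, p, frailty=1.0):
    """Conditional intensity of a temporal ETAS model scaled by a random
    effect (`frailty`), with the magnitude threshold folded into K:

        lambda(t) = frailty * [mu + sum_i K exp(alpha m_i) / (t - t_i + c)^p]
    """
    triggered = sum(
        K * math.exp(alpha * m_i) / (t - t_i + c) ** p
        for t_i, m_i in history if t_i < t)
    return frailty * (mu + triggered)

# Hypothetical catalog: (event time in days, magnitude above threshold).
history = [(0.0, 2.1), (1.5, 0.4), (2.0, 1.0)]
rate = etas_intensity(t=3.0, history=history, mu=0.2,
                      K=0.05, alpha=1.2, c=0.01, p=1.1)
```

    In the random-effects extension, `frailty` is the unobserved unit-mean scale (positive stable or gamma distributed), which is marginalized out of the likelihood during estimation.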

  9. Graph Theory

    SciTech Connect

    Sanfilippo, Antonio P.

    2005-12-27

    Graph theory is a branch of discrete combinatorial mathematics that studies the properties of graphs. The theory was pioneered by the Swiss mathematician Leonhard Euler in the 18th century, commenced its formal development during the second half of the 19th century, and has witnessed substantial growth during the last seventy years, with applications in areas as diverse as engineering, computer science, physics, sociology, chemistry and biology. Graph theory has also had a strong impact in computational linguistics by providing the foundations for the theory of feature structures that has emerged as one of the most widely used frameworks for the representation of grammar formalisms.

  10. Graph-based modeling of tandem repeats improves global multiple sequence alignment.

    PubMed

    Szalkowski, Adam M; Anisimova, Maria

    2013-09-01

    Tandem repeats (TRs) are often present in proteins with crucial functions, responsible for resistance, pathogenicity and associated with infectious or neurodegenerative diseases. This motivates numerous studies of TRs and their evolution, requiring accurate multiple sequence alignment. TRs may be lost or inserted at any position of a TR region by replication slippage or recombination, but current methods assume fixed unit boundaries and are nevertheless of high complexity. We present a new global graph-based alignment method that does not restrict TR unit indels by unit boundaries. TR indels are modeled separately and penalized using the phylogeny-aware alignment algorithm. This ensures enhanced accuracy of reconstructed alignments, disentangling TRs and measuring indel events and rates in a biologically meaningful way. Our method detects not only duplication events but also all changes in TR regions owing to recombination, strand slippage and other events inserting or deleting TR units. We evaluate our method by simulation incorporating TR evolution, by either sampling TRs from a profile hidden Markov model or by mimicking strand slippage with duplications. The new method is illustrated on a family of type III effectors, a pathogenicity determinant in the agriculturally important bacterium Ralstonia solanacearum. We show that TR indel rate variation contributes to the diversification of this protein family.

  11. An accessibility graph-based model to optimize tsunami evacuation sites and routes in Martinique, France

    NASA Astrophysics Data System (ADS)

    Péroche, M.; Leone, F.; Gutton, R.

    2014-01-01

    The risk of tsunami threatens the whole Caribbean coastline, especially the Lesser Antilles. The first available models of tsunami propagation estimate that the travel time from the closest seismic sources to Martinique would be only a few minutes. Considering this threat, the most effective measure is a planned and organized evacuation of the coastal population. This requires an efficient regional warning system, an estimation of the maximum expected tsunami flood height, preparation of the population to evacuate, and the drawing up of local and regional emergency plans. In order to produce an efficient evacuation plan, we have to assess the number of people at risk, the potential evacuation routes, the safe areas and the available time to evacuate. However, this essential information is still lacking in the French West Indies emergency plans. This paper proposes a model of tsunami evacuation site accessibility for Martinique, directly addressed to decision makers. It is based on a population database at a local scale, the development of connected graphs of roads, the identification of potential safe areas and the velocity setting for pedestrians. Evacuation routes are calculated using Dijkstra's algorithm, which gives the shortest path between areas at risk and designated evacuation sites. The first results allow us to map the theoretical times and routes to keep the exposed population safe and to compare these results with a tsunami travel time scenario.
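    Shortest evacuation routes of the kind described above can be computed with Dijkstra's algorithm. A minimal sketch on a hypothetical toy road network; the node names and travel times are invented for illustration, not Martinique's actual road data:

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from `source` over a weighted road graph.

    `graph` maps node -> list of (neighbour, edge_time) pairs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {
    "beach": [("town_hall", 4.0), ("school", 2.0)],
    "school": [("hill_site", 7.0), ("town_hall", 1.0)],
    "town_hall": [("hill_site", 3.0)],
    "hill_site": [],
}
times = dijkstra(roads, "beach")
# Fastest route to the safe area: beach -> school -> town_hall -> hill_site.
```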

  12. Ancestry assessment using random forest modeling.

    PubMed

    Hefner, Joseph T; Spradley, M Kate; Anderson, Bruce

    2014-05-01

    A skeletal assessment of ancestry relies on morphoscopic traits and skeletal measurements. Using a sample of American Black (n = 38), American White (n = 39), and Southwest Hispanics (n = 72), the present study investigates whether these data provide similar biological information and combines both data types into a single classification using a random forest model (RFM). Our results indicate that both data types provide similar information concerning the relationships among population groups. Also, by combining both in an RFM, the correct allocation of ancestry for an unknown cranium increases. The distribution of cross-validated grouped cases correctly classified using discriminant analyses and RFMs ranges between 75.4% (discriminant function analysis, morphoscopic data only) and 89.6% (RFM). Unlike the traditional, experience-based approach using morphoscopic traits, the inclusion of both data types in a single analysis is a quantifiable approach accounting for more variation within and between groups, reducing misclassification rates, and capturing aspects of cranial shape, size, and morphology. PMID:24502438
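    As a rough illustration of the classification idea, here is a toy random forest of depth-1 stumps (bootstrap resampling plus a random feature per tree). The data, class labels, and split rule are all invented; a real analysis would use a full random-forest implementation on the actual craniometric and morphoscopic variables:

```python
import random

def train_forest(X, y, n_trees=25, seed=0):
    """Tiny random forest of depth-1 stumps.

    Each tree sees a bootstrap sample and one randomly chosen feature,
    splitting at the midpoint of that feature's range in the sample.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    forest = []
    for _ in range(n_trees):
        boot = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        f = rng.randrange(d)                          # random feature choice
        vals = [X[i][f] for i in boot]
        thr = (min(vals) + max(vals)) / 2.0           # stump split point
        left = [y[i] for i in boot if X[i][f] <= thr]
        right = [y[i] for i in boot if X[i][f] > thr]
        majority = lambda ys: max(set(ys), key=ys.count) if ys else y[0]
        forest.append((f, thr, majority(left), majority(right)))
    return forest

def predict(forest, x):
    # Each stump votes; the forest returns the majority label.
    votes = [l if x[f] <= thr else r for f, thr, l, r in forest]
    return max(set(votes), key=votes.count)

# Two well-separated toy groups standing in for ancestry classes.
X = [(1.0, 1.2), (0.9, 1.0), (1.1, 0.8),
     (3.0, 3.2), (3.1, 2.9), (2.8, 3.0)]
y = ["A", "A", "A", "B", "B", "B"]
forest = train_forest(X, y)
```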

  13. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.
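    The optimal cutting path idea can be illustrated with a small dynamic program that threads a minimum-cost vertical path through a grid of mismatch errors, in the spirit of seam carving. The error grid below is made up; the paper's graph-based method operates on overlap regions of simulated patterns:

```python
def min_cut_path(err):
    """Top-to-bottom minimum-cost cutting path through an error grid.

    err[r][c] is the mismatch between overlapping patterns at cell (r, c);
    the path may move at most one column per row, and cutting along it
    makes the stitch between patterns least visible.
    """
    rows, cols = len(err), len(err[0])
    cost = [err[0][:]]
    for r in range(1, rows):
        prev = cost[-1]
        row = []
        for c in range(cols):
            best = min(prev[max(c - 1, 0):min(c + 2, cols)])
            row.append(err[r][c] + best)
        cost.append(row)
    # Backtrack from the cheapest bottom cell.
    c = min(range(cols), key=lambda j: cost[-1][j])
    path = [c]
    for r in range(rows - 1, 0, -1):
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        c = min(range(lo, hi), key=lambda j: cost[r - 1][j])
        path.append(c)
    return list(reversed(path))

grid = [[3, 1, 4],
        [1, 5, 9],
        [2, 6, 5]]
cut = min_cut_path(grid)
```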

  14. Handling Correlations between Covariates and Random Slopes in Multilevel Models

    ERIC Educational Resources Information Center

    Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders

    2014-01-01

    This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…

  15. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    PubMed

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large-scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or clusters of neurons, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and unlike other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measures allow us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuronal ensemble into a complex network, and drives the formulation of a phenomenological model that is able to qualitatively describe the overall scenario observed during culture growth. PMID:25393432

  16. Graph-theoretic quantum system modelling for neuronal microtubules as hierarchical clustered quantum Hopfield networks

    NASA Astrophysics Data System (ADS)

    Srivastava, D. P.; Sahni, V.; Satsangi, P. S.

    2014-08-01

    Graph-theoretic quantum system modelling (GTQSM) is facilitated by considering the fundamental unit of quantum computation and information, viz. a quantum bit or qubit as a basic building block. Unit directional vectors "ket 0" and "ket 1" constitute two distinct fundamental quantum across variable orthonormal basis vectors, for the Hilbert space, specifying the direction of propagation of information, or computation data, while complementary fundamental quantum through, or flow rate, variables specify probability parameters, or amplitudes, as surrogates for scalar quantum information measure (von Neumann entropy). This paper applies GTQSM in continuum of protein heterodimer tubulin molecules of self-assembling polymers, viz. microtubules in the brain as a holistic system of interacting components representing hierarchical clustered quantum Hopfield network, hQHN, of networks. The quantum input/output ports of the constituent elemental interaction components, or processes, of tunnelling interactions and Coulombic bidirectional interactions are in cascade and parallel interconnections with each other, while the classical output ports of all elemental components are interconnected in parallel to accumulate micro-energy functions generated in the system as Hamiltonian, or Lyapunov, energy function. The paper presents an insight, otherwise difficult to gain, for the complex system of systems represented by clustered quantum Hopfield network, hQHN, through the application of GTQSM construct.

  17. Automatic segmentation of lymph vessel wall using optimal surface graph cut and hidden Markov Models.

    PubMed

    Jones, Jonathan-Lee; Essa, Ehab; Xie, Xianghua

    2015-08-01

    We present a novel method to segment the lymph vessel wall in confocal microscopy images using Optimal Surface Segmentation (OSS) and hidden Markov Models (HMM). OSS is used to perform a pre-segmentation of the images, which acts as the initial state for the HMM. We utilize a steerable filter to determine edge-based features for both of these segmentations, and use these features to build Gaussian probability distributions for both the vessel walls and the background. From this we infer the emission probability for the HMM, and the transition probability is learned using a Baum-Welch algorithm. We transform the segmentation problem into one of cost minimization, with each node in the graph corresponding to one state, and the weight for each node defined by its emission probability. We define the inter-relations between neighboring nodes using the transition probability. Having constructed the problem, it is solved using the Viterbi algorithm, allowing the vessel wall to be reconstructed. The optimal solution can be found in polynomial time. We present qualitative and quantitative analysis to show the performance of the proposed method. PMID:26736778
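    The decoding step described above is the Viterbi algorithm. A minimal sketch with invented wall/background states and probabilities; in the paper these come from Gaussian edge-feature models and Baum-Welch training:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely HMM state sequence for an observation sequence.

    Dynamic programming over (probability, predecessor) pairs, followed
    by backtracking from the best final state.
    """
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            layer[s] = (prob, prev)
        V.append(layer)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for layer in reversed(V[1:]):
        state = layer[state][1]
        path.append(state)
    return list(reversed(path))

states = ("wall", "background")
start = {"wall": 0.5, "background": 0.5}
trans = {"wall": {"wall": 0.8, "background": 0.2},
         "background": {"wall": 0.2, "background": 0.8}}
emit = {"wall": {"edge": 0.9, "flat": 0.1},
        "background": {"edge": 0.2, "flat": 0.8}}
labels = viterbi(("edge", "edge", "flat"), states, start, trans, emit)
```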

  18. Medical image segmentation by combining graph cuts and oriented active appearance models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua

    2012-04-01

    In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) The overall segmentation accuracy of true positive volume fraction TPVF > 94.3% and false positive volume fraction can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/. PMID:22311862

  19. How mutation affects evolutionary games on graphs.

    PubMed

    Allen, Benjamin; Traulsen, Arne; Tarnita, Corina E; Nowak, Martin A

    2012-04-21

    Evolutionary dynamics are affected by population structure, mutation rates and update rules. Spatial or network structure facilitates the clustering of strategies, which represents a mechanism for the evolution of cooperation. Mutation dilutes this effect. Here we analyze how mutation influences evolutionary clustering on graphs. We introduce new mathematical methods to evolutionary game theory, specifically the analysis of coalescing random walks via generating functions. These techniques allow us to derive exact identity-by-descent (IBD) probabilities, which characterize spatial assortment on lattices and Cayley trees. From these IBD probabilities we obtain exact conditions for the evolution of cooperation and other game strategies, showing the dual effects of graph topology and mutation rate. High mutation rates diminish the clustering of cooperators, hindering their evolutionary success. Our model can represent either genetic evolution with mutation, or social imitation processes with random strategy exploration.
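    The identity-by-descent idea can be illustrated by simulation: two random walkers coalesce unless a mutation strikes first. This Monte Carlo toy (cycle graph, invented mutation rates) stands in for the paper's exact generating-function computation:

```python
import random

def ibd_probability(n, u, trials=2000, seed=1):
    """Monte Carlo identity-by-descent proxy on a cycle of n nodes.

    Two walkers start at adjacent nodes; each step one walker moves to
    a random neighbour.  With probability u per step a 'mutation' occurs
    first and identity is lost; coalescing before any mutation counts
    as identity by descent.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = 0, 1
        while a != b:
            if rng.random() < u:      # mutation strikes before coalescence
                break
            step = rng.choice((-1, 1))
            if rng.random() < 0.5:
                a = (a + step) % n
            else:
                b = (b + step) % n
        else:                          # loop ended because the walkers met
            hits += 1
    return hits / trials

p_low_mut = ibd_probability(n=10, u=0.01)
p_high_mut = ibd_probability(n=10, u=0.3)
# Higher mutation rates dilute assortment, lowering the IBD probability.
```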

  20. Stochastic models for horizontal gene transfer: taking a random walk through tree space.

    PubMed

    Suchard, Marc A

    2005-05-01

    Horizontal gene transfer (HGT) plays a critical role in evolution across all domains of life with important biological and medical implications. I propose a simple class of stochastic models to examine HGT using multiple orthologous gene alignments. The models function in a hierarchical phylogenetic framework. The top level of the hierarchy is based on a random walk process in "tree space" that allows for the development of a joint probabilistic distribution over multiple gene trees and an unknown, but estimable species tree. I consider two general forms of random walks. The first form is derived from the subtree prune and regraft (SPR) operator that mirrors the observed effects that HGT has on inferred trees. The second form is based on walks over complete graphs and offers numerically tractable solutions for an increasing number of taxa. The bottom level of the hierarchy utilizes standard phylogenetic models to reconstruct gene trees given multiple gene alignments conditional on the random walk process. I develop a well-mixing Markov chain Monte Carlo algorithm to fit the models in a Bayesian framework. I demonstrate the flexibility of these stochastic models to test competing ideas about HGT by examining the complexity hypothesis. Using 144 orthologous gene alignments from six prokaryotes previously collected and analyzed, Bayesian model selection finds support for (1) the SPR model over the alternative form, (2) the 16S rRNA reconstruction as the most likely species tree, and (3) increased HGT of operational genes compared to informational genes.

  1. Random matrix model of adiabatic quantum computing

    SciTech Connect

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-05-15

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
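    Normalized nearest-neighbor spacings, on which the Brody analysis rests, are straightforward to compute. The sketch below uses an artificial Poissonian spectrum (i.i.d. uniform levels) rather than eigenvalues of the instantaneous Hamiltonians studied in the paper:

```python
import random

def normalized_spacings(levels):
    """Nearest-neighbour spacings rescaled to unit mean.

    A Poissonian (regular) spectrum yields spacing density exp(-s);
    Wigner-type (chaotic) spectra instead show level repulsion, i.e.
    a deficit of small spacings.
    """
    xs = sorted(levels)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    mean = sum(gaps) / len(gaps)
    return [g / mean for g in gaps]

rng = random.Random(42)
poisson_levels = [rng.random() for _ in range(5000)]
s = normalized_spacings(poisson_levels)
small = sum(1 for g in s if g < 0.5) / len(s)
# For a Poisson spectrum P(s < 0.5) = 1 - exp(-0.5) ~ 0.39, whereas
# level repulsion in the chaotic regime would push this fraction down.
```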

  2. Evaluation of connectedness between the University courses of Physics and Chemistry basing on the graph model of intersubject links

    NASA Astrophysics Data System (ADS)

    Gnitetskaya, Tatyana; Ivanova, Elena

    2016-08-01

    An application of the graph model of inter-subject links to the University courses of Physics and Chemistry is presented in this article. A part of the inter-subject space with directions of inter-subject links from Physics to Chemistry in the group of physical concepts is shown. The graph model of inter-subject links includes quantitative indicators, whose numerical values are given in the article. The degree of connectedness between the Physics and Chemistry courses considered is discussed. The effect of the courses' placement within a curriculum on the value of their connectedness is shown: a curriculum can schedule the courses to be studied at the same time or consecutively, with one course preceding the other.

  3. Modeling Alternative Splicing Variants from RNA-Seq Data with Isoform Graphs

    PubMed Central

    Beretta, Stefano; Vedova, Gianluca Della; Pirola, Yuri; Rizzi, Raffaella

    2014-01-01

    Next-generation sequencing (NGS) technologies need new methodologies for alternative splicing (AS) analysis. Current computational methods for AS analysis from NGS data are mainly based on aligning short reads against a reference genome, while methods that do not need a reference genome are mostly underdeveloped. In this context, the main developed tools for NGS data focus on de novo transcriptome assembly (Grabherr et al., 2011; Schulz et al., 2012). While these tools are extensively applied in biological investigations, and the obtained results often reveal intrinsic shortcomings, a theoretical investigation of the inherent computational limits of transcriptome analysis from NGS data, when a reference genome is unknown or highly unreliable, is still missing. On the other hand, we still lack methods for computing the gene structures due to AS events under the above assumptions—a problem that we start to tackle with this article. More precisely, based on the notion of isoform graph (Lacroix et al., 2008), we define a compact representation of gene structures—called splicing graph—and investigate the computational problem of building a splicing graph that is (i) compatible with NGS data and (ii) isomorphic to the isoform graph. We characterize when there is only one representative splicing graph compatible with input data, and we propose an efficient algorithmic approach to compute this graph. PMID:24200390

  5. Model ecosystems with random nonlinear interspecies interactions.

    PubMed

    Santos, Danielle O C; Fontanari, José F

    2004-12-01

    The principle of competitive exclusion in ecology establishes that two species living together cannot occupy the same ecological niche. Here we present a model ecosystem in which the species are described by a series of phenotypic characters and the strength of the competition between two species is given by a nondecreasing (modulating) function of the number of common characters. Using analytical tools of statistical mechanics we find that the ecosystem diversity, defined as the fraction of species that coexist at equilibrium, decreases as the complexity (i.e., number of characters) of the species increases, regardless of the modulating function. By considering both selective and random elimination of the links in the community web, we show that ecosystems composed of simple species are more robust than those composed of complex species. In addition, we show that the puzzling result that there exist either rich or poor ecosystems for a linear modulating function is not typical of communities in which the interspecies interactions are determined by a complementarity rule.

  6. Random effects logistic models for analysing efficacy of a longitudinal randomized treatment with non-adherence.

    PubMed

    Small, Dylan S; Ten Have, Thomas R; Joffe, Marshall M; Cheng, Jing

    2006-06-30

    We present a random effects logistic approach for estimating the efficacy of treatment for compliers in a randomized trial with treatment non-adherence and longitudinal binary outcomes. We use our approach to analyse a primary care depression intervention trial. The use of a random effects model to estimate efficacy supplements intent-to-treat longitudinal analyses based on random effects logistic models that are commonly used in primary care depression research. Our estimation approach is an extension of Nagelkerke et al.'s instrumental variables approximation for cross-sectional binary outcomes. Our approach is easily implementable with standard random effects logistic regression software. We show through a simulation study that our approach provides reasonably accurate inferences for the setting of the depression trial under model assumptions. We also evaluate the sensitivity of our approach to model assumptions for the depression trial.

  7. The Random-Effect Generalized Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wu, Shiu-Lien

    2011-01-01

    Rating scale items have been widely used in educational and psychological tests. These items require people to make subjective judgments, and these subjective judgments usually involve randomness. To account for this randomness, Wang, Wilson, and Shih proposed the random-effect rating scale model in which the threshold parameters are treated as…

  8. On the spectral distribution of distance-k graph of free product graphs

    NASA Astrophysics Data System (ADS)

    Arizmendi, Octavio; Gaxiola, Tulio

    2016-08-01

    We calculate the distribution, with respect to the vacuum state, of the distance-k graph of a d-regular tree. From this result we show that the distance-k graph of a d-regular graph converges to the distribution of the distance-k graph of a regular tree. Finally, we prove that, properly normalized, the asymptotic distribution of distance-k graphs of the d-fold free product graph, as d tends to infinity, is given by the distribution of Pk(s), where s is a semicircle random variable and Pk is the kth Chebyshev polynomial.
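    Chebyshev polynomials of the second kind satisfy the three-term recurrence U_{k+1}(x) = 2x U_k(x) - U_{k-1}(x). The paper's P_k may use a rescaled (monic) variant common in free probability, but the recurrence structure is the same:

```python
def chebyshev_u(k, x):
    """Chebyshev polynomial of the second kind U_k evaluated at x,
    via the recurrence U_{k+1}(x) = 2x U_k(x) - U_{k-1}(x),
    with U_0(x) = 1 and U_1(x) = 2x.
    """
    prev, curr = 1.0, 2.0 * x  # U_0, U_1
    if k == 0:
        return prev
    for _ in range(k - 1):
        prev, curr = curr, 2.0 * x * curr - prev
    return curr

# U_2(x) = 4x^2 - 1, so U_2(0.5) = 0; and U_k(1) = k + 1, so U_3(1) = 4.
```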

  9. Quantifying randomness in real networks

    NASA Astrophysics Data System (ADS)

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-10-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.
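    The lowest rung of the dk-series (d = 1) fixes only the degree sequence. A standard way to sample such 1k-random graphs is double-edge swapping, sketched below on a toy ring graph; matching higher-order dk-constraints (degree correlations, clustering) would require additional checks:

```python
import random

def degree_preserving_randomize(edges, swaps, seed=7):
    """Randomize a simple graph by double-edge swaps, preserving degrees.

    A swap rewires (a, b), (c, d) -> (a, d), (c, b); every node keeps its
    degree, so the result is a 1k-random graph with respect to the input.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(edges)
    for _ in range(swaps):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if (a, d) in edge_set or (d, a) in edge_set \
           or (c, b) in edge_set or (b, c) in edge_set:
            continue  # would create a multi-edge
        i, j = edges.index((a, b)), edges.index((c, d))
        edges[i], edges[j] = (a, d), (c, b)
        edge_set = set(edges)
    return edges

ring = [(i, (i + 1) % 6) for i in range(6)]
shuffled = degree_preserving_randomize(ring, swaps=100)
```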

  12. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree. PMID:27176257
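    The giant component referred to above emerges in Erdős-Rényi graphs G(n, p) with p = c/n once the mean degree c exceeds 1. A small union-find sketch (toy sizes, not the paper's spectral method) makes the transition visible:

```python
import random
from collections import Counter

def giant_component_fraction(n, mean_degree, seed=3):
    """Fraction of nodes in the largest component of G(n, p), p = c/n.

    For c > 1 a giant component appears whose relative size S solves
    S = 1 - exp(-c * S); for c < 1 all components stay small.
    """
    rng = random.Random(seed)
    p = mean_degree / n
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)  # union the two components

    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

frac_super = giant_component_fraction(1000, 2.0)   # c = 2 > 1: giant component
frac_sub = giant_component_fraction(1000, 0.5)     # c = 0.5 < 1: small pieces
```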

  14. Extracting Between-Pathway Models from E-MAP Interactions Using Expected Graph Compression

    NASA Astrophysics Data System (ADS)

    Kelley, David R.; Kingsford, Carl

    Genetic interactions (such as synthetic lethal interactions) have become quantifiable on a large-scale using the epistatic miniarray profile (E-MAP) method. An E-MAP allows the construction of a large, weighted network of both aggravating and alleviating genetic interactions between genes. By clustering genes into modules and establishing relationships between those modules, we can discover compensatory pathways. We introduce a general framework for applying greedy clustering heuristics to probabilistic graphs. We use this framework to apply a graph clustering method called graph summarization to an E-MAP that targets yeast chromosome biology. This results in a new method for clustering E-MAP data that we call Expected Graph Compression (EGC). We validate modules and compensatory pathways using enriched Gene Ontology annotations and a novel method based on correlated gene expression. EGC finds a number of modules that are not found by any previous methods to cluster E-MAP data. EGC also uncovers core submodules contained within several previously found modules, suggesting that EGC can reveal the finer structure of E-MAP networks.

  15. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    PubMed

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
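The max-flow/min-cut reduction can be sketched on a toy network. The example below implements a plain Edmonds-Karp max-flow routine and applies it to a hypothetical four-node network standing in for a two-site titration problem; the capacities are made-up illustration energies, not the paper's actual interaction-energy graph construction.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; by max-flow/min-cut duality the returned
    value equals the weight of the minimum cut separating s from t."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n          # BFS for an augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            v = q.popleft()
            for u in range(n):
                if parent[u] == -1 and cap[v][u] - flow[v][u] > 0:
                    parent[u] = v
                    q.append(u)
        if parent[t] == -1:
            return total           # no augmenting path left
        add, v = float("inf"), t   # bottleneck capacity along the path
        while v != s:
            add = min(add, cap[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:              # push flow along the path
            flow[parent[v]][v] += add
            flow[v][parent[v]] -= add
            v = parent[v]
        total += add

# Toy network: source (0) and sink (3) represent the two reference states,
# nodes 1-2 are interacting sites; capacities are hypothetical energies
cap = [[0, 3, 2, 0],
       [0, 0, 1, 4],
       [0, 1, 0, 5],
       [0, 0, 0, 0]]
total = max_flow(cap, 0, 3)
print(total)   # minimum-cut value = 5 for these capacities
```

The cut that achieves this value encodes which side of the source/sink partition each site lands on, mirroring how the paper reads off the minimum-energy state assignment.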

  16. Static cooperator-defector patterns in models of the snowdrift game played on cycle graphs

    NASA Astrophysics Data System (ADS)

    Laird, Robert A.

    2013-07-01

    Evolutionary graph theory is an extension of evolutionary game theory in which each individual agent, represented by a node, interacts only with a subset of the entire population to which it belongs (i.e., those to which it is connected by edges). In the context of the evolution of cooperation, in which individuals playing the cooperator strategy interact with individuals playing the defector strategy and game payoffs are equated with fitness, evolutionary games on graphs lead to global standoffs (i.e., static patterns) when all cooperators in a population have the same payoff as any defectors with which they share an edge. I consider the simplest type of regular-connected graph, the cycle graph, in which every node has exactly two edges (k = 2), for the prisoner's dilemma game and the snowdrift game, the two most important pairwise games in cooperation theory. I show that for simplified payoff structures associated with these games, standoffs are only possible for two valid cost-benefit ratios in the snowdrift game. I further show that only the greater of these two cost-benefit ratios is likely to be attracting in most situations (i.e., likely to spontaneously result in a global standoff when starting from nonstandoff conditions). Numerical simulations confirm this prediction. This work contributes to our understanding of the evolution of pattern formation in games played in finite, sparsely connected populations.
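A minimal simulation of the snowdrift game on a cycle graph illustrates the setup. The payoff matrix below uses one common parameterization (benefit b, cost c, cooperators split the cost) and a synchronous imitate-the-best-neighbour update; both are illustrative assumptions, not necessarily the exact rules analysed in the paper.

```python
import numpy as np

def payoffs(strat, b, c):
    """Total payoff of each player on a cycle, summing snowdrift games
    with both neighbours (1 = cooperate, 0 = defect)."""
    n = len(strat)
    pay = np.zeros(n)
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            if strat[i] and strat[j]:
                pay[i] += b - c / 2      # both cooperate: share the cost
            elif strat[i] and not strat[j]:
                pay[i] += b - c          # lone cooperator pays the full cost
            elif not strat[i] and strat[j]:
                pay[i] += b              # defector free-rides
    return pay

def step(strat, b, c):
    """Synchronous imitation: copy the strategy of the highest-payoff
    player in the closed neighbourhood {i-1, i, i+1}."""
    pay = payoffs(strat, b, c)
    n = len(strat)
    new = strat.copy()
    for i in range(n):
        nbrs = [(i - 1) % n, i, (i + 1) % n]
        new[i] = strat[max(nbrs, key=lambda j: pay[j])]
    return new

strat = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # block of cooperators on a 10-cycle
nxt = step(strat, b=1.0, c=0.8)
print(strat, nxt)
```

A global standoff in the paper's sense corresponds to a configuration where `step` is the identity: every boundary cooperator and defector earns the same payoff, so no imitation occurs.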

  17. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  18. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    PubMed

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

    The encoding of molecular information into molecular descriptors is the first step in in silico Chemoinformatics methods in Drug Design. Machine Learning methods are then used to find prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or physical-chemical properties of an atom or group of atoms, to molecular activity (Quantitative Structure-Activity Relationship, QSAR). Because of the complexity of proteins, predicting their activity is a difficult task, and interpreting the models is harder still. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions, and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools, and drug SMILES formulas for two servers. The molecular graph descriptor-based Machine Learning models could be useful tools for the in silico screening of new peptides/proteins as future drug targets for specific treatments.

  19. Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems

    SciTech Connect

    Hong, Seokyong; Sukumar, Sreenivas Rangan; Vatsavai, Raju

    2016-01-01

    Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research and led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and the diverse environments in which they operate. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present a performance comparison for nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications of loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) the different sensitivity of each system to query selectivity. We envision that this study will pave the way for (i) data scientists to select suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.

  20. Random effects and shrinkage estimation in capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2002-01-01

    We discuss the analysis of random effects in capture-recapture models, and outline Bayesian and frequentist approaches to their analysis. Under a normal model, random effects estimators derived from Bayesian or frequentist considerations have a common form as shrinkage estimators. We discuss some of the difficulties of analysing random effects using traditional methods, and argue that a Bayesian formulation provides a rigorous framework for dealing with these difficulties. In capture-recapture models, random effects may provide a parsimonious compromise between constant and completely time-dependent models for the parameters (e.g. survival probability). We consider application of random effects to band-recovery models, although the principles apply to more general situations, such as Cormack-Jolly-Seber models. We illustrate these ideas using a commonly analysed band recovery data set.
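The shrinkage idea under a normal model can be shown in a few lines. In this sketch, noisy per-year estimates y_i of true rates θ_i are pulled toward the prior mean with weight τ²/(τ² + v); all numerical values are hypothetical, and the normal-normal setup is a simplification of the capture-recapture likelihoods discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, tau2, v = 0.6, 0.01, 0.04    # hypothetical prior mean/variance, sampling variance
theta = rng.normal(mu, np.sqrt(tau2), 20)   # true yearly survival rates
y = rng.normal(theta, np.sqrt(v))           # noisy per-year estimates

# Shrinkage estimator under the normal-normal model: the posterior mean
# pulls each raw estimate toward the overall mean
B = tau2 / (tau2 + v)                       # shrinkage weight
theta_hat = mu + B * (y - mu)

mse_raw = np.mean((y - theta) ** 2)
mse_shrunk = np.mean((theta_hat - theta) ** 2)
print(mse_raw, mse_shrunk)                  # shrinkage typically lowers MSE
```

The compromise the abstract describes is visible in B: as τ² → 0 the estimates collapse to the constant model, and as τ² → ∞ they revert to the fully time-dependent raw estimates.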

  1. "Improved Geometric Network Model" (IGNM): a novel approach for deriving Connectivity Graphs for Indoor Navigation

    NASA Astrophysics Data System (ADS)

    Mortari, F.; Zlatanova, S.; Liu, L.; Clementini, E.

    2014-04-01

    Over the past few years Personal Navigation Systems have become an established tool for route planning, but they are mainly designed for outdoor environments. Indoor navigation is still a challenging research area for several reasons: positioning is not very accurate, users can move freely within the interior boundaries of buildings, and the path network construction process may not be easy and straightforward due to the complexity of indoor space configurations. The creation of a good network is therefore essential for deriving the overall connectivity of a building and for representing the positions of objects within the environment. This paper reviews current approaches to the automatic derivation of route graphs for indoor navigation and discusses some of their limitations. It then introduces a novel algorithmic strategy for extracting a 3D connectivity graph for indoor navigation based on 2D floor plans.

  2. A Model for Random Student Drug Testing

    ERIC Educational Resources Information Center

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  3. Hyperspectral target detection using graph theory models and manifold geometry via an adaptive implementation of locally linear embedding

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.

    2014-06-01

    Hyperspectral images comprise, by design, high dimensional image data. However, research has shown that for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of non-linear manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data. With graph theory and manifold learning based models, the only assumption is that the data reside on an underlying manifold. In previous publications, we have shown that manifold coordinate approximation using locally linear embedding (LLE) is a viable pre-processing step for target detection with the Adaptive Cosine/Coherence Estimator (ACE) algorithm. Here, we improve upon that methodology using a more rigorous, data-driven implementation of LLE that incorporates the injection of a "cloud" of target pixels and the Spectral Angle Mapper (SAM) detector. The LLE algorithm, which holds that the data is locally linear, is typically governed by a user-defined parameter k, indicating the number of nearest neighbors to use in the initial graph model. We use an adaptive approach to building the graph that is governed by the data itself and does not rely upon user input. This implementation of LLE can yield greater separation between the target pixels and the background pixels in the manifold space. We present an analysis of target detection performance in the manifold coordinates using scene-derived target spectra and laboratory-measured target spectra across two different data sets.

  4. A Novel Model for DNA Sequence Similarity Analysis Based on Graph Theory

    PubMed Central

    Qi, Xingqin; Wu, Qin; Zhang, Yusen; Fuller, Eddie; Zhang, Cun-Quan

    2011-01-01

    Determination of sequence similarity is one of the major steps in computational phylogenetic studies. As we know, during evolutionary history, not only mutations of individual nucleotides but also subsequent rearrangements occurred. It has been one of the major tasks of computational biologists to develop novel mathematical descriptors for similarity analysis such that various mutation phenomena would be captured simultaneously. In this paper, different from traditional methods (e.g., nucleotide frequency, geometric representations) as bases for construction of mathematical descriptors, we construct novel mathematical descriptors based on graph theory. In particular, for each DNA sequence, we will set up a weighted directed graph. The adjacency matrix of the directed graph will be used to induce a representative vector for the DNA sequence. This new approach measures similarity based on both the ordering and the frequency of nucleotides, so that much more information is involved. As an application, the method is tested on a set of 0.9-kb mtDNA sequences of twelve different primate species. All output phylogenetic trees with various distance estimations have the same topology, and are generally consistent with the reported results from early studies, which proves the new method’s efficiency; we also test the new method on a simulated data set, which shows our new method performs better than the traditional global alignment method when subsequent rearrangements happen frequently during evolutionary history. PMID:22065497
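A much-simplified version of a graph-based sequence descriptor can be sketched as follows: each DNA sequence induces a weighted directed graph over the four nucleotides, and the flattened adjacency matrix serves as the representative vector. This toy version uses plain dinucleotide transition counts rather than the paper's own weighting scheme, so it only illustrates the general idea.

```python
import numpy as np

def transition_descriptor(seq):
    """Weighted directed graph over {A, C, G, T}: edge (x, y) counts
    occurrences of y immediately after x; the adjacency matrix is
    flattened into a representative vector and normalised."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    A = np.zeros((4, 4))
    for k in range(len(seq) - 1):
        A[idx[seq[k]], idx[seq[k + 1]]] += 1
    return A.flatten() / max(len(seq) - 1, 1)

def distance(s1, s2):
    """Euclidean distance between descriptor vectors as a similarity proxy."""
    return np.linalg.norm(transition_descriptor(s1) - transition_descriptor(s2))

# A sequence should be closer to a near-copy than to an unrelated sequence
print(distance("ACGTACGT", "ACGTACGA") < distance("ACGTACGT", "GGGGCCCC"))
```

Because the descriptor is built from ordered pairs, it is sensitive to nucleotide ordering as well as frequency, which is the property the abstract emphasizes.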

  5. Helping Students Make Sense of Graphs: An Experimental Trial of SmartGraphs Software

    NASA Astrophysics Data System (ADS)

    Zucker, Andrew; Kay, Rachel; Staudt, Carolyn

    2014-06-01

    Graphs are commonly used in science, mathematics, and social sciences to convey important concepts; yet students at all ages demonstrate difficulties interpreting graphs. This paper reports on an experimental study of free, Web-based software called SmartGraphs that is specifically designed to help students overcome their misconceptions regarding graphs. SmartGraphs allows students to interact with graphs and provides hints and scaffolding to help students, if they need help. SmartGraphs activities can be authored to be useful in teaching and learning a variety of topics that use graphs (such as slope, velocity, half-life, and global warming). A 2-year experimental study in physical science classrooms was conducted with dozens of teachers and thousands of students. In the first year, teachers were randomly assigned to experimental or control conditions. Data show that students of teachers who use SmartGraphs as a supplement to normal instruction make greater gains understanding graphs than control students studying the same content using the same textbooks, but without SmartGraphs. Additionally, teachers believe that the SmartGraphs activities help students meet learning goals in the physical science course, and a great majority reported they would use the activities with students again. In the second year of the study, several specific variations of SmartGraphs were researched to help determine what makes SmartGraphs effective.

  6. Detecting labor using graph theory on connectivity matrices of uterine EMG.

    PubMed

    Al-Omar, S; Diab, A; Nader, N; Khalil, M; Karlsson, B; Marque, C

    2015-08-01

    Premature labor is one of the most serious health problems in the developed world. One of the main reasons for this is that no good way exists to distinguish true labor from normal pregnancy contractions. The aim of this paper is to investigate whether applying graph theory techniques to multi-electrode uterine EMG signals can improve the discrimination between pregnancy contractions and labor. To test our methods we first applied them to synthetic graphs, where we detected differences in the parameter results and changes in the graph model from pregnancy-like to labor-like graphs. We then applied the same methods to real signals and obtained the best differentiation between pregnancy and labor through the same parameters. Major improvements in differentiating between pregnancy and labor were obtained using a low-pass windowing preprocessing step. Results show that real graphs generally became more organized when moving from pregnancy, where the graph showed random characteristics, to labor, where the graph became more small-world-like.
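The small-world characterization mentioned above rests on two standard graph metrics: the clustering coefficient and the average shortest-path length. The sketch below computes both for a ring lattice and for the same ring with a few random shortcuts (a generic small-world illustration, unrelated to the EMG data itself).

```python
import numpy as np
from collections import deque

def clustering(A):
    """Mean local clustering coefficient of an undirected graph."""
    n = len(A)
    cs = []
    for v in range(n):
        nbrs = np.nonzero(A[v])[0]
        k = len(nbrs)
        if k < 2:
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2   # edges among neighbours
        cs.append(links / (k * (k - 1) / 2))
    return float(np.mean(cs)) if cs else 0.0

def avg_path(A):
    """Average shortest-path length via BFS from every node (assumes connected)."""
    n = len(A)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in np.nonzero(A[v])[0]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        total += sum(dist.values())
    return total / (n * (n - 1))

# Ring lattice (each node linked to 2 neighbours on either side) vs the same
# ring with a few random shortcuts: shortcuts shorten paths while clustering
# stays high, the small-world signature
n = 60
A = np.zeros((n, n))
for i in range(n):
    for d in (1, 2):
        A[i, (i + d) % n] = A[(i + d) % n, i] = 1
ring_L, ring_C = avg_path(A), clustering(A)
rng = np.random.default_rng(2)
for _ in range(10):
    i, j = rng.integers(0, n, 2)
    if i != j:
        A[i, j] = A[j, i] = 1
short_L, short_C = avg_path(A), clustering(A)
print(ring_L, short_L, ring_C, short_C)
```

In the paper's terms, the labor-stage connectivity graphs would show the shortcut-rich profile (short paths, high clustering), while pregnancy-stage graphs look closer to random.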

  7. Algebraic distance on graphs.

    SciTech Connect

    Chen, J.; Safro, I.

    2011-01-01

    Measuring the connection strength between a pair of vertices in a graph is one of the most important concerns in many graph applications. Simple measures such as edge weights may not be sufficient for capturing the effects associated with short paths of lengths greater than one. In this paper, we consider an iterative process that smooths an associated value for nearby vertices, and we present a measure of the local connection strength (called the algebraic distance; see [D. Ron, I. Safro, and A. Brandt, Multiscale Model. Simul., 9 (2011), pp. 407-423]) based on this process. The proposed measure is attractive in that the process is simple, linear, and easily parallelized. An analysis of the convergence property of the process reveals that the local neighborhoods play an important role in determining the connectivity between vertices. We demonstrate the practical effectiveness of the proposed measure through several combinatorial optimization problems on graphs and hypergraphs.
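The iterative smoothing process behind algebraic distance can be sketched directly: start from random vertex values, repeatedly mix each value with the mean of its neighbours, and take |x_u − x_v| (maximized over several restarts) as the distance. The damping factor, iteration counts, and the toy two-triangle graph below are illustrative choices, not the tuned parameters of Ron, Safro, and Brandt.

```python
import numpy as np

def algebraic_distance(A, iters=20, trials=5, omega=0.5, seed=0):
    """Iteratively smooth random vertex values over neighbours; strongly
    connected vertices end up with similar values, so |x_u - x_v| acts
    as a local connection-strength (algebraic) distance."""
    rng = np.random.default_rng(seed)
    n = len(A)
    deg = A.sum(axis=1)
    D = np.zeros((n, n))
    for _ in range(trials):
        x = rng.random(n)
        for _ in range(iters):
            x = (1 - omega) * x + omega * (A @ x) / deg   # damped Jacobi sweep
        D = np.maximum(D, np.abs(x[:, None] - x[None, :]))  # max over restarts
    return D

# Two triangles joined by a single bridge edge (hypothetical toy graph):
# distances inside a triangle should come out smaller than across the bridge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1
D = algebraic_distance(A)
print(D[0, 1], D[0, 5])
```

The intra-triangle differences decay quickly while the slow mode separating the two clusters persists, which is why the measure captures connection strength beyond single-edge weights.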

  8. Links between topology of the transition graph and limit cycles in a two-dimensional piecewise affine biological model.

    PubMed

    Abou-Jaoudé, Wassim; Chaves, Madalena; Gouzé, Jean-Luc

    2014-12-01

    A class of piecewise affine differential (PWA) models, initially proposed by Glass and Kauffman (in J Theor Biol 39:103-129, 1973), has been widely used for the modelling and the analysis of biological switch-like systems, such as genetic or neural networks. Its mathematical tractability facilitates the qualitative analysis of dynamical behaviors, in particular periodic phenomena which are of prime importance in biology. Notably, a discrete qualitative description of the dynamics, called the transition graph, can be directly associated to this class of PWA systems. Here we present a study of periodic behaviours (i.e. limit cycles) in a class of two-dimensional piecewise affine biological models. Using concavity and continuity properties of Poincaré maps, we derive structural principles linking the topology of the transition graph to the existence, number and stability of limit cycles. These results notably extend previous works on the investigation of structural principles to the case of unequal and regulated decay rates for the 2-dimensional case. Some numerical examples corresponding to minimal models of biological oscillators are treated to illustrate the use of these structural principles.

  9. Human sexual contact network as a bipartite graph

    NASA Astrophysics Data System (ADS)

    Ergün, Güler

    2002-05-01

    A simple model to encapsulate the essential growth properties of the web of human sexual contacts is presented. In the model only heterosexual connections are considered, represented by a randomly growing bipartite graph in which the male and female contact networks grow simultaneously. The time evolution of the model is analysed by a rate-equation approach, confirming that the male and female sexual contact distributions decay as power laws with exponents depending on the influx and charisma of the sexes.
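A crude simulation of such a growing bipartite graph is easy to set up. The sketch below adds one individual of each sex per step and attaches it to an existing member of the opposite sex with probability proportional to degree plus a constant "charisma" term; this linear preferential-attachment rule is an assumption standing in for the paper's rate-equation model, and no specific exponents are claimed.

```python
import random

random.seed(3)
T = 2000
charisma = 1.0                      # hypothetical initial attractiveness
male_deg, female_deg = [1], [1]     # start with one linked pair

for _ in range(T):
    # New male picks an existing female with probability ~ (degree + charisma)
    w = [d + charisma for d in female_deg]
    f = random.choices(range(len(female_deg)), weights=w)[0]
    male_deg.append(1)
    female_deg[f] += 1
    # New female picks an existing male the same way
    w = [d + charisma for d in male_deg]
    m = random.choices(range(len(male_deg)), weights=w)[0]
    female_deg.append(1)
    male_deg[m] += 1

# Heavy tail: the most-connected individuals far exceed the mean degree
print(max(male_deg), sum(male_deg) / len(male_deg))
```

Varying the charisma parameter shifts how strongly attachment favours already-popular individuals, which is the mechanism the abstract links to the power-law exponents.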

  10. Automatic prone to supine haustral fold matching in CT colonography using a Markov random field model.

    PubMed

    Hampshire, Thomas; Roth, Holger; Hu, Mingxing; Boone, Darren; Slabaugh, Greg; Punwani, Shonit; Halligan, Steve; Hawkes, David

    2011-01-01

    CT colonography is routinely performed with the patient prone and supine to differentiate fixed colonic pathology from mobile faecal residue. We propose a novel method to automatically establish correspondence. Haustral folds are detected using a graph cut method applied to a surface curvature-based metric, where image patches are generated using endoluminal CT colonography surface rendering. The intensity difference between image pairs, along with additional neighbourhood information to enforce geometric constraints, is used with a Markov Random Field (MRF) model to estimate the fold labelling assignment. The method achieved fold matching accuracies of 83.1% and 88.5% with and without local colonic collapse, respectively. Moreover, it improves an existing surface-based registration algorithm, decreasing mean registration error from 9.7 mm to 7.7 mm in cases exhibiting collapse.

  11. [Local population of Eritrichium caucasicum as an object of mathematical modelling. I. Life cycle graph and a nonautonomous matrix model].

    PubMed

    Logofet, D O; Belova, I N; Kazantseva, E S; Onipchenko, V G

    2016-01-01

    For the plant species, which is considered a short-lived perennial, we have composed a scale of ontogenetic stages and the life cycle graph (LCG) according to annual observations on permanent sample plots in an Alpine lichen heath during the 2009-2014 period. The LCG that reflects seed reproduction has been reduced to one that avoids the stage of the soil seed bank, yet preserves the arcs of annual recruitment. The corresponding matrix model of stage-structured population dynamics has four stages: juvenile plants (including seedlings), virginal, generative, and 'terminally generative' (the plants die after seed production). Model calibration reduces to directly calculating the rates of transition between stages and those of delays within stages from the data of only one time step, while keeping the two reproduction rates uncertain, yet confined to the quantitative bounds of observed recruitment. This has enabled us to determine a feasible range for the dominant eigenvalue of the model matrix, i.e., the quantitative bounds for the measure of how well the local population adapts to its environment, at each of the five time steps, resulting in a formally nonautonomous model. To obtain 'age-specific parameters' from a stage-classified model, we have applied the technique that constructs a virtual absorbing Markov chain and calculates its fundamental matrix. In a nonautonomous model, the estimates of life expectancy also depend on the time of observation (which fixes certain environmental conditions), and vary from two to nearly seven years. The estimates reveal just how short-lived the short-lived perennial actually is, while their range motivates the task of averaging the model matrices over the whole period of observation. The model indicates that Eritrichium caucasicum plants spend most of their life span in the virginal stage under each of the environmental conditions observed, thus revealing the place retention strategy of C. Körner (2003), or the delayed

  13. Sums of random matrices and the Potts model on random planar maps

    NASA Astrophysics Data System (ADS)

    Atkin, Max R.; Niedner, Benjamin; Wheater, John F.

    2016-05-01

    We compute the partition function of the q-states Potts model on a random planar lattice with p ≤ q allowed, equally weighted colours on a connected boundary. To this end, we employ its matrix model representation in the planar limit, generalising a result by Voiculescu for the addition of random matrices to a situation beyond free probability theory. We show that the partition functions with p and q - p colours on the boundary are related algebraically. Finally, we investigate the phase diagram of the model when 0 ≤ q ≤ 4 and comment on the conformal field theory description of the critical points.

  14. A Clustering Graph Generator

    SciTech Connect

    Winlaw, Manda; De Sterck, Hans; Sanders, Geoffrey

    2015-10-26

    In very simple terms a network can be defined as a collection of points joined together by lines. Thus, networks can be used to represent connections between entities in a wide variety of fields including engineering, science, medicine, and sociology. Many large real-world networks share a surprising number of properties, leading to a strong interest in model development research; techniques have been developed for building synthetic networks that capture these similarities and replicate real-world graphs. Modeling these real-world networks serves two purposes. First, building models that mimic the patterns and properties of real networks helps to understand the implications of these patterns and helps determine which patterns are important. If we develop a generative process to synthesize real networks we can also examine which growth processes are plausible and which are not. Secondly, high-quality, large-scale network data is often not available, because of economic, legal, technological, or other obstacles [7]. Thus, there are many instances where the systems of interest cannot be represented by a single exemplar network. As one example, consider the field of cybersecurity, where systems require testing across diverse threat scenarios and validation across diverse network structures. In these cases, where there is no single exemplar network, the systems must instead be modeled as a collection of networks in which the variation among them may be just as important as their common features. By developing processes to build synthetic models, so-called graph generators, we can build synthetic networks that capture both the essential features of a system and realistic variability. Then we can use such synthetic graphs to perform tasks such as simulations, analysis, and decision making. We can also use synthetic graphs to performance-test graph analysis algorithms, including clustering algorithms and anomaly detection algorithms.

  15. Analog model for quantum gravity effects: phonons in random fluids.

    PubMed

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  16. A numerical study of the 3D random interchange and random loop models

    NASA Astrophysics Data System (ADS)

    Barp, Alessandro; Barp, Edoardo Gabriele; Briol, François-Xavier; Ueltschi, Daniel

    2015-08-01

    We have studied numerically the random interchange model and related loop models on the three-dimensional cubic lattice. We have determined the transition time for the occurrence of long loops. The joint distribution of the lengths of long loops is Poisson-Dirichlet with parameter 1 or 1/2.
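The emergence of long cycles under random interchanges can be illustrated in the mean-field (complete-graph) version of the model, with transpositions between arbitrary element pairs rather than the paper's nearest-neighbour swaps on the cubic lattice:

```python
import random

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list (i -> perm[i])."""
    seen, out = set(), []
    for i in range(len(perm)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        out.append(length)
    return sorted(out, reverse=True)

random.seed(4)
N, steps = 500, 2000               # well past the transition point
perm = list(range(N))
for _ in range(steps):
    i, j = random.randrange(N), random.randrange(N)
    perm[i], perm[j] = perm[j], perm[i]   # one random interchange

lengths = cycle_lengths(perm)
print(lengths[:3], sum(lengths))   # long cycles dominate after the transition
```

Tracking the largest cycle lengths as `steps` grows reproduces the transition qualitatively; the lattice geometry and the Poisson-Dirichlet statistics are, of course, the substance of the actual study.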

  17. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  18. Fluorescence microscopy image noise reduction using a stochastically-connected random field model

    PubMed Central

    Haider, S. A.; Cameron, A.; Siva, P.; Lui, D.; Shafiee, M. J.; Boroomand, A.; Haider, N.; Wong, A.

    2016-01-01

    Fluorescence microscopy is an essential part of a biologist’s toolkit, allowing the assaying of many parameters, such as the subcellular localization of proteins, changes in cytoskeletal dynamics, protein-protein interactions, and the concentration of specific cellular ions. A fundamental challenge in using fluorescence microscopy is the presence of noise. This study introduces a novel approach to reducing noise in fluorescence microscopy images. The noise reduction problem is posed as a Maximum A Posteriori estimation problem, and solved using a novel random field model called the stochastically-connected random field (SRF), which combines random graph and field theory. Experimental results using synthetic and real fluorescence microscopy data show the proposed approach achieving strong noise reduction performance when compared to several other noise reduction algorithms, using quantitative metrics. The proposed SRF approach achieved strong performance in terms of signal-to-noise ratio in the synthetic results and high signal-to-noise and contrast-to-noise ratios in the real fluorescence microscopy data, and was able to maintain cell structure and subtle details while reducing background and intracellular noise. PMID:26884148

  19. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random Forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper is related to attribute split measures and is a two-step process: first, a theoretical study of the five selected split measures is carried out and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are then verified by empirical analysis, in which a random forest is generated using each of the five selected split measures, chosen one at a time, i.e. a random forest using information gain, a random forest using gain ratio, and so on. Next, based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, the individual decision trees in the Random Forest are generated using different split measures. The model is augmented by weighted voting based on the strength of the individual trees. The new approach has shown a notable increase in the accuracy of random forest.
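
Split measures of the kind compared in such a study are small, self-contained formulas. A minimal sketch of two of them, information gain (entropy-based) and the Gini index, on illustrative label data (the function names and the example are ours, not the paper's):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a label multiset."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into `left`/`right`."""
    n = len(parent)
    child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - child

# A perfectly separating split removes all impurity:
parent = ['a'] * 4 + ['b'] * 4
left, right = ['a'] * 4, ['b'] * 4
```

Gain ratio, another measure commonly compared, divides information gain by the intrinsic entropy of the split itself to penalize many-valued attributes.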

  20. Superstatistical analysis and modelling of heterogeneous random walks

    PubMed Central

    Metzner, Claus; Mark, Christoph; Steinwachs, Julian; Lautscham, Lena; Stadler, Franz; Fabry, Ben

    2015-01-01

    Stochastic time series are ubiquitous in nature. In particular, random walks with time-varying statistical properties are found in many scientific disciplines. Here we present a superstatistical approach to analyse and model such heterogeneous random walks. The time-dependent statistical parameters can be extracted from measured random walk trajectories with a Bayesian method of sequential inference. The distributions and correlations of these parameters reveal subtle features of the random process that are not captured by conventional measures, such as the mean-squared displacement or the step width distribution. We apply our new approach to migration trajectories of tumour cells in two and three dimensions, and demonstrate the superior ability of the superstatistical method to discriminate cell migration strategies in different environments. Finally, we show how the resulting insights can be used to design simple and meaningful models of the underlying random processes. PMID:26108639
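
The idea of time-varying step statistics is easy to illustrate. Below is a toy heterogeneous walk whose step width switches between two regimes; a simple windowed standard deviation (a crude stand-in for the paper's Bayesian sequential inference) recovers the two regimes that a single pooled measure would average away. All parameter values are illustrative:

```python
import random, statistics

def heterogeneous_walk(n_steps, sigmas, seg_len, seed=1):
    """1D random walk whose Gaussian step width switches between the
    values in `sigmas` every `seg_len` steps, a toy stand-in for a
    superstatistical process with time-varying parameters."""
    rng = random.Random(seed)
    x, traj = 0.0, [0.0]
    for i in range(n_steps):
        sigma = sigmas[(i // seg_len) % len(sigmas)]
        x += rng.gauss(0.0, sigma)
        traj.append(x)
    return traj

traj = heterogeneous_walk(4000, sigmas=[0.5, 2.0], seg_len=500)
steps = [b - a for a, b in zip(traj, traj[1:])]
# Windowed step widths recover the two regimes that a single pooled
# standard deviation (a conventional measure) would hide.
window_stds = [statistics.stdev(steps[i:i + 500]) for i in range(0, 4000, 500)]
```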

  1. Hyperspectral Data Classification Using Factor Graphs

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Müller, R.; Palubinskas, G.; Reinartz, P.

    2012-07-01

    Accurate classification of hyperspectral data remains a challenging task, and new classification methods are being developed to meet the demands of hyperspectral data use. The objective of this paper is to develop a new method for hyperspectral data classification that ensures classification model properties such as transferability, generalization, and probabilistic interpretation. While factor graphs (undirected graphical models) are unfortunately not widely employed in remote sensing tasks, these models possess important properties such as the representation of complex systems for modelling estimation/decision-making tasks. In this paper we present a new method for hyperspectral data classification using factor graphs. A factor graph (a bipartite graph consisting of variable and factor vertices) allows the factorization of a more complex function, leading to the definition of variables (employed to store input data), latent variables (which bridge an abstract class to the data), and factors (defining prior probabilities for spectral features and abstract classes; mapping input data to a mixture of spectral features and further bridging the mixture to an abstract class). Latent variables play an important role by defining a two-level mapping of the input spectral features to a class. Configuring (learning) the model on training data yields a parameter set that bridges the input data to a class. The classification algorithm is as follows. Spectral bands are separately pre-processed (unsupervised clustering is used) to be defined on a finite domain (alphabet), leading to a representation of the data by a multinomial distribution. The represented hyperspectral data are used as input evidence (the evidence vector is selected pixelwise) in a configured factor graph, and an inference is run, yielding the posterior probability for each class. Variational inference (mean field) allows plausible results to be obtained with a low calculation time.

  2. Boosting for multi-graph classification.

    PubMed

    Wu, Jia; Pan, Shirui; Zhu, Xingquan; Cai, Zhihua

    2015-03-01

    In this paper, we formulate a novel graph-based learning problem, multi-graph classification (MGC), which aims to learn a classifier from a set of labeled bags each containing a number of graphs. A bag is labeled positive if at least one graph in the bag is positive, and negative otherwise. Such a multi-graph representation can be used for many real-world applications, such as webpage classification, where a webpage can be regarded as a bag with the texts and images inside the webpage represented as graphs. This problem is a generalization of multi-instance learning (MIL) but with vital differences, mainly because instances in MIL share a common feature space whereas no feature is available to represent graphs in a multi-graph bag. To solve the problem, we propose a boosting-based multi-graph classification framework (bMGC). Given a set of labeled multi-graph bags, bMGC employs dynamic weight adjustment at both the bag and graph levels to select one subgraph in each iteration as a weak classifier. In each iteration, bag and graph weights are adjusted such that an incorrectly classified bag receives a higher weight, because its predicted bag label conflicts with the genuine label, whereas an incorrectly classified graph receives a lower weight if the graph is in a positive bag (or a higher weight if the graph is in a negative bag). Accordingly, bMGC is able to differentiate graphs in positive and negative bags to derive effective classifiers that form a boosting model for MGC. Experiments and comparisons on real-world multi-graph learning tasks demonstrate the performance of the algorithm.

  3. Semantic graphs and associative memories.

    PubMed

    Pomi, Andrés; Mizraji, Eduardo

    2004-12-01

    Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.
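
The central claim, that a correlation-matrix memory coded with the standard basis is exactly the adjacency matrix of the association graph, can be checked in a few lines (a sketch with hypothetical function names; the paper works with general vector codings, of which this is the standard-basis special case):

```python
def memory_matrix(associations, n):
    """Correlation-matrix memory storing input->output pairs coded with
    the standard basis: each association i -> j contributes the outer
    product e_j e_i^T, i.e. M[j][i] = 1."""
    M = [[0] * n for _ in range(n)]
    for i, j in associations:
        M[j][i] = 1
    return M

def recall(M, i):
    """Presenting basis vector e_i retrieves column i: the stored associates."""
    return [row[i] for row in M]

# With standard-basis coding the memory *is* the adjacency matrix of
# the directed association graph, as the abstract states.
edges = [(0, 1), (1, 2), (2, 0)]
M = memory_matrix(edges, 3)
```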

  4. Semantic graphs and associative memories

    NASA Astrophysics Data System (ADS)

    Pomi, Andrés; Mizraji, Eduardo

    2004-12-01

    Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.

  5. Understanding Graphs & Charts.

    ERIC Educational Resources Information Center

    Cleary, John J.; Gravely, Mary Liles

    Developed by educators from the Emily Griffith Opportunity School, this teacher's guide was developed for a 4-hour workshop to teach employees how to read the charts and graphs they need in the workplace. The unit covers four types of graphs: pictographs, bar graphs, line graphs, and circle graphs. The guide is divided into four sections: reading…

  6. Universal spectral statistics in quantum graphs.

    PubMed

    Gnutzmann, Sven; Altland, Alexander

    2004-11-01

    We prove that the spectrum of an individual chaotic quantum graph shows universal spectral correlations, as predicted by random-matrix theory. The stability of these correlations with regard to nonuniversal corrections is analyzed in terms of the linear operator governing the classical dynamics on the graph.

  7. Models construction for acetone-butanol-ethanol fermentations with acetate/butyrate consecutively feeding by graph theory.

    PubMed

    Li, Zhigang; Shi, Zhongping; Li, Xin

    2014-05-01

    Several fermentations with consecutive feeding of acetate/butyrate were conducted in a 7 L fermentor, and the results indicated that exogenous acetate/butyrate enhanced solvent productivities by 47.1% and 39.2% respectively, and changed butyrate/acetate ratios greatly. The extracellular butyrate/acetate ratios were then used to calculate the acid formation rates, and the results revealed that the acetate and butyrate formation pathways were almost blocked by feeding of the corresponding acids. In addition, models for the acetate/butyrate feeding fermentations were constructed by graph theory based on the calculation results and relevant reports. Solvent concentrations and butanol/acetone ratios of these fermentations were also calculated, and the results of the model calculations matched the fermentation data accurately, which demonstrated that the models were constructed in a reasonable way.

  8. Random Models in the Classroom 1: An Example

    ERIC Educational Resources Information Center

    Selkirk, Keith

    1973-01-01

    A mathematical model of soccer-league competition results is set up and investigated using simple randomization techniques. The hypothesis that winning the league championship was purely luck is statistically tested. (JP)
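
The pure-luck null hypothesis can be simulated directly: play every match with equally likely win/draw/loss outcomes and look at the distribution of the champion's points. The league size, the two-points-for-a-win scoring, and the uniform outcome probabilities below are illustrative assumptions, not necessarily Selkirk's exact setup:

```python
import random

def random_season_points(n_teams, seed):
    """One simulated season in which every result is pure luck: each
    ordered pair plays once (home and away across the double loop),
    with win/draw/loss equally likely, 2 points for a win, 1 for a draw."""
    rng = random.Random(seed)
    pts = [0] * n_teams
    for i in range(n_teams):
        for j in range(n_teams):
            if i == j:
                continue
            r = rng.random()
            if r < 1 / 3:        # home win
                pts[i] += 2
            elif r < 2 / 3:      # draw
                pts[i] += 1
                pts[j] += 1
            else:                # away win
                pts[j] += 2
    return pts

# Distribution of the champion's total under the pure-luck hypothesis:
champions = [max(random_season_points(20, s)) for s in range(200)]
```

Since every game distributes exactly 2 points, the league mean is fixed at 38, so the champion always finishes at or above it; the simulated spread shows how far above chance alone can carry a team.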

  9. A conceptual model for quantifying connectivity using graph theory and cellular (per-pixel) approach

    NASA Astrophysics Data System (ADS)

    Singh, Manudeo; Sinha, Rajiv; Tandon, Sampat K.

    2016-04-01

    pathways will show changes under different LULC conditions even if the slope remains the same. The graphical approach provides the statistics of connected and disconnected graph elements (edges, nodes) and graph components, thereby allowing the quantification of structural connectivity. This approach also quantifies the dynamic connectivity by allowing the measurement of the fluxes (e.g. via hydrographs or sedimentographs) at any node as well as at any system outlet. The contribution of any sub-system can be understood by removing the remaining sub-systems which can be conveniently achieved by masking associated graph elements.

  10. A Gompertzian model with random effects to cervical cancer growth

    SciTech Connect

    Mazlan, Mazma Syahidatul Ayuni; Rosli, Norhayati

    2015-05-15

    In this paper, a Gompertzian model with random effects is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via maximum likelihood estimation. We apply a 4-stage stochastic Runge-Kutta (SRK4) scheme to solve the stochastic model numerically. The adequacy of the mathematical model is measured by comparing the simulated results with the clinical data of cervical cancer growth. Low values of the root mean-square error (RMSE) of the Gompertzian model with random effects indicate a good fit.

  11. A Gompertzian model with random effects to cervical cancer growth

    NASA Astrophysics Data System (ADS)

    Mazlan, Mazma Syahidatul Ayuni; Rosli, Norhayati

    2015-05-01

    In this paper, a Gompertzian model with random effects is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via maximum likelihood estimation. We apply a 4-stage stochastic Runge-Kutta (SRK4) scheme to solve the stochastic model numerically. The adequacy of the mathematical model is measured by comparing the simulated results with the clinical data of cervical cancer growth. Low values of the root mean-square error (RMSE) of the Gompertzian model with random effects indicate a good fit.
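
A stochastic Gompertz trajectory of the kind fitted here can be sketched with a simple Euler-Maruyama scheme (simpler than the SRK4 scheme the paper applies; the drift/diffusion form and all parameter values below are illustrative assumptions):

```python
import math, random

def gompertz_em(x0, a, b, sigma, dt, n_steps, seed=0):
    """Euler-Maruyama integration of a stochastic Gompertz model
    dX = X (a - b ln X) dt + sigma X dW. The deterministic part
    saturates at exp(a/b); multiplicative noise perturbs around it."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        drift = x * (a - b * math.log(x))
        x += drift * dt + sigma * x * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# Illustrative parameters: carrying capacity exp(1.0 / 0.5) ~ 7.39.
final = gompertz_em(x0=0.1, a=1.0, b=0.5, sigma=0.05, dt=0.01, n_steps=2000)
```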

  12. Rumor spreading models with random denials

    NASA Astrophysics Data System (ADS)

    Giorno, Virginia; Spina, Serena

    2016-11-01

    The concept of denial is introduced into rumor spreading processes. Denials occur at a certain rate and reset the process to the initial situation. A population of N individuals is subdivided into ignorants, spreaders and stiflers; at the initial time there is only one spreader and the rest of the population is ignorant. Denials are introduced into the classic DK model and into its generalization, in which a spreader can transmit the rumor to at most k ignorants. The steady-state densities are analyzed for these models. Finally, a numerical analysis is performed to study the role of the involved parameters and to compare the proposed models.
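
The mechanism can be sketched with a discrete-event DK/Maki-Thompson-style simulation in which a denial resets the population to its initial state. The contact rules and rates below are illustrative, not the paper's exact parametrization; without denials the simulation reproduces the classic result that roughly 20% of the population never hears the rumor:

```python
import random

def dk_rumor(n, denial_rate=0.0, seed=3):
    """One spreader starts; at each event a random spreader contacts a
    random individual: an ignorant becomes a spreader, otherwise the
    contacting spreader turns stifler ('R'). With probability
    `denial_rate` per event a denial resets everyone to the start."""
    rng = random.Random(seed)
    def initial():
        return ['I'] * (n - 1) + ['S']        # n-1 ignorants, one spreader
    state = initial()
    while 'S' in state:
        if denial_rate and rng.random() < denial_rate:
            state = initial()                 # denial: back to square one
            continue
        s = rng.choice([k for k, v in enumerate(state) if v == 'S'])
        other = rng.randrange(n)
        if other == s:
            continue
        if state[other] == 'I':
            state[other] = 'S'                # rumor transmitted
        else:
            state[s] = 'R'                    # spreader becomes stifler
    return state.count('I') / n

ignorant_frac = dk_rumor(1000)   # no denials: a sizeable minority never hears it
```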

  13. A new efficient algorithm generating all minimal S-T cut-sets in a graph-modeled network

    NASA Astrophysics Data System (ADS)

    Malinowski, Jacek

    2016-06-01

    A new algorithm finding all minimal s-t cut-sets in a graph-modeled network with failing links and nodes is presented. It is based on the analysis of the tree of acyclic s-t paths connecting a given pair of nodes in the considered structure. The construction of such a tree is required by many existing algorithms for s-t cut-set generation in order to eliminate "stub" edges or subgraphs through which no acyclic path passes. The algorithm operates on the acyclic-paths tree alone, i.e. no other analysis of the network's topology is necessary. It can be applied to directed and undirected graphs, as well as partly directed ones. It is worth noting that the cut-sets can be composed of both links and nodes, while many known algorithms do not take nodes into account, which is quite restricting from the practical point of view. The developed cut-set generation technique makes the algorithm significantly faster than most of the previous methods, as proved by the experiments.

  14. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification

    PubMed Central

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs). PMID:26985826

  15. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    PubMed

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).

  16. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model.

    PubMed

    Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena

    2015-01-01

    We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, the node degree, and the effective radius, i.e. the maximal distance at which two neurons are expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate the different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, a property that might affect the network's capacity to exhibit different dynamical regimes. PMID:26113811

  17. A random spatial network model based on elementary postulates

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1989-01-01

    In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. -from Authors
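
Spanning trees of a grid that are "equally likely" can be sampled with Wilson's algorithm, which builds a uniform spanning tree from loop-erased random walks (a standard technique, not necessarily the construction the authors analyze):

```python
import random

def grid_neighbors(v, w, h):
    """4-neighbours of cell v = (x, y) inside a w x h grid."""
    x, y = v
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < w and 0 <= y + dy < h:
            yield (x + dx, y + dy)

def wilson_ust(w, h, seed=7):
    """Uniform spanning tree of a w x h grid via Wilson's algorithm:
    repeated loop-erased random walks from unvisited cells to the tree,
    where loop erasure is implemented by keeping only the last exit
    direction taken from each cell."""
    rng = random.Random(seed)
    in_tree = {(0, 0)}
    parent = {}
    for start in [(x, y) for x in range(w) for y in range(h)]:
        if start in in_tree:
            continue
        step, v = {}, start
        while v not in in_tree:          # walk until the tree is hit
            step[v] = rng.choice(list(grid_neighbors(v, w, h)))
            v = step[v]
        v = start                        # retrace the loop-erased path
        while v not in in_tree:
            parent[v] = step[v]
            in_tree.add(v)
            v = step[v]
    return parent                        # tree edges: v -> parent[v]

tree = wilson_ust(8, 8)
```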

  18. Hybrid graphs as a framework for the small-world effect

    NASA Astrophysics Data System (ADS)

    Lehmann, Katharina A.; Post, Hendrik D.; Kaufmann, Michael

    2006-05-01

    In this paper we formalize the small-world effect, which describes the surprising fact that a hybrid graph composed of a local graph component and a very sparse random graph has a diameter of O(ln n), whereas the diameter of each component alone is much higher. We show that a large family of these hybrid graphs exhibits this effect and that this generalized family also includes classic small-world models proposed by various authors, although not all of them are captured by the small-world definition given by Watts and Strogatz. Furthermore, we give a detailed upper bound on the hybrid graph's diameter for different choices of the expected number of random edges, by applying a new kind of proof pattern that is applicable to a large number of hybrid graphs. The focus in this paper is on presenting a flexible family of hybrid graphs showing the small-world effect that can be tuned closely to real-world systems.
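
The effect is easy to reproduce numerically: take a ring lattice (the local component), add a handful of random edges, and measure the diameter by BFS. The graph sizes below are illustrative:

```python
import random
from collections import deque

def diameter(adj):
    """Exact diameter by BFS from every node (fine at this scale)."""
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def hybrid_graph(n, extra_edges, seed=5):
    """Ring lattice (the 'local' component) plus a sparse random graph."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for _ in range(extra_edges):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return adj

ring = hybrid_graph(200, 0)      # pure ring: diameter n/2
hybrid = hybrid_graph(200, 20)   # a few random shortcuts shrink it drastically
```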

  19. The Random Walk Drainage Simulation Model as a Teaching Exercise

    ERIC Educational Resources Information Center

    High, Colin; Richards, Paul

    1972-01-01

    Practical instructions for using the random walk drainage network simulation model as a teaching exercise are given and the results discussed. A source of directional bias in the resulting simulated drainage patterns is identified and given an interpretation in terms of the model. Three points of educational value concerning the model are…
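
A minimal classroom-style version of such a random walk drainage simulation: one walker starts in each top-row cell of a grid and steps down-left, straight down, or down-right at random; overlapping paths accumulate "discharge". This is a simplified sketch, not the exact model discussed in the article:

```python
import random

def random_walk_drainage(width, height, seed=11):
    """One walker per top-row cell steps down-left, down, or down-right
    at random; visit counts play the role of accumulated discharge in
    the simulated drainage network."""
    rng = random.Random(seed)
    visits = [[0] * width for _ in range(height)]
    for x0 in range(width):
        x = x0
        for y in range(height):
            visits[y][x] += 1
            x = min(width - 1, max(0, x + rng.choice((-1, 0, 1))))
    return visits

net = random_walk_drainage(15, 20)
# Each row's total "discharge" equals the number of walkers (15);
# cells where several paths overlap accumulate higher counts.
```

Note that clamping walkers at the left and right grid edges is one simple source of directional bias of the kind the article discusses.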

  20. Hierarchical structure of the logical Internet graph

    NASA Astrophysics Data System (ADS)

    Ge, Zihui; Figueiredo, Daniel R.; Jaiswal, Sharad; Gao, Lixin

    2001-07-01

    The study of the Internet topology has recently received much attention from the research community. In particular, it has been observed that the network graph has interesting properties, such as power laws, that might be explored in a myriad of ways. Most of the work in characterizing the Internet graph is based on the physical network graph, i.e., the connectivity graph. In this paper we investigate how logical relationships between nodes of the AS graph can be used to gain insight into its structure. We characterize the logical graph using various metrics and identify the presence of power laws in the number of customers that a provider has. Using these logical relationships we define a structural model of the AS graph. The model highlights the hierarchical nature of logical relationships and the preferential connection to larger providers. We also investigate the consistency of this model over time and observe interesting properties of the hierarchical structure.

  1. Automatic Detection and Recognition of Man-Made Objects in High Resolution Remote Sensing Images Using Hierarchical Semantic Graph Model

    NASA Astrophysics Data System (ADS)

    Sun, X.; Thiele, A.; Hinz, S.; Fu, K.

    2013-05-01

    In this paper, we propose a hierarchical semantic graph model to detect and recognize man-made objects in high resolution remote sensing images automatically. Following the idea of part-based methods, our model builds a hierarchical probabilistic framework to explore both the appearance information and the semantic relationships between objects and background. This multi-level structure promises to enable a more comprehensive understanding of natural scenes. After training local classifiers to calculate part properties, we use belief propagation to transmit messages quantitatively, which enhances the utilization of the spatial constraints present in images. Besides, discriminative learning and generative learning are combined in an interleaved manner in the inference procedure, to improve the training error and recognition efficiency. The experimental results demonstrate that this method is able to detect man-made objects in complicated surroundings with satisfactory precision and robustness.

  2. Large deviation approach to the generalized random energy model

    NASA Astrophysics Data System (ADS)

    Dorlas, T. C.; Dukes, W. M. B.

    2002-05-01

    The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.

  3. Preserving Differential Privacy in Degree-Correlation based Graph Generation

    PubMed Central

    Wang, Yue; Wu, Xintao

    2014-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987

  4. Preserving Differential Privacy in Degree-Correlation based Graph Generation.

    PubMed

    Wang, Yue; Wu, Xintao

    2013-08-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model.
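
The core privacy step, perturbing model parameters with Laplace noise calibrated to sensitivity, can be sketched as follows. This sketch uses the global sensitivity for simplicity, whereas the paper's 2K variant calibrates to the smooth sensitivity; the counts and epsilon are illustrative:

```python
import math, random

def laplace_noise(scale, rng):
    """Laplace(0, scale) sample: a random sign times an Exp(1/scale) magnitude."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * -scale * math.log(1.0 - rng.random())  # 1-random() is in (0, 1]

def privatize_degree_counts(counts, epsilon, sensitivity=1.0, seed=9):
    """Add Laplace(sensitivity/epsilon) noise to each degree-correlation
    count before fitting the dK model, giving epsilon-edge-differential
    privacy for counts with the stated global sensitivity."""
    rng = random.Random(seed)
    b = sensitivity / epsilon
    return [c + laplace_noise(b, rng) for c in counts]

counts = [120, 45, 30, 8]            # illustrative dK-series counts
noisy = privatize_degree_counts(counts, epsilon=1.0)
```

The generator is then fitted to `noisy` rather than `counts`, so the released graphs depend on the original network only through the perturbed parameters.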

  5. Random walks on networks

    NASA Astrophysics Data System (ADS)

    Donnelly, Isaac

    Random walks on lattices are a widely used model for diffusion in continua. They have been used to model subdiffusive systems, systems with forcing, and reactions, as well as a combination of the three. We extend the traditional random walk framework to networks to obtain novel results. As an example, due to the small graph diameter, the early-time behaviour of subdiffusive dynamics dominates the observed system, which has implications for models of the brain or airline networks. I would like to thank the Australian American Fulbright Association.

  6. Graphing the order of the sexes: constructing, recalling, interpreting, and putting the self in gender difference graphs.

    PubMed

    Hegarty, Peter; Lemieux, Anthony F; McQueen, Grant

    2010-03-01

    Graphs seem to connote facts more than words or tables do. Consequently, they seem unlikely places to spot implicit sexism at work. Yet, in 6 studies (N = 741), women and men constructed (Study 1) and recalled (Study 2) gender difference graphs with men's data first, and graphed powerful groups (Study 3) and individuals (Study 4) ahead of weaker ones. Participants who interpreted graph order as evidence of author "bias" inferred that the author graphed his or her own gender group first (Study 5). Women's, but not men's, preferences to graph men first were mitigated when participants graphed a difference between themselves and an opposite-sex friend prior to graphing gender differences (Study 6). Graph production and comprehension are affected by beliefs and suppositions about the groups represented in graphs to a greater degree than cognitive models of graph comprehension or realist models of scientific thinking have yet acknowledged.

  7. The Volume Regulation Graph versus the Ejection Fraction as Metrics of Left Ventricular Performance in Heart Failure with and without a Preserved Ejection Fraction: A Mathematical Model Study

    PubMed Central

    Faes, Theo JC; Kerkhof, Peter LM

    2015-01-01

    In left ventricular heart failure, often a distinction is made between patients with a reduced and a preserved ejection fraction (EF). As EF is a composite metric of both the end-diastolic volume (EDV) and the end-systolic ventricular volume (ESV), the lucidity of the EF is sometimes questioned. As an alternative, the ESV–EDV graph is advocated. This study identifies the dependence of the EF and the ESV–EDV graph on the major determinants of ventricular performance. Numerical simulations were made using a model of the systemic circulation, consisting of an atrium–ventricle valves combination; a simple constant pressure as venous filling system; and a three-element Windkessel extended with a venous system. ESV–EDV graphs and EFs were calculated using this model while varying one by one the filling pressure, diastolic and systolic ventricular elastances, and diastolic pressure in the aorta. In conclusion, the ESV–EDV graph separates diastolic from systolic dysfunction, while the EF conflates these two pathologies. Therefore, the ESV–EDV graph can provide an advantage over EF in heart failure studies. PMID:26052232
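    The objection to EF follows from its definition, EF = (EDV - ESV) / EDV: very different volume states collapse onto the same EF, while the (ESV, EDV) pair keeps them apart. A minimal illustration with made-up volumes:

```python
def ejection_fraction(edv, esv):
    """EF compresses the (EDV, ESV) pair into a single ratio."""
    return (edv - esv) / edv

# A normal-sized and a dilated ventricle (illustrative numbers, in mL)
# share EF = 0.5, yet their (ESV, EDV) points differ -- exactly the
# information the ESV-EDV graph preserves and the EF discards.
normal = (120.0, 60.0)    # (EDV, ESV)
dilated = (240.0, 120.0)
ef_normal = ejection_fraction(*normal)
ef_dilated = ejection_fraction(*dilated)
```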

  8. Relativistic diffusion processes and random walk models

    SciTech Connect

    Dunkel, Joern; Talkner, Peter; Haenggi, Peter

    2007-02-15

    The nonrelativistic standard model for a continuous, one-parameter diffusion process in position space is the Wiener process. As is well known, the Gaussian transition probability density function (PDF) of this process is in conflict with special relativity, as it permits particles to propagate faster than the speed of light. A frequently considered alternative is provided by the telegraph equation, whose solutions avoid superluminal propagation speeds but suffer from singular (noncontinuous) diffusion fronts on the light cone, which are unlikely to exist for massive particles. It is therefore advisable to explore other alternatives as well. In this paper, a generalized Wiener process is proposed that is continuous, avoids superluminal propagation, and reduces to the standard Wiener process in the nonrelativistic limit. The corresponding relativistic diffusion propagator is obtained directly from the nonrelativistic Wiener propagator, by rewriting the latter in terms of an integral over actions. The resulting relativistic process is non-Markovian, in accordance with the known fact that nontrivial continuous, relativistic Markov processes in position space cannot exist. Hence, the proposed process defines a consistent relativistic diffusion model for massive particles and provides a viable alternative to the solutions of the telegraph equation.

  9. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
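    The "simple encoding" idea can be sketched as one message-passing layer: each atom updates its feature vector from its own features plus the sum over its bonded neighbours. This toy layer uses scalar weights and a ReLU for clarity; the published architecture learns weight matrices and also encodes bonds and distances, which are omitted here.

```python
def graph_conv_layer(atom_feats, adj, w_self, w_neigh):
    """One toy graph-convolution step: mix each atom's own features with the
    sum of its neighbours' features, then apply a ReLU."""
    new_feats = []
    for i, feats in enumerate(atom_feats):
        neigh_sum = [0.0] * len(feats)
        for j in adj[i]:
            for k, x in enumerate(atom_feats[j]):
                neigh_sum[k] += x
        new_feats.append([max(0.0, w_self * a + w_neigh * b)  # ReLU
                          for a, b in zip(feats, neigh_sum)])
    return new_feats

# A 3-atom chain (say C-C-O) with 2-dimensional one-hot-ish features.
feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
adj = {0: [1], 1: [0, 2], 2: [1]}
out = graph_conv_layer(feats, adj, w_self=1.0, w_neigh=0.5)
```

    After one step the middle atom's features already reflect both its carbon and its oxygen neighbour, which is the "data-driven" mixing that fixed fingerprints cannot do.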

  10. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  11. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular "graph convolutions", a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph---atoms, bonds, distances, etc.---which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  12. Money creation process in a random redistribution model

    NASA Astrophysics Data System (ADS)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
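    The exchange process studied above can be simulated directly. The sketch below is a generic random-exchange-with-debt model (the uniform pairing rule and parameters are assumptions for illustration, not the paper's exact kinetics): net money is conserved, while gross money held as positive balances is created as counterpart to debt.

```python
import random

def random_exchange(n_agents, steps, debt_limit, seed=0):
    """Random pairwise exchange: at each step the payer hands one unit to the
    payee, and may go into debt down to -debt_limit. Net money is conserved."""
    rng = random.Random(seed)
    balance = [0] * n_agents
    for _ in range(steps):
        payer, payee = rng.sample(range(n_agents), 2)
        if balance[payer] - 1 >= -debt_limit:
            balance[payer] -= 1
            balance[payee] += 1
    return balance

balance = random_exchange(n_agents=100, steps=10_000, debt_limit=5)
# Net money stays at its initial value (zero here), while gross money
# (the sum of positive balances) has been created against debt.
```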

  13. The melting phenomenon in random-walk model of DNA

    SciTech Connect

    Hayrapetyan, G. N.; Mamasakhlisov, E. Sh.; Papoyan, Vl. V.; Poghosyan, S. S.

    2012-10-15

    The melting phenomenon in a double-stranded homopolypeptide is considered. The relative distance between the corresponding monomers of two polymer chains is modeled by the two-dimensional random walk on the square lattice. Returns of the random walk to the origin describe the formation of hydrogen bonds between complementary units. Taking into account the two competing interactions of the monomers inside the chains, we obtain a completely denatured state at finite temperature T_c.
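    The return-to-origin events that stand for hydrogen-bond formation are easy to probe by Monte Carlo. The sketch below (a generic 2D lattice walk, not the paper's transfer-matrix treatment) estimates how often a walk re-touches the origin within a fixed number of steps.

```python
import random

def returns_to_origin(steps, walks, rng=random.Random(42)):
    """Fraction of 2D square-lattice walks that touch the origin again
    within `steps` steps; each such return models one re-formed
    hydrogen bond between the two strands."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    returned = 0
    for _ in range(walks):
        x = y = 0
        for _ in range(steps):
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
            if x == y == 0:
                returned += 1
                break
    return returned / walks

frac = returns_to_origin(steps=100, walks=500)
```

    In 2D the walk is recurrent, so the fraction tends to 1 as the step budget grows, but only logarithmically slowly, which is what makes the bound/unbound competition at finite temperature nontrivial.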

  14. Simulation of Radar Rainfall Fields: A Random Error Model

    NASA Astrophysics Data System (ADS)

    Aghakouchak, A.; Habib, E.; Bardossy, A.

    2008-12-01

    Precipitation is a major input in hydrological and meteorological models. It is believed that uncertainties due to input data will propagate in modeling hydrologic processes. Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. The superposition of random errors of different sources is one of the main factors in uncertainty of radar estimates. One way to express these uncertainties is to stochastically generate random error fields to impose them on radar measurements in order to obtain an ensemble of radar rainfall estimates. In the method introduced here, the random error consists of two components: purely random error and dependent error on the indicator variable. Model parameters of the error model are estimated using a heteroscedastic maximum likelihood model in order to account for variance heterogeneity in radar rainfall error estimates. When reflectivity values are considered, the exponent and multiplicative factor of the Z-R relationship are estimated simultaneously with the model parameters. The presented model performs better compared to the previous approaches that generally result in unaccounted heteroscedasticity in error fields and thus radar ensemble.

  15. Stability and dynamical properties of material flow systems on random networks

    NASA Astrophysics Data System (ADS)

    Anand, K.; Galla, T.

    2009-04-01

    The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, Small-World Networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
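    The stability question (does random coupling variability destabilise the flow?) can be probed with a crude linearised simulation in the spirit of the random-matrix results the abstract invokes. Everything below (the unit self-damping, Gaussian couplings, decay threshold) is an assumption for illustration, not the paper's model: weak couplings let a perturbation die out, strong ones let it grow.

```python
import random

def perturbation_decays(n, coupling, steps=2000, dt=0.01, seed=1):
    """Euler-integrate dx/dt = A x for a random interaction matrix A with
    self-damping -1 on the diagonal and i.i.d. Gaussian off-diagonal
    couplings; report whether a random initial perturbation has decayed."""
    rng = random.Random(seed)
    A = [[(-1.0 if i == j else rng.gauss(0.0, coupling)) for j in range(n)]
         for i in range(n)]
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        x = [xi + dt * sum(A[i][j] * x[j] for j in range(n))
             for i, xi in enumerate(x)]
    return sum(xi * xi for xi in x) < 1e-6

# Circular-law heuristic: stability roughly requires coupling * sqrt(n) < 1.
weak_ok = perturbation_decays(20, coupling=0.05)   # 0.05 * sqrt(20) ~ 0.22
strong_ok = perturbation_decays(20, coupling=0.5)  # 0.5  * sqrt(20) ~ 2.24
```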

  16. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.

    1994-01-01

    A methodology for simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and non-reacting shear layer present in the facility given basic assumptions about turbulence properties.

  17. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.

  18. Multi-channel MRI segmentation with graph cuts using spectral gradient and multidimensional Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Lecoeur, Jérémy; Ferré, Jean-Christophe; Collins, D. Louis; Morrisey, Sean P.; Barillot, Christian

    2009-02-01

    A new segmentation framework is presented that takes advantage of the multimodal image signature of the different brain tissues (healthy and/or pathological). This is achieved by merging three different modalities of gray-level MRI sequences into a single RGB-like MRI, hence creating a unique 3-dimensional signature for each tissue by utilising the complementary information of each MRI sequence. Using the scale-space spectral gradient operator, we can obtain a spatial gradient robust to intensity inhomogeneity. Even though it is based on psycho-visual color theory, it can be applied very efficiently to RGB-colored images. Moreover, it is not influenced by the channel assignment of each MRI. Its optimisation by the graph cuts paradigm provides a powerful and accurate tool to segment either healthy or pathological tissues in a short time (average time about ninety seconds for a brain-tissue classification). As it is a semi-automatic method, we ran experiments to quantify the number of seeds needed to perform a correct segmentation (Dice similarity score above 0.85). Depending on the different sets of MRI sequences used, this number of seeds (expressed as a percentage of the number of voxels of the ground truth) is between 6 and 16%. We tested this algorithm on BrainWeb for validation purposes (healthy tissue classification and MS lesion segmentation) and also on clinical data for tumour and MS lesion detection and tissue classification.

  19. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information terms (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  20. Fitting Partially Nonlinear Random Coefficient Models as SEMs

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Cudeck, Robert; du Toit, Stephen H. C.

    2006-01-01

    The nonlinear random coefficient model has become increasingly popular as a method for describing individual differences in longitudinal research. Although promising, the nonlinear model is not utilized as often as it might be because software options are still somewhat limited. In this article we show that a specialized version of the model…

  1. Graph anomalies in cyber communications

    SciTech Connect

    Vander Wiel, Scott A; Storlie, Curtis B; Sandine, Gary; Hagberg, Aric A; Fisk, Michael

    2011-01-11

    Enterprises monitor cyber traffic for viruses, intruders and stolen information. Detection methods look for known signatures of malicious traffic or search for anomalies with respect to a nominal reference model. Traditional anomaly detection focuses on aggregate traffic at central nodes or on user-level monitoring. More recently, however, traffic is being viewed more holistically as a dynamic communication graph. Attention to the graph nature of the traffic has expanded the types of anomalies that are being sought. We give an overview of several cyber data streams collected at Los Alamos National Laboratory and discuss current work in modeling the graph dynamics of traffic over the network. We consider global properties and local properties within the communication graph. A method for monitoring relative entropy on multiple correlated properties is discussed in detail.
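    The relative-entropy monitoring mentioned at the end boils down to computing a KL divergence between today's traffic distribution and a nominal baseline. A minimal sketch (the protocol names, counts, and probability floor are invented for illustration):

```python
import math
from collections import Counter

def relative_entropy(observed, reference, floor=1e-9):
    """KL divergence D(observed || reference) between two count distributions.
    A small probability floor keeps events unseen in the reference (the
    interesting anomalies) from producing an infinite score."""
    keys = set(observed) | set(reference)
    n_obs = sum(observed.values())
    n_ref = sum(reference.values())
    d = 0.0
    for k in keys:
        p = observed.get(k, 0) / n_obs
        q = max(reference.get(k, 0) / n_ref, floor)
        if p > 0:
            d += p * math.log(p / q)
    return d

baseline = Counter({"http": 800, "dns": 150, "ssh": 50})
today = Counter({"http": 780, "dns": 140, "ssh": 45, "telnet": 35})
score = relative_entropy(today, baseline)  # large score flags an anomaly
```

    Here the never-before-seen telnet traffic dominates the score, which is the behaviour a graph-anomaly monitor wants.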

  2. [Some exact results for random walk models with applications].

    PubMed

    Schwarz, W

    1989-01-01

    This article presents a random walk model that can be analyzed without recourse to Wald's (1947) approximation, which neglects the excess over the absorbing barriers. Hence, the model yields exact predictions for the absorption probabilities and all mean conditional absorption times. We derive these predictions in some detail and fit them to the extensive data of an identification experiment published by Green et al. (1983). The fit of the model seems satisfactory. The relationship of the model to existing classes of random walk models (SPRT and SSR; see Luce, 1986) is discussed; for certain combinations of its parameters, the model belongs either to the SPRT or to the SSR class, or to both. We stress the theoretical significance of the knowledge of exact results for the evaluation of Wald's approximation and of general properties of the several models derived from this approximation.

  3. The Effect of Random Voids in the Modified Gurson Model

    NASA Astrophysics Data System (ADS)

    Fei, Huiyang; Yazzie, Kyle; Chawla, Nikhilesh; Jiang, Hanqing

    2012-02-01

    The porous plasticity model (usually referred to as the Gurson-Tvergaard-Needleman model or modified Gurson model) has been widely used in the study of microvoid-induced ductile fracture. In this paper, we studied the effects of random voids on the porous plasticity model. Finite-element simulations were conducted to study a copper/tin/copper joint bar under uniaxial tension using the commercial finite-element package ABAQUS. A randomly distributed initial void volume fraction with different types of distribution was introduced, and the effects of this randomness on the crack path and macroscopic stress-strain behavior were studied. It was found that consideration of the random voids is able to capture more detailed and localized deformation features, such as different crack paths and different ultimate tensile strengths, and meanwhile does not change the macroscopic stress-strain behavior. It seems that the random voids are able to qualitatively explain the scattered observations in experiments while keeping the macroscopic measurements consistent.

  4. Scattering model for quantum random walks on a hypercube

    SciTech Connect

    Kosik, Jozef; Buzek, Vladimir

    2005-01-01

    Following a recent work by Hillery et al. [Phys. Rev. A 68, 032314 (2003)], we introduce a scattering model of a quantum random walk (SQRW) on a hypercube. We show that this type of quantum random walk can be reduced to the quantum random walk on the line and we derive the corresponding hitting amplitudes. We investigate the scattering properties of the hypercube, connected to the semi-infinite tails. We prove that the SQRW is a generalized version of the coined quantum random walk. We show how to implement the SQRW efficiently using a quantum circuit with standard gates. We discuss one possible version of a quantum search algorithm using the SQRW. Finally, we analyze symmetries that underlie the SQRW and may simplify its solution considerably.

  5. On a programming language for graph algorithms

    NASA Technical Reports Server (NTRS)

    Rheinboldt, W. C.; Basili, V. R.; Mesztenyi, C. K.

    1971-01-01

    An algorithmic language, GRAAL, is presented for describing and implementing graph algorithms of the type primarily arising in applications. The language is based on a set algebraic model of graph theory which defines the graph structure in terms of morphisms between certain set algebraic structures over the node set and arc set. GRAAL is modular in the sense that the user specifies which of these mappings are available with any graph. This allows flexibility in the selection of the storage representation for different graph structures. In line with its set theoretic foundation, the language introduces sets as a basic data type and provides for the efficient execution of all set and graph operators. At present, GRAAL is defined as an extension of ALGOL 60 (revised) and its formal description is given as a supplement to the syntactic and semantic definition of ALGOL. Several typical graph algorithms are written in GRAAL to illustrate various features of the language and to show its applicability.

  6. Graphing Polar Curves

    ERIC Educational Resources Information Center

    Lawes, Jonathan F.

    2013-01-01

    Graphing polar curves typically involves a combination of three traditional techniques, all of which can be time-consuming and tedious. However, an alternative method--graphing the polar function on a rectangular plane--simplifies graphing, increases student understanding of the polar coordinate system, and reinforces graphing techniques learned…

  7. Graphing for Any Grade.

    ERIC Educational Resources Information Center

    Nibbelink, William

    1982-01-01

    An instructional sequence for teaching graphing that has been extensively field tested in kindergarten through grade six is detailed. The material begins with point graphs, employs a movable y-axis to begin with minimal clutter, and has graphs constructed before reading graphs is required. (MP)

  8. Using a CBL Unit, a Temperature Sensor, and a Graphing Calculator to Model the Kinetics of Consecutive First-Order Reactions as Safe In-Class Demonstrations

    ERIC Educational Resources Information Center

    Moore-Russo, Deborah A.; Cortes-Figueroa, Jose E.; Schuman, Michael J.

    2006-01-01

    The use of Calculator-Based Laboratory (CBL) technology, the graphing calculator, and the cooling and heating of water to model the behavior of consecutive first-order reactions is presented, where B is the reactant, I is the intermediate, and P is the product for an in-class demonstration. The activity demonstrates the spontaneous and consecutive…

  9. Using Random Forest Models to Predict Organizational Violence

    NASA Technical Reports Server (NTRS)

    Levine, Burton; Bobashev, Georgly

    2012-01-01

    We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data is longitudinal; so, individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit and an example of predicting violence for an organization; finally, we present a summary of the forest in a "meta-tree."
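    The abstract does not spell out the modification, but one standard way to respect within-organization dependence in a forest's resampling step is to bootstrap whole organizations rather than individual observation-years. A hypothetical sketch of that grouped bootstrap (not necessarily the authors' adjustment):

```python
import random

def grouped_bootstrap(records, group_key, rng=random.Random(0)):
    """Bootstrap resample that draws whole groups (here: organizations),
    so repeated observations of one organization stay together instead of
    being treated as independent rows."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    ids = list(groups)
    sample = []
    for _ in ids:  # draw as many groups as there are groups
        sample.extend(groups[rng.choice(ids)])
    return sample

# Toy longitudinal data: two organizations observed over several years.
records = [{"org": "A", "year": 1}, {"org": "A", "year": 2},
           {"org": "B", "year": 1}, {"org": "B", "year": 2}, {"org": "B", "year": 3}]
sample = grouped_bootstrap(records, "org")
```

    Each tree in the forest would then be grown on one such grouped resample.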

  10. Fredrickson-Andersen model on Bethe lattice with random pinning

    NASA Astrophysics Data System (ADS)

    Ikeda, Harukuni; Miyazaki, Kunimasa

    2015-10-01

    We study the effects of random pinning on the Fredrickson-Andersen model on the Bethe lattice. We find that the nonergodic transition temperature rises as the fraction of the pinned spins increases and the transition line terminates at a critical point. The freezing behavior of the spins is analogous to that of a randomly pinned p-spin mean-field spin glass model which has been recently reported. The diverging behavior of correlation lengths in the vicinity of the terminal critical point is found to be identical to the prediction of the inhomogeneous mode-coupling theory at the A_3 singularity point for the glass transition.

  11. Random-anisotropy Blume-Emery-Griffiths model

    NASA Technical Reports Server (NTRS)

    Maritan, Amos; Cieplak, Marek; Swift, Michael R.; Toigo, Flavio; Banavar, Jayanth R.

    1992-01-01

    The results are described of studies of a random-anisotropy Blume-Emery-Griffiths spin-1 Ising model using mean-field theory, transfer-matrix calculations, and position-space renormalization-group calculations. The interplay between the quenched randomness of the anisotropy and the annealed disorder introduced by the spin-1 model leads to a rich phase diagram with a variety of phase transitions and reentrant behavior. The results may be relevant to the study of the phase separation of He-3 - He-4 mixtures in porous media in the vicinity of the superfluid transition.

  12. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

    Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks - which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
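    The quantity at stake, algebraic connectivity, is the second-smallest eigenvalue of the graph Laplacian, and for a small graph it can be computed from scratch. The sketch below (classical Jacobi rotations, not the authors' tooling) shows a case where Fiedler's lower bound is tight: the 4-cycle has algebraic connectivity 2, equal to its node-connectivity.

```python
import math

def laplacian(n, edges):
    """Graph Laplacian L = D - A built from an edge list."""
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    return L

def eigenvalues_jacobi(A, tol=1e-10, max_iter=2000):
    """Eigenvalues of a small symmetric matrix by repeatedly zeroing the
    largest off-diagonal entry with a Jacobi rotation."""
    a = [row[:] for row in A]
    n = len(a)
    for _ in range(max_iter):
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    big, p, q = abs(a[i][j]), i, j
        if big < tol:
            break
        theta = 0.5 * math.atan2(2.0 * a[p][q], a[p][p] - a[q][q])
        c, s = math.cos(theta), math.sin(theta)
        app, aqq, apq = a[p][p], a[q][q], a[p][q]
        a[p][p] = c * c * app + 2.0 * s * c * apq + s * s * aqq
        a[q][q] = s * s * app - 2.0 * s * c * apq + c * c * aqq
        a[p][q] = a[q][p] = 0.0
        for i in range(n):
            if i != p and i != q:
                aip, aiq = a[i][p], a[i][q]
                a[i][p] = a[p][i] = c * aip + s * aiq
                a[i][q] = a[q][i] = -s * aip + c * aiq
    return sorted(a[i][i] for i in range(n))

# 4-cycle: Laplacian spectrum {0, 2, 2, 4}; algebraic connectivity = 2.
cycle = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
lam = eigenvalues_jacobi(cycle)
```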

  13. Bead-rod-spring models in random flows.

    PubMed

    Plan, Emmanuel Lance Christopher Vi Medillo; Ali, Aamir; Vincenzi, Dario

    2016-08-01

    Bead-rod-spring models are the foundation of the kinetic theory of polymer solutions. We derive the diffusion equation for the probability density function of the configuration of a general bead-rod-spring model in short-correlated Gaussian random flows. Under isotropic conditions, we solve this equation analytically for the elastic rhombus model introduced by Curtiss, Bird, and Hassager [Adv. Chem. Phys. 35, 31 (1976)]. PMID:27627227

  14. Bead-rod-spring models in random flows

    NASA Astrophysics Data System (ADS)

    Plan, Emmanuel Lance Christopher Medillo, VI; Ali, Aamir; Vincenzi, Dario

    2016-08-01

    Bead-rod-spring models are the foundation of the kinetic theory of polymer solutions. We derive the diffusion equation for the probability density function of the configuration of a general bead-rod-spring model in short-correlated Gaussian random flows. Under isotropic conditions, we solve this equation analytically for the elastic rhombus model introduced by Curtiss, Bird, and Hassager [Adv. Chem. Phys. 35, 31 (1976)].

  15. Study of Double-Weighted Graph Model and Optimal Path Planning for Tourist Scenic Area Oriented Intelligent Tour Guide

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Long, Y.; Wi, X. L.

    2014-04-01

    When tourists visit multiple scenic spots, the actual travel line is usually the most effective route through the road network, and it may differ from the planned travel line. In the field of navigation, a proposed travel line is normally generated automatically by a path-planning algorithm that considers the scenic spots' positions and the road network. But when a scenic spot covers a certain area and has multiple entrances or exits, the traditional description by a single point coordinate cannot reflect these structural features. To solve this problem, this paper focuses on the influence of scenic spots' structural features, such as multiple entrances or exits, on the path-planning process, and proposes a double-weighted graph model in which the weights of both vertices and edges can be selected dynamically. It then discusses the model-building method and an optimal path-planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal travel line derived from the proposed model and algorithm is more reasonable, and the visiting order and distance are further optimized.
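    The multiple-entrance issue fits a standard Dijkstra framework once every gate of a scenic spot is treated as a candidate source or target and vertices carry weights alongside edges. The graph below is a made-up example (two spots with two gates each, joined by road junctions), not the paper's data or exact algorithm.

```python
import heapq

def dijkstra_node_edge(adj, node_cost, sources, targets):
    """Cheapest route where both edges and vertices are weighted; a spot
    with several gates is handled by starting from every source gate and
    taking the cheapest arrival over all target gates."""
    dist = {s: node_cost[s] for s in sources}   # pay the source vertex too
    heap = [(d, s) for s, d in dist.items()]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w + node_cost[v]           # edge weight + vertex weight
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return min(dist.get(t, float("inf")) for t in targets)

# Spot A has gates a1, a2; spot B has gates b1, b2; j1, j2 are junctions.
adj = {
    "a1": [("j1", 4)], "a2": [("j2", 1)],
    "j1": [("a1", 4), ("b1", 2)], "j2": [("a2", 1), ("b2", 6)],
    "b1": [("j1", 2)], "b2": [("j2", 6)],
}
node_cost = {n: 1 for n in adj}  # uniform vertex weights for the demo
best = dijkstra_node_edge(adj, node_cost, sources=["a1", "a2"], targets=["b1", "b2"])
```

    Note that the cheapest route here enters through gate a1 even though gate a2 has the shorter first road segment, which is exactly the kind of trade-off a single-point spot description cannot express.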

  16. Random models of Menzerath-Altmann law in genomes.

    PubMed

    Baixeries, Jaume; Hernández-Fernández, Antoni; Ferrer-I-Cancho, Ramon

    2012-03-01

    Recently, a random breakage model has been proposed to explain the negative correlation between mean chromosome length and chromosome number that is found in many groups of species and is consistent with Menzerath-Altmann law, a statistical law that defines the dependency between the mean size of the whole and the number of parts in quantitative linguistics. Here, the central assumption of the model, namely that genome size is independent from chromosome number is reviewed. This assumption is shown to be unrealistic from the perspective of chromosome structure and the statistical analysis of real genomes. A general class of random models, including that random breakage model, is analyzed. For any model within this class, a power law with an exponent of -1 is predicted for the expectation of the mean chromosome size as a function of chromosome length, a functional dependency that is not supported by real genomes. The random breakage and variants keeping genome size and chromosome number independent raise no serious objection to the relevance of correlations consistent with Menzerath-Altmann law across taxonomic groups and the possibility of a connection between human language and genomes through that law.

  17. Initial Status in Growth Curve Modeling for Randomized Trials

    PubMed Central

    Chou, Chih-Ping; Chi, Felicia; Weisner, Constance; Pentz, MaryAnn; Hser, Yih-Ing

    2010-01-01

    The growth curve modeling (GCM) technique has been widely adopted in longitudinal studies to investigate progression over time. The simplest growth profile involves two growth factors, initial status (intercept) and growth trajectory (slope). Conventionally, all repeated measures of outcome are included as components of the growth profile, and the first measure is used to reflect the initial status. Selection of the initial status, however, can greatly influence study findings, especially for randomized trials. In this article, we propose an alternative GCM approach involving only post-intervention measures in the growth profile and treating the first wave after intervention as the initial status. We discuss and empirically illustrate how choices of initial status may influence study conclusions in addressing research questions in randomized trials using two longitudinal studies. Data from two randomized trials are used to illustrate that the alternative GCM approach proposed in this article offers better model fitting and more meaningful results. PMID:21572585

  18. Quantum walk coherences on a dynamical percolation graph

    NASA Astrophysics Data System (ADS)

    Elster, Fabian; Barkhofen, Sonja; Nitsche, Thomas; Novotný, Jaroslav; Gábris, Aurél; Jex, Igor; Silberhorn, Christine

    2015-08-01

    Coherent evolution governs the behaviour of all quantum systems, but in nature it is often subjected to the influence of a classical environment. For analysing quantum transport phenomena, quantum walks emerge as suitable model systems. In particular, quantum walks on percolation structures constitute an attractive platform for studying open-system dynamics of random media. Here, we present an implementation of quantum walks differing from previous experiments by achieving dynamical control of the underlying graph structure. We demonstrate the evolution of an optical time-multiplexed quantum walk over six double steps, revealing the intricate interplay between the internal and external degrees of freedom. The observation of clear non-Markovian signatures in the coin space testifies to the high coherence of the implementation and the extraordinary degree of control over all system parameters. Our work is a proof-of-principle experiment of a quantum walk on a dynamical percolation graph, paving the way towards complex simulation of quantum transport in random media.

  19. GPD: a graph pattern diffusion kernel for accurate graph classification with applications in cheminformatics.

    PubMed

    Smalter, Aaron; Huan, Jun Luke; Jia, Yi; Lushington, Gerald

    2010-01-01

    Graph data mining is an active research area. Graphs are general modeling tools for organizing information from heterogeneous sources and have been applied in many scientific, engineering, and business fields. With the fast accumulation of graph data, building highly accurate predictive models for graph data emerges as a new challenge that has not been fully explored in the data mining community. In this paper, we demonstrate a novel technique called the graph pattern diffusion (GPD) kernel. Our idea is to leverage existing frequent pattern discovery methods and to explore the application of kernel classifiers (e.g., support vector machines) in building highly accurate graph classification models. In our method, we first identify all frequent patterns from a graph database. We then map subgraphs to graphs in the graph database and use a process we call "pattern diffusion" to label nodes in the graphs. Finally, we design a graph alignment algorithm to compute the inner product of two graphs. We have tested our algorithm on a number of chemical structure data sets. The experimental results demonstrate that our method is significantly better than competing methods, such as kernel functions based on paths, cycles, and subgraphs.

  20. Constructing and sampling graphs with a given joint degree distribution.

    SciTech Connect

    Pinar, Ali; Stanton, Isabelle

    2010-09-01

    One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks, due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. An important tool for such studies is an algorithm that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Markov chain Monte Carlo method for sampling them. We also show that the state space of simple graphs with a fixed joint degree distribution is connected via end-point switches. We empirically evaluate the mixing time of this Markov chain using experiments based on the autocorrelation of each edge. These experiments show that our Markov chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
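
    A minimal sketch of the endpoint-switch idea behind such sampling (an illustration, not the authors' implementation): swapping endpoints of equal degree between two edges leaves every (degree, degree) edge count, and hence the joint degree distribution, unchanged.

```python
import random

def joint_degree_swap(edges, degree, rng):
    # Pick two edges (u, v), (x, y); if v and y have the same degree, swapping
    # them preserves every (deg, deg) edge count, i.e. the joint degree matrix.
    (u, v), (x, y) = rng.sample(edges, 2)
    if degree[v] != degree[y]:
        return False
    e1, e2 = (u, y), (x, v)
    # Keep the graph simple: reject self-loops and duplicate edges.
    present = set(edges) | {(b, a) for a, b in edges}
    if u == y or x == v or e1 in present or e2 in present:
        return False
    edges.remove((u, v)); edges.remove((x, y))
    edges.extend([e1, e2])
    return True

rng = random.Random(1)
edges = [(0, 1), (2, 3), (0, 2), (1, 3), (4, 0), (4, 2)]
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
before = dict(degree)
for _ in range(200):
    joint_degree_swap(edges, degree, rng)
# Every vertex degree (and the joint degree matrix) is untouched by construction.
```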

  1. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  2. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  3. A discrete impulsive model for random heating and Brownian motion

    NASA Astrophysics Data System (ADS)

    Ramshaw, John D.

    2010-01-01

    The energy of a mechanical system subjected to a random force with zero mean increases irreversibly and diverges with time in the absence of friction or dissipation. This random heating effect is usually encountered in phenomenological theories formulated in terms of stochastic differential equations, the epitome of which is the Langevin equation of Brownian motion. We discuss a simple discrete impulsive model that captures the essence of random heating and Brownian motion. The model may be regarded as a discrete analog of the Langevin equation, although it is developed ab initio. Its analysis requires only simple algebraic manipulations and elementary averaging concepts, but no stochastic differential equations (or even calculus). The irreversibility in the model is shown to be a consequence of a natural causal stochastic condition that is closely analogous to Boltzmann's molecular chaos hypothesis in the kinetic theory of gases. The model provides a simple introduction to several ostensibly more advanced topics, including random heating, molecular chaos, irreversibility, Brownian motion, the Langevin equation, and fluctuation-dissipation theorems.
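
    A sketch of the random heating effect described above, assuming the simplest possible discrete model (velocity kicks of random sign and no friction); the mean kinetic energy then grows linearly with the number of impulses:

```python
import random

def random_heating(steps, kick=1.0, seed=0):
    # Each impulse changes the velocity by +/-kick with equal probability.
    # <v> stays 0, but <v^2> grows by kick^2 per impulse, so the mean kinetic
    # energy increases linearly in time -- the random heating effect.
    rng = random.Random(seed)
    v = 0.0
    energies = []
    for _ in range(steps):
        v += kick if rng.random() < 0.5 else -kick
        energies.append(0.5 * v * v)
    return energies

# Averaging the final energy over many independent runs: after n impulses
# the expected kinetic energy is 0.5 * n * kick^2 (here 25.0 for n = 50).
trials, n = 2000, 50
avg = sum(random_heating(n, seed=s)[-1] for s in range(trials)) / trials
```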

  4. Assistance to neurosurgical planning: using a fuzzy spatial graph model of the brain for locating anatomical targets in MRI

    NASA Astrophysics Data System (ADS)

    Villéger, Alice; Ouchchane, Lemlih; Lemaire, Jean-Jacques; Boire, Jean-Yves

    2007-03-01

    Symptoms of neurodegenerative pathologies such as Parkinson's disease can be relieved through Deep Brain Stimulation. This neurosurgical technique relies on high-precision positioning of electrodes in specific areas of the basal ganglia and the thalamus. These subcortical anatomical targets must be located at the pre-operative stage, from a set of MRI acquired under stereotactic conditions. In order to assist surgical planning, we designed a semi-automated image analysis process for extracting anatomical areas of interest. Complementary information, provided by both the patient's data and expert knowledge, is represented as fuzzy membership maps, which are then fused by means of suitable possibilistic operators in order to achieve the segmentation of targets. More specifically, theoretical prior knowledge on brain anatomy is modelled within a 'virtual atlas' organised as a spatial graph: a list of vertices linked by edges, where each vertex represents an anatomical structure of interest and contains relevant information such as tissue composition, whereas each edge represents a spatial relationship between two structures, such as their relative directions. The model is built using heterogeneous sources of information such as qualitative descriptions from the expert, or quantitative information from prelabelled images. For each patient, tissue membership maps are extracted from MR data through a classification step. The prior model and the patient's data are then matched by using a search algorithm (or 'strategy') which simultaneously computes an estimate of the location of every structure. The method was tested on 10 clinical images, with promising results. Location and segmentation results were statistically assessed, opening perspectives for enhancements.

  5. A graph-dynamic model of the power law of practice and the problem-solving fan-effect.

    PubMed

    Shrager, J; Hogg, T; Huberman, B A

    1988-10-21

    Numerous human learning phenomena have been observed and captured by individual laws, but no unified theory of learning has succeeded in accounting for these observations. A theory and model are proposed that account for two of these phenomena: the power law of practice and the problem-solving fan-effect. The power law of practice states that the speed of performance of a task will improve as a power of the number of times that the task is performed. The power law resulting from two sorts of problem-solving changes, addition of operators to the problem-space graph and alterations in the decision procedure used to decide which operator to apply at a particular state, is empirically demonstrated. The model provides an analytic account for both of these sources of the power law. The model also predicts a problem-solving fan-effect, slowdown during practice caused by an increase in the difficulty of making useful decisions between possible paths, which is also found empirically. PMID:3175664
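
    The power law of practice, T(N) = a * N**(-b), becomes a straight line in log-log coordinates, so it can be fitted by ordinary least squares. A small sketch on synthetic practice data (not the authors' graph-dynamic model):

```python
import math

def fit_power_law(trials, times):
    # Power law of practice: T(N) = a * N**(-b).  Taking logs gives a
    # straight line, log T = log a - b * log N, fitted by least squares.
    xs = [math.log(n) for n in trials]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # (a, b)

# Synthetic data obeying T = 10 * N**-0.4 exactly; the fit recovers a and b.
trials = [1, 2, 4, 8, 16, 32]
times = [10.0 * n ** -0.4 for n in trials]
a, b = fit_power_law(trials, times)
```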

  6. Information Retrieval and Graph Analysis Approaches for Book Recommendation.

    PubMed

    Benkoussas, Chahinez; Bellot, Patrice

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models, probabilistic ones such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a network of related documents connected by social links. We call this network, constructed from the documents and the social information provided by each of them, the Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
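
    A minimal PageRank power iteration over a toy document graph, sketching the link-analysis component mentioned above (not the DGD construction itself; the document names are invented):

```python
def pagerank(links, damping=0.85, iters=100):
    # Power iteration for PageRank on a directed graph {node: [targets]}.
    nodes = set(links) | {t for ts in links.values() for t in ts}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            targets = links.get(u, [])
            if targets:
                share = damping * rank[u] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank uniformly
                for t in nodes:
                    new[t] += damping * rank[u] / n
        rank = new
    return rank

docs = {"d1": ["d2", "d3"], "d2": ["d3"], "d3": ["d1"]}
r = pagerank(docs)
# "d3" is linked from both other documents and ends up ranked highest.
```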

  7. Quantum random oracle model for quantum digital signature

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  8. Random matrices as models for the statistics of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit.

  9. The Abelian Sandpile Model on a Random Binary Tree

    NASA Astrophysics Data System (ADS)

    Redig, F.; Ruszel, W. M.; Saada, E.

    2012-06-01

    We study the abelian sandpile model on a random binary tree. Using a transfer matrix approach introduced by Dhar and Majumdar, we prove exponential decay of correlations, and in a small supercritical region (i.e., where the branching process survives with positive probability) exponential decay of avalanche sizes. This shows a phase transition phenomenon between exponential decay and power law decay of avalanche sizes. Our main technical tools are: (1) A recursion for the ratio between the numbers of weakly and strongly allowed configurations which is proved to have a well-defined stochastic solution; (2) quenched and annealed estimates of the eigenvalues of a product of n random transfer matrices.
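
    A sketch of the abelian toppling rule on a one-dimensional lattice rather than the paper's random binary tree: a site holding at least two grains topples, sending one grain to each neighbour, and grains pushed past the boundary are lost. The stable configuration reached is independent of the order in which unstable sites are relaxed, which is the abelian property.

```python
def topple(heights):
    # Relax a 1-D abelian sandpile to a stable state and count topplings
    # (the avalanche size).
    h = list(heights)
    n = len(h)
    topplings = 0
    while True:
        unstable = [i for i in range(n) if h[i] >= 2]
        if not unstable:
            return h, topplings
        for i in unstable:
            h[i] -= 2
            topplings += 1
            if i > 0:
                h[i - 1] += 1
            if i < n - 1:
                h[i + 1] += 1

stable, size = topple([0, 3, 0])  # -> [1, 1, 1] after a single toppling
```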

  10. Utilizing Gaussian Markov random field properties of Bayesian animal models.

    PubMed

    Steinsland, Ingelin; Jensen, Henrik

    2010-09-01

    In this article, we demonstrate how Gaussian Markov random field properties give large computational benefits and new opportunities for the Bayesian animal model. We make inference by computing the posteriors for important quantitative genetic variables. For the single-trait animal model, a nonsampling-based approximation is presented. For the multitrait model, we set up a robust and fast Markov chain Monte Carlo algorithm. The proposed methodology was used to analyze quantitative genetic properties of morphological traits of a wild house sparrow population. Results for single- and multitrait models were compared.

  11. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise model errors and to optimise the process of model checking. To this end, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify the constraints that the models have to satisfy. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.

  12. Cascading Failures in Bi-partite Graphs: Model for Systemic Risk Propagation

    PubMed Central

    Huang, Xuqing; Vodenska, Irena; Havlin, Shlomo; Stanley, H. Eugene

    2013-01-01

    As economic entities become increasingly interconnected, a shock in a financial network can provoke significant cascading failures throughout the system. To study the systemic risk of financial systems, we create a bi-partite banking network model composed of banks and bank assets and propose a cascading failure model to describe the risk propagation process during crises. We empirically test the model with 2007 US commercial banks balance sheet data and compare the model prediction of the failed banks with the real failed banks after 2007. We find that our model efficiently identifies a significant portion of the actual failed banks reported by Federal Deposit Insurance Corporation. The results suggest that this model could be useful for systemic risk stress testing for financial systems. The model also identifies that commercial rather than residential real estate assets are major culprits for the failure of over 350 US commercial banks during 2008–2011. PMID:23386974
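
    A toy version of such a bank-asset cascade (bank names, equities, and fire-sale parameters are invented for illustration, not taken from the paper's calibration): a shocked asset loses value, banks whose losses exceed their equity fail, and each failure depresses the failed bank's assets further, possibly propagating.

```python
def cascade(holdings, equity, shocked_asset, loss=0.5, fire_sale=0.3):
    # holdings[bank][asset] = exposure on the bank-asset bipartite graph.
    devaluation = {a: 0.0 for b in holdings for a in holdings[b]}
    devaluation[shocked_asset] = loss
    failed = set()
    changed = True
    while changed:
        changed = False
        for bank, pos in holdings.items():
            if bank in failed:
                continue
            losses = sum(exp * devaluation[a] for a, exp in pos.items())
            if losses > equity[bank]:
                failed.add(bank)
                changed = True
                for a in pos:  # fire sale pushes the asset prices down further
                    devaluation[a] = min(1.0, devaluation[a] + fire_sale)
    return failed

holdings = {"B1": {"mbs": 10.0}, "B2": {"mbs": 2.0, "bonds": 8.0}, "B3": {"bonds": 5.0}}
equity = {"B1": 3.0, "B2": 1.0, "B3": 1.0}
failed = cascade(holdings, equity, "mbs")
# A shock to "mbs" brings down all three banks through the fire-sale spiral.
```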

  13. Numerical and Analytic Studies of Random-Walk Models.

    NASA Astrophysics Data System (ADS)

    Li, Bin

    We begin by recapitulating the universality approach to problems associated with critical systems, and discussing the role that random-walk models play in the study of phase transitions and critical phenomena. As our first numerical simulation project, we perform high-precision Monte Carlo calculations for the exponents of the intersection probability of pairs and triplets of ordinary random walks in 2 dimensions, in order to test the predictions of conformal-invariance theory. Our numerical results strongly support the theory. Our second numerical project aims to test the hyperscaling relation d nu = 2 Delta_4 - gamma for self-avoiding walks in 2 and 3 dimensions. We apply the pivot method to generate pairs of self-avoiding walks, and then for each pair, using the Karp-Luby algorithm, perform an inner-loop Monte Carlo calculation of the number of distinct translates of one walk that make at least one intersection with the other. Applying a least-squares fit to estimate the exponents, we have obtained strong numerical evidence that the hyperscaling relation is true in 3 dimensions. Our large amount of data for walks of unprecedented length (up to 80000 steps) yields an updated value for the end-to-end distance and radius-of-gyration exponent nu = 0.588 +/- 0.001 (95% confidence limit), in good agreement with the renormalization-group prediction. In an analytic study of random-walk models, we introduce multi-colored random-walk models and generalize the Symanzik and B.F.S. random-walk representations to the multi-colored case. We prove that the zero-component lambda phi^2 psi^2 theory can be represented by a two-color mutually-repelling random-walk model, and that it becomes the mutually-avoiding walk model in the limit lambda -> infinity. However, our main concern and major breakthrough lies in the study of the two-point correlation function for the lambda phi^2 psi^2 theory with N > 0 components. By representing it as a two-color random-walk expansion

  14. A monoecious and diploid Moran model of random mating.

    PubMed

    Hössjer, Ola; Tyvand, Peder A

    2016-04-01

    An exact Markov chain is developed for a Moran model of random mating for monoecious diploid individuals with a given probability of self-fertilization. The model captures the dynamics of genetic variation at a biallelic locus. We compare the model with the corresponding diploid Wright-Fisher (WF) model. We also develop a novel diffusion approximation of both models, where the genotype frequency distribution dynamics is described by two partial differential equations, on different time scales. The first equation captures the more slowly varying allele frequencies, and it is the same for the Moran and WF models. The other equation captures departures of the fraction of heterozygous genotypes from a large-population equilibrium curve that equals Hardy-Weinberg proportions in the absence of selfing. It is the distribution of a continuous-time Ornstein-Uhlenbeck process for the Moran model and a discrete-time autoregressive process for the WF model. One application of our results is to capture the dynamics of the degree of non-random mating of both models, in terms of the fixation index f_IS. Although f_IS has a stable fixed point that only depends on the degree of selfing, the normally distributed oscillations around this fixed point are stochastically larger for the Moran than for the WF model. PMID:26807805
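
    For intuition, a neutral haploid Moran step can be simulated directly (a deliberate simplification of the diploid, selfing model in the abstract): at each step one random individual reproduces and one dies, and an allele fixes with probability equal to its initial frequency.

```python
import random

def moran_fixation(n, k0, trials=2000, seed=0):
    # Neutral haploid Moran model: k copies of an allele in a population of
    # size n.  Per step, the reproducer carries the allele with probability
    # k/n and, independently, so does the individual that is replaced, so k
    # performs a symmetric random walk absorbed at 0 and n.
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        k = k0
        while 0 < k < n:
            birth = rng.random() < k / n
            death = rng.random() < k / n
            k += birth - death
        fixed += (k == n)
    return fixed / trials
```

    For example, moran_fixation(10, 3) should return roughly 0.3, the initial allele frequency, as the neutral theory predicts.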

  15. A generalized model via random walks for information filtering

    NASA Astrophysics Data System (ADS)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple, general mechanism lurking beneath the collaborative filtering and interdisciplinary-physics approaches that have been successfully applied to online e-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of random walks on bipartite networks. By taking degree information into account, the generalized model can recover collaborative filtering, the interdisciplinary-physics approaches, and numerous extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision.
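
    A sketch of one member of this model family, the probabilistic-spreading (mass diffusion) random walk on a user-object bipartite network; the data and names are illustrative, not from the paper:

```python
def diffusion_scores(likes, target_user):
    # Probabilistic spreading: resource on the target user's collected
    # objects diffuses to users (divided by object degree) and back to
    # objects (divided by user degree), yielding recommendation scores.
    users = list(likes)
    objects = sorted({o for objs in likes.values() for o in objs})
    obj_deg = {o: sum(o in likes[u] for u in users) for o in objects}
    # Step 1: objects -> users.
    user_res = {u: sum(1.0 / obj_deg[o] for o in likes[target_user] if o in likes[u])
                for u in users}
    # Step 2: users -> objects.
    score = {o: 0.0 for o in objects}
    for u in users:
        share = user_res[u] / len(likes[u])
        for o in likes[u]:
            score[o] += share
    return score

likes = {"u1": {"a", "b"}, "u2": {"b", "c"}, "u3": {"c"}}
s = diffusion_scores(likes, "u1")
# Object "c", unseen by u1, receives a nonzero score via the shared object "b".
```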

  16. Image synthesis with graph cuts: a fast model proposal mechanism in probabilistic inversion

    NASA Astrophysics Data System (ADS)

    Zahner, Tobias; Lochbühler, Tobias; Mariethoz, Grégoire; Linde, Niklas

    2016-02-01

    Geophysical inversion should ideally produce geologically realistic subsurface models that explain the available data. Multiple-point statistics is a geostatistical approach to constructing subsurface models that are consistent with site-specific data, but also display the same type of patterns as those found in a training image. The training image can be seen as a conceptual model of the subsurface and is used as a non-parametric model of spatial variability. Inversion based on multiple-point statistics is challenging due to high nonlinearity and the time-consuming geostatistical resimulation steps needed to create new model proposals. We propose an entirely new model proposal mechanism for geophysical inversion that is inspired by texture synthesis in computer vision. Instead of resimulating pixels based on higher-order patterns in the training image, we identify a suitable patch of the training image that replaces a corresponding patch in the current model without breaking the patterns found in the training image, that is, remaining consistent with the given prior. We consider three cross-hole ground-penetrating radar examples in which the new model proposal mechanism is employed within an extended Metropolis Markov chain Monte Carlo (MCMC) inversion. The model proposal step is about 40 times faster than state-of-the-art multiple-point statistics resimulation techniques, the number of necessary MCMC steps is lower, and the final model realizations are of similar quality. The model proposal mechanism is presently limited to 2-D fields, but the method is general and can be applied to a wide range of subsurface settings and geophysical data types.

  17. Computing Information Value from RDF Graph Properties

    SciTech Connect

    al-Saffar, Sinan; Heileman, Gregory

    2010-11-08

    Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters, the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second parameter is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed, a target graph representing the semantics of the target body of information and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is in the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information-consumer.

  18. Inference of random walk models to describe leukocyte migration

    NASA Astrophysics Data System (ADS)

    Jones, Phoebe J. M.; Sim, Aaron; Taylor, Harriet B.; Bugeon, Laurence; Dallman, Magaret J.; Pereira, Bernard; Stumpf, Michael P. H.; Liepe, Juliane

    2015-12-01

    While the majority of cells in an organism are static and remain relatively immobile in their tissue, migrating cells occur commonly during developmental processes and are crucial for a functioning immune response. The mode of migration has been described in terms of various types of random walks. To understand the details of the migratory behaviour we rely on mathematical models and their calibration to experimental data. Here we propose an approximate Bayesian inference scheme to calibrate a class of random walk models, characterized by a specific parametric particle re-orientation mechanism, to observed trajectory data. We elaborate the concept of transition matrices (TMs) to detect random walk patterns and derive a statistic quantifying these TMs, making them applicable in inference schemes. We apply the developed pipeline to in vivo trajectory data of macrophages and neutrophils, extracted from zebrafish that had undergone tail transection. We find that macrophages and neutrophils exhibit very distinct biased persistent random walk patterns, where the strengths of the persistence and bias are spatio-temporally regulated. Furthermore, the movement of macrophages is far less persistent than that of neutrophils in response to wounding.

  19. Experiments on parallel graph coloring and applications

    SciTech Connect

    Lewandowski, G.; Condon, A.

    1994-12-31

    The graph coloring problem is an NP-Complete problem with a wide array of applications, such as course scheduling, exam scheduling, register allocation, and parallelizing solutions for sparse systems of linear equations. Much theoretical effort has been put into designing heuristics that perform well on randomly generated graphs. The best sequential heuristics require large amounts of time and tuning of various parameters in the heuristics. We have used parallelism to combine exhaustive search with successful heuristic strategies to create a new heuristic, Hybrid, which does well on a wide variety of graphs, without any tuning of parameters. We have also gathered real application data and tested several heuristics on this data. Our study of real data points out some flaws in studying only random graphs and also suggests interesting new problems for study.
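
    For reference, the baseline sequential heuristic that such studies build on is greedy coloring, whose result depends strongly on the vertex order (which is exactly what tuned heuristics and the parallel Hybrid strategy exploit). A minimal sketch:

```python
def greedy_coloring(adj, order=None):
    # Greedy sequential colouring: each vertex receives the smallest colour
    # not already used by a coloured neighbour.  By default, vertices are
    # taken in order of decreasing degree (the largest-first heuristic).
    color = {}
    for v in order or sorted(adj, key=lambda u: -len(adj[u])):
        taken = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 5-cycle needs 3 colours; the greedy pass below finds a proper colouring.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colors = greedy_coloring(c5, order=[0, 1, 2, 3, 4])
```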

  20. Generalized random sign and alert delay models for imperfect maintenance.

    PubMed

    Dijoux, Yann; Gaudoin, Olivier

    2014-04-01

    This paper considers the modelling of the process of Corrective and condition-based Preventive Maintenance, for complex repairable systems. In order to take into account the dependency between both types of maintenance and the possibility of imperfect maintenance, Generalized Competing Risks models have been introduced in "Doyen and Gaudoin (J Appl Probab 43:825-839, 2006)". In this paper, we study two classes of these models, the Generalized Random Sign and Generalized Alert Delay models. A Generalized Competing Risks model can be built as a generalization of a particular Usual Competing Risks model, either by using a virtual age framework or not. The models properties are studied and their parameterizations are discussed. Finally, simulation results and an application to real data are presented.

  2. Blood Clot Simulation Model by Using the Bond-Graph Technique

    PubMed Central

    Martinez, M. Luisa

    2013-01-01

    The World Health Organization estimates that 17 million people die of cardiovascular disease, particularly heart attacks and strokes, every year. Most strokes are caused by a blood clot that occludes an artery in the cerebral circulation, and the removal of this obstruction involves catheterisation. The fundamental objective of the present study is to determine and optimize a simulation model of the blood clot zone, to be implemented jointly with simulation models of Mechanical Thrombectomy Devices, which have become more widely used during the last decade. To this end, a multidomain technique is used to describe the different aspects of the attachment to the artery wall and between the existing platelets, making it possible to obtain the mathematical equations that define the full model. For a better understanding, successive approximations to the definitive model are presented, analysing the problems encountered during the study. The final model considers an elastic characterization of the blood clot composition and allows a progressive detachment process from the artery wall. In conclusion, the presented model contains the necessary behaviour laws to be implemented in future blood clot simulation models. PMID:24453867

  3. Blood clot simulation model by using the Bond-Graph technique.

    PubMed

    Romero, Gregorio; Martinez, M Luisa; Maroto, Joaquin; Felez, Jesus

    2013-01-01

    The World Health Organization estimates that 17 million people die of cardiovascular disease, particularly heart attacks and strokes, every year. Most strokes are caused by a blood clot that occludes an artery in the cerebral circulation, and the process of removing this obstruction involves catheterisation. The fundamental aim of the presented study is to determine and optimize the simulation model of the blood clot zone, to be implemented jointly with other Mechanical Thrombectomy Device simulation models, which have become more widely used during the last decade. To do so, a multidomain (Bond-Graph) technique is used to better describe the different aspects of the attachment to the artery wall and between the existing platelets, making it possible to obtain the mathematical equations that define the full model. For a better understanding, a consecutive approximation to the definitive model is presented, analyzing the different problems found during the study. The final model considers an elastic characterization of the blood clot composition and the possibility of a consecutive detachment process from the artery wall. In conclusion, the presented model contains the necessary behaviour laws to be implemented in future blood clot simulation models. PMID:24453867

  4. Many-body localization in the quantum random energy model

    NASA Astrophysics Data System (ADS)

    Laumann, Chris; Pal, Arijeet

    2014-03-01

    The quantum random energy model is a canonical toy model for a quantum spin glass with a well-known phase diagram. We show that the model exhibits a many-body localization-delocalization transition at finite energy density, which significantly alters the interpretation of the statistical "frozen" phase at lower temperature in isolated quantum systems. The transition manifests in many-body level statistics as well as in the long-time dynamics of on-site observables.

  5. Contact graphs of disk packings as a model of spatial planar networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongzhi; Guan, Jihong; Ding, Bailu; Chen, Lichao; Zhou, Shuigeng

    2009-08-01

    Spatially constrained planar networks are frequently encountered in real-life systems. In this paper, based on a space-filling disk packing we propose a minimal model for spatial maximal planar networks, which is similar to but different from the model for Apollonian networks (Andrade et al 2005 Phys. Rev. Lett. 94 018702). We present an exhaustive analysis of various properties of our model, and obtain the analytic solutions for most of the features, including degree distribution, clustering coefficient, average path length and degree correlations. The model recovers some striking generic characteristics observed in most real networks. To address the robustness of the relevant network properties, we compare the structural features between the investigated network and the Apollonian networks. We show that topological properties of the two networks are encoded in the way of disk packing. We argue that spatial constraints of nodes are relevant to the structure of the networks.
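    For comparison, the classic Apollonian contact-graph construction that the paper's model is related to can be sketched in a few lines (a simplified illustration; the paper's disk-packing model differs from this construction in detail):

    ```python
    def apollonian(generations):
        """Classic Apollonian network: start from a triangle and, at each
        generation, put a new node inside every triangular face, connected
        to that face's three corners (the contact-graph analogue of
        space-filling disk packing)."""
        edges = {(0, 1), (0, 2), (1, 2)}
        faces = [(0, 1, 2)]
        n = 3
        for _ in range(generations):
            new_faces = []
            for a, b, c in faces:
                v, n = n, n + 1
                edges |= {(a, v), (b, v), (c, v)}
                new_faces += [(a, b, v), (a, c, v), (b, c, v)]
            faces = new_faces
        return n, edges

    nodes, edges = apollonian(3)  # 3 + 1 + 3 + 9 = 16 nodes
    ```

    Each generation triples the number of faces and adds three edges per face, which is what produces the power-law degree distribution characteristic of this family of planar networks.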

  6. Simple graph-theoretical model for flavonoid binding to P-glycoprotein.

    PubMed

    Miličević, Ante; Raos, Nenad

    2016-03-01

    Three sets of flavonoid derivatives (N=32, 40, and 74) and the logarithms of their dissociation constants (log Kd), which describe flavonoid affinity toward P-glycoprotein, were modelled using six connectivity indices. The best results were obtained with the zero-order valence molecular connectivity index (0χv) for all three sets. Standard errors of the calibration models were around 0.3, and those of the test sets were even slightly lower, 0.22 and 0.24. Despite using only one descriptor, our model proved better in internal (cross-validation) and especially in external (test set) statistics than much more demanding methods used in previous 3D QSAR modelling.
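    The index named above is simple to compute once valence delta values are assigned to the non-hydrogen atoms; a minimal sketch with hypothetical delta values (not taken from the paper's flavonoid sets):

    ```python
    from math import sqrt

    def zero_order_valence_chi(valence_deltas):
        """Zero-order valence molecular connectivity index:
        0chi_v = sum over non-hydrogen atoms of (delta_v)**-0.5."""
        return sum(1.0 / sqrt(d) for d in valence_deltas)

    # Hypothetical valence delta values for a four-atom skeleton
    chi = zero_order_valence_chi([1, 2, 2, 3])
    ```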

  7. Random unitary evolution model of quantum Darwinism with pure decoherence

    NASA Astrophysics Data System (ADS)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of the input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow one to store information about an open system S of interest with maximal efficiency.

  9. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. The complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level-ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
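    A correlated random walk with uniform step sizes, like the one the robots performed, can be simulated directly; a minimal level-ground sketch (parameter values are illustrative, not the robots'):

    ```python
    import math
    import random

    def crw_msd(turn_sigma, n_walks=200, n_steps=100, step=1.0, seed=1):
        """Simulate 2D correlated random walks with uniform step length and
        Gaussian turning angles; return the mean square displacement after
        n_steps (turn_sigma near 0: persistent; near pi: ~uncorrelated)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_walks):
            x = y = 0.0
            heading = rng.uniform(0.0, 2.0 * math.pi)
            for _ in range(n_steps):
                heading += rng.gauss(0.0, turn_sigma)
                x += step * math.cos(heading)
                y += step * math.sin(heading)
            total += x * x + y * y
        return total / n_walks

    msd_crw = crw_msd(turn_sigma=0.3)      # persistent walk
    msd_rw = crw_msd(turn_sigma=math.pi)   # effectively uncorrelated
    ```

    Persistence in heading makes the correlated walk spread much faster than the uncorrelated one, which is the trend the mean square displacement comparison checks.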

  10. Optimizing spread dynamics on graphs by message passing

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.

    2013-09-01

    Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
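    A minimal example of the kind of irreversible dynamics discussed above is the linear threshold cascade, where the final number of active nodes depends strongly on the initial condition (a toy sketch, not the authors' message-passing optimizer):

    ```python
    def cascade(adj, seeds, threshold=0.5):
        """Irreversible threshold dynamics: an inactive node activates once
        at least `threshold` fraction of its neighbors are active; iterate
        to a fixed point and return the final active set."""
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for node, nbrs in adj.items():
                if node not in active and nbrs:
                    if sum(n in active for n in nbrs) / len(nbrs) >= threshold:
                        active.add(node)
                        changed = True
        return active

    # Path graph 0-1-2-3-4-5: seeding one endpoint activates everything at
    # threshold 0.5, but nothing more at threshold 0.6.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    final = cascade(adj, seeds={0})
    ```

    The spread maximization problem asks which seed set of a given size maximizes the size of `final`; the threshold sensitivity in this toy case hints at why the general problem is hard without submodularity.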

  11. Random Resistor Network Model of Minimal Conductivity in Graphene

    NASA Astrophysics Data System (ADS)

    Cheianov, Vadim V.; Fal'Ko, Vladimir I.; Altshuler, Boris L.; Aleiner, Igor L.

    2007-10-01

    Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.
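    The percolating current patterns underlying the model can be illustrated with a plain bond-percolation spanning test (a toy sketch via union-find; the paper's resistor network additionally assigns conductances and finite p-n junction transparencies):

    ```python
    import random

    def percolates(n, p, seed=7):
        """Bond percolation on an n x n square lattice: keep each bond with
        probability p and test, via union-find, whether an open cluster
        connects the left and right edges."""
        rng = random.Random(seed)
        parent = list(range(n * n + 2))
        LEFT, RIGHT = n * n, n * n + 1

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path halving
                a = parent[a]
            return a

        def union(a, b):
            parent[find(a)] = find(b)

        for r in range(n):
            union(r * n, LEFT)              # first column touches left edge
            union(r * n + n - 1, RIGHT)     # last column touches right edge
        for r in range(n):
            for c in range(n):
                if c + 1 < n and rng.random() < p:
                    union(r * n + c, r * n + c + 1)
                if r + 1 < n and rng.random() < p:
                    union(r * n + c, (r + 1) * n + c)
        return find(LEFT) == find(RIGHT)

    # The bond percolation threshold on the square lattice is p_c = 1/2
    low = sum(percolates(30, 0.3, seed=s) for s in range(20))
    high = sum(percolates(30, 0.7, seed=s) for s in range(20))
    ```

    Below threshold almost no sample spans; above threshold almost all do, which is the geometric backbone on which the resistor-network analysis builds.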

  12. Identification of dynamical biological systems based on random effects models.

    PubMed

    Batista, Levy; Bastogne, Thierry; Djermoune, El-Hadi

    2015-01-01

    System identification is a data-driven modeling approach increasingly used in biology and biomedicine. In this application context, each assay is always repeated to estimate the response variability. Inferring the modeling conclusions to the whole population requires accounting for the inter-individual variability within the modeling procedure. One solution is to use random effects models, but up to now no similar approach exists in the field of dynamical system identification. In this article, we propose a new solution based on an ARX (Auto Regressive model with eXternal inputs) structure, using the EM (Expectation-Maximisation) algorithm for the estimation of the model parameters. Simulations show the relevance of this solution compared with a classical procedure of system identification repeated for each subject. PMID:26736981
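    The per-subject building block of such a procedure is an ordinary least-squares ARX fit; a first-order sketch (the EM-based random effects estimation across subjects is the paper's contribution and is not reproduced here):

    ```python
    import random

    def fit_arx1(y, u):
        """Least-squares fit of y[t] = a*y[t-1] + b*u[t] + e[t]; solves the
        2x2 normal equations directly and returns (a, b)."""
        s11 = s12 = s22 = r1 = r2 = 0.0
        for t in range(1, len(y)):
            p, q = y[t - 1], u[t]
            s11 += p * p
            s12 += p * q
            s22 += q * q
            r1 += p * y[t]
            r2 += q * y[t]
        det = s11 * s22 - s12 * s12
        return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

    # One simulated subject with true a = 0.8, b = 0.5
    rng = random.Random(0)
    u = [rng.uniform(-1.0, 1.0) for _ in range(500)]
    y = [0.0]
    for t in range(1, 500):
        y.append(0.8 * y[t - 1] + 0.5 * u[t] + rng.gauss(0.0, 0.05))
    a_hat, b_hat = fit_arx1(y, u)
    ```

    A random effects extension would treat each subject's (a, b) as a draw from a population distribution and estimate the population parameters jointly, which is where the EM algorithm comes in.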

  13. GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems

    SciTech Connect

    Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen; Schwan, Karsten

    2015-09-30

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
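    The Gather-Apply-Scatter programming model mentioned above can be illustrated with PageRank, its canonical example (a serial toy sketch; GraphReduce's GPU streaming and out-of-core partitioning are not represented):

    ```python
    def pagerank_gas(adj, damping=0.85, iters=50):
        """Vertex-centric Gather-Apply-Scatter sketch of PageRank on a
        directed graph given as {node: [out-neighbors]}."""
        n = len(adj)
        in_nbrs = {v: [] for v in adj}
        for u, outs in adj.items():
            for v in outs:
                in_nbrs[v].append(u)
        rank = {v: 1.0 / n for v in adj}
        for _ in range(iters):
            new = {}
            for v in adj:
                gathered = sum(rank[u] / len(adj[u]) for u in in_nbrs[v])  # gather
                new[v] = (1 - damping) / n + damping * gathered            # apply
            rank = new  # scatter: updated values become visible to neighbors
        return rank

    adj = {0: [1, 2], 1: [2], 2: [0]}
    pr = pagerank_gas(adj)
    ```

    In a GPU framework the gather, apply, and scatter phases each become bulk-parallel passes over edge and vertex partitions; the example assumes every node has at least one out-edge (no dangling-node correction).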

  14. GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems

    SciTech Connect

    Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil; Schwan, Karsten

    2015-11-15

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.

  15. Social aggregation in pea aphids: experiment and random walk modeling.

    PubMed

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J; Topaz, Chad M

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
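    The moving/stationary transitions described above can be sketched as a two-state Markov chain per individual; the transition probabilities here are hypothetical stand-ins for the distance-dependent rates estimated in the paper:

    ```python
    import random

    def fraction_moving(p_stop, p_go, n_aphids=1000, n_steps=500, seed=2):
        """Each individual is a two-state Markov chain: a mover stops with
        probability p_stop per step, a stationary individual resumes moving
        with probability p_go; the moving fraction settles near
        p_go / (p_go + p_stop)."""
        rng = random.Random(seed)
        moving = [True] * n_aphids
        for _ in range(n_steps):
            for i in range(n_aphids):
                if moving[i]:
                    if rng.random() < p_stop:
                        moving[i] = False
                elif rng.random() < p_go:
                    moving[i] = True
        return sum(moving) / n_aphids

    # Hypothetical rates standing in for the distance-dependent estimates:
    frac_isolated = fraction_moving(p_stop=0.05, p_go=0.20)
    frac_crowded = fraction_moving(p_stop=0.30, p_go=0.10)
    ```

    Making p_stop grow and p_go shrink as nearest-neighbor distance decreases reproduces the aggregation mechanism: crowded individuals spend more time stationary.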

  17. Social Aggregation in Pea Aphids: Experiment and Random Walk Modeling

    PubMed Central

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J.; Topaz, Chad M.

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control. PMID:24376691

  18. Correlated energy landscape model for finite, random heteropolymers

    NASA Astrophysics Data System (ADS)

    Plotkin, Steven S.; Wang, Jin; Wolynes, Peter G.

    1996-06-01

    In this paper, we study the role of correlations in the energy landscape of a finite random heteropolymer by developing the mapping onto the generalized random energy model (GREM) proposed by Derrida and Gardner [J. Phys. C 19, 2253 (1986)] in the context of spin glasses. After obtaining the joint distribution for energies of pairs of configurations, and by calculating the entropy of the polymer subject to weak and strong topological constraints, the model yields thermodynamic quantities such as ground-state energy, entropy per thermodynamic basin, and glass transition temperature as functions of the polymer length and packing density. These are found to be very close to the uncorrelated landscape or random energy model (REM) estimates. A tricritical point is obtained where behavior of the order parameter q changes from first order with a discrete jump at the transition, to second-order continuous. While the thermodynamic quantities obtained from the free energy are close to the REM values, the Levinthal entropy describing the number of basins which must be searched at the glass transition is significantly modified by correlations.

  19. Nonlinear system modeling with random matrices: echo state networks revisited.

    PubMed

    Zhang, Bai; Miller, David J; Wang, Yue

    2012-01-01

    Echo state networks (ESNs) are a novel form of recurrent neural networks (RNNs) that provide an efficient and powerful computational model approximating nonlinear dynamical systems. A unique feature of an ESN is that a large number of neurons (the "reservoir") are used, whose synaptic connections are generated randomly, with only the connections from the reservoir to the output modified by learning. Why a large randomly generated fixed RNN gives such excellent performance in approximating nonlinear systems is still not well understood. In this brief, we apply random matrix theory to examine the properties of random reservoirs in ESNs under different topologies (sparse or fully connected) and connection weights (Bernoulli or Gaussian). We quantify the asymptotic gap between the scaling factor bounds for the necessary and sufficient conditions previously proposed for the echo state property. We then show that the state transition mapping is contractive with high probability when only the necessary condition is satisfied, which corroborates and thus analytically explains the observation that in practice one obtains echo states when the spectral radius of the reservoir weight matrix is smaller than 1.
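    The practical observation at the end, that echo states appear when the reservoir's spectral radius is below 1, can be checked numerically: rescale a random reservoir and verify that two different initial states driven by the same input converge (a pure-Python sketch with illustrative sizes):

    ```python
    import math
    import random

    def random_reservoir(n, rho_target=0.8, seed=3):
        """Dense Gaussian reservoir rescaled to spectral radius rho_target.
        The radius is estimated from the asymptotic growth rate of ||W^k x||,
        which converges to the spectral radius for almost any start vector."""
        rng = random.Random(seed)
        w = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
        x = [1.0] * n
        logs = []
        for _ in range(200):
            x = [sum(w[i][j] * x[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(v * v for v in x))
            logs.append(math.log(norm))
            x = [v / norm for v in x]
        rho_est = math.exp(sum(logs[-100:]) / 100.0)
        scale = rho_target / rho_est
        return [[scale * v for v in row] for row in w]

    def step(w, x, u):
        """One reservoir update x' = tanh(W x + u), input weights omitted."""
        return [math.tanh(sum(row[j] * x[j] for j in range(len(x))) + u)
                for row in w]

    w = random_reservoir(20)
    xa, xb = [0.5] * 20, [-0.5] * 20
    for t in range(100):
        u = math.sin(0.1 * t)
        xa, xb = step(w, xa, u), step(w, xb, u)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(xa, xb)))
    ```

    The state distance shrinking toward zero is the empirical signature of the echo state property; as the abstract notes, the spectral-radius condition is only necessary in general, yet the map is contractive with high probability in practice.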

  20. Graphing Inequalities, Connecting Meaning

    ERIC Educational Resources Information Center

    Switzer, J. Matt

    2014-01-01

    Students often have difficulty with graphing inequalities (see Filloy, Rojano, and Rubio 2002; Drijvers 2002), and J. Matt Switzer's students were no exception. Although students can produce graphs for simple inequalities, they often struggle when the format of the inequality is unfamiliar. Even when producing a correct graph of an…

  1. Graph-Plotting Routine

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1987-01-01

    Plotter routine for IBM PC (AKPLOT) designed for engineers and scientists who use graphs as integral parts of their documentation. Allows user to generate a graph and edit its appearance on the cathode-ray tube. The graph may undergo many interactive alterations before finally being dumped from the screen to be plotted by the printer. Written in BASIC.

  2. Graphing Important People

    ERIC Educational Resources Information Center

    Reading Teacher, 2012

    2012-01-01

    The "Toolbox" column features content adapted from ReadWriteThink.org lesson plans and provides practical tools for classroom teachers. This issue's column features a lesson plan adapted from "Graphing Plot and Character in a Novel" by Lisa Storm Fink and "Bio-graph: Graphing Life Events" by Susan Spangler. Students retell biographic events…

  3. Box graphs and resolutions I

    NASA Astrophysics Data System (ADS)

    Braun, Andreas P.; Schäfer-Nameki, Sakura

    2016-04-01

    Box graphs succinctly and comprehensively characterize singular fibers of elliptic fibrations in codimension two and three, as well as the flop transitions connecting these, in terms of representation theoretic data. We develop a framework that provides a systematic map between a box graph and a crepant algebraic resolution of the singular elliptic fibration, thus allowing an explicit construction of the fibers from a singular Weierstrass or Tate model. The key tool is what we call a fiber face diagram, which shows the relevant information of a (partial) toric triangulation and allows the inclusion of more general algebraic blowups. We show that each such diagram defines a sequence of weighted algebraic blowups, thus providing a realization of the fiber defined by the box graph in terms of an explicit resolution. We show this correspondence explicitly for the case of SU(5) by providing a map between box graphs and fiber faces, and thereby a sequence of algebraic resolutions of the Tate model, which realizes each of the box graphs.

  4. Mining Discriminative Patterns from Graph Data with Multiple Labels and Its Application to Quantitative Structure-Activity Relationship (QSAR) Models.

    PubMed

    Shao, Zheng; Hirayama, Yuya; Yamanishi, Yoshihiro; Saigo, Hiroto

    2015-12-28

    Graph data are becoming increasingly common in machine learning and data mining, with applications extending to bioinformatics and cheminformatics. Accordingly, graph mining, as a method to extract patterns from graph data, has recently been studied and developed rapidly. Since the number of patterns in graph data is huge, a central issue is how to efficiently collect informative patterns suitable for subsequent tasks such as classification or regression. In this paper, we consider mining discriminative subgraphs from graph data with multiple labels. The resulting task has important applications in cheminformatics, such as finding common functional groups that trigger multiple drug side effects, or identifying ligand functional groups that hit multiple targets. In computational experiments, we first verify the effectiveness of the proposed approach on synthetic data, then apply it to a drug adverse effect prediction problem. On the latter dataset, we compared the proposed method with L1-norm logistic regression in combination with the PubChem/Open Babel fingerprint; the proposed method showed superior performance with a much smaller number of subgraph patterns. Software is available from https://github.com/axot/GLP.

  5. Reconstructing patient-specific cardiac models from contours via Delaunay triangulation and graph-cuts.

    PubMed

    Wan, Min; Lim, Calvin; Zhang, Junmei; Su, Yi; Yeo, Si Yong; Wang, Desheng; Tan, Ru San; Zhong, Liang

    2013-01-01

    This study proposes a novel method to reconstruct the left cardiac structure from contours. Given the contours representing left ventricle (LV), left atrium (LA), and aorta (AO), re-orientation, contour matching, extrapolation, and interpolation are performed sequentially. The processed data are then reconstructed via a variational method. The weighted minimal surface model is revised to handle the multi-phase cases, which happens at the LV-LA-AO junction. A Delaunay-based tetrahedral mesh is generated to discretize the domain while the max-flow/min-cut algorithm is utilized as the minimization tool. The reconstructed model including LV, LA, and AO structure is extracted from the mesh and post-processed further. Numerical examples show the robustness and effectiveness of the proposed method.

  6. Vortices and superfields on a graph

    SciTech Connect

    Kan, Nahomi; Kobayashi, Koichiro; Shiraishi, Kiyoshi

    2009-08-15

    We extend the dimensional deconstruction by utilizing the knowledge of graph theory. In the dimensional deconstruction, one uses the moose diagram to exhibit the structure of the 'theory space'. We generalize the moose diagram to a general graph with oriented edges. In the present paper, we consider only the U(1) gauge symmetry. We also introduce supersymmetry into our model by use of superfields. We suppose that vector superfields reside at the vertices and chiral superfields at the edges of a given graph. Then we can consider multivector, multi-Higgs models. In our model, [U(1)]{sup p} (where p is the number of vertices) is broken to a single U(1). Therefore, for specific graphs, we get vortexlike classical solutions in our model. We show some examples of the graphs admitting the vortex solutions of simple structure as the Bogomolnyi solution.

  7. Vortices and superfields on a graph

    NASA Astrophysics Data System (ADS)

    Kan, Nahomi; Kobayashi, Koichiro; Shiraishi, Kiyoshi

    2009-08-01

    We extend the dimensional deconstruction by utilizing the knowledge of graph theory. In the dimensional deconstruction, one uses the moose diagram to exhibit the structure of the “theory space.” We generalize the moose diagram to a general graph with oriented edges. In the present paper, we consider only the U(1) gauge symmetry. We also introduce supersymmetry into our model by use of superfields. We suppose that vector superfields reside at the vertices and chiral superfields at the edges of a given graph. Then we can consider multivector, multi-Higgs models. In our model, [U(1)]p (where p is the number of vertices) is broken to a single U(1). Therefore, for specific graphs, we get vortexlike classical solutions in our model. We show some examples of the graphs admitting the vortex solutions of simple structure as the Bogomolnyi solution.

  8. A fast Monte Carlo algorithm for source localization on graphs

    NASA Astrophysics Data System (ADS)

    Agaskar, Ameya; Lu, Yue M.

    2013-09-01

    Epidemic models on networks have long been studied by biologists and social scientists to determine the steady-state levels of an infection on a network. Recently, however, several authors have begun considering the more difficult problem of estimating the source of an infection given information about its behavior some time after the initial infection. In this paper, we describe a technique to estimate the source of an infection on a general graph based on observations from a small set of observers during a fixed time window at some unknown time after the initial infection. We describe an alternate representation for the susceptible-infected (SI) infection model based on geodesic distances on a randomly weighted version of the graph; this representation allows us to exploit fast algorithms for computing geodesic distances to estimate the marginal distributions for each observer and compute a pseudo-likelihood function that is maximized to find the source.
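    The geodesic-distance representation can be sketched as follows: infection times are modeled as shortest-path distances on a randomly re-weighted copy of the graph, and candidate sources are scored against the observer times (a simplified Monte Carlo stand-in for the paper's pseudo-likelihood, run on a toy path graph):

    ```python
    import heapq
    import random

    def dijkstra(adj, src):
        """Geodesic distances from src; adj: {node: [(neighbor, weight), ...]}."""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def estimate_source(adj, observed, n_samples=1000, seed=4):
        """Score each candidate source by the total squared mismatch between
        observer infection times and geodesic distances on randomly
        re-weighted copies of the graph (exponential edge delays)."""
        rng = random.Random(seed)
        best, best_score = None, float("inf")
        for cand in adj:
            score = 0.0
            for _ in range(n_samples):
                ra = {u: [(v, w * rng.expovariate(1.0)) for v, w in nbrs]
                      for u, nbrs in adj.items()}
                d = dijkstra(ra, cand)
                score += sum((d[o] - t) ** 2 for o, t in observed.items())
            if score < best_score:
                best, best_score = cand, score
        return best

    # Toy path graph 0-1-2-3-4; observers at both endpoints report time 2,
    # consistent with the middle node being the source.
    path = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0), (3, 1.0)],
            3: [(2, 1.0), (4, 1.0)], 4: [(3, 1.0)]}
    src = estimate_source(path, {0: 2.0, 4: 2.0})
    ```

    The squared-mismatch score is a hedged simplification: the paper instead estimates marginal distributions per observer and maximizes a pseudo-likelihood, but the geodesic sampling step is the same idea.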

  9. From time series to complex networks: the visibility graph.

    PubMed

    Lacasa, Lucas; Luque, Bartolo; Ballesteros, Fernando; Luque, Jordi; Nuño, Juan Carlos

    2008-04-01

    In this work we present a simple and fast computational method, the visibility algorithm, that converts a time series into a graph. The constructed graph inherits several properties of the series in its structure. Thereby, periodic series convert into regular graphs, and random series into random graphs. Moreover, fractal series convert into scale-free networks, supporting the view that power-law degree distributions are related to fractality, something highly discussed recently. Some remarkable examples and analytical tools are outlined to test the method's reliability. Many different measures, recently developed in complex network theory, could, by means of this new approach, characterize time series from a new point of view.
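    The natural visibility criterion is a single inequality, so the algorithm fits in a few lines (an O(n^2) sketch):

    ```python
    def visibility_graph(series):
        """Natural visibility algorithm: points a < b are linked iff every
        intermediate point c lies strictly below the line joining them,
        i.e. y_c < y_b + (y_a - y_b) * (b - c) / (b - a)."""
        n = len(series)
        edges = set()
        for a in range(n):
            for b in range(a + 1, n):
                ya, yb = series[a], series[b]
                if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                       for c in range(a + 1, b)):
                    edges.add((a, b))
        return edges

    vg_linear = visibility_graph([1, 2, 3, 4])       # collinear points
    vg_periodic = visibility_graph([3, 1, 3, 1, 3])  # periodic sawtooth
    ```

    On the periodic sawtooth every peak sees its adjacent peaks but not beyond them, giving the regular repeating structure the abstract describes; collinear points see only their immediate neighbors.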

  10. On a Stochastic Failure Model under Random Shocks

    NASA Astrophysics Data System (ADS)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moment of its occurrence. In this paper, we study a new class of shock models, in which each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
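    The delayed-failure mechanism is easy to explore by Monte Carlo; a sketch with a homogeneous Poisson shock process and exponential delays (the paper treats the nonhomogeneous case analytically; all rates here are illustrative):

    ```python
    import random

    def survival_prob(t, shock_rate=1.0, delay_mean=0.5, horizon=20.0,
                      n_runs=20000, seed=5):
        """Shocks arrive as a Poisson process (homogeneous here for
        simplicity); a shock at time s triggers failure at s + D with
        D ~ Exp(1/delay_mean).  The system fails at the earliest triggered
        failure; returns the Monte Carlo estimate of P(T > t)."""
        rng = random.Random(seed)
        surviving = 0
        for _ in range(n_runs):
            s, t_fail = 0.0, float("inf")
            while True:
                s += rng.expovariate(shock_rate)
                if s > horizon:
                    break
                t_fail = min(t_fail, s + rng.expovariate(1.0 / delay_mean))
            surviving += t_fail > t
        return surviving / n_runs

    p1 = survival_prob(1.0)
    p3 = survival_prob(3.0)
    ```

    For this homogeneous special case the survival function has the closed form P(T > t) = exp(-lambda * [t - mu * (1 - e^(-t/mu))]) with mu = delay_mean, which the simulation reproduces; the nonhomogeneous case replaces lambda*t by the integrated intensity.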

  11. Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs

    PubMed Central

    SHOJAIE, ALI; MICHAILIDIS, GEORGE

    2010-01-01

    Directed acyclic graphs are commonly used to represent causal relationships among random variables in graphical models. Applications of these models arise in the study of physical and biological systems where directed edges between nodes represent the influence of components of the system on each other. Estimation of directed graphs from observational data is computationally NP-hard. In addition, directed graphs with the same structure may be indistinguishable based on observations alone. When the nodes exhibit a natural ordering, the problem of estimating directed graphs reduces to the problem of estimating the structure of the network. In this paper, we propose an efficient penalized likelihood method for estimation of the adjacency matrix of directed acyclic graphs, when variables inherit a natural ordering. We study variable selection consistency of lasso and adaptive lasso penalties in high-dimensional sparse settings, and propose an error-based choice for selecting the tuning parameter. We show that although the lasso is only variable selection consistent under stringent conditions, the adaptive lasso can consistently estimate the true graph under the usual regularity assumptions. PMID:22434937
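    With a known ordering, the estimation problem reduces to a sequence of lasso regressions of each node on its predecessors; a coordinate-descent sketch on a three-node example (the penalty value and data-generating coefficients are illustrative):

    ```python
    import random

    def soft_threshold(z, g):
        """Soft-thresholding operator used by the lasso update."""
        return z - g if z > g else z + g if z < -g else 0.0

    def lasso_cd(X, y, lam, iters=200):
        """Coordinate-descent lasso for 0.5*||y - Xb||^2 + lam*||b||_1."""
        n, p = len(X), len(X[0])
        b = [0.0] * p
        r = list(y)  # residuals, since b starts at zero
        for _ in range(iters):
            for j in range(p):
                xj = [X[i][j] for i in range(n)]
                rho = sum(xj[i] * (r[i] + xj[i] * b[j]) for i in range(n))
                new_b = soft_threshold(rho, lam) / sum(v * v for v in xj)
                delta = new_b - b[j]
                if delta:
                    for i in range(n):
                        r[i] -= xj[i] * delta
                    b[j] = new_b
        return b

    # Ordered DAG example: x3 = 0.9*x1 + noise, x2 is an independent decoy;
    # regressing node 3 on its predecessors {1, 2} should keep 1, drop 2.
    rng = random.Random(6)
    x1 = [rng.gauss(0, 1) for _ in range(200)]
    x2 = [rng.gauss(0, 1) for _ in range(200)]
    x3 = [0.9 * x1[i] + rng.gauss(0, 0.3) for i in range(200)]
    b = lasso_cd([[x1[i], x2[i]] for i in range(200)], x3, lam=30.0)
    ```

    Stacking one such regression per node (each on its predecessors) fills in the adjacency matrix column by column; the adaptive lasso variant studied in the paper reweights the penalty per coefficient to achieve selection consistency.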

  12. Integrating Sediment Connectivity into Water Resources Management Through a Graph Theoretic, Stochastic Modeling Framework.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Castelletti, A.; Bizzi, S.

    2014-12-01

Understanding sediment transport processes at the river basin scale, their temporal spectra and spatial patterns is key to identifying and minimizing morphologic risks associated with channel adjustment processes. This work contributes a stochastic framework for modeling bed-load connectivity based on recent advances in the field (e.g., Bizzi & Lerner, 2013; Czuba & Foufoula-Georgiou, 2014). It presents river managers with novel indicators of reach-scale vulnerability to channel adjustment in large river networks with sparse hydrologic and sediment observations. The framework comprises four steps. First, based on a distributed hydrological model and remotely sensed information, the framework identifies a representative grain size class for each reach. Second, sediment residence time distributions are calculated for each reach in a Monte-Carlo approach applying standard sediment transport equations driven by local hydraulic conditions. Third, a network analysis defines the up- and downstream connectivity for various travel times, resulting in characteristic up/downstream connectivity signatures for each reach. Channel vulnerability indicators quantify the imbalance between up- and downstream connectivity for each travel time domain, representing process-dependent latency of morphologic response. Last, based on the stochastic core of the model, a sensitivity analysis identifies drivers of change and major sources of uncertainty in order to target key detrimental processes and to guide effective gathering of additional data. The application, limitations and integration into a decision analytic framework are demonstrated for a major part of the Red River Basin in Northern Vietnam (179,000 km2). Here, a plethora of anthropic alterations ranging from large reservoir construction to land-use changes results in major downstream deterioration and calls for deriving concerted sediment management strategies to mitigate current and limit future morphologic alterations.

  13. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    NASA Astrophysics Data System (ADS)

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-01

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
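The effect of a first-order neighbourhood precision can be sketched in a few lines. The construction below (Q = kappa*I + graph Laplacian) is one convenient Gaussian Markov random field precision for a grid, not the covariance the authors estimate; it illustrates how a quadratic-form statistic r^T Q r penalizes spatially rough model-observation residuals more than smooth ones of the same size.

```python
import numpy as np

def grid_gmrf_precision(nx, ny, kappa=0.1):
    """Precision matrix of a GMRF on an nx-by-ny grid with a first-order
    (4-nearest-neighbour) dependency: Q = kappa*I + L, where L = D - W is the
    grid-graph Laplacian.  kappa > 0 keeps Q positive definite, and the
    off-diagonal zeros encode the Markov (conditional independence) structure."""
    n = nx * ny
    W = np.zeros((n, n))
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            if i + 1 < nx:
                W[k, k + ny] = W[k + ny, k] = 1.0   # vertical neighbour
            if j + 1 < ny:
                W[k, k + 1] = W[k + 1, k] = 1.0     # horizontal neighbour
    L = np.diag(W.sum(axis=1)) - W
    return kappa * np.eye(n) + L

def gmrf_discrepancy(model, obs, Q):
    """Quadratic-form statistic r^T Q r on the model-observation residual."""
    r = (np.asarray(model, float) - np.asarray(obs, float)).ravel()
    return float(r @ Q @ r)

Q = grid_gmrf_precision(4, 4)
smooth = np.full(16, 0.5)                                      # spatially constant residual
rough = 0.5 * np.array([(-1) ** k for k in range(16)], float)  # alternating residual, same norm
```

Both residual fields have identical Euclidean norm, but the statistic charges the rough field for every disagreeing neighbour pair.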

  14. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.

  15. Graphing with "LogoWriter."

    ERIC Educational Resources Information Center

    Yoder, Sharon K.

    This book discusses four kinds of graphs that are taught in mathematics at the middle school level: pictographs, bar graphs, line graphs, and circle graphs. The chapters on each of these types of graphs contain information such as starting, scaling, drawing, labeling, and finishing the graphs using "LogoWriter." The final chapter of the book…

  16. Contact Graph Routing

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle's destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm. The scalability of CGR to very large networks remains a research topic.
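The core of the route computation can be sketched as an earliest-arrival search over a contact plan. This is a heavily simplified illustration of the idea only: real CGR also accounts for queue backlog, bundle priority, expiration times and contact volume limits, none of which appear in this hypothetical sketch.

```python
import heapq

# A contact: (from_node, to_node, start, end, owlt), owlt = one-way light time.
def earliest_arrival(contacts, source, dest, t0):
    """Dijkstra-style search on a contact plan: the 'distance' is the earliest
    arrival time, and a contact is usable only while it is open."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue                            # stale queue entry
        for u, v, start, end, owlt in contacts:
            if u != node:
                continue
            depart = max(t, start)              # wait for the contact to open
            if depart > end:
                continue                        # contact already closed
            arrive = depart + owlt
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(pq, (arrive, v))
    return None                                 # unreachable under this plan

plan = [
    ("A", "B", 0, 10, 2),
    ("B", "C", 20, 30, 3),
    ("A", "C", 40, 50, 1),
]
print(earliest_arrival(plan, "A", "C", 0))  # 23: A->B at t=0, wait, B->C departs t=20
```

Note how the best route depends on the start time: a bundle released at t=15 misses the A-B contact entirely and must wait for the direct A-C contact.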

  17. A stochastic model of randomly accelerated walkers for human mobility

    NASA Astrophysics Data System (ADS)

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-08-01

Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility.
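The kick mechanism is easy to simulate. The sketch below is a hypothetical toy version of the model class (parameter values are illustrative, not fitted to the vehicle data): speed receives Gaussian acceleration kicks at Poisson-distributed times and trips end after an exponentially distributed duration, which yields a short-tailed distance distribution.

```python
import random

def trip_distance(rng, kick_rate=1.0, kick_sigma=1.0, mean_duration=1.0):
    """One trip of an accelerated random walker: the speed receives Gaussian
    'acceleration kicks' at Poisson-distributed times, and the trip ends after
    an exponentially distributed duration.  Returns the distance travelled."""
    duration = rng.expovariate(1.0 / mean_duration)
    t, v, dist = 0.0, rng.gauss(0.0, kick_sigma), 0.0
    while True:
        dt = rng.expovariate(kick_rate)          # time to the next kick
        if t + dt >= duration:
            return dist + abs(v) * (duration - t)
        dist += abs(v) * dt
        t += dt
        v += rng.gauss(0.0, kick_sigma)          # acceleration kick

rng = random.Random(42)
trips = [trip_distance(rng) for _ in range(20000)]
```

The sampled distances are nonnegative and their tail decays quickly: extreme trips many times longer than the mean are rare, in line with the short-tailed distribution the abstract describes.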

  18. A stochastic model of randomly accelerated walkers for human mobility.

    PubMed

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-01-01

Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility. PMID:27573984

  19. A stochastic model of randomly accelerated walkers for human mobility

    PubMed Central

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-01-01

Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility. PMID:27573984

  20. Occupation time statistics of the random acceleration model

    NASA Astrophysics Data System (ADS)

    Joël Ouandji Boutcheng, Hermann; Bouetou Bouetou, Thomas; Burkhardt, Theodore W.; Rosso, Alberto; Zoia, Andrea; Timoleon Crepin, Kofane

    2016-05-01

The random acceleration model is one of the simplest non-Markovian stochastic systems and has been widely studied in connection with applications in physics and mathematics. However, the occupation time and related properties are non-trivial and not yet completely understood. In this paper we consider the occupation time T+ of the one-dimensional random acceleration model on the positive half-axis. We calculate the first two moments of T+ analytically and also study the statistics of T+ with Monte Carlo simulations. One goal of our work was to ascertain whether the occupation time T+ and the time Tm at which the maximum of the process is attained are statistically equivalent. For regular Brownian motion the distributions of T+ and Tm coincide and are given by Lévy's arcsine law. We show that for randomly accelerated motion the distributions of T+ and Tm are quite similar but not identical. This conclusion follows from the exact results for the moments of the distributions and is also consistent with our Monte Carlo simulations.
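A basic Monte Carlo for the quantities discussed above takes only a few lines: integrate a Brownian velocity to get the position, then record the occupation fraction T+/T and the normalized argmax time Tm/T per path. This is our own illustrative discretization, not the authors' simulation code, and the step sizes below are arbitrary.

```python
import numpy as np

def ram_paths(n_paths=2000, n_steps=1500, dt=1e-3, seed=1):
    """Monte Carlo for the random acceleration model x'' = xi(t), started at
    x = v = 0: the velocity is a Brownian motion and the position its
    integral.  Returns per-path T+/T (occupation fraction of the positive
    half-axis) and Tm/T (normalized time at which the maximum is attained)."""
    rng = np.random.default_rng(seed)
    dv = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    v = np.cumsum(dv, axis=1)                  # Brownian velocity
    x = np.cumsum(v, axis=1) * dt              # randomly accelerated position
    t_plus = (x > 0).mean(axis=1)              # occupation fraction T+/T
    t_max = (x.argmax(axis=1) + 1) / n_steps   # normalized argmax time Tm/T
    return t_plus, t_max

t_plus, t_max = ram_paths()
```

By the x -> -x symmetry the mean of T+/T is 1/2, and the per-path values spread widely toward the endpoints, consistent with an arcsine-like (but, per the paper, not identical) shape for the two distributions.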

  1. Interval process model and non-random vibration analysis

    NASA Astrophysics Data System (ADS)

    Jiang, C.; Ni, B. Y.; Liu, N. Y.; Han, X.; Liu, J.

    2016-07-01

    This paper develops an interval process model for time-varying or dynamic uncertainty analysis when information of the uncertain parameter is inadequate. By using the interval process model to describe a time-varying uncertain parameter, only its upper and lower bounds are required at each time point rather than its precise probability distribution, which is quite different from the traditional stochastic process model. A correlation function is defined for quantification of correlation between the uncertain-but-bounded variables at different times, and a matrix-decomposition-based method is presented to transform the original dependent interval process into an independent one for convenience of subsequent uncertainty analysis. More importantly, based on the interval process model, a non-random vibration analysis method is proposed for response computation of structures subjected to time-varying uncertain external excitations or loads. The structural dynamic responses thus can be derived in the form of upper and lower bounds, providing an important guidance for practical safety analysis and reliability design of structures. Finally, two numerical examples and one engineering application are investigated to demonstrate the feasibility of the interval process model and corresponding non-random vibration analysis method.

  2. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  3. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  4. Calibration of stormwater quality regression models: a random process?

    PubMed

    Dembélé, A; Bertrand-Krajewski, J-L; Barillon, B

    2010-01-01

Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions dealing with the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and the content of the data sets used for their calibration, and ii) the change in modelling results when models are re-calibrated as data sets grow and change with time as new experimental data are collected. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing nonlinear model clearly indicate that the model is highly sensitive to the size and the content of the data set used for its calibration.
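The robustness advantage of iteratively re-weighted least squares over ordinary least squares can be shown on a toy calibration set. The sketch below uses a simple 1/|residual| (L1-type) weighting scheme, which is one common IRLS variant and not necessarily the exact scheme of the paper; the data are synthetic, with one grossly outlying event standing in for an unusual storm.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares fit."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def irls(X, y, n_iter=50, eps=1e-8):
    """Iteratively re-weighted least squares with 1/|residual| weights
    (an L1-type robust fit): large residuals are progressively
    down-weighted instead of dominating the calibration."""
    beta = ols(X, y)
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted normal equations
    return beta

# Synthetic calibration set: y = 1 + 2x, with one grossly outlying event.
x = np.arange(10, dtype=float)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
y[-1] += 30.0
b_ols, b_irls = ols(X, y), irls(X, y)
```

The OLS slope is dragged far from the true value of 2 by the single outlier, while the IRLS fit stays close to it.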

  5. Methods of visualizing graphs

    SciTech Connect

    Wong, Pak C.; Mackey, Patrick S.; Perrine, Kenneth A.; Foote, Harlan P.; Thomas, James J.

    2008-12-23

    Methods for visualizing a graph by automatically drawing elements of the graph as labels are disclosed. In one embodiment, the method comprises receiving node information and edge information from an input device and/or communication interface, constructing a graph layout based at least in part on that information, wherein the edges are automatically drawn as labels, and displaying the graph on a display device according to the graph layout. In some embodiments, the nodes are automatically drawn as labels instead of, or in addition to, the label-edges.

  6. Experimental quantum annealing: case study involving the graph isomorphism problem

    PubMed Central

    Zick, Kenneth M.; Shehab, Omar; French, Matthew

    2015-01-01

Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N^2 to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers. PMID:26053973

  7. Thermodynamical Limit for Correlated Gaussian Random Energy Models

    NASA Astrophysics Data System (ADS)

    Contucci, P.; Esposti, M. Degli; Giardinà, C.; Graffi, S.

Let {E_σ(N)}_{σ ∈ Σ_N} be a family of |Σ_N| = 2^N centered unit Gaussian random variables defined by the covariance matrix C_N of elements c_N(σ,τ) := Av(E_σ(N) E_τ(N)), and consider the corresponding random Hamiltonian. Then the quenched thermodynamical limit exists if, for every decomposition N = N_1 + N_2 and all pairs (σ,τ) ∈ Σ_N × Σ_N, c_N(σ,τ) ≤ (N_1/N) c_{N_1}(π_1(σ), π_1(τ)) + (N_2/N) c_{N_2}(π_2(σ), π_2(τ)), where π_k(σ), k = 1, 2, are the projections of σ ∈ Σ_N into Σ_{N_k}. The condition is explicitly verified for the Sherrington-Kirkpatrick, the even p-spin, the Derrida REM and the Derrida-Gardner GREM models.

  8. A hidden Markov random field model for genome-wide association studies.

    PubMed

    Li, Hongzhe; Wei, Zhi; Maris, John

    2010-01-01

Genome-wide association studies (GWAS) are increasingly utilized for identifying novel susceptible genetic variants for complex traits, but there is little consensus on analysis methods for such data. The most commonly used methods include single-SNP (single nucleotide polymorphism) analysis or haplotype analysis with Bonferroni correction for multiple comparisons. Since the SNPs in a typical GWAS are often in linkage disequilibrium (LD), at least locally, Bonferroni correction of multiple comparisons often leads to conservative error control and therefore lower statistical power. In this paper, we propose a hidden Markov random field (HMRF) model for GWAS analysis based on a weighted LD graph built from the prior LD information among the SNPs, and an efficient iterative conditional mode algorithm for estimating the model parameters. This model effectively utilizes the LD information in calculating the posterior probability that an SNP is associated with the disease. These posterior probabilities can then be used to define a false discovery controlling procedure in order to select the disease-associated SNPs. Simulation studies demonstrated the potential gain in power over single-SNP analysis. The proposed method is especially effective in identifying SNPs with borderline significance at the single-marker level that nonetheless are in high LD with significant SNPs. In addition, by simultaneously considering the SNPs in LD, the proposed method can also help to reduce the number of false identifications of disease-associated SNPs. We demonstrate the application of the proposed HMRF model using data from a case-control GWAS of neuroblastoma and identify one new SNP that is potentially associated with neuroblastoma.

  9. Supplantation of Mental Operations on Graphs

    ERIC Educational Resources Information Center

    Vogel, Markus; Girwidz, Raimund; Engel, Joachim

    2007-01-01

    Research findings show the difficulties younger students have in working with graphs. Higher mental operations are necessary for a skilled interpretation of abstract representations. We suggest connecting a concrete representation of the modeled problem with the related graph. The idea is to illustrate essential mental operations externally. This…

  10. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms a MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.

  11. Connectivity properties of the random-cluster model

    NASA Astrophysics Data System (ADS)

    Weigel, Martin; Metin Elci, Eren; Fytas, Nikolaos G.

    2016-02-01

We investigate the connectivity properties of the random-cluster model mediated by bridge bonds that, if removed, lead to the generation of new connected components. We study numerically the density of bridges and the fragmentation kernel, i.e., the relative sizes of the generated fragments, and find that these quantities follow a scaling description. The corresponding scaling exponents are related to well-known equilibrium critical exponents of the model. Using the Russo-Margulis formalism, we derive an exact relation between the expected density of bridges and the number of active edges. The same approach allows us to study the fluctuations in the numbers of bridges, thereby uncovering a new singularity in the random-cluster model as q < 4 cos^2(π/√3) in two dimensions. For numerical simulations of the model directly in the language of individual bonds, known as Sweeny's algorithm, the prevalence of bridges and the scaling of the sizes of clusters connected by bridges and candidate-bridges play a pivotal role. We discuss several different implementations of the necessary connectivity algorithms and assess their relative performance.
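The bridge bonds central to the abstract can be identified with a standard connectivity algorithm. The sketch below is a plain Tarjan bridge-finder on a small hypothetical graph, illustrating the definition only (it is not one of the optimized implementations the paper benchmarks).

```python
def find_bridges(n, edges):
    """Tarjan's bridge-finding via DFS low-links: an edge is a bridge iff
    removing it disconnects its component, i.e. iff it lies on no cycle.
    Returns the indices (into `edges`) of the bridge edges."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc = [-1] * n          # DFS discovery times
    low = [0] * n            # lowest discovery time reachable
    bridges = []
    timer = [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, idx in adj[u]:
            if idx == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, idx)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:        # no back-edge over (u, v): a bridge
                    bridges.append(idx)
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

# Triangle 0-1-2 plus a pendant edge 2-3: only the pendant edge is a bridge.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(find_bridges(4, edges))  # [3]
```

The bridge density of a configuration is then simply `len(find_bridges(...)) / len(edges)`.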

  12. Modeling Temporal Variation in Social Network: An Evolutionary Web Graph Approach

    NASA Astrophysics Data System (ADS)

    Mitra, Susanta; Bagchi, Aditya

A social network is a social structure between actors (individuals, organizations or other social entities) and indicates the ways in which they are connected through various social relationships such as friendship, kinship, professional or academic ties. Usually, a social network represents a social community, like a club and its members, a city and its citizens, or a research group communicating over the Internet. In the seventies, Leinhardt [1] first proposed the idea of representing a social community by a digraph. Later, this idea became popular among other researchers, such as network designers, web-service application developers and e-learning modelers, and gave rise to a rapid proliferation of research work in the area of social network analysis. Some of the notable structural properties of a social network are connectedness between actors, reachability between a source and a target actor, reciprocity or pair-wise connection between actors with bi-directional links, centrality of actors or the important actors having high degree or more connections, and finally the division of actors into sub-structures or cliques or strongly-connected components. The cycles present in a social network may even be nested [2, 3]. The formal definition of these structural properties will be provided in Sect. 8.2.1. The division of actors into cliques or sub-groups can be a very important factor for understanding a social structure, particularly the degree of cohesiveness in a community. The number, size, and connections among the sub-groups in a network are useful in understanding how the network, as a whole, is likely to behave.
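Two of the structural properties listed above, reciprocity and division into strongly-connected components, can be computed directly on a digraph. The sketch below uses a hypothetical four-actor community (the node names and edges are ours) and Kosaraju's classic two-pass DFS for the components.

```python
def reciprocity(edges):
    """Fraction of directed (non-self) edges whose reverse edge also exists,
    i.e. the prevalence of pair-wise bi-directional links."""
    pairs = {(u, v) for u, v in edges if u != v}
    if not pairs:
        return 0.0
    return sum((v, u) in pairs for u, v in pairs) / len(pairs)

def strongly_connected_components(nodes, edges):
    """Kosaraju's two-pass DFS: each strongly-connected component is a maximal
    set of actors that are all mutually reachable."""
    fwd = {u: [] for u in nodes}
    rev = {u: [] for u in nodes}
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
    order, seen = [], set()

    def dfs(u, adj, out):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v, adj, out)
        out.append(u)

    for u in nodes:                  # pass 1: finish order on the graph
        if u not in seen:
            dfs(u, fwd, order)
    seen = set()
    comps = []
    for u in reversed(order):        # pass 2: DFS on the reversed graph
        if u not in seen:
            comp = []
            dfs(u, rev, comp)
            comps.append(sorted(comp))
    return comps

# Hypothetical community: A and B name each other; C and D name each other.
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "A"), ("B", "C"), ("C", "D"), ("D", "C")]
comps = strongly_connected_components(nodes, edges)
print(reciprocity(edges), comps)  # 0.8 [['A', 'B'], ['C', 'D']]
```

The single non-reciprocated tie B→C keeps the two mutually-reachable sub-groups from merging into one component.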

  13. Graph theory for analyzing pair-wise data: application to geophysical model parameters estimated from interferometric synthetic aperture radar data at Okmok volcano, Alaska

    NASA Astrophysics Data System (ADS)

    Reinisch, Elena C.; Cardiff, Michael; Feigl, Kurt L.

    2016-07-01

Graph theory is useful for analyzing time-dependent model parameters estimated from interferometric synthetic aperture radar (InSAR) data in the temporal domain. Plotting acquisition dates (epochs) as vertices and pair-wise interferometric combinations as edges defines an incidence graph. The edge-vertex incidence matrix and the normalized edge Laplacian matrix are factors in the covariance matrix for the pair-wise data. Using empirical measures of residual scatter in the pair-wise observations, we estimate the relative variance at each epoch by inverting the covariance of the pair-wise data. We evaluate the rank deficiency of the corresponding least-squares problem via the edge-vertex incidence matrix. We implement our method in a MATLAB software package called GraphTreeTA available on GitHub (https://github.com/feigl/gipht). We apply temporal adjustment to the data set described in Lu et al. (Geophys Res Solid Earth 110, 2005) at Okmok volcano, Alaska, which erupted most recently in 1997 and 2008. The data set contains 44 differential volumetric changes and uncertainties estimated from interferograms between 1997 and 2004. Estimates show that approximately half of the magma volume lost during the 1997 eruption was recovered by the summer of 2003. Between June 2002 and September 2003, the estimated rate of volumetric increase is (6.2 ± 0.6) × 10^6 m^3/year. Our preferred model provides a reasonable fit that is compatible with viscoelastic relaxation in the five years following the 1997 eruption. Although we demonstrate the approach using volumetric rates of change, our formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, such as range change, range gradient, or atmospheric delay.
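The incidence-graph formulation has a compact least-squares core: one row of the edge-vertex incidence matrix per pair-wise difference. The sketch below uses hypothetical epoch values and consistent pair data (not the Okmok data set), and pins the first epoch to zero to remove the constant null space; it is a bare-bones analogue of the temporal adjustment, not the GraphTreeTA implementation.

```python
import numpy as np

def temporal_adjustment(n_epochs, pairs, data):
    """Least-squares adjustment of pair-wise differences: build the edge-vertex
    incidence matrix G (one row per pair, -1 at the earlier epoch, +1 at the
    later one) and solve G m = d for one value per epoch.  The incidence
    matrix has a rank-1 null space (an arbitrary additive constant), removed
    here by pinning the first epoch to zero."""
    G = np.zeros((len(pairs), n_epochs))
    for row, (i, j) in enumerate(pairs):
        G[row, i], G[row, j] = -1.0, 1.0
    sol, *_ = np.linalg.lstsq(G[:, 1:], np.asarray(data, float), rcond=None)
    return np.concatenate([[0.0], sol])

# Hypothetical epochs with true values [0, 1, 3, 6] and consistent pair data.
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
data = [1.0, 2.0, 3.0, 3.0, 5.0]
x = temporal_adjustment(4, pairs, data)
print(x)  # ≈ [0. 1. 3. 6.]
```

With inconsistent (noisy) pair data the same call returns the least-squares compromise; weighting the rows by inverse data variance would recover the variance-aware adjustment the abstract describes.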

  14. Critical behavior of the Ising model on random fractals.

    PubMed

    Monceau, Pascal

    2011-11-01

We study the critical behavior of the Ising model in the case of quenched disorder constrained by fractality on random Sierpinski fractals with a Hausdorff dimension d_f ≈ 1.8928. This is a first attempt to study a situation between the borderline cases of deterministic self-similarity and quenched randomness. Intensive Monte Carlo simulations were carried out. Scaling corrections are much weaker than in the deterministic cases, so that our results enable us to ensure that finite-size scaling holds, and that the critical behavior is described by a new universality class. The hyperscaling relation is compatible with an effective dimension equal to the Hausdorff one; moreover, the two eigenvalue exponents of the renormalization flows are shown to be different from the ones calculated from ε expansions, and from the ones obtained for fourfold symmetric deterministic fractals. Although the space dimensionality is not integer, lack of self-averaging properties exhibits some features very close to the ones of a random fixed point associated with a relevant disorder.

  15. Random element method for numerical modeling of diffusional processes

    NASA Technical Reports Server (NTRS)

    Ghoniem, A. F.; Oppenheim, A. K.

    1982-01-01

The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions, except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by ensemble averaging over a number of solutions.
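A minimal sketch of the underlying idea, random walk as the stochastic manifestation of diffusion, for a one-dimensional point release; parameters are illustrative, and the full random element method (element strengths, boundary treatment) is not reproduced.

```python
import numpy as np

# Each element performs an independent Gaussian random walk; the step
# variance 2*nu*dt is the Brownian-motion representation of diffusion
# with diffusivity nu. No grid is ever built.
rng = np.random.default_rng(0)
nu, dt, n_steps, n_elem = 0.1, 0.01, 100, 50_000

x = np.zeros(n_elem)                  # point release at the origin
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(2.0 * nu * dt), n_elem)

# Exact solution after t = n_steps*dt: Gaussian with variance 2*nu*t.
# The remaining discrepancy is the statistical error, which shrinks as
# the number of elements grows.
t = n_steps * dt
sample_var, exact_var = x.var(), 2.0 * nu * t
```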

  16. Graph theory enables drug repurposing--how a mathematical model can drive the discovery of hidden mechanisms of action.

    PubMed

    Gramatica, Ruggero; Di Matteo, T; Giorgetti, Stefano; Barbiani, Massimo; Bevec, Dorian; Aste, Tomaso

    2014-01-01

We introduce a methodology to efficiently exploit natural-language biomedical knowledge for repurposing existing drugs towards diseases for which they were not initially intended. Leveraging developments in computational linguistics and graph theory, a methodology is defined to build a graph representation of knowledge, which is automatically analysed to discover hidden relations between any drug and any disease: these relations are specific paths among the biomedical entities of the graph, representing possible modes of action for any given pharmacological compound. We propose a measure for the likelihood of these paths based on a stochastic process on the graph. This measure depends on the abundance of indirect paths between a peptide and a disease, rather than solely on the strength of the shortest path connecting them. We provide real-world examples, showing how the method successfully retrieves known pathophysiological modes of action and finds new ones by meaningfully selecting and aggregating contributions from known bio-molecular interactions. Applications of this methodology are presented, and prove the efficacy of the method for selecting drugs as treatment options for rare diseases.
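The flavor of a path-abundance measure can be sketched on a toy graph; the entities, edges, and the particular score below are invented for illustration and are not the authors' stochastic process.

```python
import numpy as np

# Hypothetical knowledge graph: a drug, two proteins, a pathway, a disease.
nodes = ["drugA", "protein1", "protein2", "pathwayX", "diseaseY"]
idx = {name: k for k, name in enumerate(nodes)}
edges = [("drugA", "protein1"), ("drugA", "protein2"),
         ("protein1", "pathwayX"), ("protein2", "pathwayX"),
         ("pathwayX", "diseaseY")]

A = np.zeros((len(nodes), len(nodes)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
P = A / A.sum(axis=1, keepdims=True)   # row-stochastic random-walk matrix

def walk_score(src, dst, max_len=4):
    """Sum random-walk probabilities over all walks of length 1..max_len,
    aggregating indirect paths instead of scoring only the shortest one."""
    score, Pk = 0.0, np.eye(len(nodes))
    for _ in range(max_len):
        Pk = Pk @ P
        score += Pk[idx[src], idx[dst]]
    return score
```

Here both three-step routes (via protein1 and via protein2) contribute to the drug-disease score, illustrating how indirect-path abundance, not just the single shortest path, drives the measure.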

  18. Modeling the Relationships between Test-Taking Strategies and Test Performance on a Graph-Writing Task: Implications for EAP

    ERIC Educational Resources Information Center

    Yang, Hui-Chun

    2012-01-01

    With the increasing use of integrated tasks in assessing writing, more and more research studies have been conducted to examine the construct validity of such tasks. Previous studies have largely focused on reading-writing tasks, while relatively little is known about graph-writing tasks. This study examines second language (L2) writers'…

  19. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    ERIC Educational Resources Information Center

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  20. Random-effects models for serial observations with binary response

    SciTech Connect

    Stiratelli, R.; Laird, N.; Ware, J.H.

    1984-12-01

    This paper presents a general mixed model for the analysis of serial dichotomous responses provided by a panel of study participants. Each subject's serial responses are assumed to arise from a logistic model, but with regression coefficients that vary between subjects. The logistic regression parameters are assumed to be normally distributed in the population. Inference is based upon maximum likelihood estimation of fixed effects and variance components, and empirical Bayes estimation of random effects. Exact solutions are analytically and computationally infeasible, but an approximation based on the mode of the posterior distribution of the random parameters is proposed, and is implemented by means of the EM algorithm. This approximate method is compared with a simpler two-step method proposed by Korn and Whittemore, using data from a panel study of asthmatics originally described in that paper. One advantage of the estimation strategy described here is the ability to use all of the data, including that from subjects with insufficient data to permit fitting of a separate logistic regression model, as required by the Korn and Whittemore method. However, the new method is computationally intensive.
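For concreteness, a simulation sketch of the random-intercepts logistic model the paper fits (the EM estimation itself is not reproduced); all parameter values are invented.

```python
import numpy as np

# Each subject's serial binary responses follow a logistic model with a
# shared fixed effect (beta0, beta1) plus a subject-specific Gaussian
# intercept u_i, normally distributed in the population.
rng = np.random.default_rng(1)
n_subj, n_obs = 200, 10
beta0, beta1, sigma_u = -0.5, 0.8, 1.0

u = rng.normal(0.0, sigma_u, n_subj)        # random intercepts
x = rng.normal(0.0, 1.0, (n_subj, n_obs))   # serial covariate
eta = beta0 + u[:, None] + beta1 * x        # subject-specific log-odds
p = 1.0 / (1.0 + np.exp(-eta))
y = rng.random((n_subj, n_obs)) < p         # serial binary responses
```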

  1. Comprehensive analytical model to characterize randomness in optical waveguides.

    PubMed

    Zhou, Junhe; Gallion, Philippe

    2016-04-01

In this paper, coupled mode theory (CMT) is used to derive the stochastic differential equations (SDEs) for the modal amplitude evolution inside optical waveguides with random refractive index variations. Based on the SDEs, ordinary differential equations (ODEs) are derived to analyze the statistics of the modal amplitudes, such as the optical power and power variations, as well as the power correlation coefficients between the different modal powers. These ODEs can be solved analytically, which greatly simplifies the analysis. It is demonstrated that the ODEs for the power evolution of the modes are in excellent agreement with Marcuse's coupled power model. The higher order statistics, such as the power variations and power correlation coefficients, which are not treated exactly in Marcuse's model, are discussed afterwards. Monte Carlo simulations are performed to demonstrate the validity of the analytical model.

  2. Zero temperature landscape of the random sine-Gordon model

    SciTech Connect

    Sanchez, A.; Bishop, A.R.; Cai, D.

    1997-04-01

We present a preliminary summary of the zero temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed using lattices of moderate size, as the behavior of the model turns out to be independent of the size above a certain length (≈ 128 × 128 lattices). Subsequently, we show that the behavior of the height difference correlation function is of (log r)^2 type up to a certain correlation length (ξ ≈ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of many (contradictory) analyses.

  3. Modeling Gene Regulation in Liver Hepatocellular Carcinoma with Random Forests

    PubMed Central

    2016-01-01

    Liver hepatocellular carcinoma (HCC) remains a leading cause of cancer-related death. Poor understanding of the mechanisms underlying HCC prevents early detection and leads to high mortality. We developed a random forest model that incorporates copy-number variation, DNA methylation, transcription factor, and microRNA binding information as features to predict gene expression in HCC. Our model achieved a highly significant correlation between predicted and measured expression of held-out genes. Furthermore, we identified potential regulators of gene expression in HCC. Many of these regulators have been previously found to be associated with cancer and are differentially expressed in HCC. We also evaluated our predicted target sets for these regulators by making comparison with experimental results. Lastly, we found that the transcription factor E2F6, one of the candidate regulators inferred by our model, is predictive of survival rate in HCC. Results of this study will provide directions for future prospective studies in HCC.
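A toy version of such a model can be sketched with scikit-learn; the synthetic features below merely stand in for the copy-number, methylation, TF-binding, and miRNA features used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: "expression" of 500 genes depends on 2 of 20
# regulatory features; a random forest is trained on 400 genes and
# evaluated on 100 held-out genes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = 1.5 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])

# Correlation between predicted and "measured" expression of held-out genes.
r = np.corrcoef(model.predict(X[400:]), y[400:])[0, 1]
# Feature importances point back at the informative "regulators".
top = np.argsort(model.feature_importances_)[::-1][:2]
```

The same importance ranking is the mechanism by which candidate regulators (such as E2F6 in the study) can be read off a fitted forest.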

  4. Markov-random-field modeling for linear seismic tomography.

    PubMed

    Kuwatani, Tatsu; Nagata, Kenji; Okada, Masato; Toriumi, Mitsuhiro

    2014-10-01

    We apply the Markov-random-field model to linear seismic tomography and propose a method to estimate the hyperparameters for the smoothness and the magnitude of the noise. Optimal hyperparameters can be determined analytically by minimizing the free energy function, which is defined by marginalizing the evaluation function. In synthetic inversion tests under various settings, the assumed velocity structures are successfully reconstructed, which shows the effectiveness and robustness of the proposed method. The proposed mathematical framework can be applied to inversion problems in various fields in the natural sciences.
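The kind of smoothness-regularized linear inversion to which such hyperparameter selection applies can be sketched as follows; the operator, model, and the fixed λ below are illustrative (the paper's free-energy minimization for choosing the hyperparameters is not reproduced).

```python
import numpy as np

# Synthetic linear tomography: data d = G m + noise, with a
# first-difference smoothness prior weighted by lam.
rng = np.random.default_rng(4)
n_obs, n_cells = 80, 40
G = rng.random((n_obs, n_cells))                      # ray-path operator (toy)
m_true = np.sin(np.linspace(0, np.pi, n_cells))       # smooth slowness model
d = G @ m_true + rng.normal(0.0, 0.05, n_obs)

L = np.eye(n_cells, k=1)[:-1] - np.eye(n_cells)[:-1]  # first-difference operator
lam = 10.0                                            # smoothness hyperparameter
m_est = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)
```

In the Markov-random-field formulation, λ and the noise magnitude are exactly the hyperparameters estimated by minimizing the free energy rather than fixed by hand as here.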

  6. Bouchaud-Mézard model on a random network

    NASA Astrophysics Data System (ADS)

    Ichinomiya, Takashi

    2012-09-01

    We studied the Bouchaud-Mézard (BM) model, which was introduced to explain Pareto's law in a real economy, on a random network. Using “adiabatic and independent” assumptions, we analytically obtained the stationary probability distribution function of wealth. The results show that wealth condensation, indicated by the divergence of the variance of wealth, occurs at a larger J than that obtained by the mean-field theory, where J represents the strength of interaction between agents. We compared our results with numerical simulation results and found that they were in good agreement.
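A direct simulation sketch of the BM dynamics on an Erdős-Rényi network, dW_i = σ W_i dB_i + J Σ_j A_ij (W_j − W_i) dt, integrated with Euler-Maruyama; parameter values are illustrative, not the paper's.

```python
import numpy as np

# Wealth-exchange dynamics with multiplicative noise on a random network.
rng = np.random.default_rng(2)
n, p_edge, J, sigma, dt, n_steps = 200, 0.05, 0.1, 0.3, 0.01, 2000

upper = np.triu(rng.random((n, n)) < p_edge, 1)
A = (upper | upper.T).astype(float)       # symmetric adjacency, no self-loops
deg = A.sum(axis=1)

W = np.ones(n)                            # equal initial wealth
for _ in range(n_steps):
    exchange = J * (A @ W - deg * W)                      # sum_j A_ij (W_j - W_i)
    noise = sigma * W * rng.normal(0.0, np.sqrt(dt), n)   # multiplicative noise
    W = np.maximum(W + exchange * dt + noise, 1e-12)

W /= W.mean()                             # report wealth relative to the mean
```

Sweeping J downward and watching the variance of W grow is the numerical counterpart of the wealth-condensation transition discussed in the abstract.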

  7. An adaptive grid for graph-based segmentation in retinal OCT

    PubMed Central

    Lang, Andrew; Carass, Aaron; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.

    2016-01-01

Graph-based methods for retinal layer segmentation have proven to be popular due to their efficiency and accuracy. These methods build a graph with nodes at each voxel location and use edges connecting nodes to encode the hard constraints of each layer’s thickness and smoothness. In this work, we explore deforming the regular voxel grid to allow adjacent vertices in the graph to more closely follow the natural curvature of the retina. This deformed grid is constructed by fixing node locations based on a regression model of each layer’s thickness relative to the overall retina thickness, thus generating a subject-specific grid. Graph vertices are not at voxel locations, which allows for control over the resolution that the graph represents. By incorporating soft constraints between adjacent nodes, segmentation on this grid will favor smoothly varying surfaces consistent with the shape of the retina. Our final segmentation method then follows our previous work. Boundary probabilities are estimated using a random forest classifier followed by an optimal graph search algorithm on the new adaptive grid to produce a final segmentation. Our method is shown to produce a more consistent segmentation with an overall accuracy of 3.38 μm across all boundaries.

  8. Random interface growth in a random environment: Renormalization group analysis of a simple model

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Kakin, P. I.

    2015-10-01

We study the effects of turbulent mixing on the random growth of an interface in the problem of the deposition of a substance on a substrate. The growth is modeled by the well-known Kardar-Parisi-Zhang model. The turbulent advecting velocity field is modeled by the Kraichnan rapid-change ensemble: Gaussian statistics with the correlation function ⟨vv⟩ ∝ δ(t - t′) k^(-d-ξ), where k is the wave number and ξ is a free parameter, 0 < ξ < 2. We study the effects of the fluid compressibility. Using the field theory renormalization group, we show that depending on the relation between the exponent ξ and the spatial dimension d, the system manifests different types of large-scale, long-time asymptotic behavior associated with four possible fixed points of the renormalization group equations. In addition to the known regimes (ordinary diffusion, the ordinary growth process, and a passively advected scalar field), we establish the existence of a new nonequilibrium universality class. We calculate the fixed-point coordinates, their stability regions, and critical dimensions to first order of the double expansion in ξ and ε = 2 - d (one-loop approximation). It turns out that for an incompressible fluid, the most realistic values ξ = 4/3 or ξ = 2 and d = 1 or d = 2 correspond to the case of a passive scalar field, where the nonlinearity of the Kardar-Parisi-Zhang model is irrelevant and the interface growth is completely determined by the turbulent transfer. If the compressibility becomes sufficiently strong, then a crossover occurs in the critical behavior, and these values of d and ξ fall in the stability region of the new regime, where the advection and the nonlinearity are both important. However, the coordinates of the fixed point for this regime lie in the unphysical region, so its physical interpretation remains an open problem.

  9. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  10. Mechanisms of evolution of avalanches in regular graphs.

    PubMed

    Handford, Thomas P; Pérez-Reche, Francisco J; Taraskin, Sergei N

    2013-06-01

    A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.

  12. Rigorously testing multialternative decision field theory against random utility models.

    PubMed

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions.
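A minimal sequential-sampling sketch in the spirit of MDFT (not the full model: attribute-wise attention switching and distance-dependent inhibition are omitted); the option values and parameters are invented.

```python
import numpy as np

# Preference states for three options accumulate noisy valences over time,
# with lateral inhibition between options; the first option whose
# preference reaches the threshold is chosen.
rng = np.random.default_rng(5)
values = np.array([1.0, 0.9, 0.2])    # hypothetical mean valences
S = np.eye(3) - 0.05 * (np.ones((3, 3)) - np.eye(3))  # lateral inhibition
theta = 5.0                           # decision threshold

def one_choice():
    P = np.zeros(3)                   # preference state
    while P.max() < theta:
        P = S @ P + values + rng.normal(0.0, 0.5, 3)
    return int(P.argmax())

choices = [one_choice() for _ in range(200)]
```

Unlike probit or logit, the prediction here emerges from the accumulation process itself, which is what lets the full MDFT produce context effects that static random utility models cannot.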

  13. Neural complexity: a graph theoretic interpretation.

    PubMed

    Barnett, L; Buckley, C L; Bullock, S

    2011-04-01

    One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end, Tononi et al. [Proc. Natl. Acad. Sci. USA. 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular, we explicitly establish a dependency of neural complexity on cyclic graph motifs.

  14. Complex Networks: from Graph Theory to Biology

    NASA Astrophysics Data System (ADS)

    Lesne, Annick

    2006-12-01

The aim of this text is to show the central role played by networks in complex system science. A remarkable feature of network studies is to lie at the crossroads of different disciplines, from mathematics (graph theory, combinatorics, probability theory) to physics (statistical physics of networks) to computer science (network generating algorithms, combinatorial optimization) to biological issues (regulatory networks). New paradigms have recently appeared, like that of ‘scale-free networks’, providing an alternative to the random graph model introduced long ago by Erdős and Rényi. With the notion of statistical ensemble and methods originally introduced for percolation networks, statistical physics is highly relevant for a deep account of the topological and statistical properties of a network. The consequences of these properties for the dynamics taking place on the network should then be investigated. The impact of network theory is huge in all natural sciences, especially in biology with gene networks, metabolic networks, neural networks and food webs. I illustrate this brief overview with a recent work on the influence of network topology on the dynamics of coupled excitable units, and the insights it provides about network emerging features, robustness of network behaviors, and the notion of static or dynamic motif.
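The contrast drawn between the Erdős-Rényi random graph and scale-free networks is easy to see numerically, e.g. with networkx; sizes and parameters here are arbitrary.

```python
import networkx as nx

# Both graphs have mean degree ~6; only the scale-free one grows hubs.
n = 2000
er = nx.gnp_random_graph(n, p=6 / n, seed=0)    # Erdos-Renyi G(n, p)
ba = nx.barabasi_albert_graph(n, m=3, seed=0)   # preferential attachment

er_max = max(d for _, d in er.degree())         # narrow binomial tail
ba_max = max(d for _, d in ba.degree())         # heavy tail: hub nodes
```

The maximum degree of the Barabási-Albert graph far exceeds that of the Erdős-Rényi graph at the same mean degree, which is the topological difference that drives the dynamical consequences discussed in the text.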

  15. Graph500 in OpenSHMEM

    SciTech Connect

    D'Azevedo, Ed F; Imam, Neena

    2015-01-01

This document describes the effort to implement the Graph 500 benchmark using OpenSHMEM, based on the MPI-2 one-sided version. The Graph 500 benchmark performs a breadth-first search in parallel on a large randomly generated undirected graph and can be implemented using basic MPI-1 and MPI-2 one-sided communication. Graph 500 requires atomic bit-wise operations on unsigned long integers, but OpenSHMEM provides neither atomic bit-wise operations nor support for unsigned long. The needed bit-wise atomic operations and unsigned long support are implemented using atomic conditional swap (CSWAP) on signed long integers. Preliminary results comparing the OpenSHMEM and MPI-2 one-sided implementations on a Silicon Graphics Incorporated (SGI) cluster and the Cray XK7 are presented.
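The retry-loop idea, building a bit-wise atomic out of nothing but CSWAP, can be sketched in Python (the OpenSHMEM calls themselves are not reproduced; the lock-protected cell below merely stands in for the hardware primitive).

```python
import threading

class Cell:
    """A signed integer cell exposing only a conditional-swap primitive."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def cswap(self, expected, new):
        """Atomically: if value == expected, store new; return the old value."""
        with self._lock:
            old = self._value
            if old == expected:
                self._value = new
            return old

def atomic_set_bit(cell, bit):
    """Set one bit using only CSWAP: read, then retry until the swap lands."""
    while True:
        old = cell.cswap(0, 0)                      # CSWAP(0, 0) acts as a read
        if cell.cswap(old, old | (1 << bit)) == old:
            return                                  # our swap won; bit is set

c = Cell()
for b in (0, 3, 7):
    atomic_set_bit(c, b)
```

If another process changes the value between the read and the swap, the CSWAP fails (returns a different old value) and the loop retries, which is the standard lock-free pattern the benchmark port relies on.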

  16. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches, i.e., the random parameter negative binomial model (RPNB) and semi-parametric geographically weighted Poisson regression (S-GWPR). Based on a 3-year data set from the county of Hillsborough, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationship, but the two methods yield notably different sets of results; (2) the S-GWPR performs best with the highest value of Rd(2) as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and, in some cases, provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to location, it is desirable to first use the geographical component to explore explicitly spatial aspects of the data (the spatial heterogeneity, or spatially structured varying relationships), and then address unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR proves more appropriate for regional crash modeling, as it outperforms the global models in capturing the spatial heterogeneity occurring in the relationship that is modeled and, compared with the non-spatial model, accounts for the spatial correlation in crash data.

  18. Box graphs and singular fibers

    NASA Astrophysics Data System (ADS)

    Hayashi, Hirotaka; Lawrie, Craig; Morrison, David R.; Schafer-Nameki, Sakura

    2014-05-01

We determine the higher codimension fibers of elliptically fibered Calabi-Yau fourfolds with section by studying the three-dimensional N = 2 supersymmetric gauge theory with matter which describes the low energy effective theory of M-theory compactified on the associated Weierstrass model, a singular model of the fourfold. Each phase of the Coulomb branch of this theory corresponds to a particular resolution of the Weierstrass model, and we show that these have a concise description in terms of decorated box graphs based on the representation graph of the matter multiplets, or alternatively by a class of convex paths on said graph. Transitions between phases have a simple interpretation as "flopping" of the path, and in the geometry correspond to actual flop transitions. This description of the phases enables us to enumerate and determine the entire network between them, with various matter representations for all reductive Lie groups. Furthermore, we observe that each network of phases carries the structure of a (quasi-)minuscule representation of a specific Lie algebra. Interpreted from a geometric point of view, this analysis determines the generators of the cone of effective curves as well as the network of flop transitions between crepant resolutions of singular elliptic Calabi-Yau fourfolds. From the box graphs we determine all fiber types in codimensions two and three, and we find new, non-Kodaira, fiber types for E6, E7 and E8.

  19. Measuring Graph Comprehension, Critique, and Construction in Science

    ERIC Educational Resources Information Center

    Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.

    2016-01-01

    Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed…

  20. A random interacting network model for complex networks.

    PubMed

    Goswami, Bedartha; Shekatkar, Snehal M; Rheinwalt, Aljoscha; Ambika, G; Kurths, Jürgen

    2015-01-01

We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms, degree-based preferential node selection and degree-assortative link placement, are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems. PMID:26657032
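The two steps can be sketched on toy networks; the sizes, densities, and the particular selection and linkage functions below are invented stand-ins, not the paper's calibrated functions.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_adjacency(n, p):
    upper = np.triu(rng.random((n, n)) < p, 1)
    return (upper | upper.T).astype(int)

# Two intra-networks of different sizes and densities.
A1, A2 = random_adjacency(100, 0.08), random_adjacency(150, 0.05)
deg1, deg2 = A1.sum(axis=1), A2.sum(axis=1)

inter_links = set()
for _ in range(500):
    # (i) degree-based preferential node selection, one node per network
    i = rng.choice(len(deg1), p=deg1 / deg1.sum())
    j = rng.choice(len(deg2), p=deg2 / deg2.sum())
    # (ii) link placement based on similarity of relative importance
    s1, s2 = deg1[i] / deg1.max(), deg2[j] / deg2.max()
    if rng.random() < 1.0 - abs(s1 - s2):
        inter_links.add((int(i), int(j)))
```

Swapping in different fitness functions at step (i) or linkage probabilities at step (ii) is exactly how the model lets one test alternative hypotheses about inter-network structure.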

  1. A random interacting network model for complex networks

    NASA Astrophysics Data System (ADS)

    Goswami, Bedartha; Shekatkar, Snehal M.; Rheinwalt, Aljoscha; Ambika, G.; Kurths, Jürgen

    2015-12-01

We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms, degree-based preferential node selection and degree-assortative link placement, are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems.
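The two-step generative procedure described above (fitness-based node selection, then similarity-based link placement) can be sketched in plain Python. The degree-proportional fitness and the particular linkage score used here (one minus the normalized-degree gap) are illustrative assumptions, not the paper's exact functions:

```python
import random

def degrees(adj):
    return {v: len(nbrs) for v, nbrs in adj.items()}

def pick_node(adj, rng):
    # step (i): degree-based preferential selection (fitness proportional to degree)
    deg = degrees(adj)
    total = sum(deg.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for v, d in deg.items():
        acc += d
        if r <= acc:
            return v
    return v

def rain_links(net_a, net_b, n_trials, rng=None):
    """RAIN-style sketch: place inter-network links between two networks."""
    rng = rng or random.Random(0)
    deg_a, deg_b = degrees(net_a), degrees(net_b)
    max_a, max_b = max(deg_a.values()), max(deg_b.values())
    links = set()
    for _ in range(n_trials):
        u = pick_node(net_a, rng)
        v = pick_node(net_b, rng)
        # step (ii): linkage probability from similarity of relative importance
        # (normalized degree serves as the importance score in this sketch)
        p = 1.0 - abs(deg_a[u] / max_a - deg_b[v] / max_b)
        if rng.random() < p:
            links.add((u, v))
    return links
```

Running this on two toy networks produces an inter-network edge set biased toward hub-hub pairs, which is the qualitative behavior the model is designed to reproduce.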

  2. Expert interpretation of bar and line graphs: the role of graphicacy in reducing the effect of graph format.

    PubMed

    Peebles, David; Ali, Nadia

    2015-01-01

    The distinction between informational and computational equivalence of representations, first articulated by Larkin and Simon (1987) has been a fundamental principle in the analysis of diagrammatic reasoning which has been supported empirically on numerous occasions. We present an experiment that investigates this principle in relation to the performance of expert graph users of 2 × 2 "interaction" bar and line graphs. The study sought to determine whether expert interpretation is affected by graph format in the same way that novice interpretations are. The findings revealed that, unlike novices-and contrary to the assumptions of several graph comprehension models-experts' performance was the same for both graph formats, with their interpretation of bar graphs being no worse than that for line graphs. We discuss the implications of the study for guidelines for presenting such data and for models of expert graph comprehension.

  3. Phase unwrapping using region-based markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides similar or improved phase unwrapping than Phase Unwrapping MAx-flow/min-cut (PUMA) method and ZpM method.
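As a much simpler point of comparison for the wrapping problem itself (not the region-based MRF method of the paper), the classical 1D unwrapping rule adds ±2π whenever successive samples jump by more than π:

```python
import math

def wrap(phi):
    # wrap a phase into (-pi, pi]
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap_1d(phases):
    """Classical 1D phase unwrapping: whenever successive samples jump
    by more than pi, shift all following samples by +/- 2*pi."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out
```

This baseline fails as soon as the true phase changes by more than π between samples or the data are noisy, which is precisely why 2D methods resort to MRF-style global optimization.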

  4. Phase unwrapping using region-based markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides similar or improved phase unwrapping than Phase Unwrapping MAx-flow/min-cut (PUMA) method and ZpM method. PMID:21096819

  5. Spatial Markov model of anomalous transport through random lattice networks.

    PubMed

    Kang, Peter K; Dentz, Marco; Le Borgne, Tanguy; Juanes, Ruben

    2011-10-28

    Flow through lattice networks with quenched disorder exhibits a strong correlation in the velocity field, even if the link transmissivities are uncorrelated. This feature, which is a consequence of the divergence-free constraint, induces anomalous transport of passive particles carried by the flow. We propose a Lagrangian statistical model that takes the form of a continuous time random walk with correlated velocities derived from a genuinely multidimensional Markov process in space. The model captures the anomalous (non-Fickian) longitudinal and transverse spreading, and the tail of the mean first-passage time observed in the Monte Carlo simulations of particle transport. We show that reproducing these fundamental aspects of transport in disordered systems requires honoring the correlation in the Lagrangian velocity.
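The key ingredient above, velocities that form a Markov process in space rather than in time, can be illustrated with a toy two-state velocity chain; the state values, persistence probability, and spatial increment are arbitrary choices for the sketch:

```python
import random

def spatial_markov_walk(n_steps, v_states=(0.1, 1.0), p_stay=0.9,
                        dx=1.0, rng=None):
    """Sketch of a spatial Markov (correlated CTRW) model: the particle
    velocity is a Markov chain over fixed space increments dx, so the
    travel times t = dx / v are correlated along the trajectory."""
    rng = rng or random.Random(1)
    v = rng.choice(v_states)
    t, times = 0.0, []
    for _ in range(n_steps):
        t += dx / v                  # time to traverse one spatial increment
        times.append(t)
        if rng.random() > p_stay:    # velocity transition happens in space
            v = rng.choice([s for s in v_states if s != v])
    return times
```

Because slow-velocity episodes persist over many increments, arrival-time distributions develop the heavy tails characteristic of anomalous (non-Fickian) transport.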

  6. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
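A quick numerical illustration of the binary-collision dynamics (Ulam's redistribution problem): pick a random pair, split their combined energy uniformly at random, and watch a histogram entropy grow from the fully ordered initial state. This sketches the process, not the paper's information-theoretic proof:

```python
import math
import random

def random_exchange(energies, steps, rng=None):
    """Ulam-type redistribution: a random pair splits its combined
    energy uniformly at random (one binary collision per step)."""
    rng = rng or random.Random(42)
    e = list(energies)
    n = len(e)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        total = e[i] + e[j]
        u = rng.random()
        e[i], e[j] = u * total, (1 - u) * total
    return e

def shannon_entropy(values, bins=20):
    # entropy (in nats) of a histogram of the energy distribution
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        k = min(int((v - lo) / width), bins - 1)
        counts[k] += 1
    n = len(values)
    return -sum(c / n * math.log(c / n) for c in counts if c)
```

Total energy is conserved at every collision, while the empirical distribution spreads out toward its exponential equilibrium, so the histogram entropy rises from zero.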

  7. Monotonic entropy growth for a nonlinear model of random exchanges

    NASA Astrophysics Data System (ADS)

    Apenko, S. M.

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific “coarse graining” of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.

  8. Topologies on directed graphs

    NASA Technical Reports Server (NTRS)

    Lieberman, R. N.

    1972-01-01

Given a directed graph, a natural topology is defined and relationships between standard topological properties and graph-theoretical concepts are studied. In particular, the properties of connectivity and separatedness are investigated. A metric is introduced which is shown to be related to separatedness. The topological notions of continuity and homeomorphism are also examined, and a class of maps is studied which preserves both graph and topological properties. Applications involving strong maps and contractions are also presented.

  9. RIM: A Random Item Mixture Model to Detect Differential Item Functioning

    ERIC Educational Resources Information Center

    Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…

  10. Graph Generator Survey

    SciTech Connect

    Lothian, Josh; Powers, Sarah S; Sullivan, Blair D; Baker, Matthew B; Schrock, Jonathan; Poole, Stephen W

    2013-12-01

The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allowed the emulation of different application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report examines existing synthetic graph generator implementations in preparation for further study on the properties of their generated synthetic graphs.

  11. mpiGraph

    2007-05-22

MpiGraph consists of an MPI application called mpiGraph written in C to measure message bandwidth and an associated crunch_mpiGraph script written in Perl to process the application output into an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while under heavy load. This is useful to detect hardware and software problems in a system, such as slow nodes, links, switches, or contention in switch routing. It is also useful to characterize how interconnect performance changes with different settings or how one interconnect type compares to another.

  12. Multi-Agent Graph Patrolling and Partitioning

    NASA Astrophysics Data System (ADS)

    Elor, Y.; Bruckstein, A. M.

    2012-12-01

We introduce a novel multi-agent patrolling algorithm inspired by the behavior of gas-filled balloons. Very low capability ant-like agents are considered with the task of patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, every agent assuming responsibility for patrolling its subgraph. Balanced graph partition is an emergent behavior due to the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear in the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it; however, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex under the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. For weighted graphs the patrol quality is at least (1/2)(l_min/l_max) of the optimal, where l_max (l_min) is the longest (shortest) edge in the graph.

  13. Percolation on correlated random networks

    NASA Astrophysics Data System (ADS)

    Agliari, E.; Cioli, C.; Guadagnini, E.

    2011-09-01

We consider a class of random, weighted networks, obtained through a redefinition of patterns in a Hopfield-like model, and, by performing percolation processes, we get information about topology and resilience properties of the networks themselves. Given the weighted nature of the graphs, different kinds of bond percolation can be studied: stochastic (deleting links randomly) and deterministic (deleting links based on rank weights), each mimicking a different physical process. The evolution of the network is accordingly different, as evidenced by the behavior of the largest component size and of the distribution of cluster sizes. In particular, we can derive that weak ties are crucial in order to keep the graph connected and that, when they are the most prone to failure, the giant component typically shrinks without abruptly breaking apart; these results have been recently evidenced in several kinds of social networks.
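The two percolation protocols described above, random link deletion versus rank-ordered deletion of the weakest ties, can be compared with a small union-find sketch that tracks the largest surviving component (the example graph and weights are illustrative):

```python
import random

def largest_component(nodes, edges):
    # union-find with path halving over the surviving edges
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v, _w in edges:
        parent[find(u)] = find(v)
    sizes = {}
    for v in nodes:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) if sizes else 0

def percolate(nodes, edges, frac_removed, deterministic=False, rng=None):
    """Remove a fraction of links either uniformly at random (stochastic
    bond percolation) or weakest-first by weight (a deterministic,
    rank-based variant), and return the largest component size."""
    rng = rng or random.Random(0)
    ordered = sorted(edges, key=lambda e: e[2]) if deterministic \
        else rng.sample(edges, len(edges))
    n_remove = int(frac_removed * len(edges))
    return largest_component(nodes, ordered[n_remove:])
```

Sweeping `frac_removed` under both protocols reproduces the qualitative contrast in the abstract: removing the weakest ties first tends to shrink the giant component gradually rather than fragmenting it abruptly.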

  14. Graphs, matrices, and the GraphBLAS: Seven good reasons

    DOE PAGES

    Kepner, Jeremy; Bader, David; Buluç, Aydın; Gilbert, John; Mattson, Timothy; Meyerhenke, Henning

    2015-01-01

The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istcbigdata.org/GraphBlas) is being developed to bring the potential of matrix based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with analysis of graphs.
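The core GraphBLAS idiom, graph traversal expressed as matrix-vector multiplication over a Boolean semiring, can be sketched in plain Python. This mimics the idea only; it is not the GraphBLAS API:

```python
def bfs_levels(adj_matrix, source):
    """Breadth-first search as repeated Boolean matrix-vector products:
    each iteration multiplies the adjacency matrix by the current
    frontier vector (OR/AND semiring) and masks out visited vertices."""
    n = len(adj_matrix)
    frontier = [False] * n
    frontier[source] = True
    visited = [False] * n
    visited[source] = True
    level = [-1] * n
    level[source] = 0
    depth = 0
    while any(frontier):
        depth += 1
        # next_frontier = (A^T * frontier) AND NOT visited
        nxt = [False] * n
        for j in range(n):
            if frontier[j]:
                for i in range(n):
                    if adj_matrix[j][i] and not visited[i]:
                        nxt[i] = True
        for i in range(n):
            if nxt[i]:
                visited[i] = True
                level[i] = depth
        frontier = nxt
    return level
```

In a real GraphBLAS implementation the sparse matrix product and the mask are single library operations, which is what makes the matrix formulation both concise and amenable to parallel back ends.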

  15. Graphs, matrices, and the GraphBLAS: Seven good reasons

    SciTech Connect

    Kepner, Jeremy; Bader, David; Buluç, Aydın; Gilbert, John; Mattson, Timothy; Meyerhenke, Henning

    2015-01-01

    The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istcbigdata.org/GraphBlas) is being developed to bring the potential of matrix based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with analysis of graphs.

  16. Hedonic travel cost and random utility models of recreation

    SciTech Connect

    Pendleton, L.; Mendelsohn, R.; Davis, E.W.

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.

  17. Spin-glass phase transitions and minimum energy of the random feedback vertex set problem

    NASA Astrophysics Data System (ADS)

    Qin, Shao-Meng; Zeng, Ying; Zhou, Hai-Jun

    2016-08-01

A feedback vertex set (FVS) of an undirected graph contains vertices from every cycle of this graph. Constructing a FVS of sufficiently small cardinality is very difficult in the worst cases, but for random graphs this problem can be efficiently solved by converting it into an appropriate spin-glass model [H.-J. Zhou, Eur. Phys. J. B 86, 455 (2013), 10.1140/epjb/e2013-40690-1]. In the present work we study the spin-glass phase transitions and the minimum energy density of the random FVS problem by the first-step replica-symmetry-breaking (1RSB) mean-field theory. For both regular random graphs and Erdös-Rényi graphs, we determine the inverse temperature βl at which the replica-symmetric mean-field theory loses its local stability, the inverse temperature βd of the dynamical (clustering) phase transition, and the inverse temperature βs of the static (condensation) phase transition. These critical inverse temperatures all change with the mean vertex degree in a nonmonotonic way, and βd is distinct from βs for regular random graphs of vertex degrees K >60 , while βd is identical to βs for Erdös-Rényi graphs at least up to mean vertex degree c =512 . We then derive the zero-temperature limit of the 1RSB theory and use it to compute the minimum FVS cardinality.
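For contrast with the spin-glass treatment, a naive greedy baseline constructs a (generally non-minimum) feedback vertex set by pruning low-degree vertices and repeatedly moving a highest-degree vertex into the FVS:

```python
def greedy_fvs(adj):
    """Greedy feedback vertex set for a simple undirected graph given as
    {vertex: set of neighbours}: repeatedly prune vertices of degree <= 1
    (they lie on no cycle), then move a highest-degree vertex into the
    FVS, until the remaining graph is empty. A simple baseline, not the
    1RSB mean-field method described above."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    fvs = set()

    def prune():
        changed = True
        while changed:
            changed = False
            for v in list(g):
                if len(g[v]) <= 1:
                    for u in g[v]:
                        g[u].discard(v)
                    del g[v]
                    changed = True

    prune()
    while g:  # every remaining vertex has degree >= 2, so a cycle exists
        v = max(g, key=lambda x: len(g[x]))
        fvs.add(v)
        for u in g[v]:
            g[u].discard(v)
        del g[v]
        prune()
    return fvs
```

The gap between such greedy solutions and the 1RSB prediction for the minimum FVS cardinality is exactly what makes the statistical-mechanics analysis informative.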

  18. Spin-glass phase transitions and minimum energy of the random feedback vertex set problem.

    PubMed

    Qin, Shao-Meng; Zeng, Ying; Zhou, Hai-Jun

    2016-08-01

A feedback vertex set (FVS) of an undirected graph contains vertices from every cycle of this graph. Constructing a FVS of sufficiently small cardinality is very difficult in the worst cases, but for random graphs this problem can be efficiently solved by converting it into an appropriate spin-glass model [H.-J. Zhou, Eur. Phys. J. B 86, 455 (2013)EPJBFY1434-602810.1140/epjb/e2013-40690-1]. In the present work we study the spin-glass phase transitions and the minimum energy density of the random FVS problem by the first-step replica-symmetry-breaking (1RSB) mean-field theory. For both regular random graphs and Erdös-Rényi graphs, we determine the inverse temperature β_l at which the replica-symmetric mean-field theory loses its local stability, the inverse temperature β_d of the dynamical (clustering) phase transition, and the inverse temperature β_s of the static (condensation) phase transition. These critical inverse temperatures all change with the mean vertex degree in a nonmonotonic way, and β_d is distinct from β_s for regular random graphs of vertex degrees K>60, while β_d is identical to β_s for Erdös-Rényi graphs at least up to mean vertex degree c=512. We then derive the zero-temperature limit of the 1RSB theory and use it to compute the minimum FVS cardinality. PMID:27627285

  19. Expert interpretation of bar and line graphs: the role of graphicacy in reducing the effect of graph format

    PubMed Central

    Peebles, David; Ali, Nadia

    2015-01-01

    The distinction between informational and computational equivalence of representations, first articulated by Larkin and Simon (1987) has been a fundamental principle in the analysis of diagrammatic reasoning which has been supported empirically on numerous occasions. We present an experiment that investigates this principle in relation to the performance of expert graph users of 2 × 2 “interaction” bar and line graphs. The study sought to determine whether expert interpretation is affected by graph format in the same way that novice interpretations are. The findings revealed that, unlike novices—and contrary to the assumptions of several graph comprehension models—experts' performance was the same for both graph formats, with their interpretation of bar graphs being no worse than that for line graphs. We discuss the implications of the study for guidelines for presenting such data and for models of expert graph comprehension. PMID:26579052

  20. Multilayer neural network with randomized learning: Model and applications

    SciTech Connect

    Terekhov, S.A.

    1995-03-01

    A randomized annealing-simulation scheme for learning by a multilayer neural network is examined. The fractal properties of the learning trajectories in the phase space of the network are studied. It is proposed that the learning temperature be controlled by the phenomenological characteristics of the trajectory. The errors of generalization of a multilayer perceptron and of the method of splines in time-series prediction are compared. It is shown that for nonsmooth functions that generate a stochastic time series, a neural network is preferable to a spline for the same number of parameters. The learning scheme is used to construct a cybernetic neural-network model of the phenomenon of magnetic implosion as well as for fault classification in the coolant system of a nuclear-power-plant reactor.

  1. Local random potentials of high differentiability to model the Landscape

    SciTech Connect

    Battefeld, T.; Modi, C.

    2015-03-09

We generate random functions locally via a novel generalization of Dyson Brownian motion, such that the functions are in a desired differentiability class C^k, while ensuring that the Hessian is a member of the Gaussian orthogonal ensemble (other ensembles might be chosen if desired). Potentials in such higher differentiability classes (k≥2) are required/desirable to model string theoretical landscapes, for instance to compute cosmological perturbations (e.g., k=2 for the power-spectrum) or to search for minima (e.g., suitable de Sitter vacua for our universe). Since potentials are created locally, numerical studies become feasible even if the dimension of field space is large (D∼100). In addition to the theoretical prescription, we provide some numerical examples to highlight properties of such potentials; concrete cosmological applications will be discussed in companion publications.
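Sampling a Hessian from the Gaussian orthogonal ensemble, the basic ingredient the construction above builds on, is straightforward; this sketch uses the standard GOE convention (off-diagonal variance half the diagonal variance) and omits the Dyson-Brownian-motion machinery entirely:

```python
import random

def goe_hessian(dim, rng=None):
    """Sample a GOE matrix: real symmetric, with independent Gaussian
    entries; diagonal entries have variance 2, off-diagonal entries
    variance 1 (the standard GOE normalization)."""
    rng = rng or random.Random(7)
    h = [[0.0] * dim for _ in range(dim)]
    for i in range(dim):
        h[i][i] = rng.gauss(0.0, 2.0 ** 0.5)
        for j in range(i + 1, dim):
            h[i][j] = h[j][i] = rng.gauss(0.0, 1.0)
    return h
```

The point of the paper's local construction is that one never needs the full potential over all of field space: Hessians with these statistics are generated on demand along a trajectory, which keeps D ∼ 100 tractable.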

  2. Efficient Ab initio Modeling of Random Multicomponent Alloys.

    PubMed

    Jiang, Chao; Uberuaga, Blas P

    2016-03-11

    We present in this Letter a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multicomponent alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high-entropy alloy chemistries. The SSOS method developed here can be broadly useful for the rapid computational design of multicomponent materials, especially those with a large number of alloying elements, a challenging problem for other approaches. PMID:27015491

  3. A simplified analytical random walk model for proton dose calculation

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.

  4. [Critical of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.

  5. Random field Ising model and community structure in complex networks

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Jeong, H.; Noh, J. D.

    2006-04-01

We propose a method to determine the community structure of a complex network. In this method the ground state problem of a ferromagnetic random field Ising model is considered on the network with the magnetic fields B_s = +∞, B_t = -∞, and B_i = 0 for all i ≠ s, t, for a node pair s and t. The ground state problem is equivalent to the so-called maximum flow problem, which can be solved exactly numerically with the help of a combinatorial optimization algorithm. The community structure is then identified from the ground-state Ising spin domains for all pairs of s and t. Our method provides a criterion for the existence of the community structure, and is applicable equally well to unweighted and weighted networks. We demonstrate the performance of the method by applying it to the Barabási-Albert network, the Zachary karate club network, the scientific collaboration network, and the stock price correlation network.
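The reduction above, an RFIM ground state with pinned source and sink equals an s-t minimum cut, can be demonstrated with a small Edmonds-Karp max-flow sketch in which the ferromagnetic couplings enter as edge capacities (the example graph in the test is illustrative, not from the paper):

```python
from collections import deque

def min_cut_partition(capacity, s, t):
    """s-t minimum cut via Edmonds-Karp max-flow. `capacity` maps
    directed pairs (u, v) to capacities; for an undirected coupling,
    supply both (u, v) and (v, u). Returns the two vertex sides of the
    cut, i.e. the two Ising spin domains for this (s, t) pair."""
    res, nodes = {}, set()
    for (u, v), c in capacity.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        nodes.update((u, v))

    def bfs_path():
        prev, q = {s: None}, deque([s])
        while q:
            u = q.popleft()
            if u == t:
                path = []
                while prev[u] is not None:
                    path.append((prev[u], u))
                    u = prev[u]
                return path[::-1]
            for v in nodes:
                if v not in prev and res.get((u, v), 0) > 0:
                    prev[v] = u
                    q.append(v)
        return None

    while True:
        path = bfs_path()
        if path is None:
            break
        f = min(res[(u, v)] for u, v in path)
        for u, v in path:
            res[(u, v)] -= f
            res[(v, u)] += f
    # vertices still reachable from s in the residual graph form s's side
    side_s, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in nodes:
            if v not in side_s and res.get((u, v), 0) > 0:
                side_s.add(v)
                q.append(v)
    return side_s, nodes - side_s
```

On two tightly coupled clusters joined by a weak bridge, the minimum cut severs the bridge, so the two returned spin domains coincide with the two communities.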

  6. Real World Graph Connectivity

    ERIC Educational Resources Information Center

    Lind, Joy; Narayan, Darren

    2009-01-01

    We present the topic of graph connectivity along with a famous theorem of Menger in the real-world setting of the national computer network infrastructure of "National LambdaRail". We include a set of exercises where students reinforce their understanding of graph connectivity by analysing the "National LambdaRail" network. Finally, we give…

  7. Walking Out Graphs

    ERIC Educational Resources Information Center

    Shen, Ji

    2009-01-01

    In the Walking Out Graphs Lesson described here, students experience several types of representations used to describe motion, including words, sentences, equations, graphs, data tables, and actions. The most important theme of this lesson is that students have to understand the consistency among these representations and form the habit of…

  8. Reflections on "The Graph"

    ERIC Educational Resources Information Center

    Petrosino, Anthony

    2012-01-01

    This article responds to arguments by Skidmore and Thompson (this issue of "Educational Researcher") that a graph published more than 10 years ago was erroneously reproduced and "gratuitously damaged" perceptions of the quality of education research. After describing the purpose of the original graph, the author counters assertions that the graph…

  9. Exploring Graphs: WYSIWYG.

    ERIC Educational Resources Information Center

    Johnson, Millie

    1997-01-01

    Graphs from media sources and questions developed from them can be used in the middle school mathematics classroom. Graphs depict storage temperature on a milk carton; air pressure measurements on a package of shock absorbers; sleep-wake patterns of an infant; a dog's breathing patterns; and the angle, velocity, and radius of a leaning bicyclist…

  10. Making "Photo" Graphs

    ERIC Educational Resources Information Center

    Doto, Julianne; Golbeck, Susan

    2007-01-01

    Collecting data and analyzing the results of experiments is difficult for children. The authors found a surprising way to help their third graders make graphs and draw conclusions from their data: digital photographs. The pictures bridged the gap between an abstract graph and the plants it represented. With the support of the photos, students…

  11. ACTIVITIES: Graphs and Games

    ERIC Educational Resources Information Center

    Hirsch, Christian R.

    1975-01-01

    Using a set of worksheets, students will discover and apply Euler's formula regarding connected planar graphs and play and analyze the game of Sprouts. One sheet leads to the discovery of Euler's formula; another concerns traversability of a graph; another gives an example and a game involving these ideas. (Author/KM)

  12. A Graph Search Heuristic for Shortest Distance Paths

    SciTech Connect

    Chow, E

    2005-03-24

    This paper presents a heuristic for guiding A* search for finding the shortest distance path between two vertices in a connected, undirected, and explicitly stored graph. The heuristic requires a small amount of data to be stored at each vertex. The heuristic has application to quickly detecting relationships between two vertices in a large information or knowledge network. We compare the performance of this heuristic with breadth-first search on graphs with various topological properties. The results show that one or more orders of magnitude improvement in the number of vertices expanded is possible for large graphs, including Poisson random graphs.
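A generic A* sketch shows where such a heuristic plugs in: any admissible lower bound h(v) on the remaining distance prunes the search, and h ≡ 0 recovers Dijkstra-like behavior. The per-vertex data the paper proposes is only mimicked here by a user-supplied h:

```python
import heapq

def a_star(adj, src, dst, h):
    """A* shortest path on a graph given as {vertex: [(neighbour, weight)]}.
    h(v) must be an admissible heuristic (never overestimates the
    remaining distance to dst). Returns (distance, vertices expanded)."""
    dist = {src: 0.0}
    pq = [(h(src), src)]
    expanded = 0
    while pq:
        f, u = heapq.heappop(pq)
        if u == dst:
            return dist[u], expanded
        if f > dist[u] + h(u):   # stale queue entry: a better path was found
            continue
        expanded += 1
        for v, w in adj[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float('inf'), expanded
```

On a unit-weight grid with the Manhattan-distance heuristic, A* expands far fewer vertices than breadth-first search while returning the same shortest distance, which is the order-of-magnitude effect the paper reports on large graphs.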

  13. Communication Graph Generator for Parallel Programs

    SciTech Connect

    2014-04-08

    Graphator is a collection of relatively simple sequential programs that generate communication graphs/matrices for commonly occurring patterns in parallel programs. Currently, there is support for five communication patterns: two-dimensional 4-point stencil, four-dimensional 8-point stencil, all-to-alls over sub-communicators, random near-neighbor communication, and near-neighbor communication.

  14. Continuous Time Group Discovery in Dynamic Graphs

    SciTech Connect

    Miller, K; Eliassi-Rad, T

    2010-11-04

With the rise in availability and importance of graphs and networks, it has become increasingly important to have good models to describe their behavior. While much work has focused on modeling static graphs, we focus on group discovery in dynamic graphs. We adapt a dynamic extension of Latent Dirichlet Allocation to this task and demonstrate good performance on two datasets. Modeling relational data has become increasingly important in recent years. Much work has focused on static graphs, that is, fixed graphs at a single point in time. Here we focus on the problem of modeling dynamic (i.e., time-evolving) graphs. We propose a scalable Bayesian approach for community discovery in dynamic graphs. Our approach is based on extensions of Latent Dirichlet Allocation (LDA). LDA is a latent variable model for topic modeling in text corpora. It was extended to deal with topic changes in discrete time and later in continuous time. These models were referred to as the discrete Dynamic Topic Model (dDTM) and the continuous Dynamic Topic Model (cDTM), respectively. When adapting these models to graphs, we take our inspiration from LDA-G and SSN-LDA, applications of LDA to static graphs that have been shown to effectively factor out community structure to explain link patterns in graphs. In this paper, we demonstrate how to adapt and apply the cDTM to the task of finding communities in dynamic networks. We use link prediction to measure the quality of the discovered community structure and apply it to two relational datasets: DBLP author-keyword and CAIDA autonomous systems relationships. We also discuss a parallel implementation of this approach using Hadoop. In Section 2, we review LDA and LDA-G. In Section 3, we review the cDTM and introduce cDTM-G, its adaptation to modeling dynamic graphs. We discuss inference for the cDTM-G and details of our parallel implementation in Section 4 and present its performance on two datasets in Section 5 before concluding.

  15. Modeling X Chromosome Data Using Random Forests: Conquering Sex Bias.

    PubMed

    Winham, Stacey J; Jenkins, Gregory D; Biernacka, Joanna M

    2016-02-01

    Machine learning methods, including Random Forests (RF), are increasingly used for genetic data analysis. However, the standard RF algorithm does not correctly model the effects of X chromosome single nucleotide polymorphisms (SNPs), leading to biased estimates of variable importance. We propose extensions of RF to correctly model X SNPs, including a stratified approach and an approach based on the process of X chromosome inactivation. We applied the new and standard RF approaches to case-control alcohol dependence data from the Study of Addiction: Genes and Environment (SAGE), and compared the performance of the alternative approaches via a simulation study. Standard RF applied to a case-control study of alcohol dependence yielded inflated variable importance estimates for X SNPs, even when sex was included as a variable, but the results of the new RF methods were consistent with univariate regression-based approaches that correctly model X chromosome data. Simulations showed that the new RF methods eliminate the bias in standard RF variable importance for X SNPs when sex is associated with the trait, and are able to detect causal autosomal and X SNPs. Even in the absence of sex effects, the new extensions perform similarly to standard RF. Thus, we provide a powerful multimarker approach for genetic analysis that accommodates X chromosome data in an unbiased way. This method is implemented in the freely available R package "snpRF" (http://www.cran.r-project.org/web/packages/snpRF/). PMID:26639183
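
    The X-inactivation-motivated recoding described above can be illustrated with a minimal sketch. The exact snpRF algorithm differs; the convention below (doubling hemizygous male allele counts so that 0/1 copies map onto the female 0/1/2 scale) is an assumption for illustration only:

    ```python
    import numpy as np

    def recode_x_genotypes(geno, is_male):
        """Recode X-chromosome genotype allele counts so that hemizygous
        males (0 or 1 copies) are on the same 0..2 scale as females
        (0/1/2 copies). Doubling male counts (0 -> 0, 1 -> 2) mirrors the
        assumption that a male's single allele is always expressed, as
        under X inactivation. Illustrative sketch, not the snpRF code."""
        geno = np.asarray(geno, dtype=float).copy()
        geno[np.asarray(is_male)] *= 2
        return geno

    # Toy data: 4 females (0/1/2 copies) and 4 males (0/1 copies).
    geno = np.array([0, 1, 2, 1, 0, 1, 1, 0])
    is_male = np.array([False, False, False, False, True, True, True, True])
    print(recode_x_genotypes(geno, is_male))  # males' counts doubled to 0/2
    ```

    A standard Random Forest can then be trained on the recoded matrix (with sex as a covariate) without the X SNPs being coded on a different scale from autosomal SNPs.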

  17. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
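
    The three components above can be made concrete with a toy two-variable factor graph. The hydrological names here are invented for illustration; the point is that multiplying local factors yields the joint distribution, and conditioning plus renormalization yields the posterior:

    ```python
    import numpy as np

    # Hidden state S in {dry, wet} and observation O in {low, high}.
    # Two local factors: prior p(S) and conditional p(O | S).
    p_S = np.array([0.7, 0.3])             # p(S=dry), p(S=wet)
    p_O_given_S = np.array([[0.9, 0.1],    # p(O | S=dry)
                            [0.2, 0.8]])   # p(O | S=wet)

    # Multiplying the factors together yields the joint p(S, O).
    joint = p_S[:, None] * p_O_given_S
    assert np.isclose(joint.sum(), 1.0)

    # Assimilating an observation O = high: condition and renormalize
    # (the basic rules of probability) to get the posterior p(S | O=high).
    posterior = joint[:, 1] / joint[:, 1].sum()
    print(posterior)  # posterior probability of dry vs wet
    ```

    In a real hydrological model the graph has many more nodes (parameters, states, observations), and the same product-of-factors structure is what general-purpose inference algorithms exploit.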

  18. Dynamic Uncertain Causality Graph for Knowledge Representation and Probabilistic Reasoning: Directed Cyclic Graph and Joint Probability Distribution.

    PubMed

    Zhang, Qin

    2015-07-01

    Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology. PMID:25781960

  20. Asthma self-management model: randomized controlled trial.

    PubMed

    Olivera, Carolina M X; Vianna, Elcio Oliveira; Bonizio, Roni C; de Menezes, Marcelo B; Ferraz, Erica; Cetlin, Andrea A; Valdevite, Laura M; Almeida, Gustavo A; Araujo, Ana S; Simoneti, Christian S; de Freitas, Amanda; Lizzi, Elisangela A; Borges, Marcos C; de Freitas, Osvaldo

    2016-10-01

    Information for patients provided by the pharmacist is reflected in adherence to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by a pharmacist (intervention group) and 59 patients in the control group. Data collection was performed before and after this 4-month intervention and included an evaluation of asthma knowledge, lifestyle, inhaler techniques, adherence to treatment, pulmonary function and quality of life. An economic viability analysis was also performed. The intervention group obtained an increase in asthma knowledge scores from 58.3% to 79.5% (P < 0.001). In this group, there was also an increase in the number of individuals who practiced physical exercise (36-43%), in the number of correct replies regarding the use of inhalers, in the percentage of adherent patients, and in quality of life scores for all domains. We concluded that this asthma self-management model was effective in improving the quality of life of asthma patients. PMID:27473571

  1. Time series, correlation matrices and random matrix models

    SciTech Connect

    Vinayak; Seligman, Thomas H.

    2014-01-08

    In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis, or minimum-information hypothesis, for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. As a consequence, random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.

  2. Graph states for quantum secret sharing

    NASA Astrophysics Data System (ADS)

    Markham, Damian; Sanders, Barry C.

    2008-10-01

    We consider three broad classes of quantum secret sharing with and without eavesdropping and show how a graph state formalism unifies otherwise disparate quantum secret sharing models. In addition to the elegant unification provided by graph states, our approach provides a generalization of threshold classical secret sharing via insecure quantum channels beyond the current requirement of 100% collaboration by players to just a simple majority in the case of five players. Another innovation here is the introduction of embedded protocols within a larger graph state that serves as a one-way quantum-information processing system.

  3. Distance-based topological polynomials and indices of friendship graphs.

    PubMed

    Gao, Wei; Farahani, Mohammad Reza; Imran, Muhammad; Rajesh Kanna, M R

    2016-01-01

    Drugs and chemical compounds are often modeled as graphs in which each vertex of the graph represents an atom of the molecule and covalent bonds between atoms are represented by the edges between their corresponding vertices. The topological indices defined over this molecular graph have been shown to be strongly correlated to various chemical properties of the compounds. In this article, by means of graph structure analysis, we determine several distance-based topological indices of the friendship graph [Formula: see text], which appears widely in various classes of new nanomaterials, drugs and chemical compounds. PMID:27652136
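
    One such distance-based index, the Wiener index (the sum of shortest-path distances over all vertex pairs), can be checked by brute force for the friendship graph F_n (n triangles sharing one common vertex). The sketch below builds F_n, computes all pairwise distances by BFS, and verifies the closed form 4n^2 - n, which follows by counting pairs at distance 1 (center-leaf and same-triangle pairs) and distance 2 (cross-triangle leaf pairs):

    ```python
    from collections import deque
    from itertools import combinations

    def friendship_graph(n):
        """Adjacency list of F_n: n triangles sharing vertex 0."""
        adj = {0: set()}
        for t in range(n):
            a, b = 2 * t + 1, 2 * t + 2
            adj.setdefault(a, set()); adj.setdefault(b, set())
            adj[0] |= {a, b}; adj[a] |= {0, b}; adj[b] |= {0, a}
        return adj

    def bfs_dist(adj, s):
        """Shortest-path distances from s by breadth-first search."""
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def wiener_index(adj):
        """Sum of distances over all unordered vertex pairs."""
        return sum(bfs_dist(adj, u)[v] for u, v in combinations(adj, 2))

    for n in range(1, 6):
        assert wiener_index(friendship_graph(n)) == 4 * n * n - n
    print("Wiener index of F_n matches 4n^2 - n")
    ```

    The same brute-force machinery extends to other distance-based indices (e.g. the hyper-Wiener index) by changing the summand.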

  4. StreamWorks - A system for Dynamic Graph Search

    SciTech Connect

    Choudhury, Sutanay; Holder, Larry; Chin, George; Ray, Abhik; Beus, Sherman J.; Feo, John T.

    2013-06-11

    Acting on time-critical events by processing ever growing social media, news or cyber data streams is a major technical challenge. Many of these data sources can be modeled as multi-relational graphs. Mining and searching for subgraph patterns in a continuous setting requires an efficient approach to incremental graph search. The goal of our work is to enable real-time search capabilities for graph databases. This demonstration will present a dynamic graph query system that leverages the structural and semantic characteristics of the underlying multi-relational graph.

  5. Analysis of processes used by middle-school students to interpret functions embedded in dynamic physical models and represented in tables, equations, and graphs

    NASA Astrophysics Data System (ADS)

    Hines, Mary Ellen

    This dissertation examined the processes generated by eighth-grade students to interpret and represent the functions embedded in dynamic physical models and the instructional decisions that facilitated the processes. Using the teaching experiment method, students were paired to interactively explore a slack rope board. The slack rope board consisted of a string that had been attached to a corkboard and that could be pulled taut to generate two segments of varying length. Students also explored the spool elevating system where an object was attached to a spool by string. The object could be raised or lowered by turning a handle attached to the spool. In each case, students identified variables, selected symbols to represent variables, and generated tables, equations, and graphs of observed functions. Students' equations were often treated as records of the action or relationship of the functions embedded in the dynamic physical models. Some students were reluctant to algebraically manipulate their equations because the record of the action or relationship was lost. Students' graphs, created using an Etch-a-Sketch, were generated via direct translation of the action of a dynamic physical model to the knobs of the Etch-a-Sketch. Several students developed a rate to accomplish the translation. In verbally presented real-world scenarios which differed in context from that of dynamic physical models but which had underlying structures similar to those of functions embedded in dynamic physical models, it was found that students focused on the underlying structure. The learning environment created by interactive use of the dynamic physical models supported development of instruction through which students' thinking could be challenged and through which students could connect their knowledge of functions generated in one representation to another. Use of the dynamic physical models enabled students to interpret functions as repeated actions with several students

  6. Bond graph modeling and experimental verification of a novel scheme for fault diagnosis of rolling element bearings in special operating conditions

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-09-01

    Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time interval between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response strength is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of a few signal processing techniques. The proposed scheme initially represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft by using the interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of resampled vibration signal which is followed by thresholding of IMFs and signal reconstruction to de-noise the signal and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of rolling element bearing which is developed using bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios which could not be experimentally recreated are then generated through simulations and analyzed through the developed diagnosis scheme.

  7. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  8. Object Discovery: Soft Attributed Graph Mining.

    PubMed

    Zhang, Quanshi; Song, Xuan; Shao, Xiaowei; Zhao, Huijing; Shibasaki, Ryosuke

    2016-03-01

    We categorize this research in terms of its contribution to both graph theory and computer vision. From the theoretical perspective, this study can be considered as the first attempt to formulate the idea of mining maximal frequent subgraphs in the challenging domain of messy visual data, and as a conceptual extension to the unsupervised learning of graph matching. We define a soft attributed pattern (SAP) to represent the common subgraph pattern among a set of attributed relational graphs (ARGs), considering both their structure and attributes. Regarding the differences between ARGs with fuzzy attributes and conventional labeled graphs, we propose a new mining strategy that directly extracts the SAP with the maximal graph size without applying node enumeration. Given an initial graph template and a number of ARGs, we develop an unsupervised method to modify the graph template into the maximal-size SAP. From a practical perspective, this research develops a general platform for learning the category model (i.e., the SAP) from cluttered visual data (i.e., the ARGs) without labeling "what is where," thereby opening the possibility for a series of applications in the era of big visual data. Experiments demonstrate the superior performance of the proposed method on RGB/RGB-D images and videos. PMID:27046496

  10. Communication-efficient parallel-graph algorithms. Master's thesis

    SciTech Connect

    Maggs, B.M.

    1986-06-01

    Communication bandwidth is a resource ignored by most parallel random-access machine (PRAM) models. This thesis shows that many graph problems can be solved in parallel, not only with polylogarithmic performance, but with efficient communication at each step of the computation. The communication requirements of an algorithm are measured in a restricted PRAM model called the distributed random-access machine (DRAM), which can be viewed as an abstraction of volume-universal networks such as fat trees. In this model, communication cost is measured in terms of the congestion of memory accesses across cuts of the machine. It is demonstrated that the recursive doubling technique frequently used in PRAM algorithms is wasteful of communication resources, and that recursive pairing can be used to perform many of the same functions more efficiently. The prefix computation on linear lists is generalized to trees, and it is shown that these tree-fix computations, which can be performed in a communication-efficient fashion using a variant of the tree-contraction technique of Miller and Reif, simplify many parallel graph algorithms in the literature.
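
    The recursive doubling technique the thesis critiques can be sketched for the classic prefix-sums computation. In round k, every position adds in the value 2^k places to its left, so O(log n) rounds suffice; each round is one synchronous parallel step on a PRAM (here simulated sequentially, with each round reading only the previous round's array):

    ```python
    def prefix_sums_doubling(xs):
        """All-prefix-sums via recursive doubling (pointer jumping).
        Round k combines each element with the one 2^k positions to
        its left, doubling the reach each round: O(log n) rounds."""
        a = list(xs)
        n = len(a)
        shift = 1
        while shift < n:
            # Build the next round from the previous one, emulating a
            # synchronous parallel step.
            a = [a[i] + (a[i - shift] if i >= shift else 0)
                 for i in range(n)]
            shift *= 2
        return a

    print(prefix_sums_doubling([1, 2, 3, 4, 5]))  # [1, 3, 6, 10, 15]
    ```

    Note that every round touches all n positions, which is exactly the communication pattern the DRAM model charges for; the recursive-pairing alternative reduces that congestion at the cost of a different round structure.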

  11. Force Limited Random Vibration Test of TESS Camera Mass Model

    NASA Technical Reports Server (NTRS)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

    2015-01-01

    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), European Space Research and Technology Center (ESTEC), and Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes step by step how the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
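
    The semi-empirical force limit referenced above caps the force spectral density at C^2 times the squared total mass times the input acceleration spectral density below the first resonance, rolling off above it. A hedged sketch follows; all numbers (C2, mass, break frequency, roll-off slope) are placeholders for illustration, not TESS values:

    ```python
    G = 9.81  # m/s^2, to convert an acceleration spec in g^2/Hz

    def force_limit(f, S_AA, C2=4.0, M0=10.0, f0=100.0, rolloff=2.0):
        """Semi-empirical force spectral density limit, N^2/Hz.
        f: frequency [Hz]; S_AA: acceleration spec at f [g^2/Hz];
        C2: empirical (fuzzy) factor; M0: test-article mass [kg];
        f0: break (first resonance) frequency [Hz]; rolloff: slope
        exponent above f0. All parameter values are illustrative."""
        S_FF = C2 * (M0 * G) ** 2 * S_AA
        if f > f0:
            S_FF *= (f0 / f) ** rolloff   # roll off above resonance
        return S_FF

    print(force_limit(50.0, 0.01))   # flat region, below f0
    print(force_limit(200.0, 0.01))  # rolled-off region, above f0
    ```

    In a test, this limit notches the shaker input whenever the measured reaction force would exceed S_FF, avoiding the overtest inherent in hard-mounted random vibration.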

  12. Accelerating semantic graph databases on commodity clusters

    SciTech Connect

    Morari, Alessandro; Castellana, Vito G.; Haglin, David J.; Feo, John T.; Weaver, Jesse R.; Tumeo, Antonino; Villa, Oreste

    2013-10-06

    We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL-to-C++ compiler, a library of parallel graph methods and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.

  13. A software tool for dataflow graph scheduling

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1994-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.

  14. A study on vague graphs.

    PubMed

    Rashmanlou, Hossein; Samanta, Sovan; Pal, Madhumangal; Borzooei, R A

    2016-01-01

    The main purpose of this paper is to introduce the notion of vague h-morphism on vague graphs and regular vague graphs. The action of vague h-morphisms on vague strong regular graphs is studied. Some elegant results on weak and co-weak isomorphism are derived. Also, the [Formula: see text]-complement of highly irregular vague graphs is defined. PMID:27536517

  15. A Semantic Graph Query Language

    SciTech Connect

    Kaplan, I L

    2006-10-16

    Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.

  16. Line graphs as social networks

    NASA Astrophysics Data System (ADS)

    Krawczyk, M. J.; Muchnik, L.; Mańka-Krasoń, A.; Kułakowski, K.

    2011-07-01

    It was demonstrated recently that the line graphs are clustered and assortative. These topological features are known to characterize some social networks [M.E.J. Newman, Y. Park, Why social networks are different from other types of networks, Phys. Rev. E 68 (2003) 036122]; it was argued that this similarity reveals their cliquey character. In the model proposed here, a social network is the line graph of an initial network of families, communities, interest groups, school classes and small companies. These groups play the role of nodes, and individuals are represented by links between these nodes. The picture is supported by the data on the LiveJournal network of about 8×10^6 people.
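
    The clustering claim can be checked on a tiny example. In the line graph, each edge of the original "group" network becomes a node, and two nodes are adjacent when the underlying edges share a group; a group with three members (a star K_{1,3}) thus becomes a triangle. A minimal sketch:

    ```python
    from itertools import combinations

    def line_graph(edges):
        """Line graph L(G): one node per edge of G; two nodes are
        adjacent when the corresponding edges share an endpoint."""
        edges = [frozenset(e) for e in edges]
        adj = {e: set() for e in edges}
        for e, f in combinations(edges, 2):
            if e & f:                      # edges share a vertex
                adj[e].add(f)
                adj[f].add(e)
        return adj

    def clustering(adj, v):
        """Local clustering coefficient of node v."""
        nbrs = adj[v]
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        return 2 * links / (k * (k - 1))

    # A star K_{1,3} (one group node, three members) has no triangles,
    # but its line graph is the triangle K_3: every node fully clustered.
    star = [(0, 1), (0, 2), (0, 3)]
    L = line_graph(star)
    print([clustering(L, v) for v in L])  # [1.0, 1.0, 1.0]
    ```

    This is why line graphs of group networks are naturally cliquey: every group of size k in the original network contributes a complete subgraph K_{C(k,2)}... more precisely, a clique on its k(k-1)/2 member links.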

  17. Commuting projections on graphs

    SciTech Connect

    Vassilevski, Panayot S.; Zikatanov, Ludmil T.

    2013-02-19

    For a given (connected) graph, we consider vector spaces of (discrete) functions defined on its vertices and its edges. These two spaces are related by a discrete gradient operator, Grad, and its adjoint, −Div, referred to as (negative) discrete divergence. We also consider a coarse graph obtained by aggregation of vertices of the original one. Then a coarse vertex space is identified with the subspace of piecewise constant functions over the aggregates. We consider the ℓ2-projection Q_H onto the space of these piecewise constants. In the present paper, our main result is the construction of a projection π_H from the original edge-space onto a properly constructed coarse edge-space associated with the edges of the coarse graph. The projections π_H and Q_H commute with the discrete divergence operator, i.e., we have div π_H = Q_H div. The respective pair of coarse edge-space and coarse vertex-space offers the potential to construct two-level, and by recursion, multilevel methods for the mixed formulation of the graph Laplacian which utilizes the discrete divergence operator. The performance of one two-level method with overlapping Schwarz smoothing and correction based on the constructed coarse spaces for solving such mixed graph Laplacian systems is illustrated on a number of graph examples.

  18. Single-subject gray matter graph properties and their relationship with cognitive impairment in early- and late-onset Alzheimer's disease.

    PubMed

    Tijms, Betty M; Yeung, Hiu M; Sikkes, Sietske A M; Möller, Christiane; Smits, Lieke L; Stam, Cornelis J; Scheltens, Philip; van der Flier, Wiesje M; Barkhof, Frederik

    2014-06-01

    We investigated the relationships between gray matter graph properties and cognitive impairment in a sample of 215 patients with Alzheimer's disease (AD), and also whether age of disease onset modifies such relationships. We expected that more severe cognitive impairment in AD would be related to more random graph topologies. Single-subject gray matter graphs were constructed from T1-weighted magnetic resonance imaging scans. The following global and local graph properties were calculated: betweenness centrality, normalized clustering coefficient γ, and normalized path length λ. Local clustering, path length, and betweenness centrality measures were determined for 90 anatomically defined areas. Regression models with the interaction term age of onset (i.e., early onset when patients were ≤65 years old and late onset when they were >65 years old at the time of diagnosis)×graph property were used to assess the relationships between cognitive functioning in five domains (memory, language, visuospatial, attention, and executive). Worse cognitive impairment was associated with more random graphs, as indicated by low γ, λ, and betweenness centrality values. Three interaction effects for age of onset×global graph property were found: low γ and λ values were more strongly related to memory impairment in early-onset patients, while low betweenness centrality values were significantly related to impaired visuospatial functioning in late-onset patients. For the local graph properties, language impairment showed the strongest relationship with a decreased clustering coefficient in the left superior temporal gyrus across the entire sample. Our study shows that single-subject gray matter graph properties are associated with individual differences in cognitive impairment.

  19. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  20. Frequent Subgraph Discovery in Large Attributed Streaming Graphs

    SciTech Connect

    Ray, Abhik; Holder, Larry; Choudhury, Sutanay

    2014-08-13

    The problem of finding frequent subgraphs in large dynamic graphs has so far only considered a dynamic graph as being represented by a series of static snapshots taken at various points in time. This representation of a dynamic graph does not lend itself well to real-time processing of real-world graphs like social networks or internet traffic, which consist of a stream of nodes and edges. In this paper we propose an algorithm that discovers the frequent subgraphs present in a graph represented by a stream of labeled nodes and edges. Our algorithm is efficient and has tunable parameters that the user can adjust to extract interesting patterns from various kinds of graph data. In our model, updates to the graph arrive in the form of batches which contain new nodes and edges. Our algorithm continuously reports the frequent subgraphs that are estimated to be found in the entire graph as each batch arrives. We evaluate our system using 5 large dynamic graph datasets: the Hetrec 2011 challenge data, Twitter, DBLP and two synthetic datasets. We evaluate our approach against two popular large-graph miners, i.e., SUBDUE and GERM. Our experimental results show that we can find the same frequent subgraphs as a non-incremental approach applied to snapshot graphs, and in less time.