Empirical Determination of Pattern Match Confidence in Labeled Graphs
2014-02-07
were explored: Erdős–Rényi [6] random graphs, Barabási–Albert preferential attachment graphs [2], and Watts–Strogatz [18] small-world graphs. ... [Figure: search limit (1 to 100000, log scale) versus graph type (Erdős–Rényi, Barabási–Albert, Watts–Strogatz) for direct matches and matches within 2 and 4 nodes.] ... Barabási–Albert (BA, crosses) and Watts–Strogatz (WS, triangles) graphs were generated with sizes ranging from 50 to 2500 nodes, and labeled
Evolution of tag-based cooperation with emotion on complex networks
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2018-04-01
We study the evolution of four strategies: ethnocentric, altruistic, egoistic, and cosmopolitan, in one community of individuals through Monte Carlo simulations. Interactions and reproduction among computational agents are simulated on undirected Barabási-Albert (UBA) networks and Erdős-Rényi (ER) random graphs. We study the Hammond-Axelrod model on both UBA networks and ER random graphs for the asexual reproduction case. We use a modified version of the traditional Hammond-Axelrod model in which agents' decisions about one of the strategies also take into account the emotion among their equals. Our simulations show that egoism and altruism win, in contrast to other results in the literature, where the ethnocentric strategy is common.
Bizhani, Golnoosh; Grassberger, Peter; Paczuski, Maya
2011-12-01
We study the statistical behavior under random sequential renormalization (RSR) of several network models including Erdös-Rényi (ER) graphs, scale-free networks, and an annealed model related to ER graphs. In RSR the network is locally coarse grained by choosing at each renormalization step a node at random and joining it to all its neighbors. Compared to previous (quasi-)parallel renormalization methods [Song et al., Nature (London) 433, 392 (2005)], RSR allows a more fine-grained analysis of the renormalization group (RG) flow and unravels new features that were not discussed in the previous analyses. In particular, we find that all networks exhibit a second-order transition in their RG flow. This phase transition is associated with the emergence of a giant hub and can be viewed as a new variant of percolation, called agglomerative percolation. We claim that this transition exists also in previous graph renormalization schemes and explains some of the scaling behavior seen there. For critical trees it happens as N/N(0) → 0 in the limit of large systems (where N(0) is the initial size of the graph and N its size at a given RSR step). In contrast, it happens at finite N/N(0) in sparse ER graphs and in the annealed model, while it happens for N/N(0) → 1 on scale-free networks. Critical exponents seem to depend on the type of the graph but not on the average degree and obey usual scaling relations for percolation phenomena. For the annealed model they agree with the exponents obtained from a mean-field theory. At late times, the networks exhibit a starlike structure in agreement with the results of Radicchi et al. [Phys. Rev. Lett. 101, 148701 (2008)]. While degree distributions are of main interest when regarding the scheme as network renormalization, mass distributions (which are more relevant when considering "supernodes" as clusters) are much easier to study using the fast Newman-Ziff algorithm for percolation, allowing us to obtain very high statistics.
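The coarse-graining rule described above is easy to prototype. Below is a minimal sketch (not the authors' code) of one random sequential renormalization step on a networkx graph: a node is chosen uniformly at random and merged with all of its neighbors into one supernode, whose `mass` attribute tracks how many original nodes it contains.

```python
import random
import networkx as nx

def rsr_step(G, rng=random):
    """One random sequential renormalization (RSR) step:
    pick a random node and join it with all of its neighbors.
    An isolated supernode may be picked, in which case nothing changes."""
    v = rng.choice(list(G.nodes))
    cluster = [v] + list(G.neighbors(v))
    mass = sum(G.nodes[u].get("mass", 1) for u in cluster)
    for u in cluster[1:]:
        G = nx.contracted_nodes(G, v, u, self_loops=False)
    G.nodes[v]["mass"] = mass
    return G

# Example: renormalize a sparse Erdos-Renyi graph until no edges remain.
G = nx.gnp_random_graph(1000, 3.0 / 1000, seed=1)
steps = 0
while G.number_of_edges() > 0:
    G = rsr_step(G)
    steps += 1
print("RSR steps until no edges remain:", steps,
      " supernodes left:", G.number_of_nodes())
```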
2011-06-03
distribution, p. The Erdős-Rényi model (E-R model) has been widely used in the past to capture the probability distributions of ADGs (Erdős and Rényi ... experimental data. Journal of the American Statistical Association, 103:778-789. Erdős, P. and Rényi, A. (1959). On random graphs, I
Finite-size scaling of clique percolation on two-dimensional Moore lattices
NASA Astrophysics Data System (ADS)
Dong, Jia-Qi; Shen, Zhou; Zhang, Yongwen; Huang, Zi-Gang; Huang, Liang; Chen, Xiaosong
2018-05-01
Clique percolation has attracted much attention due to its significance in understanding topological overlap among communities and dynamical instability of structured systems. Rich critical behavior has been observed in clique percolation on Erdős-Rényi (ER) random graphs, but few works have discussed clique percolation on finite dimensional systems. In this paper, we have defined a series of characteristic events, i.e., the historically largest size jumps of the clusters, in the percolating process of adding bonds and developed a new finite-size scaling scheme based on the interval of the characteristic events. Through the finite-size scaling analysis, we have found, interestingly, that, in contrast to the clique percolation on an ER graph where the critical exponents are parameter dependent, the two-dimensional (2D) clique percolation simply shares the same critical exponents with traditional site or bond percolation, independent of the clique percolation parameters. This has been corroborated by bridging two special types of clique percolation to site percolation on 2D lattices. Mechanisms for the difference of the critical behaviors between clique percolation on ER graphs and on 2D lattices are also discussed.
Communication Optimal Parallel Multiplication of Sparse Random Matrices
2013-02-21
Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity... structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix... where A and B are n × n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That
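For readers who want to experiment with the input class assumed above, here is a small illustrative sketch (not the paper's code) that builds an ER(d) matrix in the sense of Definition 2.1, i.e., the adjacency matrix of an Erdős-Rényi graph G(n, d/n), as a scipy sparse matrix and multiplies two of them.

```python
import numpy as np
import scipy.sparse as sp

def er_d_matrix(n, d, seed=None):
    """Adjacency matrix of an Erdos-Renyi graph with edge probability d/n,
    returned as a sparse CSR matrix (symmetric, zero diagonal)."""
    rng = np.random.default_rng(seed)
    p = d / n
    # Sample the strictly upper triangle, then symmetrize.
    rows, cols = np.triu_indices(n, k=1)
    mask = rng.random(rows.shape[0]) < p
    r, c = rows[mask], cols[mask]
    data = np.ones(r.shape[0])
    A = sp.coo_matrix((data, (r, c)), shape=(n, n))
    return (A + A.T).tocsr()

A = er_d_matrix(2000, d=8, seed=0)
B = er_d_matrix(2000, d=8, seed=1)
C = A @ B  # sparse matrix-matrix multiplication
print("nnz(A) =", A.nnz, " nnz(C) =", C.nnz)
```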
Spread of information and infection on finite random networks
NASA Astrophysics Data System (ADS)
Isham, Valerie; Kaczmarska, Joanna; Nekovee, Maziar
2011-04-01
The modeling of epidemic-like processes on random networks has received considerable attention in recent years. While these processes are inherently stochastic, most previous work has been focused on deterministic models that ignore important fluctuations that may persist even in the infinite network size limit. In a previous paper, for a class of epidemic and rumor processes, we derived approximate models for the full probability distribution of the final size of the epidemic, as opposed to only mean values. In this paper we examine via direct simulations the adequacy of the approximate model to describe stochastic epidemics and rumors on several random network topologies: homogeneous networks, Erdös-Rényi (ER) random graphs, Barabasi-Albert scale-free networks, and random geometric graphs. We find that the approximate model is reasonably accurate in predicting the probability of spread. However, the position of the threshold and the conditional mean of the final size for processes near the threshold are not well described by the approximate model even in the case of homogeneous networks. We attribute this failure to the presence of other structural properties beyond degree-degree correlations, and in particular clustering, which are present in any finite network but are not incorporated in the approximate model. In order to test this “hypothesis” we perform additional simulations on a set of ER random graphs where degree-degree correlations and clustering are separately and independently introduced using recently proposed algorithms from the literature. Our results show that even strong degree-degree correlations have only weak effects on the position of the threshold and the conditional mean of the final size. On the other hand, the introduction of clustering greatly affects both the position of the threshold and the conditional mean. Similar analysis for the Barabasi-Albert scale-free network confirms the significance of clustering on the dynamics of rumor spread. For this network, though, with its highly skewed degree distribution, the addition of positive correlation had a much stronger effect on the final size distribution than was found for the simple random graph.
Bond Graph Modeling and Validation of an Energy Regenerative System for Emulsion Pump Tests
Li, Yilei; Zhu, Zhencai; Chen, Guoan
2014-01-01
The test system for emulsion pumps is facing serious challenges due to its huge energy consumption and waste nowadays. To address this energy issue, a novel energy regenerative system (ERS) for emulsion pump tests is briefly introduced first. Modeling such an ERS spanning multiple energy domains needs a unified and systematic approach, and bond graph modeling is well suited for this task. The bond graph model of this ERS is developed by first modeling the separate components and then assembling them, and the state-space equation is derived in the same way. Both numerical simulation and experiments are carried out to validate the bond graph model of this ERS. Moreover, the simulation and experimental results show that this ERS not only satisfies the test requirements, but also could save at least 25% of energy consumption compared to the original test system, demonstrating that it is a promising method of energy regeneration for emulsion pump tests. PMID:24967428
Extension of Strongly Regular Graphs
2008-02-11
E.R. van Dam, W.H. Haemers. Graphs with constant μ and μ̄. Discrete Math. 182 (1998), no. 1-3, 293-307. [11] E.R. van Dam, E. Spence. Small regular... graphs with four eigenvalues. Discrete Math. 189 (1998), 233-257.
1991-01-01
critical G's / # G's → 0 as |V(G)| → ∞? References [B1] C. Berge, Regularizable graphs, Ann. Discrete Math., 3, 1978, 11-19. [B2] ——, Some common... Springer-Verlag, Berlin, 1980, 108-123. [B3] ——, Some common properties for regularizable graphs, edge-critical graphs, and B-graphs, Ann. Discrete Math., 12... graphs - an extension of the König-Egerváry theorem, Discrete Math., 27, 1979, 23-33. [ER] M.N. Ellingham and G.F. Royle, Well-covered cubic graphs
Critical Behavior of the Annealed Ising Model on Random Regular Graphs
NASA Astrophysics Data System (ADS)
Can, Van Hao
2017-11-01
In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs, including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a nonstandard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, where n is the number of vertices of the random regular graph.
On Edge Exchangeable Random Graphs
NASA Astrophysics Data System (ADS)
Janson, Svante
2017-06-01
We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).
Community structure and scale-free collections of Erdős-Rényi graphs.
Seshadhri, C; Kolda, Tamara G; Pinar, Ali
2012-05-01
Community structure plays a significant role in the analysis of social networks and similar graphs, yet this structure is little understood and not well captured by most models. We formally define a community to be a subgraph that is internally highly connected and has no deeper substructure. We use tools of combinatorics to show that any such community must contain a dense Erdős-Rényi (ER) subgraph. Based on mathematical arguments, we hypothesize that any graph with a heavy-tailed degree distribution and community structure must contain a scale-free collection of dense ER subgraphs. These theoretical observations corroborate well with empirical evidence. From this, we propose the Block Two-Level Erdős-Rényi (BTER) model, and demonstrate that it accurately captures the observable properties of many real-world social networks.
Groupies in multitype random graphs.
Shang, Yilun
2016-01-01
A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.
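As a quick numerical illustration of the definition above (a sketch under the stated definition, not the paper's code; the convention that isolated vertices count as groupies is an assumption made here), the fraction of groupies in an Erdős-Rényi graph can be estimated directly:

```python
import networkx as nx

def groupie_fraction(G):
    """Fraction of vertices whose degree is at least the average degree
    of their neighbors (isolated vertices counted as groupies here)."""
    deg = dict(G.degree())
    count = 0
    for v in G:
        nbrs = list(G.neighbors(v))
        if not nbrs:
            count += 1
            continue
        avg_nbr_deg = sum(deg[u] for u in nbrs) / len(nbrs)
        if deg[v] >= avg_nbr_deg:
            count += 1
    return count / G.number_of_nodes()

G = nx.gnp_random_graph(20000, 5.0 / 20000, seed=42)
print("groupie fraction ~", round(groupie_fraction(G), 3))  # close to 1/2
```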
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
The investigation of social networks based on multi-component random graphs
NASA Astrophysics Data System (ADS)
Zadorozhnyi, V. N.; Yudin, E. B.
2018-01-01
Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with the nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, together with computer experiments on these models, would help developers (owners) of the networks to predict their development correctly and to choose effective strategies for controlling network projects.
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
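The growth rule described above takes only a few lines to simulate. The following is an illustrative sketch (not the authors' code) that measures the largest-component fraction for several values of delta around the stated transition at delta = 1/8.

```python
import random
import networkx as nx

def grown_graph(t, delta, seed=None):
    """At each step add a vertex; with probability delta join two
    distinct uniformly random existing vertices by an undirected edge."""
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_node(0)
    for i in range(1, t):
        G.add_node(i)
        if rng.random() < delta:
            u, v = rng.sample(range(i + 1), 2)
            G.add_edge(u, v)
    return G

for delta in (0.1, 0.125, 0.2, 0.4):
    G = grown_graph(100000, delta, seed=7)
    giant = max(nx.connected_components(G), key=len)
    print(f"delta={delta}: giant fraction = {len(giant) / G.number_of_nodes():.4f}")
```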
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, level of community structure, and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert, and Watts-Strogatz random graphs.
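A rough version of this analysis can be prototyped in a few lines. The sketch below uses the consecutive spacing-ratio statistic rather than the spacing distribution or Δ3 of the paper, because it avoids spectral unfolding; the graph sizes and radii are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
import networkx as nx

def spacing_ratios(G):
    """Consecutive spacing ratios r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1})
    of the adjacency spectrum; <r> is roughly 0.39 for Poisson statistics
    and roughly 0.53 for the Gaussian orthogonal ensemble."""
    A = nx.to_numpy_array(G)
    evals = np.sort(np.linalg.eigvalsh(A))
    s = np.diff(evals)
    s = s[s > 1e-12]  # drop numerically degenerate levels
    return np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])

# Sparse vs. dense random geometric graphs (illustrative radii).
for radius in (0.08, 0.3):
    G = nx.random_geometric_graph(800, radius, seed=3)
    print(f"radius={radius}: <r> = {spacing_ratios(G).mean():.3f}")
```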
Local dependence in random graph models: characterization, properties and statistical inference
Schweinberger, Michael; Handcock, Mark S.
2015-01-01
Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142
Navigability of Random Geometric Graphs in the Universe and Other Spacetimes.
Cunningham, William; Zuev, Konstantin; Krioukov, Dmitri
2017-08-18
Random geometric graphs in hyperbolic spaces explain many common structural and dynamical properties of real networks, yet they fail to predict the correct values of the exponents of power-law degree distributions observed in real networks. In that respect, random geometric graphs in asymptotically de Sitter spacetimes, such as the Lorentzian spacetime of our accelerating universe, are more attractive as their predictions are more consistent with observations in real networks. Yet another important property of hyperbolic graphs is their navigability, and it remains unclear if de Sitter graphs are as navigable as hyperbolic ones. Here we study the navigability of random geometric graphs in three Lorentzian manifolds corresponding to universes filled only with dark energy (de Sitter spacetime), only with matter, and with a mixture of dark energy and matter. We find these graphs are navigable only in the manifolds with dark energy. This result implies that, in terms of navigability, random geometric graphs in asymptotically de Sitter spacetimes are as good as random hyperbolic graphs. It also establishes a connection between the presence of dark energy and navigability of the discretized causal structure of spacetime, which provides a basis for a different approach to the dark energy problem in cosmology.
Brief Counseling and Exercise Referral Scheme: A Pragmatic Trial in Mexico.
Gallegos-Carrillo, Katia; García-Peña, Carmen; Salmerón, Jorge; Salgado-de-Snyder, Nelly; Lobelo, Felipe
2017-02-01
The effectiveness of clinical-community linkages for promotion of physical activity (PA) has not been explored in low- and middle-income countries. This study assessed the effectiveness of a primary care-based, 16-week intervention rooted in behavioral theory approaches to increase compliance with aerobic PA recommendations. Pragmatic cluster randomized trial. Patients had diagnosed (<5 years) hypertension, were aged 35-70 years, self-reported as physically inactive, had a stated intention to engage in PA, and attended Primary Healthcare Centers in the Social Security health system in Cuernavaca, Mexico. Of 23 Primary Healthcare Centers, four were selected based on proximity (5 km radius) to a center. Each center was randomized to a brief PA counseling (BC, n=2) or an exercise referral (ER, n=2) intervention. The study was conducted between 2011 and 2012. Change in objectively measured PA levels (ActiGraph GT3X accelerometers) at baseline, 16, and 24 weeks. Intention-to-treat analyses were used to assess the effectiveness of the intervention overall and according to ER intervention attendance. Longitudinal multilevel mixed-effects analyses considering the interaction (time by intervention) were conducted. Each model was also adjusted by baseline value of the outcome measure, demographic and health variables, social support, PA self-efficacy, and barriers. Minutes/week of objectively measured moderate to vigorous PA increased by 40 and 53 minutes in the ER and BC groups, respectively (p=0.59). Participants attending >50% of ER program sessions increased their moderate to vigorous PA by 104 minutes/week and compliance with aerobic PA recommendations by 23.8%, versus the BC group (both p<0.05). Both BC and ER led to modest improvements in PA levels, with no significant differences between groups. Adequate adherence with the ER program sessions led to significant improvements in compliance with aerobic PA recommendations versus BC. These results can help guide development and implementation of programs integrating standardized PA assessment, counseling, and referrals via clinical-community linkages in Mexico and other low- and middle-income countries in the region. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Using Combinatorica/Mathematica for Student Projects in Random Graph Theory
ERIC Educational Resources Information Center
Pfaff, Thomas J.; Zaret, Michele
2006-01-01
We give an example of a student project that experimentally explores a topic in random graph theory. We use the "Combinatorica" package in "Mathematica" to estimate the minimum number of edges needed in a random graph to have a 50 percent chance that the graph is connected. We provide the "Mathematica" code and compare it to the known theoretical…
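The same experiment translates directly to Python (a sketch of the estimation idea, not the authors' Mathematica/Combinatorica code); the comparison value (n/2) ln n is the standard Erdős-Rényi connectivity threshold quoted for reference, not a number from the abstract.

```python
import math
import networkx as nx

def connect_probability(n, m, trials=200, seed=0):
    """Monte Carlo estimate of P[G(n, m) is connected]."""
    hits = 0
    for t in range(trials):
        G = nx.gnm_random_graph(n, m, seed=seed + t)
        hits += nx.is_connected(G)
    return hits / trials

def edges_for_half_connectivity(n):
    """Smallest m (on a coarse grid) with estimated P[connected] >= 0.5."""
    m = n  # start well below the ~ (n/2) ln n threshold
    while connect_probability(n, m) < 0.5:
        m += max(1, n // 50)
    return m

n = 100
print("estimated m:", edges_for_half_connectivity(n),
      " (n/2) ln n =", round(n / 2 * math.log(n)))
```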
Return probabilities and hitting times of random walks on sparse Erdös-Rényi graphs.
Martin, O C; Sulc, P
2010-03-01
We consider random walks on random graphs, focusing on return probabilities and hitting times for sparse Erdös-Rényi graphs. Using the tree approach, which is expected to be exact in the large graph limit, we show how to solve for the distribution of these quantities and we find that these distributions exhibit a form of self-similarity.
ERIC Educational Resources Information Center
Leikin, Roza; Leikin, Mark; Waisman, Ilana; Shaul, Shelley
2013-01-01
This study explores the effects of the "presence of external representations of a mathematical object" (ERs) on problem solving performance associated with short double-choice problems. The problems were borrowed from secondary school algebra and geometry, and the ERs were either formulas, graphs of functions, or drawings of geometric…
Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks
Rudinger, Kenneth; Gamble, John King; Bach, Eric; ...
2013-07-01
Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.
Stability and dynamical properties of material flow systems on random networks
NASA Astrophysics Data System (ADS)
Anand, K.; Galla, T.
2009-04-01
The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on the direct product graph is obtained by multiplying the probabilities on the corresponding sub-graphs, which makes the method useful for determining the probability of walks on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete graph Kn, charter, and n-cube). We also find that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
Probabilistic generation of random networks taking into account information on motifs occurrence.
Bois, Frederic Y; Gayraud, Ghislaine
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli.
Anderson localization for radial tree-like random quantum graphs
NASA Astrophysics Data System (ADS)
Hislop, Peter D.; Post, Olaf
We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.
Adaptive random walks on the class of Web graphs
NASA Astrophysics Data System (ADS)
Tadić, B.
2001-09-01
We study random walk with adaptive move strategies on a class of directed graphs with variable wiring diagram. The graphs are grown from the evolution rules compatible with the dynamics of the world-wide Web [B. Tadić, Physica A 293, 273 (2001)], and are characterized by a pair of power-law distributions of out- and in-degree for each value of the parameter β, which measures the degree of rewiring in the graph. The walker adapts its move strategy according to locally available information both on out-degree of the visited node and in-degree of target node. A standard random walk, on the other hand, uses the out-degree only. We compute the distribution of connected subgraphs visited by an ensemble of walkers, the average access time and survival probability of the walks. We discuss these properties of the walk dynamics relative to the changes in the global graph structure when the control parameter β is varied. For β ≥ 3, corresponding to the world-wide Web, the access time of the walk to a given level of hierarchy on the graph is much shorter compared to the standard random walk on the same graph. By reducing the amount of rewiring towards the rigidity limit β → βc ≲ 0.1, corresponding to the range of naturally occurring biochemical networks, the survival probabilities of the adaptive and standard random walks become increasingly similar. The adaptive random walk can be used as an efficient message-passing algorithm on this class of graphs for large degree of rewiring.
Efficient quantum pseudorandomness with simple graph states
NASA Astrophysics Data System (ADS)
Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian
2018-02-01
Measurement based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement based quantum computing, closely related to the brickwork state.
Scaling Limits and Generic Bounds for Exploration Processes
NASA Astrophysics Data System (ADS)
Bermolen, Paola; Jonckheere, Matthieu; Sanders, Jaron
2017-12-01
We consider exploration algorithms of the random sequential adsorption type both for homogeneous random graphs and random geometric graphs based on spatial Poisson processes. At each step, a vertex of the graph becomes active and its neighboring nodes become blocked. Given an initial number of vertices N growing to infinity, we study statistical properties of the proportion of explored (active or blocked) nodes in time using scaling limits. We obtain exact limits for homogeneous graphs and prove an explicit central limit theorem for the final proportion of active nodes, known as the jamming constant, through a diffusion approximation for the exploration process which can be described as a unidimensional process. We then focus on bounding the trajectories of such exploration processes on random geometric graphs, i.e., random sequential adsorption. As opposed to exploration processes on homogeneous random graphs, these do not allow for such a dimensional reduction. Instead we derive a fundamental relationship between the number of explored nodes and the discovered volume in the spatial process, and we obtain generic bounds for the fluid limit and jamming constant: bounds that are independent of the dimension of space and the detailed shape of the volume associated to the discovered node. Lastly, using coupling techniques, we give trajectorial interpretations of the generic bounds.
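A direct simulation of this exploration process on a homogeneous random graph (a sketch, not the authors' formulation; the Erdős-Rényi case and the mean degree are illustrative assumptions) exposes the jamming constant as the final fraction of active nodes.

```python
import random
import networkx as nx

def jamming_fraction(G, rng=random):
    """Random sequential adsorption on a graph: repeatedly activate a
    uniformly random unexplored vertex and block its unexplored neighbors;
    return the final fraction of active vertices (the jamming constant).
    Processing a uniform random permutation greedily is equivalent to
    picking a uniformly random unexplored vertex at each step."""
    state = {v: "unexplored" for v in G}
    order = list(G.nodes)
    rng.shuffle(order)
    active = 0
    for v in order:
        if state[v] != "unexplored":
            continue
        state[v] = "active"
        active += 1
        for u in G.neighbors(v):
            if state[u] == "unexplored":
                state[u] = "blocked"
    return active / G.number_of_nodes()

N, c = 100000, 3.0  # mean degree c (illustrative)
G = nx.fast_gnp_random_graph(N, c / N, seed=5)
print("jamming constant ~", round(jamming_fraction(G), 4))
```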
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Regarding the adjacency matrices of n-vertex graphs and related graph Laplacian we introduce two families of discrete matrix models constructed both with the help of the Erdős-Rényi ensemble of random graphs. Corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
Evolution of a Modified Binomial Random Graph by Agglomeration
NASA Astrophysics Data System (ADS)
Kang, Mihyun; Pachon, Angelica; Rodríguez, Pablo M.
2018-02-01
In the classical Erdős-Rényi random graph G(n, p) there are n vertices and each of the possible edges is independently present with probability p. The random graph G(n, p) is homogeneous in the sense that all vertices have the same characteristics. On the other hand, numerous real-world networks are inhomogeneous in this respect. Such an inhomogeneity of vertices may influence the connection probability between pairs of vertices. The purpose of this paper is to propose a new inhomogeneous random graph model which is obtained in a constructive way from the Erdős-Rényi random graph G(n, p). Given a configuration of n vertices arranged in N subsets of vertices (we call each subset a super-vertex), we define a random graph with N super-vertices by letting two super-vertices be connected if and only if there is at least one edge between them in G(n, p). Our main result concerns the threshold for connectedness. We also analyze the phase transition for the emergence of the giant component and the degree distribution. Even though our model begins with G(n, p), it assumes the existence of some community structure encoded in the configuration. Furthermore, under certain conditions it exhibits a power law degree distribution. Both properties are important for real-world applications.
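The agglomeration construction itself is easy to reproduce. The sketch below follows the stated rule (two super-vertices are linked iff G(n, p) has at least one edge between them); the particular partition into super-vertices of growing sizes is an arbitrary illustrative choice, not the paper's configuration.

```python
import networkx as nx

def agglomerated_graph(G, partition):
    """Given G(n, p) and a partition of its vertices into super-vertices,
    connect two super-vertices iff G has at least one edge between them."""
    block_of = {v: b for b, block in enumerate(partition) for v in block}
    H = nx.Graph()
    H.add_nodes_from(range(len(partition)))
    for u, v in G.edges():
        bu, bv = block_of[u], block_of[v]
        if bu != bv:
            H.add_edge(bu, bv)
    return H

n, p = 10000, 0.0005
G = nx.fast_gnp_random_graph(n, p, seed=11)
# Illustrative inhomogeneous configuration: super-vertex k holds 2k+1 vertices.
partition, i, k = [], 0, 0
while i < n:
    partition.append(range(i, min(n, i + 2 * k + 1)))
    i += 2 * k + 1
    k += 1
H = agglomerated_graph(G, partition)
print(len(partition), "super-vertices,", H.number_of_edges(), "edges,",
      "connected:", nx.is_connected(H))
```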
Entropy of spatial network ensembles
NASA Astrophysics Data System (ADS)
Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis
2018-04-01
We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
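A soft random geometric graph of the kind described above is straightforward to sample. The sketch below uses a Rayleigh-type pair connection function beta * exp(-(d/r0)^2); that specific function and the parameter values are assumptions for illustration, not the paper's choices.

```python
import numpy as np
import networkx as nx

def soft_rgg(n, beta, r0, seed=None):
    """Soft random geometric graph on the unit square: nodes are uniform
    points, and a pair at distance d is linked independently with
    probability beta * exp(-(d / r0)**2) (the pair connection function)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        p = beta * np.exp(-(d / r0) ** 2)
        links = np.nonzero(rng.random(d.shape[0]) < p)[0]
        G.add_edges_from((i, i + 1 + int(j)) for j in links)
    return G, pos

G, pos = soft_rgg(1000, beta=0.8, r0=0.06, seed=2)
print("mean degree:", 2 * G.number_of_edges() / G.number_of_nodes())
```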
Cross over of recurrence networks to random graphs and random geometric graphs
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2017-02-01
Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how the recurrence networks are different from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding sufficient amount of noise to the time series and into the classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing the small changes in the network characteristics.
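A recurrence network can be built from any scalar time series by delay embedding and thresholding pairwise distances. The following minimal sketch uses a logistic-map series; the embedding dimension, delay, and recurrence threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
import networkx as nx

def recurrence_network(x, dim=2, tau=1, eps=0.1):
    """Delay-embed the series x, then link embedded points closer than
    the recurrence threshold eps (no self-loops)."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        d = np.linalg.norm(emb[i + 1:] - emb[i], axis=1)
        for j in np.nonzero(d < eps)[0]:
            G.add_edge(i, i + 1 + int(j))
    return G

# Chaotic logistic-map time series as an example attractor.
x = np.empty(2000)
x[0] = 0.4
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
G = recurrence_network(x, dim=2, tau=1, eps=0.05)
print("nodes:", G.number_of_nodes(),
      "average clustering:", round(nx.average_clustering(G), 3))
```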
Localization on Quantum Graphs with Random Vertex Couplings
NASA Astrophysics Data System (ADS)
Klopp, Frédéric; Pankrashkin, Konstantin
2008-05-01
We consider Schrödinger operators on a class of periodic quantum graphs with randomly distributed Kirchhoff coupling constants at all vertices. We obtain necessary conditions for localization on quantum graphs in terms of finite volume criteria for some energy-dependent discrete Hamiltonians. These conditions hold in the strong disorder limit and at the spectral edges.
An Xdata Architecture for Federated Graph Models and Multi-tier Asymmetric Computing
2014-01-01
Wikipedia, a scale-free random graph (kron), Akamai trace route data, Bitcoin transaction data, and a Twitter follower network. We present results for... 3x (SSSP on a random graph) and nearly 300x (Akamai and Bitcoin) over the CPU performance of a well-known and widely deployed CPU-based graph... provided better throughput for smaller frontiers such as roadmaps or the Bitcoin data set. In our work, we have focused on two-phase kernels, but it
Synchronizability of random rectangular graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of the random geometric graphs in which the nodes are embedded into hyperrectangles instead of on hypercubes. The synchronizability of RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network is more elongated, the network becomes harder to synchronize. The synchronization processing behavior of a RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistence with the theoretical results.
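The eigenratio analyzed above can be checked numerically for increasingly elongated rectangles. The sketch below follows the RRG construction as described (uniform points in a rectangle, hard connection radius); the sizes, radius, and comparison of a square versus an elongated rectangle of the same area are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def random_rectangular_graph(n, a, b, r, seed=None):
    """n nodes uniform in an a-by-b rectangle; link pairs closer than r."""
    rng = np.random.default_rng(seed)
    pos = np.column_stack([a * rng.random(n), b * rng.random(n)])
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < r)[0]:
            G.add_edge(i, i + 1 + int(j))
    return G

def eigenratio(G):
    """lambda_N / lambda_2 of the graph Laplacian (smaller = easier to synchronize)."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam = np.sort(np.linalg.eigvalsh(L))
    return lam[-1] / lam[1]

for a, b in [(1.0, 1.0), (4.0, 0.25)]:  # same area, increasingly elongated
    G = random_rectangular_graph(600, a, b, r=0.15, seed=9)
    if nx.is_connected(G):
        print(f"rectangle {a} x {b}: eigenratio = {eigenratio(G):.1f}")
    else:
        print(f"rectangle {a} x {b}: disconnected, eigenratio undefined")
```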
Unimodular lattice triangulations as small-world and scale-free random graphs
NASA Astrophysics Data System (ADS)
Krüger, B.; Schmidt, E. M.; Mecke, K.
2015-02-01
Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.
Cascades in the Threshold Model for varying system sizes
NASA Astrophysics Data System (ADS)
Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy
2015-03-01
A classical model in opinion dynamics is the Threshold Model (TM), aiming to model the spread of a new opinion based on the social drive of peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs when large cascades occur, the spread contribution of the added initiator at the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
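A compact simulation of the Threshold Model dynamics described above on an ER graph follows; the uniform threshold, mean degree, and initiator fractions are illustrative assumptions, not the study's parameters.

```python
import random
import networkx as nx

def threshold_cascade(G, threshold, initiators):
    """Sweep over the nodes until nothing changes: a node adopts the new
    opinion once the fraction of its neighbors holding it exceeds threshold.
    Activation is monotone, so the final set is independent of update order."""
    active = set(initiators)
    changed = True
    while changed:
        changed = False
        for v in G:
            if v in active or G.degree(v) == 0:
                continue
            frac = sum((u in active) for u in G.neighbors(v)) / G.degree(v)
            if frac > threshold:
                active.add(v)
                changed = True
    return len(active) / G.number_of_nodes()

N, c = 20000, 6.0
G = nx.fast_gnp_random_graph(N, c / N, seed=1)
rng = random.Random(0)
for p_init in (0.05, 0.10, 0.20):  # fraction of randomly chosen initiators
    init = rng.sample(list(G.nodes), int(p_init * N))
    print(f"initiators {p_init:.0%}: final active fraction "
          f"{threshold_cascade(G, 0.3, init):.3f}")
```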
Limits on relief through constrained exchange on random graphs
NASA Astrophysics Data System (ADS)
LaViolette, Randall A.; Ellebracht, Lory A.; Gieseler, Charles J.
2007-09-01
Agents are represented by nodes on a random graph (e.g., “small world”). Each agent is endowed with a zero-mean random value that may be either positive or negative. All agents attempt to find relief, i.e., to reduce the magnitude of that initial value, to zero if possible, through exchanges. The exchange occurs only between the agents that are linked, a constraint that turns out to dominate the results. The exchange process continues until Pareto equilibrium is achieved. Only 40-90% of the agents achieved relief on small-world graphs with mean degree between 2 and 40. Even fewer agents achieved relief on scale-free-like graphs with a truncated power-law degree distribution. The rate at which relief grew with increasing degree was slow, only at most logarithmic for all of the graphs considered; viewed in reverse, the fraction of nodes that achieve relief is resilient to the removal of links.
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs
NASA Astrophysics Data System (ADS)
van der Hofstad, Remco; Kliem, Sandra; van Leeuwaarden, Johan S. H.
2018-04-01
Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299-2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3, 4), the sequence of clusters ordered in decreasing size and multiplied through by n^{-(τ-2)/(τ-1)} converges as n → ∞ to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237-269, 2001) for the Erdős-Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
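The GTG construction itself is only a few lines. The sketch below makes two illustrative assumptions that are not taken from the report: exponential node weights and the threshold rule (w_u + w_v) / r_uv^2 >= theta. networkx also ships a geographical_threshold_graph generator with its own parameterization.

```python
import numpy as np
import networkx as nx

def geographical_threshold_graph(n, theta, seed=None):
    """Nodes uniform in the unit square with i.i.d. exponential weights w;
    u and v are linked iff (w_u + w_v) / r_uv**2 >= theta."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    w = rng.exponential(1.0, n)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        r2 = np.sum((pos[i + 1:] - pos[i]) ** 2, axis=1)
        links = np.nonzero((w[i] + w[i + 1:]) / np.maximum(r2, 1e-12) >= theta)[0]
        G.add_edges_from((i, i + 1 + int(j)) for j in links)
    return G

G = geographical_threshold_graph(2000, theta=400, seed=4)
print("mean degree:", 2 * G.number_of_edges() / G.number_of_nodes(),
      " connected:", nx.is_connected(G))
```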
Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.
Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang
2017-01-01
Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations may combat this problem from different aspects, as visual data objects described by multiple features can be decomposed into multiple views and thus often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through a propagation of graph random walk. In particular, we construct multiple graphs with each one corresponding to an individual view, and a cross-view fusion approach based on graph random walk is presented to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged into graph random walk to balance multiviews. However, such a strategy may lead to an over-smooth similarity metric where affinities between dissimilar samples may be enlarged by excessively conducting cross-view fusion. Thus, we figure out a heuristic approach to controlling the iteration number in the fusion process in order to avoid over smoothness. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
Random graph models of social networks.
Newman, M E J; Watts, D J; Strogatz, S H
2002-02-19
We describe some new exactly solvable models of the structure of social networks, based on random graphs with arbitrary degree distributions. We give models both for simple unipartite networks, such as acquaintance networks, and bipartite networks, such as affiliation networks. We compare the predictions of our models to data for a number of real-world social networks and find that in some cases, the models are in remarkable agreement with the data, whereas in others the agreement is poorer, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
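Random graphs with an arbitrary (e.g., empirically measured) degree distribution, as used above, can be sampled with the configuration model. The sketch below uses a Poisson degree sequence purely as an illustration and discards self-loops and parallel edges, a common simplification rather than the authors' exact construction.

```python
import numpy as np
import networkx as nx

def random_graph_with_degrees(degree_sequence, seed=None):
    """Configuration-model graph with the given degree sequence,
    simplified by discarding self-loops and parallel edges."""
    if sum(degree_sequence) % 2:  # total degree must be even
        degree_sequence[0] += 1
    G = nx.configuration_model(degree_sequence, seed=seed)
    G = nx.Graph(G)  # collapse parallel edges
    G.remove_edges_from(nx.selfloop_edges(G))
    return G

rng = np.random.default_rng(0)
degrees = rng.poisson(3.0, 10000).tolist()
G = random_graph_with_degrees(degrees, seed=0)
giant = max(nx.connected_components(G), key=len)
print("giant component fraction:", len(giant) / G.number_of_nodes())
```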
Phase transitions in Ising models on directed networks
NASA Astrophysics Data System (ADS)
Lipowski, Adam; Ferreira, António Luis; Lipowska, Dorota; Gontarek, Krzysztof
2015-11-01
We examine Ising models with heat-bath dynamics on directed networks. Our simulations show that Ising models on directed triangular and simple cubic lattices undergo a phase transition that most likely belongs to the Ising universality class. On the directed square lattice the model remains paramagnetic at any positive temperature as already reported in some previous studies. We also examine random directed graphs and show that contrary to undirected ones, percolation of directed bonds does not guarantee ferromagnetic ordering. Only above a certain threshold can a random directed graph support finite-temperature ferromagnetic ordering. Such behavior is found also for out-homogeneous random graphs, but in this case the analysis of magnetic and percolative properties can be done exactly. Directed random graphs also differ from undirected ones with respect to zero-temperature freezing. Only at low connectivity do they remain trapped in a disordered configuration. Above a certain threshold, however, the zero-temperature dynamics quickly drives the model toward a broken symmetry (magnetized) state. Only above this threshold, which is almost twice as large as the percolation threshold, do we expect the Ising model to have a positive critical temperature. With a very good accuracy, the behavior on directed random graphs is reproduced within a certain approximate scheme.
Cryptographic Boolean Functions with Biased Inputs
2015-07-31
theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by... distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic... Boolean functions with biased inputs, which we refer to as μp-Boolean functions, is a common generalization of Boolean functions which stems from the
Razban, Rostam M; Gilson, Amy I; Durfee, Niamh; Strobelt, Hendrik; Dinkla, Kasper; Choi, Jeong-Mo; Pfister, Hanspeter; Shakhnovich, Eugene I
2018-05-08
Protein evolution spans time scales and its effects span the length of an organism. A web app named ProteomeVis is developed to provide a comprehensive view of protein evolution in the S. cerevisiae and E. coli proteomes. ProteomeVis interactively creates protein chain graphs, where edges between nodes represent structure and sequence similarities within user-defined ranges, to study the long time scale effects of protein structure evolution. The short time scale effects of protein sequence evolution are studied by sequence evolutionary rate (ER) correlation analyses with protein properties that span from the molecular to the organismal level. We demonstrate the utility and versatility of ProteomeVis by investigating the distribution of edges per node in organismal protein chain universe graphs (oPCUGs) and putative ER determinants. S. cerevisiae and E. coli oPCUGs are scale-free with scaling constants of 1.79 and 1.56, respectively. Both scaling constants can be explained by a previously reported theoretical model describing protein structure evolution (Dokholyan et al., 2002). Protein abundance most strongly correlates with ER among properties in ProteomeVis, with Spearman correlations of -0.49 (p-value < 10^-10) and -0.46 (p-value < 10^-10) for S. cerevisiae and E. coli, respectively. This result is consistent with previous reports that found protein expression to be the most important ER determinant (Zhang and Yang, 2015). ProteomeVis is freely accessible at http://proteomevis.chem.harvard.edu. Supplementary data are available at Bioinformatics. shakhnovich@chemistry.harvard.edu.
Offdiagonal complexity: A computationally quick complexity measure for graphs and networks
NASA Astrophysics Data System (ADS)
Claussen, Jens Christian
2007-02-01
A vast variety of biological, social, and economical networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated from the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method benefits from the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition is met. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
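The commute-time quantity the abstract relies on can be computed from the Laplacian pseudoinverse via the standard identity C(i, j) = vol(G) (L+_ii + L+_jj - 2 L+_ij). The sketch below (Python/networkx, not the authors' pipeline) illustrates only that step; the test graph is arbitrary.

```python
import networkx as nx
import numpy as np

def commute_time_matrix(G):
    """Pairwise commute times C(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij),
    where L+ is the Moore-Penrose pseudoinverse of the (unweighted) Laplacian."""
    L = nx.laplacian_matrix(G, weight=None).toarray().astype(float)
    Lplus = np.linalg.pinv(L)
    vol = 2 * G.number_of_edges()              # sum of degrees
    d = np.diag(Lplus)
    return vol * (d[:, None] + d[None, :] - 2 * Lplus)

G = nx.karate_club_graph()
C = commute_time_matrix(G)
print(C[0, 33])                                # commute time between the two hubs
```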
Quantum walks on the chimera graph and its variants
NASA Astrophysics Data System (ADS)
Sanders, Barry; Sun, Xiangxiang; Xu, Shu; Wu, Jizhou; Zhang, Wei-Wei; Arshed, Nigum
We study quantum walks on the chimera graph, which is an important graph for performing quantum annealing, and we explore the nature of quantum walks on variants of the chimera graph. Features of these quantum walks provide profound insights into the nature of the chimera graph, including effects of greater and lesser connectivity, strong differences between quantum and classical random walks, isotropic spreading and localization only in the quantum case, and random graphs. We analyze finite-size effects due to limited width and length of the graph, and we explore the effect of different boundary conditions such as periodic and reflecting. Effects are explained via spectral analysis and the properties of stationary states, and spectral analysis enables us to characterize asymptotic behavior of the quantum walker in the long-time limit. Supported by China 1000 Talent Plan, National Science Foundation of China, Hefei National Laboratory for Physical Sciences at Microscale Fellowship, and the Chinese Academy of Sciences President's International Fellowship Initiative.
Approximate ground states of the random-field Potts model from graph cuts
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay
2018-05-01
While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
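For contrast with the NP-hard Potts case, the polynomial random-field Ising problem mentioned above reduces to an s-t minimum cut. The following is a sketch of the textbook construction (function names, field strength, and the lattice size are illustrative, not taken from the paper).

```python
import random
import networkx as nx

def rfim_ground_state(G, h, J=1.0):
    """Exact ground state of the random-field Ising model
    H = -J * sum_<ij> s_i s_j - sum_i h_i s_i via an s-t minimum cut.
    Nodes ending on the source side of the cut get s = -1, sink side s = +1:
    assigning s_i = +1 costs max(0, -2*h_i), s_i = -1 costs max(0, 2*h_i),
    and every edge with unequal spins costs 2*J."""
    D = nx.DiGraph()
    for i in G.nodes():
        D.add_edge("S", i, capacity=max(0.0, -2.0 * h[i]))   # paid if s_i = +1
        D.add_edge(i, "T", capacity=max(0.0, +2.0 * h[i]))   # paid if s_i = -1
    for u, v in G.edges():
        D.add_edge(u, v, capacity=2.0 * J)                   # paid once if s_u != s_v
        D.add_edge(v, u, capacity=2.0 * J)
    _, (source_side, _) = nx.minimum_cut(D, "S", "T")
    return {i: (-1 if i in source_side else +1) for i in G.nodes()}

random.seed(0)
G = nx.grid_2d_graph(16, 16)                                 # 2D lattice
h = {i: random.gauss(0, 1.5) for i in G.nodes()}             # Gaussian random fields (strength 1.5)
spins = rfim_ground_state(G, h)
```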
A Wave Chaotic Study of Quantum Graphs with Microwave Networks
NASA Astrophysics Data System (ADS)
Fu, Ziyuan
Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave chaotic properties. I study the quantum graphs with a wave chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped due to multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I have done some investigations on the Random Coupling Model, which are useful for further research.
Graph theoretical model of a sensorimotor connectome in zebrafish.
Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan
2012-01-01
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
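A minimal sketch of the kind of small-world comparison described above, using networkx and an Erdős–Rényi null model of the same size and density; the Watts–Strogatz input is an illustrative stand-in, not the zebrafish connectome.

```python
import networkx as nx

def small_world_summary(G, seed=0):
    """Clustering and characteristic path length of G versus an Erdos-Renyi
    graph of the same size and density (path length measured on the largest
    connected component)."""
    giant = lambda H: H.subgraph(max(nx.connected_components(H), key=len))
    R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed)
    return {
        "C": nx.average_clustering(G),
        "C_rand": nx.average_clustering(R),
        "L": nx.average_shortest_path_length(giant(G)),
        "L_rand": nx.average_shortest_path_length(giant(R)),
    }

print(small_world_summary(nx.watts_strogatz_graph(1000, 10, 0.05, seed=1)))
```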
Analysis of Social Network Measures with Respect to Structural Properties of Networks
2012-03-01
there has been increased interest in degree based generators. The three generators that this thesis is interested in are the Erdos-Renyi (ER...these generators has their pros and cons. The ER graph generator was developed in 1960 by Erdos and Renyi in hopes of producing networks that... [figure: average clustering coefficient (percentage, 0.1–0.9) for Erdös-Renyi, BA (2 edge), BA (5 edge), BA (10 edge), and PNDCG (α=2.35...) generators]
Disentangling giant component and finite cluster contributions in sparse random matrix spectra.
Kühn, Reimer
2016-04-01
We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.
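A quick way to reproduce the setting numerically, assuming i.i.d. Gaussian edge weights and mean degree c = 3 as illustrative choices (this is only a sketch of the ensemble, not the authors' cavity-method analysis):

```python
import networkx as nx
import numpy as np

# Empirical spectrum of a sparse symmetric random matrix defined on an
# Erdos-Renyi graph: entries are nonzero only on edges and carry random weights.
n, c = 2000, 3.0                                  # illustrative size and mean degree
rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(n, c / n, seed=42)
J = np.zeros((n, n))
for u, v in G.edges():
    J[u, v] = J[v, u] = rng.normal()              # i.i.d. Gaussian edge weights
eigenvalues = np.linalg.eigvalsh(J)
density, bin_edges = np.histogram(eigenvalues, bins=100, density=True)
```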
Sampling Large Graphs for Anticipatory Analytics
2015-05-15
low. C. Random Area Sampling Random area sampling [8] is a “snowball” sampling method in which a set of random seed vertices are selected and areas... Sampling Large Graphs for Anticipatory Analytics Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller Lincoln...systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges
NASA Astrophysics Data System (ADS)
Tower, M. M.; Haight, C. H.
1984-03-01
The development status of a single-pulse distributed-energy-source electromagnetic railgun (ER) based on the design of Tower (1982) is reviewed. The five-stage ER is 3.65 m long, with energy inputs every 30 cm starting at the breech and a 12.7-mm-square bore cross section, and is powered by a 660-kJ 6-kV modular capacitor bank. Lexan cubes weighing 2.5 grams have been accelerated to velocities up to 8.5 km/sec at 500 kA and conversion efficiency up to 20 percent. Design goal for a 20-mm-sq-cross-section ER is acceleration of a 60-g projectile to 3-4 km/sec at 35-percent efficiency. Drawings, photographs, and graphs of performance are provided.
Spectral fluctuations of quantum graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhař, Z.; Weidenmüller, H. A.
We prove the Bohigas-Giannoni-Schmit conjecture in its most general form for completely connected simple graphs with incommensurate bond lengths. We show that for graphs that are classically mixing (i.e., graphs for which the spectrum of the classical Perron-Frobenius operator possesses a finite gap), the generating functions for all (P,Q) correlation functions for both closed and open graphs coincide (in the limit of infinite graph size) with the corresponding expressions of random-matrix theory, both for orthogonal and for unitary symmetry.
A simple rule for the evolution of cooperation on graphs and social networks.
Ohtsuki, Hisashi; Hauert, Christoph; Lieberman, Erez; Nowak, Martin A
2006-05-25
A fundamental aspect of all biological systems is cooperation. Cooperative interactions are required for many levels of biological organization ranging from single cells to groups of animals. Human society is based to a large extent on mechanisms that promote cooperation. It is well known that in unstructured populations, natural selection favours defectors over cooperators. There is much current interest, however, in studying evolutionary games in structured populations and on graphs. These efforts recognize the fact that who-meets-whom is not random, but determined by spatial relationships or social networks. Here we describe a surprisingly simple rule that is a good approximation for all graphs that we have analysed, including cycles, spatial lattices, random regular graphs, random graphs and scale-free networks: natural selection favours cooperation, if the benefit of the altruistic act, b, divided by the cost, c, exceeds the average number of neighbours, k, which means b/c > k. In this case, cooperation can evolve as a consequence of 'social viscosity' even in the absence of reputation effects or strategic complexity.
Quantifying randomness in real networks
NASA Astrophysics Data System (ADS)
Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri
2015-10-01
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.
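The lowest level of the dk-series (fixing only the degree sequence) can be approximated with degree-preserving rewiring; a hedged sketch with networkx follows (the swap count and the BA test graph are arbitrary choices, and the full dk-series additionally constrains correlations and clustering).

```python
import networkx as nx

def one_k_randomize(G, seed=0):
    """Degree-preserving rewiring: a null model that fixes only the degree
    sequence (roughly the 1k level; the full dk-series additionally fixes
    degree correlations and clustering)."""
    R = G.copy()
    nswap = 10 * R.number_of_edges()
    nx.double_edge_swap(R, nswap=nswap, max_tries=100 * nswap, seed=seed)
    return R

G = nx.barabasi_albert_graph(1000, 3, seed=1)
print(nx.average_clustering(G), nx.average_clustering(one_k_randomize(G)))
```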
Hindersin, Laura; Traulsen, Arne
2015-11-01
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
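A Monte Carlo sketch of Birth-death updating and the fixation probability it defines (the fitness r, number of runs, and test graph are illustrative choices, not the authors' code):

```python
import random
import networkx as nx

def fixation_probability(G, r=1.1, runs=2000, seed=0):
    """Monte Carlo estimate of the fixation probability of a single mutant of
    fitness r under Birth-death updating: a reproducer is chosen proportional
    to fitness and its offspring replaces a uniformly random neighbour."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    fixed = 0
    for _ in range(runs):
        mutants = {rng.choice(nodes)}
        while 0 < len(mutants) < len(nodes):
            weights = [r if v in mutants else 1.0 for v in nodes]
            parent = rng.choices(nodes, weights=weights)[0]
            child = rng.choice(list(G.neighbors(parent)))
            if parent in mutants:
                mutants.add(child)
            else:
                mutants.discard(child)
        fixed += len(mutants) == len(nodes)
    return fixed / runs

G = nx.random_regular_graph(3, 20, seed=3)
assert nx.is_connected(G)
rho = fixation_probability(G)
print(rho, (1 - 1 / 1.1) / (1 - 1.1 ** -20))   # graph value vs. well-mixed Moran value
```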
Performance Analysis of AN Engine Mount Featuring ER Fluids and Piezoactuators
NASA Astrophysics Data System (ADS)
Choi, S. H.; Choi, Y. T.; Choi, S. B.; Cheong, C. C.
Conventional rubber mounts and various types of passive or semi-active hydraulic engine mounts for a passenger vehicle have their own functional aims on the limited frequency band in the broad engine operating frequency range. In order to achieve high system performance over all frequency ranges of the engine operation, a new type of engine mount featuring electro-rheological (ER) fluids and piezoactuators is proposed in this study. A mathematical model of the proposed engine mount is derived using the bond graph method, which is inherently adequate to model the interconnected hydromechanical system. In the low frequency domain, the ER fluid is activated upon imposing an electric field for vibration isolation, while the piezoactuator is activated in the high frequency domain. A neuro-control algorithm is utilized to determine the control electric field for the ER fluid, and an H∞ control technique is adopted for the piezoactuator. Comparative studies between the proposed and single-actuating (ER fluid only or piezoactuator only) engine mounts are undertaken by evaluating force transmissibility over a wide operating frequency range.
NASA Astrophysics Data System (ADS)
Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele
2016-07-01
This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that glueing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on ℝ+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.
Geographic Gossip: Efficient Averaging for Sensor Networks
NASA Astrophysics Data System (ADS)
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
A Graph Theory Practice on Transformed Image: A Random Image Steganography
Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan
2013-01-01
Modern day information age is enriched with the advanced network communication expertise but unfortunately at the same time encounters infinite security issues when dealing with secret and/or private information. The storage and transmission of the secret information become highly essential and have led to a deluge of research in this field. In this paper, an optimistic effort has been taken to combine graceful graph along with integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation part begins with the conversion of cover image into wavelet coefficients through IWT and is followed by embedding secret image in the randomly selected coefficients through graph theory. Finally stegoimage is obtained by applying inverse IWT. This method provides a maximum of 44 dB peak signal to noise ratio (PSNR) for 266646 bits. Thus, the proposed method gives high imperceptibility through high PSNR value and high embedding capacity in the cover image due to adaptive embedding scheme and high robustness against blind attack through graph theoretic random selection of coefficients. PMID:24453857
Rotundo, Roberto; Nieri, Michele; Cairo, Francesco; Franceschi, Debora; Mervelt, Jana; Bonaccini, Daniele; Esposito, Marco; Pini-Prato, Giovanpaolo
2010-06-01
This split-mouth, randomized, clinical trial aimed to evaluate the efficacy of erbium-doped:yttrium-aluminium-garnet (Er:YAG) laser application in non-surgical periodontal treatment. A total of 27 patients underwent four modalities of non-surgical therapy: supragingival debridement; scaling and root planing (SRP)+Er:YAG laser; Er:YAG laser; and SRP. Each strategy was randomly assigned and performed in one of the four quadrants. Clinical outcomes were evaluated at 3 and 6 months. Subjective benefits for patients were evaluated by means of questionnaires. Six months after therapy, Er:YAG laser showed no statistical difference in clinical attachment gain with respect to supragingival scaling [0.15 mm (95% CI -0.16; 0.46)], while SRP showed a greater attachment gain than the supragingival scaling [0.37 mm (95% CI 0.05; 0.68)]. No difference resulted between Er:YAG laser+SRP and SRP alone [0.05 mm (95% CI -0.25; 0.36)]. The adjunctive use of Er:YAG laser to conventional SRP did not reveal a more effective result than SRP alone. Furthermore, the sites treated with Er:YAG laser showed results similar to those of the sites treated with supragingival scaling.
Consistent latent position estimation and vertex classification for random dot product graphs.
Sussman, Daniel L; Tang, Minh; Priebe, Carey E
2014-01-01
In this work, we show that using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia.
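The estimator described above, adjacency spectral embedding followed by k-NN classification, can be sketched as follows; the stochastic block model input and the embedding dimension are only illustrations.

```python
import numpy as np
import networkx as nx

def adjacency_spectral_embedding(G, d=2):
    """Estimate latent positions of a random dot product graph from the top-d
    eigenpairs of the adjacency matrix: X_hat = U_d * sqrt(|Lambda_d|)."""
    A = nx.to_numpy_array(G)
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]       # d eigenvalues largest in magnitude
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

G = nx.stochastic_block_model([50, 50], [[0.5, 0.1], [0.1, 0.5]], seed=2)
X = adjacency_spectral_embedding(G, d=2)           # feed X to a k-NN classifier
```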
Emergence of a spectral gap in a class of random matrices associated with split graphs
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Zia, R. K. P.
2018-01-01
Motivated by the intriguing behavior displayed in a dynamic network that models a population of extreme introverts and extroverts (XIE), we consider the spectral properties of ensembles of random split graph adjacency matrices. We discover that, in general, a gap emerges in the bulk spectrum between -1 and 0 that contains a single eigenvalue. An analytic expression for the bulk distribution is derived and verified with numerical analysis. We also examine their relation to chiral ensembles, which are associated with bipartite graphs.
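One simple ensemble consistent with the definition of a split graph (a clique, an independent set, and random cross edges) already lets the spectrum be inspected numerically; the construction below is illustrative and may differ from the XIE-motivated ensemble of the paper.

```python
import numpy as np

def random_split_graph_adjacency(n_clique, n_indep, p=0.5, seed=0):
    """Adjacency matrix of a random split graph: a clique on one vertex set, an
    independent set on the other, and each cross edge present with probability p."""
    rng = np.random.default_rng(seed)
    n = n_clique + n_indep
    A = np.zeros((n, n))
    A[:n_clique, :n_clique] = 1 - np.eye(n_clique)       # clique block
    cross = rng.random((n_clique, n_indep)) < p          # random bipartite block
    A[:n_clique, n_clique:] = cross
    A[n_clique:, :n_clique] = cross.T
    return A

eigs = np.linalg.eigvalsh(random_split_graph_adjacency(200, 200))
# inspect the bulk between -1 and 0 for the gap and its single interior eigenvalue
```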
Evolutionary Games of Multiplayer Cooperation on Graphs
Arranz, Jordi; Traulsen, Arne
2016-01-01
There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has been usually tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946
Cooperation in the noisy case: Prisoner's dilemma game on two types of regular random graphs
NASA Astrophysics Data System (ADS)
Vukov, Jeromos; Szabó, György; Szolnoki, Attila
2006-06-01
We have studied an evolutionary prisoner’s dilemma game with players located on two types of random regular graphs with a degree of 4. The analysis is focused on the effects of payoffs and noise (temperature) on the maintenance of cooperation. When varying the noise level and/or the highest payoff, the system exhibits a second-order phase transition from a mixed state of cooperators and defectors to an absorbing state where only defectors remain alive. For the random regular graph (and Bethe lattice) the behavior of the system is similar to those found previously on the square lattice with nearest neighbor interactions, although the measure of cooperation is enhanced by the absence of loops in the connectivity structure. For low noise the optimal connectivity structure is built up from randomly connected triangles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
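Representative members of the three sampling families named above might look like the sketch below; the sample sizes and the scale-free test graph are arbitrary choices, not those of the study.

```python
import random
import networkx as nx

def node_sample(G, k, seed=0):
    """Induced subgraph on k uniformly chosen nodes."""
    rng = random.Random(seed)
    return G.subgraph(rng.sample(list(G.nodes()), k)).copy()

def edge_sample(G, k, seed=0):
    """Subgraph built from k uniformly chosen edges."""
    rng = random.Random(seed)
    return nx.Graph(rng.sample(list(G.edges()), k))

def snowball_sample(G, k, seed=0):
    """Traversal-based sample: breadth-first search from a random seed node
    until k nodes have been collected."""
    rng = random.Random(seed)
    start = rng.choice(list(G.nodes()))
    nodes = [start]
    for _, v in nx.bfs_edges(G, start):
        if len(nodes) >= k:
            break
        nodes.append(v)
    return G.subgraph(nodes).copy()

G = nx.barabasi_albert_graph(2000, 3, seed=1)          # scale-free test model
for sampler in (node_sample, edge_sample, snowball_sample):
    S = sampler(G, 200)
    print(sampler.__name__, S.number_of_nodes(), nx.density(S))
```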
Exact and approximate graph matching using random walks.
Gori, Marco; Maggini, Marco; Sarti, Lorenzo
2005-07-01
In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, like for classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, like for connectionist and statistic approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than top scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well-suited for dealing with partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation, whose node's labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.
Existence of the Harmonic Measure for Random Walks on Graphs and in Random Environments
NASA Astrophysics Data System (ADS)
Boivin, Daniel; Rau, Clément
2013-01-01
We give a sufficient condition for the existence of the harmonic measure from infinity of transient random walks on weighted graphs. In particular, this condition is verified by the random conductance model on ℤ^d, d≥3, when the conductances are i.i.d. and the bonds with positive conductance percolate. The harmonic measure from infinity also exists for random walks on supercritical clusters of ℤ^2. This is proved using results of Barlow (Ann. Probab. 32:3024-3084, 2004) and Barlow and Hambly (Electron. J. Probab. 14(1):1-27, 2009).
Interacting particle systems on graphs
NASA Astrophysics Data System (ADS)
Sood, Vishal
In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems that are drawn from physics, mathematics and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, are insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, emergence of fads, social cooperation or invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate. This selective advantage leads to a dynamical competition in which selection dominates for large populations, while for small populations the dynamics are similar to the neutral case. The likelihood for the fitter mutants to drive the resident genotype to extinction is calculated.
2010-11-30
Erdos-Renyi-Gilbert random graph [Erdos and Renyi, 1959; Gilbert, 1959], the Watts-Strogatz “small world” framework [Watts and Strogatz, 1998], and the...2003). Evolution of Networks. Oxford University Press, USA. Erdos, P. and Renyi, A. (1959). On Random Graphs. Publications Mathematicae, 6 290–297
An internet graph model based on trade-off optimization
NASA Astrophysics Data System (ADS)
Alvarez-Hamelin, J. I.; Schabanel, N.
2004-03-01
This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
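The underlying trade-off heuristic (attach each new node to the existing node minimizing α · distance plus a centrality term) can be sketched for the tree case; the paper generalizes this to graphs. All parameter values below are illustrative.

```python
import math
import random

def fkp_tree(n, alpha=4.0, seed=0):
    """Heuristic trade-off tree: node i, placed at a random position, attaches
    to the existing node j minimizing alpha * dist(i, j) + hops(j), where
    hops(j) is j's hop distance to the root."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    hops = [0]                                     # hop count to the root (node 0)
    edges = []
    for i in range(1, n):
        j = min(range(i), key=lambda j: alpha * math.dist(pos[i], pos[j]) + hops[j])
        edges.append((i, j))
        hops.append(hops[j] + 1)
    return edges

edges = fkp_tree(1000)    # degree distribution becomes heavily tailed for suitable alpha
```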
All optical mode controllable Er-doped random fiber laser with distributed Bragg gratings.
Zhang, W L; Ma, R; Tang, C H; Rao, Y J; Zeng, X P; Yang, Z J; Wang, Z N; Gong, Y; Wang, Y S
2015-07-01
An all-optical method to control the lasing modes of Er-doped random fiber lasers (RFLs) is proposed and demonstrated. In the RFL, an Er-doped fiber (EDF) recorded with randomly separated fiber Bragg gratings (FBG) is used as the gain medium and randomly distributed reflectors, as well as the controllable element. By combining random feedback of the FBG array and Fresnel feedback of a cleaved fiber end, multi-mode coherent random lasing is obtained with a threshold of 14 mW and power efficiency of 14.4%. Moreover, a laterally-injected control light is used to induce local gain perturbation, providing additional gain for certain random resonance modes. As a result, active mode selection of the RFL is realized by changing locations of the laser cavity that is exposed to the control light.
Multi-INT Complex Event Processing using Approximate, Incremental Graph Pattern Search
2012-06-01
graph pattern search and SPARQL queries. Total execution time for 10 executions each of 5 random pattern searches in synthetic data sets... [figure: total execution time (secs) versus number of RDF triples (1000 to 100000) for the graph pattern algorithm and SPARQL queries, initial performance comparisons, 09/18/11]
Diaconis, Persi; Holmes, Susan; Janson, Svante
2015-01-01
We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
Coloring geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Percus, Allon; Muller, Tobias
We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ ln ln n / ln n = 1 + o(1). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.
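A hedged sketch of a GTG generator and a greedy-coloring upper bound on the chromatic number follows; the threshold function (w_u + w_v)/dist^2 and all parameter values are one common choice, not necessarily the paper's.

```python
import math
import random
import networkx as nx

def geographical_threshold_graph(n, theta, seed=0):
    """GTG sketch: nodes at random positions in the unit square with i.i.d.
    exponential weights; u and v are joined when (w_u + w_v) / dist(u, v)**2 >= theta."""
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(n)}
    w = {i: rng.expovariate(1.0) for i in range(n)}
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for u in range(n):
        for v in range(u + 1, n):
            d = math.dist(pos[u], pos[v])
            if d > 0 and (w[u] + w[v]) / d ** 2 >= theta:
                G.add_edge(u, v)
    return G

G = geographical_threshold_graph(300, theta=100)
coloring = nx.greedy_color(G)              # greedy coloring: a rough upper bound on chi
print(max(coloring.values()) + 1)
```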
Statistically significant relational data mining :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Spatial Search by Quantum Walk is Optimal for Almost all Graphs.
Chakraborty, Shantanav; Novo, Leonardo; Ambainis, Andris; Omar, Yasser
2016-03-11
The problem of finding a marked node in a graph can be solved by the spatial search algorithm based on continuous-time quantum walks (CTQW). However, this algorithm is known to run in optimal time only for a handful of graphs. In this work, we prove that for Erdös-Renyi random graphs, i.e., graphs of n vertices where each edge exists with probability p, search by CTQW is almost surely optimal as long as p≥log^{3/2}(n)/n. Consequently, we show that quantum spatial search is in fact optimal for almost all graphs, meaning that the fraction of graphs of n vertices for which this optimality holds tends to one in the asymptotic limit. We obtain this result by proving that search is optimal on graphs where the ratio between the second largest and the largest eigenvalue is bounded by a constant smaller than 1. Finally, we show that we can extend our results on search to establish high fidelity quantum communication between two arbitrary nodes of a random network of interacting qubits, namely, to perform quantum state transfer, as well as entanglement generation. Our work shows that quantum information tasks typically designed for structured systems retain performance in very disordered structures.
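The spectral condition can be checked numerically on a single Erdős–Rényi instance; the parameters below are illustrative choices inside the p ≥ log^{3/2}(n)/n regime.

```python
import numpy as np
import networkx as nx

# Ratio of the second largest to the largest adjacency eigenvalue of an
# Erdos-Renyi graph, the quantity the optimality criterion is stated in terms of.
n = 1000
p = np.log(n) ** 1.5 / n                       # inside the p >= log^{3/2}(n)/n regime
G = nx.erdos_renyi_graph(n, p, seed=7)
eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
print(eigs[-2] / eigs[-1])                     # bounded away from 1 when search is optimal
```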
Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal
NASA Astrophysics Data System (ADS)
Zamudio, Gabriel S.; José, Marco V.
2018-03-01
In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
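Algebraic connectivity, the quantity minimized by an error-correcting optimal code according to the abstract, is the second-smallest Laplacian eigenvalue; networkx computes it directly. The graphs below are generic stand-ins, not the actual phenotypic graphs of the genetic code.

```python
import networkx as nx

# Algebraic connectivity (second-smallest Laplacian eigenvalue) of a graph,
# compared with a randomized graph of the same size and edge count.
G = nx.random_regular_graph(4, 64, seed=0)     # generic stand-in for a phenotypic graph
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
print(nx.algebraic_connectivity(G), nx.algebraic_connectivity(R))
```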
An Analytical Framework for Fast Estimation of Capacity and Performance in Communication Networks
2012-01-25
standard random graph (due to Erdos-Renyi) in the regime where the average degrees remain fixed (and above 1) and the number of nodes gets large, is not...abs/1010.3305 (Oct 2010). [6] O. Narayan, I. Saniee, G. H. Tucci, “Lack of Spectral Gap and Hyperbolicity in Asymptotic Erdös-Renyi Random Graphs
Evolution of tag-based cooperation on Erdős-Rényi random graphs
NASA Astrophysics Data System (ADS)
Lima, F. W. S.; Hadzibeganovic, Tarik; Stauffer, Dietrich
2014-12-01
Here, we study an agent-based model of the evolution of tag-mediated cooperation on Erdős-Rényi random graphs. In our model, agents with heritable phenotypic traits play pairwise Prisoner's Dilemma-like games and follow one of the four possible strategies: Ethnocentric, altruistic, egoistic and cosmopolitan. Ethnocentric and cosmopolitan strategies are conditional, i.e. their selection depends upon the shared phenotypic similarity among interacting agents. The remaining two strategies are always unconditional, meaning that egoists always defect while altruists always cooperate. Our simulations revealed that ethnocentrism can win in both early and later evolutionary stages on directed random graphs when reproduction of artificial agents was asexual; however, under the sexual mode of reproduction on a directed random graph, we found that altruists dominate initially for a rather short period of time, whereas ethnocentrics and egoists suppress other strategists and compete for dominance in the intermediate and later evolutionary stages. Among our results, we also find surprisingly regular oscillations which are not damped in the course of time even after half a million Monte Carlo steps. Unlike most previous studies, our findings highlight conditions under which ethnocentrism is less stable or suppressed by other competing strategies.
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Huffmann, Master; Siegel, Edward Carl-Ludwig
2013-03-01
Newcomb-Benford(NeWBe)-Siegel log-law BEC Digit-Physics Network/Graph-Physics Barabasi et.al. evolving-``complex''-networks/graphs BEC JAMMING DOA attacks: Amazon(weekends: Microsoft I.E.-7/8(vs. Firefox): Memorial-day, Labor-day,...), MANY U.S.-Banks:WF,BoA,UB,UBS,...instantiations AGAIN militate for MANDATORY CONVERSION to PARALLEL ANALOG FAULT-TOLERANT but slow(er) SECURITY-ASSURANCE networks/graphs in parallel with faster ``sexy'' DIGITAL-Networks/graphs:``Cloud'', telecomm: n-G,..., because of common ACHILLES-HEEL VULNERABILITY: DIGITS!!! ``In fast-hare versus slow-tortoise race, Slow-But-Steady ALWAYS WINS!!!'' (Zeno). {Euler [#s(1732)] ∑- ∏()-Riemann[Monats. Akad. Berlin (1859)] ∑- ∏()- Kummer-Bernoulli (#s)}-Newcomb [Am.J.Math.4(1),39 (81) discovery of the QUANTUM!!!]-{Planck (01)]}-{Einstein (05)]-Poincar e [Calcul Probabilités,313(12)]-Weyl[Goett. Nach.(14); Math.Ann.77,313(16)]-(Bose (24)-Einstein(25)]-VS. -Fermi (27)-Dirac(27))-Menger [Dimensiontheorie(29)]-Benford [J.Am. Phil.Soc.78,115(38)]-Kac[Maths Stats.-Reason. (55)]- Raimi [Sci.Am.221,109(69)]-Jech-Hill [Proc.AMS,123,3,887(95)] log-function
Graph Kernels for Molecular Similarity.
Rupp, Matthias; Schneider, Gisbert
2010-04-12
Molecular similarity measures are important for many cheminformatics applications like ligand-based virtual screening and quantitative structure-property relationships. Graph kernels are formal similarity measures defined directly on graphs, such as the (annotated) molecular structure graph. Graph kernels are positive semi-definite functions, i.e., they correspond to inner products. This property makes them suitable for use with kernel-based machine learning algorithms such as support vector machines and Gaussian processes. We review the major types of kernels between graphs (based on random walks, subgraphs, and optimal assignments, respectively), and discuss their advantages, limitations, and successful applications in cheminformatics. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
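Of the kernel families reviewed above, the geometric random-walk kernel has a particularly compact form on the direct product graph; the sketch below is the unlabeled version, with λ chosen small (an assumption needed for the geometric series to converge).

```python
import numpy as np
import networkx as nx

def geometric_random_walk_kernel(G1, G2, lam=0.01):
    """Counts common walks of the two graphs through their direct (tensor)
    product graph: K = sum_k lam^k * 1^T A_x^k 1 = 1^T (I - lam A_x)^{-1} 1.
    Unlabeled version; labeled variants restrict the product to matching nodes."""
    Ax = np.kron(nx.to_numpy_array(G1), nx.to_numpy_array(G2))
    n = Ax.shape[0]
    one = np.ones(n)
    return float(one @ np.linalg.solve(np.eye(n) - lam * Ax, one))

print(geometric_random_walk_kernel(nx.cycle_graph(5), nx.path_graph(5)))
```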
Information Selection in Intelligence Processing
2011-12-01
given. Edges connecting nodes representing irrelevant persons with either relevant or irrelevant persons are added randomly, as in an Erdos-Renyi...graph (Erdos and Renyi, 1959): For each irrelevant node i, and another node j (either relevant or irrelevant) there is a predetermined probability that...statistics for engineering and the sciences (7th ed.). Boston: Duxbury Press. Erdos, P., & Renyi, A. (1959). “On Random Graphs,” Publicationes
A characterization of horizontal visibility graphs and combinatorics on words
NASA Astrophysics Data System (ADS)
Gutin, Gregory; Mansour, Toufik; Severini, Simone
2011-06-01
A Horizontal Visibility Graph (HVG) is defined in association with an ordered set of non-negative reals. HVGs realize a methodology in the analysis of time series, their degree distribution being a good discriminator between randomness and chaos, Luque et al. [B. Luque, L. Lacasa, F. Ballesteros, J. Luque, Horizontal visibility graphs: exact results for random time series, Phys. Rev. E 80 (2009), 046103]. We prove that a graph is an HVG if and only if it is outerplanar and has a Hamilton path. Therefore, an HVG is a noncrossing graph, as defined in algebraic combinatorics, Flajolet and Noy [P. Flajolet, M. Noy, Analytic combinatorics of noncrossing configurations, Discrete Math., 204 (1999) 203-229]. Our characterization of HVGs implies a linear time recognition algorithm. Treating ordered sets as words, we characterize subfamilies of HVGs highlighting various connections with combinatorial statistics and introducing the notion of a visible pair. With this technique, we determine asymptotically the average number of edges of HVGs.
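The HVG construction itself is a short criterion; a direct O(n²) sketch of the definition (the mean-degree check at the end is the exact result of Luque et al. cited above):

```python
import random

def horizontal_visibility_graph(series):
    """HVG of a sequence of non-negative reals: i and j (i < j) are joined iff
    every intermediate value is strictly smaller than both endpoints."""
    n = len(series)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j))]

random.seed(0)
edges = horizontal_visibility_graph([random.random() for _ in range(500)])
print(2 * len(edges) / 500)        # mean degree tends to 4 for i.i.d. random series
```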
Quantum walk on a chimera graph
NASA Astrophysics Data System (ADS)
Xu, Shu; Sun, Xiangxiang; Wu, Jizhou; Zhang, Wei-Wei; Arshed, Nigum; Sanders, Barry C.
2018-05-01
We analyse a continuous-time quantum walk on a chimera graph, which is a graph of choice for designing quantum annealers, and we discover beautiful quantum walk features such as localization that starkly distinguishes classical from quantum behaviour. Motivated by technological thrusts, we study continuous-time quantum walk on enhanced variants of the chimera graph and on diminished chimera graph with a random removal of vertices. We explain the quantum walk by constructing a generating set for a suitable subgroup of graph isomorphisms and corresponding symmetry operators that commute with the quantum walk Hamiltonian; the Hamiltonian and these symmetry operators provide a complete set of labels for the spectrum and the stationary states. Our quantum walk characterization of the chimera graph and its variants yields valuable insights into graphs used for designing quantum-annealers.
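A continuous-time quantum walk amounts to matrix exponentiation of the Hamiltonian; the sketch below uses the adjacency matrix of a single chimera unit cell, K_{4,4}. The adjacency-Hamiltonian convention is an assumption here; the Laplacian is another common choice.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

def ctqw_probabilities(G, start, t):
    """Continuous-time quantum walk: evolve |start> under U = exp(-i A t) with
    the adjacency matrix as Hamiltonian, and return site occupation probabilities."""
    A = nx.to_numpy_array(G)
    nodes = list(G.nodes())
    psi0 = np.zeros(len(nodes), dtype=complex)
    psi0[nodes.index(start)] = 1.0
    psi_t = expm(-1j * A * t) @ psi0
    return dict(zip(nodes, np.abs(psi_t) ** 2))

G = nx.complete_bipartite_graph(4, 4)       # a single chimera unit cell is K_{4,4}
probabilities = ctqw_probabilities(G, start=0, t=2.0)
```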
Matched signal detection on graphs: Theory and application to brain imaging data classification.
Hu, Chenhui; Sepulcre, Jorge; Johnson, Keith A; Fakhri, Georges E; Lu, Yue M; Li, Quanzheng
2016-01-15
Motivated by recent progress in signal processing on graphs, we have developed a matched signal detection (MSD) theory for signals with intrinsic structures described by weighted graphs. First, we regard graph Laplacian eigenvalues as frequencies of graph-signals and assume that the signal is in a subspace spanned by the first few graph Laplacian eigenvectors associated with lower eigenvalues. The conventional matched subspace detector can be applied to this case. Furthermore, we study signals that may not merely live in a subspace. Concretely, we consider signals with bounded variation on graphs and more general signals that are randomly drawn from a prior distribution. For bounded variation signals, the test is a weighted energy detector. For the random signals, the test statistic is the difference of signal variations on associated graphs, if a degenerate Gaussian distribution specified by the graph Laplacian is adopted. We evaluate the effectiveness of the MSD on graphs both with simulated and real data sets. Specifically, we apply MSD to the brain imaging data classification problem of Alzheimer's disease (AD) based on two independent data sets: 1) positron emission tomography data with Pittsburgh compound-B tracer of 30 AD and 40 normal control (NC) subjects, and 2) resting-state functional magnetic resonance imaging (R-fMRI) data of 30 early mild cognitive impairment and 20 NC subjects. Our results demonstrate that the MSD approach is able to outperform the traditional methods and help detect AD at an early stage, probably due to the success of exploiting the manifold structure of the data. Copyright © 2015. Published by Elsevier Inc.
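The matched-subspace part of the detector, the energy captured by low-frequency Laplacian eigenvectors, can be sketched as follows; the subspace dimension k and the test graph are illustrative, and this is not the paper's full statistical test.

```python
import numpy as np
import networkx as nx

def low_frequency_energy_fraction(G, signal, k=10):
    """Fraction of a graph-signal's energy captured by the k Laplacian
    eigenvectors with smallest eigenvalues (the low graph frequencies)."""
    L = nx.laplacian_matrix(G, weight=None).toarray().astype(float)
    vals, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    coeffs = vecs[:, :k].T @ signal
    return float(coeffs @ coeffs) / float(signal @ signal)

G = nx.watts_strogatz_graph(100, 6, 0.1, seed=0)
L = nx.laplacian_matrix(G, weight=None).toarray().astype(float)
smooth = np.linalg.eigh(L)[1][:, 1]            # Fiedler vector: a smooth test signal
print(low_frequency_energy_fraction(G, smooth))   # close to 1 for a smooth signal
```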
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly-selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
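networkx already exposes source-sampled betweenness estimation with uniformly random sources; an alternate seed-selection strategy of the kind discussed above would replace that sampling step. The graph and sample size below are illustrative.

```python
import networkx as nx

G = nx.barabasi_albert_graph(2000, 3, seed=5)           # scale-free test graph
exact = nx.betweenness_centrality(G)                    # full Brandes algorithm, O(|V||E|)
approx = nx.betweenness_centrality(G, k=100, seed=5)    # 100 uniformly sampled source vertices
hub = max(exact, key=exact.get)
print(exact[hub], approx[hub])
```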
Distribution of diameters for Erdős-Rényi random graphs.
Hartmann, A K; Mézard, M
2018-03-01
We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.
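Typical diameters (though not the 10^{-100} tails, which need the large-deviation machinery of the paper) can be estimated by plain sampling; the sketch below takes the diameter over the largest connected component, and all parameters are illustrative.

```python
import collections
import networkx as nx

def diameter_histogram(n=100, c=2.0, samples=2000, seed=0):
    """Simple-sampling estimate of the diameter distribution P(d) of G(n, p=c/n),
    with the diameter taken over the largest connected component."""
    counts = collections.Counter()
    for s in range(samples):
        G = nx.erdos_renyi_graph(n, c / n, seed=seed + s)
        giant = G.subgraph(max(nx.connected_components(G), key=len))
        counts[nx.diameter(giant)] += 1
    return {d: k / samples for d, k in sorted(counts.items())}

print(diameter_histogram())
```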
Nomura, Y; Tashiro, H; Hisamatsu, K; Shinozuka, K
1988-06-01
Based on estrogen receptor (ER) status and menopausal status, operable breast cancer (International Union Against Cancer [UICC] Stage I, II, and III) patients were randomized for adjuvant endocrine therapy, chemotherapy, and chemoendocrine therapy, and the effects on the disease-free survival (DFS) and overall survival (OS) were compared. Adjuvant endocrine therapy was composed of tamoxifen (TAM) 20 mg/day orally for 2 years in postmenopausal patients. In premenopausal patients, oophorectomy (OVEX) was done before TAM administration. In the chemotherapy arm, the patients were given 0.06 mg/kg of body weight of mitomycin C (MMC) intravenously (IV) and then oral administration of cyclophosphamide (CPA) 100 mg/body on a schedule of a 3-month administration period and a 3-month intermission. This 6-month schedule was repeated four times in 2 years. As the chemoendocrine therapy arm, TAM with MMC + CPA chemotherapy was added. The patients were randomized according to ER and menopausal status. Estrogen receptor-positive (ER+) cancer patients were randomized to three arms: TAM +/- OVEX, MMC + CPA, or MMC + CPA + TAM. For estrogen receptor-negative (ER-) patients, there were two arms: MMC + CPA, or MMC + TAM. The study started in September 1978, and the 692 patients entered by the end of 1984 were evaluated. The median follow-up was about 46 months. Overall, a 9.8% rate (68/692) of recurrence was noted, and a 7.5% rate (52/692) of mortality. There were no significant differences in DFS or OS among the treatment arms in ER+ or ER- patients. There were significant differences in adverse effects such as bone marrow suppression, gastrointestinal disturbances, cystitis, and hair loss between the endocrine therapy and the chemotherapy or chemoendocrine therapy groups. In this preliminary study, it was concluded that because of the lesser adverse effects of endocrine therapy, it seems rational to select the operable breast cancer patients by the presence or absence of ER, namely, endocrine therapy for ER+ and chemotherapy for ER- cancer patients.
Weights and topology: a study of the effects of graph construction on 3D image segmentation.
Grady, Leo; Jolly, Marie-Pierre
2008-01-01
Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
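A common concrete instance of the graph-construction step discussed above is a 4-connected lattice with Gaussian intensity weights; the weighting constant β and the connectivity are exactly the kinds of choices whose effect the study examines, and the values below are illustrative.

```python
import numpy as np
import networkx as nx

def intensity_weighted_grid_graph(image, beta=10.0):
    """4-connected lattice over a 2D image with Gaussian intensity weighting
    w_ij = exp(-beta * (I_i - I_j)**2); connectivity and beta are the knobs
    whose effect the study above examines."""
    h, w = image.shape
    G = nx.grid_2d_graph(h, w)                 # 4-connectivity; 8-connectivity is another option
    for u, v in G.edges():
        diff = float(image[u]) - float(image[v])
        G[u][v]["weight"] = np.exp(-beta * diff * diff)
    return G

image = np.random.default_rng(0).random((32, 32))
G = intensity_weighted_grid_graph(image)
```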
NASA Technical Reports Server (NTRS)
Moore, R.; McClelen, C. E.
1985-01-01
In calyptrogen cells of Zea mays, proplastids are distributed randomly throughout the cell, and the endoplasmic reticulum (ER) is distributed parallel to the cell walls. The differentiation of calyptrogen cells into columella statocytes is characterized by the following sequential events: (1) formation of ER complexes at the distal and proximal ends of the cell, (2) differentiation of proplastids into amyloplasts, (3) sedimentation of amyloplasts onto the distal ER complex, (4) breakdown of the distal ER complex and sedimentation of amyloplasts to the bottom of the cell, and (5) formation of sheets of ER parallel to the longitudinal cell walls. Columella statocytes located in the centre of the cap each possess 4530 ± 780 µm² of ER surface area, an increase of 670 per cent over that of calyptrogen cells. The differentiation of peripheral cells correlates positively with (1) the ER becoming arranged in concentric sheets, (2) amyloplasts and ER becoming randomly distributed, and (3) a 280 per cent increase in ER surface area over that of columella statocytes. These results are discussed relative to graviperception and mucilage secretion, which are functions of columella and peripheral cells, respectively.
Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen E.; Humble, Travis S.
2017-04-01
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. We show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets
Hamilton, Kathleen E.; Humble, Travis S.
2017-02-23
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. We introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph, in an effort to reduce the complexity of the minor embedding problem. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
graphkernels: R and Python packages for graph comparison
Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-01-01
Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID: 29028902
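As a concrete illustration of the simplest baseline named above, a vertex label histogram kernel takes only a few lines of NumPy. This is a from-scratch sketch of the idea, not the graphkernels package API:

```python
import numpy as np

def label_histogram_kernel(graph_labels, n_labels):
    """Vertex label histogram kernel: each graph is reduced to a histogram
    of its discrete node labels, and the kernel between two graphs is the
    inner product of their histograms.
    graph_labels: list of 1-D integer arrays, one per graph."""
    H = np.zeros((len(graph_labels), n_labels))
    for g, labels in enumerate(graph_labels):
        H[g] = np.bincount(labels, minlength=n_labels)
    return H @ H.T          # Gram matrix, usable for SVM / kernel ridge etc.

graphs = [np.array([0, 1, 1, 2]), np.array([0, 0, 2]), np.array([1, 2, 2, 2])]
print(label_histogram_kernel(graphs, n_labels=3))
```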
Detecting labor using graph theory on connectivity matrices of uterine EMG.
Al-Omar, S; Diab, A; Nader, N; Khalil, M; Karlsson, B; Marque, C
2015-08-01
Premature labor is one of the most serious health problems in the developed world. One of the main reasons for this is that no good way exists to distinguish true labor from normal pregnancy contractions. The aim of this paper is to investigate if the application of graph theory techniques to multi-electrode uterine EMG signals can improve the discrimination between pregnancy contractions and labor. To test our methods we first applied them to synthetic graphs, where we detected differences in the parameter values and changes in the graph model from pregnancy-like graphs to labor-like graphs. Then, we applied the same methods to real signals, and the same parameters gave the best differentiation between pregnancy and labor. Major improvements in differentiating between pregnancy and labor were obtained using a low-pass windowing preprocessing step. Results show that real graphs generally became more organized when moving from pregnancy, where the graph showed random characteristics, to labor, where the graph became more small-world like.
graphkernels: R and Python packages for graph comparison.
Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-02-01
Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs, when with high probability: (i) the RGG is connected, (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for the regimes (i), or (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
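A direct simulation of the push protocol on an RGG makes the quantity being bounded concrete. The sketch below uses networkx; the number of nodes and the connection radius are illustrative choices, not the regimes analysed above.

```python
import random
import networkx as nx

def push_broadcast_rounds(G, source, seed=0):
    """Simulate the push protocol: in every round each informed node picks
    one neighbour uniformly at random and informs it. Returns the number
    of rounds needed to inform the source's connected component."""
    rng = random.Random(seed)
    component = nx.node_connected_component(G, source)
    informed = {source}
    rounds = 0
    while len(informed) < len(component):
        newly = {rng.choice(list(G[u])) for u in informed if G.degree(u) > 0}
        informed |= newly
        rounds += 1
    return rounds

# RGG in a connected-ish regime; n and the radius are illustrative.
G = nx.random_geometric_graph(500, 0.1, seed=1)
source = next(iter(max(nx.connected_components(G), key=len)))
print(push_broadcast_rounds(G, source))
```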
Scale-free characteristics of random networks: the topology of the world-wide web
NASA Astrophysics Data System (ADS)
Barabási, Albert-László; Albert, Réka; Jeong, Hawoong
2000-06-01
The world-wide web forms a large directed graph, whose vertices are documents and edges are links pointing from one document to another. Here we demonstrate that despite its apparent random character, the topology of this graph has a number of universal scale-free characteristics. We introduce a model that leads to a scale-free network, capturing in a minimal fashion the self-organization processes governing the world-wide web.
Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.
2018-05-01
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
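The classical relation recalled here, that for uniform RGGs the probability of having no isolated node tracks the probability of full connectivity, can be probed by direct Monte Carlo. A sketch with illustrative sizes (boundary effects are ignored, so small-n estimates will only roughly agree):

```python
import numpy as np
import networkx as nx

def isolation_vs_connectivity(n=200, radius=0.1, trials=200, seed=0):
    """Monte Carlo estimate of P(no isolated node) and P(connected) for a
    uniform RGG on the unit square. For large n the two probabilities are
    expected to track each other closely; at these sizes they agree only
    roughly."""
    rng = np.random.default_rng(seed)
    no_isolated = connected = 0
    for _ in range(trials):
        G = nx.random_geometric_graph(n, radius, seed=int(rng.integers(1 << 30)))
        if min(d for _, d in G.degree()) > 0:
            no_isolated += 1
        if nx.is_connected(G):
            connected += 1
    return no_isolated / trials, connected / trials

print(isolation_vs_connectivity())
```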
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
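Of the three algorithms, leaf removal is the easiest to state and run. The sketch below applies it to an Erdős–Rényi graph below the well-known leaf-removal threshold (average degree c = e), covering any leftover core greedily; it illustrates the algorithm only, not the generating-function analysis above.

```python
import networkx as nx

def leaf_removal_cover(G):
    """Leaf-removal heuristic for minimum vertex cover: repeatedly take a
    degree-1 vertex, put its unique neighbour in the cover, and delete
    both. Any edges surviving in the leftover core are covered greedily."""
    H = G.copy()
    cover = set()
    leaves = [v for v in H if H.degree(v) == 1]
    while leaves:
        v = leaves.pop()
        if v not in H or H.degree(v) != 1:
            continue
        (u,) = list(H[v])                       # the leaf's unique neighbour
        cover.add(u)
        neighbours = list(H[u])
        H.remove_node(u)
        H.remove_node(v)
        leaves.extend(w for w in neighbours if w in H and H.degree(w) == 1)
    for a, b in H.edges():                      # leftover core, if any
        if a not in cover and b not in cover:
            cover.add(a)
    return cover

G = nx.gnp_random_graph(1000, 2.0 / 1000, seed=3)   # mean degree 2 < e
cover = leaf_removal_cover(G)
print(len(cover), all(a in cover or b in cover for a, b in G.edges()))
```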
Small-world bias of correlation networks: From brain to climate
NASA Astrophysics Data System (ADS)
Hlinka, Jaroslav; Hartman, David; Jajcay, Nikola; Tomeček, David; Tintěra, Jaroslav; Paluš, Milan
2017-03-01
Complex systems are commonly characterized by the properties of their graph representation. Dynamical complex systems are then typically represented by a graph of temporal dependencies between time series of state variables of their subunits. It has been shown recently that graphs constructed in this way tend to have relatively clustered structure, potentially leading to spurious detection of small-world properties even in the case of systems with no or randomly distributed true interactions. However, the strength of this bias depends heavily on a range of parameters and its relevance for real-world data has not yet been established. In this work, we assess the relevance of the bias using two examples of multivariate time series recorded in natural complex systems. The first is the time series of local brain activity as measured by functional magnetic resonance imaging in resting healthy human subjects, and the second is the time series of average monthly surface air temperature coming from a large reanalysis of climatological data over the period 1948-2012. In both cases, the clustering in the thresholded correlation graph is substantially higher compared with a realization of a density-matched random graph, while the shortest paths are relatively short, showing thus distinguishing features of small-world structure. However, comparable or even stronger small-world properties were reproduced in correlation graphs of model processes with randomly scrambled interconnections. This suggests that the small-world properties of the correlation matrices of these real-world systems indeed do not reflect genuinely the properties of the underlying interaction structure, but rather result from the inherent properties of correlation matrix.
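The comparison at the heart of this bias check is easy to reproduce in outline: threshold an empirical correlation matrix to a fixed density and compare its clustering with a density-matched random graph. The sketch below uses independent white-noise series (so there are no true interactions); sizes and density are illustrative, and the effect is weaker than for the autocorrelated processes and recordings used in the paper.

```python
import numpy as np
import networkx as nx

def clustering_vs_random(n_series=60, n_samples=100, density=0.1, seed=0):
    """Threshold the empirical correlation matrix of independent
    white-noise series to a fixed edge density and compare the clustering
    coefficient with a density-matched Erdos-Renyi graph."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_series, n_samples))
    C = np.abs(np.corrcoef(X))
    iu = np.triu_indices(n_series, 1)
    m = int(density * len(iu[0]))
    thresh = np.sort(C[iu])[-m]                 # keep the m strongest links
    G = nx.Graph()
    G.add_nodes_from(range(n_series))
    G.add_edges_from((int(i), int(j)) for i, j in zip(*iu) if C[i, j] >= thresh)
    R = nx.gnm_random_graph(n_series, G.number_of_edges(), seed=seed)
    return nx.average_clustering(G), nx.average_clustering(R)

print(clustering_vs_random())
```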
Visibility graphs of random scalar fields and spatial data
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Iacovacci, Jacopo
2017-07-01
We extend the family of visibility algorithms to map scalar fields of arbitrary dimension into graphs, enabling the analysis of spatially extended data structures as networks. We introduce several possible extensions and provide analytical results on the topological properties of the graphs associated to different types of real-valued matrices, which can be understood as the high and low disorder limits of real-valued scalar fields. In particular, we find a closed expression for the degree distribution of these graphs associated to uncorrelated random fields of generic dimension. This result holds independently of the field's marginal distribution and it directly yields a statistical randomness test, applicable in any dimension. We showcase its usefulness by discriminating spatial snapshots of two-dimensional white noise from snapshots of a two-dimensional lattice of diffusively coupled chaotic maps, a system that generates high dimensional spatiotemporal chaos. The range of potential applications of this combinatorial framework includes image processing in engineering, the description of surface growth in material science, soft matter or medicine, and the characterization of potential energy surfaces in chemistry, disordered systems, and high energy physics. An illustration on the applicability of this method for the classification of the different stages involved in carcinogenesis is briefly discussed.
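For the one-dimensional case, the natural visibility criterion that underlies these extensions can be coded directly; the O(n^2) sketch below is illustrative and does not implement the higher-dimensional variants introduced in the paper.

```python
import numpy as np
import networkx as nx

def natural_visibility_graph(x):
    """Natural visibility graph of a 1-D series: data points i and j are
    linked if every intermediate point lies strictly below the straight
    line joining (i, x_i) and (j, x_j)."""
    n = len(x)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            line = x[i] + (x[j] - x[i]) * (ks - i) / (j - i)
            if np.all(x[ks] < line):
                G.add_edge(i, j)
    return G

rng = np.random.default_rng(1)
G = natural_visibility_graph(rng.random(200))
print(np.mean([d for _, d in G.degree()]))   # mean degree of the visibility graph
```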
Optimal Quantum Spatial Search on Random Temporal Networks
NASA Astrophysics Data System (ADS)
Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser
2017-12-01
To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdös-Rényi random graphs G (n ,p ), where p is the probability that any two given nodes are connected: After every time interval τ , a new graph G (n ,p ) replaces the previous one. We prove analytically that, for any given p , there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O (√{n }), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs are sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.
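The static building block of this construction, continuous-time quantum walk search on a single G(n, p), can be simulated by exact diagonalization for small n. In the sketch below the hopping rate gamma = 1/(np) is the standard heuristic choice and the sizes are illustrative; the temporal switching after intervals τ is not modelled.

```python
import numpy as np
import networkx as nx

def ctqw_search_peak(n=64, p=0.3, seed=2):
    """Continuous-time quantum-walk search on one static G(n, p):
    H = -gamma*A - |w><w| with gamma = 1/(n*p), starting from the uniform
    superposition. Returns the peak probability of the marked node over a
    time window of order sqrt(n)."""
    A = nx.to_numpy_array(nx.gnp_random_graph(n, p, seed=seed))
    H = -(1.0 / (n * p)) * A
    H[0, 0] -= 1.0                              # node 0 is the marked node
    vals, vecs = np.linalg.eigh(H)
    coeff = vecs.T @ (np.ones(n) / np.sqrt(n))  # expand the initial state
    peak = 0.0
    for t in np.linspace(0.0, 4.0 * np.sqrt(n), 400):
        amp = vecs[0] @ (np.exp(-1j * vals * t) * coeff)
        peak = max(peak, abs(amp) ** 2)
    return peak

print(ctqw_search_peak())   # of order 1, versus 1/n for the initial state
```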
ERIC Educational Resources Information Center
Marston, Doug; Deno, Stanley L.
The accuracy of predictions of future student performance on the basis of graphing data on semi-logarithmic charts and equal interval graphs was examined. All 83 low-achieving students in grades 3 to 6 read randomly-selected lists of words from the Harris-Jacobson Word List for 1 minute. The number of words read correctly and words read…
The many faces of graph dynamics
NASA Astrophysics Data System (ADS)
Pignolet, Yvonne Anne; Roy, Matthieu; Schmid, Stefan; Tredan, Gilles
2017-06-01
The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and reoccurring characteristics of different kinds of networks. However, much less is known today about the network dynamics: indeed, complex networks in reality are not static, but rather dynamically evolve over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a ‘one fits it all’ model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests to build upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different or, in case of dynamic graphs, have changed over time, between two graphs. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null-model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks, in terms of scale and evolution speed.
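A minimal version of the centrality distance idea, assuming the two snapshots share the same labelled node set, is just an aggregate of per-node centrality differences; the L1 aggregation and the toy evolution step below are illustrative choices, not the paper's definitions.

```python
import networkx as nx

def centrality_distance(G1, G2, centrality=nx.degree_centrality):
    """Distance between two graphs on the same (non-anonymous) node set:
    summed absolute difference of a chosen node centrality."""
    c1, c2 = centrality(G1), centrality(G2)
    return sum(abs(c1[v] - c2[v]) for v in G1)

G_t0 = nx.erdos_renyi_graph(100, 0.05, seed=0)
G_t1 = G_t0.copy()
nx.double_edge_swap(G_t1, nswap=50, max_tries=5000, seed=1)   # one "evolution step"
print(centrality_distance(G_t0, G_t1))
print(centrality_distance(G_t0, G_t1, nx.betweenness_centrality))
```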
Growth and structure of the World Wide Web: Towards realistic modeling
NASA Astrophysics Data System (ADS)
Tadić, Bosiljka
2002-08-01
We simulate evolution of the World Wide Web from the dynamic rules incorporating growth, bias attachment, and rewiring. We show that the emergent double-hierarchical structure with distinct distributions of out- and in-links is comparable with the observed empirical data when the control parameter (average graph flexibility β) is kept in the range β=3-4. We then explore the Web graph by simulating (a) Web crawling to determine size and depth of connected components, and (b) a random walker that discovers the structure of connected subgraphs with dominant attractor and promoter nodes. A random walker that adapts its move strategy to mimic local node linking preferences is shown to have a short access time to "important" nodes on the Web graph.
Exactly solvable random graph ensemble with extensively many short cycles
NASA Astrophysics Data System (ADS)
Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.
2018-02-01
We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles’ control parameters relative to the number of nodes. A phase diagram is presented, showing a second order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamical limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues, representing short cycles, and a continuous part, representing cycles of diverging size.
Quasirandom geometric networks from low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2017-08-01
We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that slow the process down. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
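The basic construction, low-discrepancy points in the unit square joined within a connection radius, can be reproduced with SciPy's Halton sampler; n and the radius below are illustrative. Comparing the degree spread against uniformly random points gives a quick feel for the greater structural homogeneity discussed above.

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial import cKDTree

def geometric_degrees(points, radius):
    """Degree sequence of the geometric graph linking points within radius."""
    pairs = cKDTree(points).query_pairs(radius)
    deg = np.zeros(len(points), dtype=int)
    for i, j in pairs:
        deg[i] += 1
        deg[j] += 1
    return deg

n, radius = 2000, 0.03
halton_pts = qmc.Halton(d=2, scramble=True, seed=0).random(n)
random_pts = np.random.default_rng(0).random((n, 2))
h_deg = geometric_degrees(halton_pts, radius)
u_deg = geometric_degrees(random_pts, radius)
print("Halton  degree mean/std:", h_deg.mean(), h_deg.std())
print("Uniform degree mean/std:", u_deg.mean(), u_deg.std())
```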
NASA Astrophysics Data System (ADS)
Wu, Ang-Kun; Tian, Liang; Liu, Yang-Yu
2018-01-01
A bridge in a graph is an edge whose removal disconnects the graph and increases the number of connected components. We calculate the fraction of bridges in a wide range of real-world networks and their randomized counterparts. We find that real networks typically have more bridges than their completely randomized counterparts, but they have a fraction of bridges that is very similar to their degree-preserving randomizations. We define an edge centrality measure, called bridgeness, to quantify the importance of a bridge in damaging a network. We find that certain real networks have a very large average and variance of bridgeness compared to their degree-preserving randomizations and other real networks. Finally, we offer an analytical framework to calculate the bridge fraction and the average and variance of bridgeness for uncorrelated random networks with arbitrary degree distributions.
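The bridge fraction and its comparison against a degree-preserving randomization can be computed directly with networkx; the ER example graph, the restriction to the giant component, and the number of edge swaps are illustrative choices.

```python
import networkx as nx

def bridge_fraction(G):
    """Fraction of edges whose removal disconnects their component."""
    return sum(1 for _ in nx.bridges(G)) / G.number_of_edges()

# Compare a network against a degree-preserving randomization.
G = nx.erdos_renyi_graph(500, 3.0 / 500, seed=0)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
R = G.copy()
nx.double_edge_swap(R, nswap=10 * R.number_of_edges(), max_tries=10 ** 6, seed=1)
print(bridge_fraction(G), bridge_fraction(R))
```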
The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.
Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio
2015-01-01
We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a unified framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length.
Antiferromagnetic Potts Model on the Erdős-Rényi Random Graph
NASA Astrophysics Data System (ADS)
Contucci, Pierluigi; Dommers, Sander; Giardinà, Cristian; Starr, Shannon
2013-10-01
We study the antiferromagnetic Potts model on the Poissonian Erdős-Rényi random graph. By identifying a suitable interpolation structure and an extended variational principle, together with a positive temperature second-moment analysis we prove the existence of a phase transition at a positive critical temperature. Upper and lower bounds on the temperature critical value are obtained from the stability analysis of the replica symmetric solution (recovered in the framework of Derrida-Ruelle probability cascades) and from an entropy positivity argument.
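Away from the rigorous interpolation and second-moment arguments, the model itself is easy to sample. Below is a minimal Metropolis sketch of the antiferromagnetic Potts model on a sparse ER graph; q, the mean degree and the inverse temperature are illustrative, and the sketch says nothing about the variational analysis used in the paper.

```python
import numpy as np
import networkx as nx

def antiferro_potts_mc(n=300, c=4.0, q=3, beta=2.0, sweeps=200, seed=0):
    """Metropolis dynamics for the antiferromagnetic Potts model on an
    Erdos-Renyi graph: equal neighbouring colours are penalised with unit
    coupling. Returns the final fraction of monochromatic edges."""
    rng = np.random.default_rng(seed)
    G = nx.gnp_random_graph(n, c / n, seed=seed)
    adj = [list(G[v]) for v in range(n)]
    s = rng.integers(q, size=n)
    for _ in range(sweeps * n):
        v = rng.integers(n)
        new = rng.integers(q)
        d_old = sum(s[u] == s[v] for u in adj[v])
        d_new = sum(s[u] == new for u in adj[v])
        if d_new <= d_old or rng.random() < np.exp(-beta * (d_new - d_old)):
            s[v] = new
    mono = sum(s[u] == s[v] for u, v in G.edges())
    return mono / G.number_of_edges()

print(antiferro_potts_mc())
```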
Finding paths in tree graphs with a quantum walk
NASA Astrophysics Data System (ADS)
Koch, Daniel; Hillery, Mark
2018-01-01
We analyze the potential for different types of searches using the formalism of scattering random walks on quantum computers. Given a particular type of graph consisting of nodes and connections, a "tree maze," we would like to find a selected final node as quickly as possible, faster than any classical search algorithm. We show that this can be done using a quantum random walk, both through numerical calculations as well as by using the eigenvectors and eigenvalues of the quantum system.
Xu, Xue-song; Zhu, Ya-qin
2015-12-01
To evaluate the influence of different root canal instrumentation sizes on the disinfection of intracanal microbes in dental root canals, 368 extracted human anterior teeth with single straight roots were randomly divided into 8 groups of 46 roots each. They were instrumented with K3 Ni-Ti files as follows: group A1 and group B1 (#25/0.06), group A2 and group B2 (#30/0.06), group A3 and group B3 (#35/0.06), group A4 and group B4 (#40/0.06). After being prepared and sterilized by autoclaving, group A was inoculated with Enterococcus faecalis, and group B was inoculated with Candida albicans. All groups were irrigated with Er:YAG laser in combination with 3% NaOCl, 17% EDTA and 0.9% saline, and then the numbers of microbes on the surface of the root canal walls were counted after the treatment; the absolute reduction in colony-forming units (CFUs) and the relative residual rate of CFUs in each group were determined. The data were analyzed with the GraphPad Prism 5.02 software package by one-way analysis of variance. Levels of disinfection of E. faecalis and C. albicans increased when root canals were enlarged; #40/0.06 showed the best disinfection, and #35/0.06 showed significantly better disinfection than #30/0.06 and #25/0.06. Substantial reduction of microbes was obtained in #35/0.06 and #40/0.06 compared with #25/0.06 and #30/0.06 (P<0.05). Within the root canal size range of #25/0.06-#40/0.06, under the conditions of Er:YAG laser combined with 3% NaOCl, 17% EDTA and 0.9% saline, it was concluded that the reduction of E. faecalis and C. albicans in anterior straight root canals could be improved by increasing the root canal instrumentation size to larger than #30/0.06.
Convergence of the Graph Allen-Cahn Scheme
NASA Astrophysics Data System (ADS)
Luo, Xiyang; Bertozzi, Andrea L.
2017-05-01
The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs.
Shang, Yilun
2015-01-01
Large-scale time-evolving networks have been generated by many natural and technological applications, posing challenges for computation and modeling. Thus, it is of theoretical and practical significance to probe mathematical tools tailored for evolving networks. In this paper, on top of the dynamic Estrada index, we study the dynamic Laplacian Estrada index and the dynamic normalized Laplacian Estrada index of evolving graphs. Using linear algebra techniques, we established general upper and lower bounds for these graph-spectrum-based invariants through a couple of intuitive graph-theoretic measures, including the number of vertices or edges. Synthetic random evolving small-world networks are employed to show the relevance of the proposed dynamic Estrada indices. It is found that neither the static snapshot graphs nor the aggregated graph can approximate the evolving graph itself, indicating the fundamental difference between the static and dynamic Estrada indices.
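For a single snapshot, the (normalized) Laplacian Estrada index is just a sum of exponentiated Laplacian eigenvalues. The sketch below computes it for one small-world graph; the dynamic indices studied above aggregate such quantities over the evolving sequence of snapshots.

```python
import numpy as np
import networkx as nx

def laplacian_estrada_index(G, normalized=False):
    """Laplacian Estrada index: sum of exp(mu_i) over the eigenvalues of
    the (optionally normalized) graph Laplacian."""
    L = (nx.normalized_laplacian_matrix(G) if normalized
         else nx.laplacian_matrix(G)).toarray().astype(float)
    return float(np.sum(np.exp(np.linalg.eigvalsh(L))))

G = nx.watts_strogatz_graph(100, 6, 0.1, seed=0)
print(laplacian_estrada_index(G), laplacian_estrada_index(G, normalized=True))
```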
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
Melchert, O; Katzgraber, Helmut G; Novotny, M A
2016-04-01
We estimate the critical thresholds of bond and site percolation on nonplanar, effectively two-dimensional graphs with chimeralike topology. The building blocks of these graphs are complete and symmetric bipartite subgraphs of size 2n, referred to as K_{n,n} graphs. For the numerical simulations we use an efficient union-find-based algorithm and employ a finite-size scaling analysis to obtain the critical properties for both bond and site percolation. We report the respective percolation thresholds for different sizes of the bipartite subgraph and verify that the associated universality class is that of standard two-dimensional percolation. For the canonical chimera graph used in the D-Wave Systems Inc. quantum annealer (n=4), we discuss device failure in terms of network vulnerability, i.e., we determine the critical fraction of qubits and couplers that can be absent due to random failures prior to losing large-scale connectivity throughout the device.
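The union-find machinery is simple enough to sketch. Below, bond percolation is run on a simplified chimera-style graph of K_{4,4} cells (an illustrative stand-in for the exact hardware graph), and the largest-cluster fraction is reported for a few bond probabilities; no finite-size scaling is attempted.

```python
import random
import networkx as nx

def chimera_like(m, n=4):
    """Simplified chimera-style graph: an m-by-m grid of K_{n,n} unit cells.
    'Vertical' qubits (side 0) couple to the same index in the cell below,
    'horizontal' qubits (side 1) to the same index in the cell to the right."""
    G = nx.Graph()
    for r in range(m):
        for c in range(m):
            for i in range(n):
                for j in range(n):
                    G.add_edge((r, c, 0, i), (r, c, 1, j))      # intra-cell K_{n,n}
            for i in range(n):
                if r + 1 < m:
                    G.add_edge((r, c, 0, i), (r + 1, c, 0, i))  # inter-cell couplers
                if c + 1 < m:
                    G.add_edge((r, c, 1, i), (r, c + 1, 1, i))
    return G

def largest_cluster_fraction(G, p, rng):
    """Union-find bond percolation: keep each edge independently with
    probability p and return (size of largest cluster) / (number of nodes)."""
    parent = {v: v for v in G}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]       # path halving
            v = parent[v]
        return v
    for u, v in G.edges():
        if rng.random() < p:
            parent[find(u)] = find(v)
    sizes = {}
    for v in G:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / G.number_of_nodes()

rng = random.Random(0)
G = chimera_like(8)
for p in (0.15, 0.25, 0.35):
    print(p, largest_cluster_fraction(G, p, rng))
```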
2012-01-01
Background To assess the impact of luteal phase support on the expression of estrogen receptor (ER) alpha and progesterone receptors B (PR-B) on the endometrium of oocyte donors undergoing controlled ovarian hyperstimulation (COH). Methods A prospective, randomized study was conducted in women undergoing controlled ovarian hyperstimulation for oocyte donation. Participants were randomized to receive no luteal support, vaginal progesterone alone, or vaginal progesterone plus orally administered 17 Beta estradiol. Endometrial biopsies were obtained at 4 time points in the luteal phase and evaluated by tissue microarray for expression of ER alpha and PR-B. Results One-hundred and eight endometrial tissue samples were obtained from 12 patients. No differences were found in expression of ER alpha and PR-B among all the specimens with the exception of one sample value. Conclusions The administration of progesterone during the luteal phase of COH for oocyte donor cycles, either with or without estrogen, does not significantly affect the endometrial expression of ER alpha and PR. PMID:22360924
Listing triangles in expected linear time on a class of power law graphs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann
Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
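Cohen's bucket construction as described here fits in a few lines. The sketch below lists each triangle exactly once (at its lowest-ranked vertex) and checks the count against networkx on a preferential-attachment graph with a heavy-tailed degree sequence.

```python
from collections import defaultdict
import networkx as nx

def list_triangles(G):
    """Bucket-based triangle listing: each edge goes into the bucket of its
    lower-degree endpoint (ties broken by node id), and each node tests
    every pair of edges in its bucket for the third, closing edge."""
    rank = {v: (G.degree(v), v) for v in G}          # consistent tie-breaking
    bucket = defaultdict(list)
    for u, v in G.edges():
        low, high = (u, v) if rank[u] <= rank[v] else (v, u)
        bucket[low].append(high)
    triangles = []
    for v, others in bucket.items():
        for i in range(len(others)):
            for j in range(i + 1, len(others)):
                if G.has_edge(others[i], others[j]):
                    triangles.append((v, others[i], others[j]))
    return triangles

G = nx.barabasi_albert_graph(2000, 3, seed=0)        # heavy-tailed degrees
print(len(list_triangles(G)), sum(nx.triangles(G).values()) // 3)  # counts agree
```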
Automatic Molecular Design using Evolutionary Techniques
NASA Technical Reports Server (NTRS)
Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)
1998-01-01
Molecular nanotechnology is the precise, three-dimensional control of materials and devices at the atomic scale. An important part of nanotechnology is the design of molecules for specific purposes. This paper describes early results using genetic software techniques to automatically design molecules under the control of a fitness function. The fitness function must be capable of determining which of two arbitrary molecules is better for a specific task. The software begins by generating a population of random molecules. The population is then evolved towards greater fitness by randomly combining parts of the better individuals to create new molecules. These new molecules then replace some of the worst molecules in the population. The unique aspect of our approach is that we apply genetic crossover to molecules represented by graphs, i.e., sets of atoms and the bonds that connect them. We present evidence suggesting that crossover alone, operating on graphs, can evolve any possible molecule given an appropriate fitness function and a population containing both rings and chains. Prior work evolved strings or trees that were subsequently processed to generate molecular graphs. In principle, genetic graph software should be able to evolve other graph representable systems such as circuits, transportation networks, metabolic pathways, computer networks, etc.
Scale-free Graphs for General Aviation Flight Schedules
NASA Technical Reports Server (NTRS)
Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.
2003-01-01
In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. When taken together, these three properties (small-world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. In this manuscript an overview of the salient characteristics of scale-free networks is given. Computational experience will be provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience will be given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
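Of the growth mechanisms alluded to, preferential attachment is the simplest to reproduce. The following sketch grows a Barabási–Albert-style graph and reports the hub statistics that distinguish the heavy tail from a binomial degree distribution; the parameters are illustrative.

```python
import random
from collections import Counter

def preferential_attachment(n, m, seed=0):
    """Barabasi-Albert-style growth: each new node attaches to m distinct
    existing nodes chosen with probability proportional to their current
    degree (uniform sampling from the repeated-endpoint list realizes the
    degree-proportional choice)."""
    rng = random.Random(seed)
    edges = [(m, i) for i in range(m)]            # node m connects to the seed nodes
    repeated = [x for e in edges for x in e]
    for v in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for u in chosen:
            edges.append((v, u))
            repeated += [v, u]
    return edges

edges = preferential_attachment(20000, 3)
deg = Counter(x for e in edges for x in e)
# a few very large hubs emerge; a G(n, p) of the same density would have none
print(max(deg.values()), sum(1 for d in deg.values() if d > 100))
```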
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Laurora, Irene; Wang, Yuan
2016-10-01
Extended-release (ER) naproxen sodium provides pain relief for up to 24 hours with a single dose (660 mg/day). Its pharmacokinetic profile after single and multiple dosing was compared to immediate-release (IR) naproxen sodium in two randomized, open-label, crossover studies, under fasting and fed conditions. Eligible healthy subjects were randomized to ER naproxen sodium 660-mg tablet once daily or IR naproxen sodium 220-mg tablet twice daily (440 mg initially, followed by 220 mg 12 hours later). The primary variables were the pharmacokinetic parameters after single-day administration (day 1) and at steady state after multiple-day administration (day 6). Total exposure was comparable for both treatments under fasting and fed conditions. Under fasting conditions, peak naproxen concentrations were slightly lower with ER naproxen sodium than with IR naproxen sodium but were reached at a similar time. Under fed conditions, mean peak concentrations were comparable but reached after a longer time with ER vs. IR naproxen sodium. ER naproxen sodium was well tolerated, with a similar safety profile to IR naproxen sodium. The total exposure of ER naproxen sodium (660 mg) is comparable to IR naproxen sodium (220 mg) when administered at the maximum over-the-counter (OTC) daily dose of 660 mg on a single day and over multiple days. The rate of absorption is delayed under fed conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Naixing; Qi Ping
1992-06-01
In this paper the absorption spectra of 4f electron transitions of the systems of neodymium and erbium with 8-hydroxyquinoline-5-sulphonic acid and diethylamine have been studied by normal and third-derivative spectrophotometry. Their molar absorptivities are 80 l·mol⁻¹·cm⁻¹ for neodymium and 65 l·mol⁻¹·cm⁻¹ for erbium. Use of the third-derivative spectra eliminates the interference by other rare earths and increases the sensitivity for Nd and Er. The derivative molar absorptivities are 390 l·mol⁻¹·cm⁻¹ for Nd and 367 l·mol⁻¹·cm⁻¹ for Er. The calibration graphs were linear up to 11.8 µg/ml of Nd and 12.3 µg/ml of Er, respectively. The relative standard deviations evaluated from eleven independent determinations of 7.2 µg/ml (for Nd) and 8.3 µg/ml (for Er) are 1.3% and 1.4%, respectively. The detection limits are 0.2 µg/ml for Nd and 0.3 µg/ml for Er. The method has been developed for determining these two elements in mixtures of lanthanides by means of the third-derivative spectra, and the analytical results obtained are satisfactory.
Sudden emergence of q-regular subgraphs in random graphs
NASA Astrophysics Data System (ADS)
Pretti, M.; Weigt, M.
2006-07-01
We investigate the computationally hard problem whether a random graph of finite average vertex degree has an extensively large q-regular subgraph, i.e., a subgraph with all vertices having degree equal to q. We reformulate this problem as a constraint-satisfaction problem, and solve it using the cavity method of statistical physics at zero temperature. For q = 3, we find that the first large q-regular subgraphs appear discontinuously at an average vertex degree c_{3-reg} ≃ 3.3546 and contain immediately about 24% of all vertices in the graph. This transition is extremely close to (but different from) the well-known 3-core percolation point c_{3-core} ≃ 3.3509. For q > 3, the q-regular subgraph percolation threshold is found to coincide with that of the q-core.
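The nearby 3-core transition quoted above is easy to observe numerically, since the k-core is computable in linear time. A sketch sweeping the average degree across the threshold (graph sizes are illustrative, and no attempt is made to resolve the tiny gap between the two thresholds):

```python
import networkx as nx

def three_core_fraction(n, c, seed=0):
    """Fraction of vertices in the 3-core of an Erdos-Renyi graph with
    mean degree c; the 3-core appears discontinuously near c = 3.35."""
    G = nx.fast_gnp_random_graph(n, c / n, seed=seed)
    return nx.k_core(G, 3).number_of_nodes() / n

for c in (3.2, 3.3, 3.4, 3.5):
    print(c, three_core_fraction(100000, c))
```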
Network meta-analysis, electrical networks and graph theory.
Rücker, Gerta
2012-12-01
Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
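The estimator described here is short to write down. In the sketch below, each pairwise comparison contributes a row of a weighted incidence matrix; consistent effects come from the Moore–Penrose pseudoinverse of the weighted Laplacian, and contrast variances from the corresponding effective resistances. The three-treatment numbers are purely illustrative.

```python
import numpy as np

def network_meta_analysis(n_treat, comparisons):
    """Fixed-effect network meta-analysis via the graph Laplacian.
    comparisons: list of (i, j, d, v) with observed effect d of treatment
    i versus j and its variance v. Returns a function giving the
    consistent i-vs-j contrast and its variance (effective resistance)."""
    B = np.zeros((len(comparisons), n_treat))
    d = np.zeros(len(comparisons))
    W = np.zeros(len(comparisons))
    for e, (i, j, eff, var) in enumerate(comparisons):
        B[e, i], B[e, j] = 1.0, -1.0
        d[e], W[e] = eff, 1.0 / var
    L = B.T @ (W[:, None] * B)               # weighted graph Laplacian
    Lp = np.linalg.pinv(L)
    theta = Lp @ (B.T @ (W * d))             # consistent treatment-level effects
    def contrast(i, j):
        return theta[i] - theta[j], Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]
    return contrast

# Toy triangle network on treatments 0, 1, 2; effects and variances are made up.
contrast = network_meta_analysis(3, [(0, 1, 0.5, 0.04), (1, 2, 0.3, 0.09),
                                     (0, 2, 0.9, 0.05)])
print(contrast(0, 2))     # consistent 0-vs-2 effect and its variance
```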
Parallel Algorithms for Switching Edges in Heterogeneous Graphs.
Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav
2017-06-01
An edge switch is an operation on a graph (or network) where two edges are selected randomly and one of their end vertices are swapped with each other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
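The underlying sequential operation, before any parallelization, looks as follows; the rejection rules keep the graph simple, and the degree sequence is preserved by construction. The example graph and the number of switches are illustrative.

```python
import random
import networkx as nx

def edge_switches(G, n_switch, seed=0):
    """Sequential edge switching: pick two edges at random, swap one
    endpoint of each, and accept only if the graph stays simple (no
    self-loops, no parallel edges)."""
    rng = random.Random(seed)
    edges = list(G.edges())
    for _ in range(n_switch):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue                                  # would create a self-loop
        if G.has_edge(a, d) or G.has_edge(c, b):
            continue                                  # would create a parallel edge
        G.remove_edge(a, b); G.remove_edge(c, d)
        G.add_edge(a, d); G.add_edge(c, b)
        edges = list(G.edges())                       # simple but slow bookkeeping
    return G

G = nx.barabasi_albert_graph(1000, 3, seed=1)
before = sorted(d for _, d in G.degree())
edge_switches(G, 5000)
after = sorted(d for _, d in G.degree())
print(before == after)                                # degree sequence unchanged
```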
Evolutionary dynamics on graphs
NASA Astrophysics Data System (ADS)
Lieberman, Erez; Hauert, Christoph; Nowak, Martin A.
2005-01-01
Evolutionary dynamics have been traditionally studied in the context of homogeneous or spatially extended populations. Here we generalize population structure by arranging individuals on a graph. Each vertex represents an individual. The weighted edges denote reproductive rates which govern how often individuals place offspring into adjacent vertices. The homogeneous population, described by the Moran process, is the special case of a fully connected graph with evenly weighted edges. Spatial structures are described by graphs where vertices are connected with their nearest neighbours. We also explore evolution on random and scale-free networks. We determine the fixation probability of mutants, and characterize those graphs for which fixation behaviour is identical to that of a homogeneous population. Furthermore, some graphs act as suppressors and others as amplifiers of selection. It is even possible to find graphs that guarantee the fixation of any advantageous mutant. We also study frequency-dependent selection and show that the outcome of evolutionary games can depend entirely on the structure of the underlying graph. Evolutionary graph theory has many fascinating applications ranging from ecology to multi-cellular organization and economics.
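Fixation probabilities on a given graph can be estimated by direct simulation of the birth–death updating described here. The sketch below uses a cycle, which is regular and therefore should reproduce the homogeneous (Moran) fixation probability; fitness, size and trial counts are illustrative.

```python
import random
import networkx as nx

def fixation_probability(G, r=1.5, trials=2000, seed=0):
    """Moran birth-death process on a graph: a single mutant of relative
    fitness r starts at a random vertex; each step an individual is chosen
    proportionally to fitness to reproduce, and its offspring replaces a
    uniformly random neighbour. Returns the estimated fixation probability."""
    rng = random.Random(seed)
    nodes = list(G)
    fixed = 0
    for _ in range(trials):
        mutant = {rng.choice(nodes)}
        while 0 < len(mutant) < len(nodes):
            weights = [r if v in mutant else 1.0 for v in nodes]
            parent = rng.choices(nodes, weights=weights, k=1)[0]
            child = rng.choice(list(G[parent]))
            if parent in mutant:
                mutant.add(child)
            else:
                mutant.discard(child)
        fixed += len(mutant) == len(nodes)
    return fixed / trials

# A cycle is regular, so the estimate should be near the Moran value
# (1 - 1/r) / (1 - 1/r**N) for population size N.
G = nx.cycle_graph(20)
print(fixation_probability(G), (1 - 1 / 1.5) / (1 - 1.5 ** -20))
```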
Global Binary Optimization on Graphs for Classification of High Dimensional Data
2014-09-01
Buades et al. in [10] introduce a new non-local means algorithm for image denoising and compare it to some of the best methods. In [28], Grady describes a random walk algorithm for image segmentation using the solution to a Dirichlet problem. Elmoataz et al. present generalizations of the graph Laplacian [19] for image denoising and manifold smoothing. Couprie et al. in [16] propose a parameterized graph-based energy function that unifies...
2013-10-15
statistic,” in Artificial Intelligence and Statistics (AISTATS), 2013. [6] ——, “Detecting activity in graphs via the Graph Ellipsoid Scan Statistic...” in Artificial Intelligence and Statistics (AISTATS), 2013. [8] ——, “Near-optimal anomaly detection in graphs using Lovász Extended Scan Statistic,” in Neural...networks,” in Artificial Intelligence and Statistics (AISTATS), 2010. [11] D. Aldous, “The random walk construction of uniform spanning trees and
A nonlinear q-voter model with deadlocks on the Watts-Strogatz graph
NASA Astrophysics Data System (ADS)
Sznajd-Weron, Katarzyna; Michal Suszczynski, Karol
2014-07-01
We study the nonlinear q-voter model with deadlocks on a Watts-Strogatz graph. Using Monte Carlo simulations, we obtain the so-called exit probability and exit time. We determine how network properties, such as randomness or density of links, influence the exit properties of the model.
Lawton, Kathy; Kasari, Connie
2012-08-01
The vast majority of children with an autism spectrum disorder (ASD) attend public preschools at some point in their childhood. Community preschool practices often are not evidence based, and almost none target the prelinguistic core deficits of ASD. This study investigated the effectiveness of public preschool teachers implementing a validated intervention (the Joint Attention and Symbolic Play/Engagement and Regulation intervention; JASP/ER) on a core deficit of autism, initiating joint attention. Sixteen dyads (preschoolers with ASD and the public school teachers who worked in the child's classroom) were randomly assigned to the 6-week JASP/ER intervention or a control group. At the end of the intervention, JASP/ER teachers used more JASP/ER strategies than the control teachers, and JASP/ER preschoolers used more joint attention in their classroom than control children. Additionally, JASP/ER children spent more time in supported engagement and less time in object engagement than control preschoolers on a taped play interaction. Findings suggest that teachers were able to improve a core deficit of children with ASD in a public preschool context. © 2012 American Psychological Association
Robustness analysis of interdependent networks under multiple-attacking strategies
NASA Astrophysics Data System (ADS)
Gao, Yan-Li; Chen, Shi-Ming; Nie, Sen; Ma, Fei; Guan, Jun-Jie
2018-04-01
The robustness of complex networks under attacks largely depends on the structure of a network and the nature of the attacks. Previous research on interdependent networks has focused on two types of initial attack: random attack and degree-based targeted attack. In this paper, a deliberate attack function is proposed, where six kinds of deliberate attacking strategies can be derived by adjusting the tunable parameters. Moreover, the robustness of four types of interdependent networks (BA-BA, ER-ER, BA-ER and ER-BA) with different coupling modes (random, positive and negative correlation) is evaluated under different attacking strategies. We find that the positive coupling mode makes the vulnerability of the interdependent network depend entirely on the most vulnerable sub-network under deliberate attacks, whereas the random and negative coupling modes make the vulnerability depend mainly on the sub-network under attack. The robustness of the interdependent network is enhanced as the degree-degree correlation coefficient varies from positive to negative. Therefore, the negative coupling mode is preferable to the others, as it can substantially improve the robustness of the ER-ER network and the ER-BA network. In terms of attacking strategies on interdependent networks, node degree information is more valuable than betweenness. In addition, we found a more efficient attacking strategy for each coupled interdependent network and proposed the corresponding protection strategy for suppressing cascading failure. Our results can be very useful for the safety design and protection of interdependent networks.
Fast Inbound Top-K Query for Random Walk with Restart.
Zhang, Chao; Jiang, Shan; Chen, Yucheng; Sun, Yidan; Han, Jiawei
2015-09-01
Random walk with restart (RWR) is widely recognized as one of the most important node proximity measures for graphs, as it captures the holistic graph structure and is robust to noise in the graph. In this paper, we study a novel query based on the RWR measure, called the inbound top-k (Ink) query. Given a query node q and a number k, the Ink query aims at retrieving k nodes in the graph that have the largest weighted RWR scores to q. Ink queries can be highly useful for various applications such as traffic scheduling, disease treatment, and targeted advertising. Nevertheless, none of the existing RWR computation techniques can accurately and efficiently process the Ink query in large graphs. We propose two algorithms, namely Squeeze and Ripple, both of which can accurately answer the Ink query in a fast and incremental manner. To identify the top-k nodes, Squeeze iteratively performs matrix-vector multiplication and estimates the lower and upper bounds for all the nodes in the graph. Ripple employs a more aggressive strategy by only estimating the RWR scores for the nodes falling in the vicinity of q; the nodes outside the vicinity do not need to be evaluated because their RWR scores are propagated from the boundary of the vicinity and thus upper bounded. Ripple incrementally expands the vicinity until the top-k result set can be obtained. Our extensive experiments on real-life graph data sets show that Ink queries can retrieve interesting results, and the proposed algorithms are orders of magnitude faster than the state-of-the-art method.
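As a point of reference for what an Ink query returns, the scores themselves can be computed by brute force with power iteration (no pruning, so nothing like Squeeze or Ripple, and plain rather than weighted RWR); the restart probability and iteration count are illustrative.

```python
import numpy as np
import networkx as nx

def inbound_topk(G, q, k, c=0.15, iters=100):
    """Naive baseline for the Ink query: compute, for every source node s,
    the RWR probability of being at q, then return the k nodes with the
    largest scores."""
    nodes = list(G)
    idx = {v: i for i, v in enumerate(nodes)}
    A = nx.to_numpy_array(G, nodelist=nodes)
    P = A / A.sum(axis=0, keepdims=True)           # column-stochastic transitions
    n = len(nodes)
    R = np.eye(n)                                   # column s = RWR vector for source s
    E = np.eye(n)
    for _ in range(iters):
        R = (1 - c) * P @ R + c * E
    scores = R[idx[q], :]                           # probability of being at q, per source
    order = np.argsort(-scores)
    return [(nodes[i], scores[i]) for i in order if nodes[i] != q][:k]

G = nx.karate_club_graph()
print(inbound_topk(G, q=0, k=5))
```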
Pre-incident Analysis using Multigraphs and Faceted Ontologies
2013-08-01
ontology for beverages, part of which is shown in the form of an entity-relationship (ER) graph in Figure 4. The entities Beer, Wine, etc. have is a...another from Beer to Grains. The terminology is suggestive: the is a type of link has already been defined (informally). The made from link...expressions derived from natural language such as Beer, is a, Grains and made from. Labels alone are insufficient for a computer system for ontology and
The Effective Resistance of the N-Cycle Graph with Four Nearest Neighbors
NASA Astrophysics Data System (ADS)
Chair, Noureddine
2014-02-01
The exact expression for the effective resistance between any two vertices of the N-cycle graph with four nearest neighbors is given. It turns out that this expression is written in terms of the effective resistance of the N-cycle graph, the square of the Fibonacci numbers, and the bisected Fibonacci numbers. As a consequence, closed-form formulas for the total effective resistance, the first passage time, and the mean first passage time of the simple random walk on the N-cycle graph with four nearest neighbors are obtained. Finally, a closed-form formula for the effective resistance of this graph with all first neighbors removed is obtained.
Grzech-Leśniak, K; Sculean, A; Gašpirc, Boris
2018-05-15
The objective of this study was to evaluate the microbiological and clinical outcomes following nonsurgical treatment by either scaling and root planing, a combination of Nd:YAG and Er:YAG lasers, or Er:YAG laser treatment alone. The study involved 60 patients with generalized chronic periodontitis, randomly assigned to one of three treatment groups of 20 patients. The first group received scaling and root planing by hand instruments (SRP group), the second group received Er:YAG laser treatment alone (Er group), and the third group received combined treatment with Nd:YAG and Er:YAG lasers (NdErNd group). Microbiological samples, taken from the periodontal pockets at baseline and 6 months after treatment, were assessed with PET Plus tests. Combined NdErNd laser treatment (93.0%), followed closely by Er:YAG laser treatment (84.9%), resulted in the highest reduction of total bacterial counts after 6 months, whereas SRP (46.2%) failed to reduce Treponema denticola, Peptostreptococcus micros, and Capnocytophaga gingivalis. Full-mouth plaque and bleeding on probing scores dropped after 6 months and were lowest in both laser groups. The NdErNd combination resulted in higher probing pocket depth reduction and gain of clinical attachment level (1.99 ± 0.23 mm) compared to SRP (0.86 ± 0.13 mm) or Er:YAG laser alone (0.93 ± 0.20 mm) in 4-6 mm-deep pockets. Within its limits, the present study supports the combination of Nd:YAG and Er:YAG lasers to further improve the microbiological and clinical outcomes of nonsurgical periodontal therapy in patients with moderate to severe chronic periodontitis.
Consensus, Polarization and Clustering of Opinions in Social Networks
2013-06-01
values of τ, and consensus at larger values. Fig. 6 compares the phase transitions for three different network configurations: RGG, Erdos-Renyi graph and... The Erdos-Renyi graph [25] is generated uniformly at random from the collection of all graphs which have n = 50 nodes and M = 120 edges. The small-world... [Figure residue omitted; Fig. 6 plots normalized algebraic connectivity against the threshold τ for RGG, Erdos-Renyi, and small-world graphs.]
Quantum walks of two interacting particles on percolation graphs
NASA Astrophysics Data System (ADS)
Siloi, Ilaria; Benedetti, Claudia; Piccinini, Enrico; Paris, Matteo G. A.; Bordone, Paolo
2017-10-01
We address the dynamics of two indistinguishable interacting particles moving on a dynamical percolation graph, i.e., a graph where the edges are independent random telegraph processes whose values jump between 0 and 1, thus mimicking percolation. The interplay between the particle interaction strength, the initial state and the percolation rate determines different dynamical regimes for the walkers. We show that, whenever the walkers are initially localised within the interaction range, fast noise enhances the particle spread compared to the noiseless case.
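The two-particle quantum dynamics itself is not reproduced here; as a purely classical illustration of edges modelled as random telegraph processes, the sketch below (flip probability mu, ring size and step count are all illustrative assumptions) moves a single walker on a ring whose edges independently toggle between open and closed at each step.

```python
# Toy classical illustration (not the paper's two-particle quantum walk):
# a single walker on a ring whose edges are independent random telegraph
# processes, each flipping between open (1) and closed (0) with probability
# mu per time step; the walker moves only across currently open edges.
import random

def dynamical_percolation_walk(n_nodes=50, mu=0.1, steps=1000, seed=0):
    rng = random.Random(seed)
    # Edge i connects node i to node (i + 1) % n_nodes.
    edges_open = [rng.random() < 0.5 for _ in range(n_nodes)]
    pos, visited = 0, {0}
    for _ in range(steps):
        # Each edge flips its state independently with probability mu.
        for i in range(n_nodes):
            if rng.random() < mu:
                edges_open[i] = not edges_open[i]
        # Attempt a move left or right; it succeeds only if that edge is open.
        step = rng.choice([-1, +1])
        edge = pos if step == +1 else (pos - 1) % n_nodes
        if edges_open[edge]:
            pos = (pos + step) % n_nodes
        visited.add(pos)
    return pos, len(visited)

print(dynamical_percolation_walk())
```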
Experimental Study of Quantum Graphs with Microwave Networks
NASA Astrophysics Data System (ADS)
Fu, Ziyuan; Koch, Trystan; Antonsen, Thomas; Ott, Edward; Anlage, Steven; Wave Chaos Team
An experimental setup consisting of microwave networks is used to simulate quantum graphs. The networks are constructed from coaxial cables connected by T junctions. The networks are built for operation both at room temperature and superconducting versions that operate at cryogenic temperatures. In the experiments, a phase shifter is connected to one of the network bonds to generate an ensemble of quantum graphs by varying the phase delay. The eigenvalue spectrum is found from S-parameter measurements on one-port graphs. With the experimental data, the nearest-neighbor spacing statistics and the impedance statistics of the graphs are examined. It is also demonstrated that time-reversal invariance for microwave propagation in the graphs can be broken without increasing dissipation significantly by making nodes with circulators. Random matrix theory (RMT) successfully describes universal statistical properties of the system. We acknowledge support under contract AFOSR COE Grant FA9550-15-1-0171.
Localization in random bipartite graphs: Numerical and empirical study
NASA Astrophysics Data System (ADS)
Slanina, František
2017-05-01
We investigate adjacency matrices of bipartite graphs with a power-law degree distribution. Motivation for this study is twofold: first, vibrational states in granular matter and jammed sphere packings; second, graphs encoding social interaction, especially electronic commerce. We establish the position of the mobility edge and show that it strongly depends on the power in the degree distribution and on the ratio of the sizes of the two parts of the bipartite graph. At the jamming threshold, where the two parts have the same size, localization vanishes. We found that the multifractal spectrum is nontrivial in the delocalized phase, but still near the mobility edge. We also study an empirical bipartite graph, namely, the Amazon reviewer-item network. We found that in this specific graph the mobility edge disappears, and we draw a conclusion from this fact regarding earlier empirical studies of the Amazon network.
How mutation affects evolutionary games on graphs
Allen, Benjamin; Traulsen, Arne; Tarnita, Corina E.; Nowak, Martin A.
2011-01-01
Evolutionary dynamics are affected by population structure, mutation rates and update rules. Spatial or network structure facilitates the clustering of strategies, which represents a mechanism for the evolution of cooperation. Mutation dilutes this effect. Here we analyze how mutation influences evolutionary clustering on graphs. We introduce new mathematical methods to evolutionary game theory, specifically the analysis of coalescing random walks via generating functions. These techniques allow us to derive exact identity-by-descent (IBD) probabilities, which characterize spatial assortment on lattices and Cayley trees. From these IBD probabilities we obtain exact conditions for the evolution of cooperation and other game strategies, showing the dual effects of graph topology and mutation rate. High mutation rates diminish the clustering of cooperators, hindering their evolutionary success. Our model can represent either genetic evolution with mutation, or social imitation processes with random strategy exploration. PMID:21473871
Small, J R
1993-01-01
This paper is a study of the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function do not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Principi, Mariabeatrice; Di Leo, Alfredo; Pricci, Maria; Scavo, Maria Principia; Guido, Raffaella; Tanzi, Sabina; Piscitelli, Domenico; Pisani, Antonio; Ierardi, Enzo; Comelli, Maria Cristina; Barone, Michele
2013-01-01
AIM: To assess the safety and effect of the supplementation of a patented blend of dietary phytoestrogens and insoluble fibers on estrogen receptor (ER)-β and biological parameters in sporadic colonic adenomas. METHODS: A randomized, double-blind placebo-controlled trial was performed. Patients scheduled to undergo surveillance colonoscopy for previous sporadic colonic adenomas were identified, and 60 eligible patients were randomized to placebo or active dietary intervention (ADI) twice a day, for 60 d before surveillance colonoscopy. ADI was a mixture of 175 mg milk thistle extract, 20 mg secoisolariciresinol and 750 mg oat fiber extract. ER-β and ER-α expression, apoptosis and proliferation (Ki-67 LI) were assessed in colon samples. RESULTS: No adverse event related to ADI was recorded. ADI administration showed a significant increase in ER-β protein (0.822 ± 0.08 vs 0.768 ± 0.10, P = 0.04) and a general trend to an increase in ER-β LI (39.222 ± 2.69 vs 37.708 ± 5.31, P = 0.06), ER-β/ER-α LI ratio (6.564 ± 10.04 vs 2.437 ± 1.53, P = 0.06), terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (35.592 ± 14.97 vs 31.541 ± 11.54, P = 0.07) and Ki-67 (53.923 ± 20.91 vs 44.833 ± 10.38, P = 0.07) approximating statistical significance. A significant increase of ER-β protein (0.805 ± 0.13 vs 0.773 ± 0.13, P = 0.04), mRNA (2.278 ± 1.19 vs 1.105 ± 1.07, P < 0.02) and LI (47.533 ± 15.47 vs 34.875 ± 16.67, P < 0.05) and a decrease of ER-α protein (0.423 ± 0.06 vs 0.532 ± 0.11, P < 0.02) as well as a trend to an increase of ER-β/ER-α protein in the ADI vs placebo group were observed in patients without polyps (1.734 ± 0.20 vs 1.571 ± 0.42, P = 0.07). CONCLUSION: The role of ER-β in the control of apoptosis, and its amenability to dietary intervention, are supported in our study. PMID:23885143
ERIC Educational Resources Information Center
Emslie, Graham J.; Findling, Robert L.; Yeung, Paul P.; Kunz, Nadia R.; Li, Yunfeng
2007-01-01
Objective: The safety, efficacy, and tolerability of venlafaxine extended release (ER) in subjects ages 7 to 17 years with major depressive disorder were evaluated in two multicenter, randomized, double-blind, placebo-controlled trials conducted between October 1997 and August 2001. Method: Participants received venlafaxine ER (flexible dose,…
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
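The text-finding model and learned grouping cues are not reproduced here; the sketch below is only a toy factor graph over three binary variables, showing how a single factor can couple more than two variables at once (all factors and weights are invented for illustration), with brute-force enumeration giving the normalized distribution.

```python
# Minimal toy factor graph (illustrative, unrelated to the paper's model):
# factors are functions over arbitrary subsets of binary variables, so one
# factor can couple three variables at once, which pairwise MRFs cannot
# express directly. Brute force gives the normalized distribution.
import itertools
import math

variables = ["a", "b", "c"]

factors = [
    (("a", "b"), lambda a, b: math.exp(1.0 if a == b else 0.0)),                   # pairwise factor
    (("a", "b", "c"), lambda a, b, c: math.exp(2.0 if a + b + c == 1 else 0.0)),   # third-order factor
]

def unnormalized(assignment):
    score = 1.0
    for scope, f in factors:
        score *= f(*(assignment[v] for v in scope))
    return score

assignments = [dict(zip(variables, vals))
               for vals in itertools.product([0, 1], repeat=len(variables))]
Z = sum(unnormalized(a) for a in assignments)           # partition function
for a in assignments:
    print(a, round(unnormalized(a) / Z, 3))
```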
NASA Astrophysics Data System (ADS)
Sun, Min; Chen, Xinjian; Zhang, Zhiqiang; Ma, Chiyuan
2017-02-01
Accurate volume measurements of pituitary adenoma are important to the diagnosis and treatment for this kind of sellar tumor. The pituitary adenomas have different pathological representations and various shapes. Particularly, in the case of infiltrating to surrounding soft tissues, they present similar intensities and indistinct boundary in T1-weighted (T1W) magnetic resonance (MR) images. Then the extraction of pituitary adenoma from MR images is still a challenging task. In this paper, we propose an interactive method to segment the pituitary adenoma from brain MR data, by combining a graph-cuts-based active contour model (GCACM) and the random walk algorithm. By using the GCACM method, the segmentation task is formulated as an energy minimization problem by a hybrid active contour model (ACM), and then the problem is solved by the graph cuts method. The region-based term in the hybrid ACM considers the local image intensities as described by Gaussian distributions with different means and variances, expressed as maximum a posteriori probability (MAP). Random walk is utilized as an initialization tool to provide an initial surface for the GCACM. The proposed method is evaluated on the three-dimensional (3-D) T1W MR data of 23 patients and compared with the standard graph cuts method, the random walk method, the hybrid ACM method, a GCACM method which considers global mean intensity in region forces, and a competitive region-growing-based GrowCut method provided in 3D Slicer. Based on the experimental results, the proposed method is superior to those methods.
Zhou, Jian; Wang, Lusheng; Wang, Weidong; Zhou, Qingfeng
2017-01-01
In future scenarios of heterogeneous and dense networks, randomly-deployed small star networks (SSNs) become a key paradigm, whose system performance is limited by inter-SSN interference and requires an efficient resource allocation scheme for interference coordination. Traditional resource allocation schemes do not specifically focus on this paradigm and are usually too time consuming in dense networks. In this article, a very efficient graph-based scheme is proposed, which applies the maximal independent set (MIS) concept from graph theory to divide SSNs into almost interference-free groups. We first construct an interference graph for the system based on a derived distance threshold indicating, for any pair of SSNs, whether there is intolerable inter-SSN interference. Then, SSNs are divided into MISs, and the same resource can be repetitively used by all the SSNs in each MIS. Empirical parameters and equations are set in the scheme to guarantee high performance. Finally, extensive scenarios, both dense and nondense, are randomly generated and simulated to demonstrate the performance of our scheme, indicating that it outperforms the classical max K-cut-based scheme in terms of system capacity, utility and especially time cost. Its achieved system capacity, utility and fairness can be close to the near-optimal strategy obtained by a time-consuming simulated annealing search. PMID:29113109
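The scheme's empirical parameters and derived threshold are not reproduced here; the sketch below is a minimal illustration of the same two steps, an interference graph built from an assumed distance threshold followed by greedy peeling of maximal independent sets, with all numbers (area size, node count, threshold) chosen only for demonstration.

```python
# Illustrative sketch (all parameters are assumptions, not the paper's): build
# an interference graph from a distance threshold, then greedily peel off
# maximal independent sets so that SSNs inside one group can reuse a resource.
import math
import random

def interference_graph(positions, d_thresh):
    n = len(positions)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) < d_thresh:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_mis_groups(adj):
    remaining = set(adj)
    groups = []
    while remaining:
        group, blocked = set(), set()
        # Lowest-degree-first greedy maximal independent set on remaining nodes.
        for v in sorted(remaining, key=lambda v: len(adj[v] & remaining)):
            if v not in blocked:
                group.add(v)
                blocked |= adj[v] | {v}
        groups.append(group)
        remaining -= group
    return groups  # one resource block per group

random.seed(0)
pos = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
groups = greedy_mis_groups(interference_graph(pos, d_thresh=25.0))
print(len(groups), "resource groups:", groups)
```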
An efficient randomized algorithm for contact-based NMR backbone resonance assignment.
Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal
2006-01-15
Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to uncover large consistent sets of interactions. Our algorithm has been implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α -mixing (for local statistics) and exponential α -mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
Robust-yet-fragile nature of interdependent networks
NASA Astrophysics Data System (ADS)
Tan, Fei; Xia, Yongxiang; Wei, Zhi
2015-05-01
Interdependent networks have been shown to be extremely vulnerable based on the percolation model. Parshani et al. [Europhys. Lett. 92, 68002 (2010), 10.1209/0295-5075/92/68002] further indicated that the more intersimilar networks are, the more robust they are to random failures. When traffic load is considered, how do the coupling patterns impact cascading failures in interdependent networks? This question has been largely unexplored until now. In this paper, we address this question by investigating the robustness of interdependent Erdös-Rényi random graphs and Barabási-Albert scale-free networks under either random failures or intentional attacks. It is found that interdependent Erdös-Rényi random graphs are robust yet fragile under either random failures or intentional attacks. Interdependent Barabási-Albert scale-free networks, however, are only robust yet fragile under random failures but fragile under intentional attacks. We further analyze the interdependent communication network and power grid and achieve similar results. These results advance our understanding of how interdependency shapes network robustness.
A Statistical Method to Distinguish Functional Brain Networks
Fujita, André; Vidal, Maciel C.; Takahashi, Daniel Y.
2017-01-01
One major problem in neuroscience is the comparison of functional brain networks of different populations, e.g., distinguishing the networks of controls and patients. Traditional algorithms are based on searching for isomorphism between networks, assuming that they are deterministic. However, biological networks present randomness that cannot be well modeled by those algorithms. For instance, functional brain networks of distinct subjects of the same population can be different due to individual characteristics. Moreover, networks of subjects from different populations can be generated through the same stochastic process. Thus, a better hypothesis is that networks are generated by random processes. In this case, subjects from the same group are samples from the same random process, whereas subjects from different groups are generated by distinct processes. Using this idea, we developed a statistical test called ANOGVA to test whether two or more populations of graphs are generated by the same random graph model. Our simulation results demonstrate that we can precisely control the rate of false positives and that the test is powerful enough to discriminate random graphs generated by different models and parameters. The method was also shown to be robust for unbalanced data. As an example, we applied ANOGVA to an fMRI dataset composed of controls and patients diagnosed with autism or Asperger. ANOGVA identified the cerebellar functional sub-network as statistically different between controls and autism (p < 0.001). PMID:28261045
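ANOGVA itself is not reproduced below; the sketch is only a toy permutation test conveying the same idea of comparing populations of graphs through a summary statistic (average clustering here, with networkx assumed and all group sizes and model parameters illustrative).

```python
# Toy permutation test (not the ANOGVA statistic): do two populations of
# graphs look like samples from the same random graph model, judged by a
# simple per-graph summary statistic? Requires networkx.
import random
import networkx as nx

def permutation_test(stats_a, stats_b, n_perm=2000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(stats_a) / len(stats_a) - sum(stats_b) / len(stats_b))
    pooled = list(stats_a) + list(stats_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[: len(stats_a)], pooled[len(stats_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)        # permutation p-value

# Two populations of random graphs from different models (illustrative sizes).
controls = [nx.erdos_renyi_graph(60, 0.10, seed=i) for i in range(20)]
patients = [nx.watts_strogatz_graph(60, 6, 0.2, seed=i) for i in range(20)]
# Summarize each graph by a single statistic, here average clustering.
p = permutation_test([nx.average_clustering(g) for g in controls],
                     [nx.average_clustering(g) for g in patients])
print("permutation p-value:", p)
```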
A chemical definition of the boundary of the Antarctic ozone hole
NASA Technical Reports Server (NTRS)
Proffitt, M. H.; Powell, J. A.; Tuck, A. F.; Fahey, D. W.; Kelly, K. K.; Krueger, A. J.; Schoeberl, M. R.; Gary, B. L.; Margitan, J. J.; Chan, K. R.
1989-01-01
A program designed to study the Antarctic ozone hole using ER-2 high-altitude and DC-8 aircraft was conducted out of Punta Arenas, Chile during August 17-September 22, 1987. Graphs are presented of ozone and chlorine monoxide when crossing the boundary of the chemically perturbed region on August 23 and on September 21. Interpretations of ClO, H2O, and N2O measurements are presented, indicating ongoing diabatic cooling and advective poleward transport across the boundary.
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
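The full LERaG prior and soft decoding pipeline are beyond a short example; the sketch below (numpy assumed, with a tiny hand-made similarity graph standing in for a patch graph) only shows how a random-walk graph Laplacian is formed and how its left eigenvectors, ordered by graph frequency, are obtained.

```python
# Minimal numpy sketch (illustrative, not the paper's full LERaG prior): build
# the random-walk graph Laplacian L_rw = I - D^{-1} W for a small weighted
# similarity graph and obtain its left eigenvectors; for an undirected graph
# the eigenvalues are real, and low ones correspond to smooth graph signals.
import numpy as np

def random_walk_laplacian(W):
    d = W.sum(axis=1)
    d[d == 0] = 1.0                      # guard against isolated nodes
    return np.eye(W.shape[0]) - W / d[:, None]

def left_eigenvectors(L):
    # Left eigenvectors of L are the (right) eigenvectors of L^T.
    eigvals, eigvecs = np.linalg.eig(L.T)
    order = np.argsort(eigvals.real)     # ascending graph frequency
    return eigvals.real[order], eigvecs.real[:, order]

# Tiny 4-node weighted similarity graph (weights are assumptions).
W = np.array([[0, 1, 0.2, 0],
              [1, 0, 0.5, 0],
              [0.2, 0.5, 0, 1],
              [0, 0, 1, 0]], dtype=float)
freqs, U = left_eigenvectors(random_walk_laplacian(W))
print("graph frequencies:", np.round(freqs, 3))
print("lowest-frequency left eigenvector:", np.round(U[:, 0], 3))
```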
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs and improves dramatically on the time and space requirements of the classical single-population algorithm. Using the underlying random graph model, we also derive closed forms of the expected values of the ARG characteristics, i.e., the height of the graph, the number of recombinations, the number of mutations and the population diversity, in terms of its defining parameters. This is crucial in helping the user specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate that SimRA produces the ARG in compact form without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at: https://github.com/ComputationalGenomics/SimRA Contact: parida@us.ibm.com Supplementary data are available at Bioinformatics online.
The ergodicity landscape of quantum theories
NASA Astrophysics Data System (ADS)
Ho, Wen Wei; Radičević, Đorđe
2018-02-01
This paper is a physicist’s review of the major conceptual issues concerning the problem of spectral universality in quantum systems. Here, we present a unified, graph-based view of all archetypical models of such universality (billiards, particles in random media, interacting spin or fermion systems). We find phenomenological relations between the onset of ergodicity (Gaussian-random delocalization of eigenstates) and the structure of the appropriate graphs, and we construct a heuristic picture of summing trajectories on graphs that describes why a generic interacting system should be ergodic. We also provide an operator-based discussion of quantum chaos and propose criteria to distinguish bases that can usefully diagnose ergodicity. The result of this analysis is a rough but systematic outline of how ergodicity changes across the space of all theories with a given Hilbert space dimension. As a particular example, we study the SYK model and report on the transition from maximal to partial ergodicity as the disorder strength is decreased.
NASA Astrophysics Data System (ADS)
Shi, Xizhi; He, Chaoyu; Pickard, Chris J.; Tang, Chao; Zhong, Jianxin
2018-01-01
A method is introduced to stochastically generate crystal structures with defined structural characteristics. Reasonable quotient graphs for symmetric crystals are constructed using a random strategy combined with space group and graph theory. Our algorithm enables the search for large-size and complex crystal structures with a specified connectivity, such as threefold sp2 carbons, fourfold sp3 carbons, as well as mixed sp2-sp3 carbons. To demonstrate the method, we randomly construct initial structures adhering to space groups from 75 to 230 and a range of lattice constants, and we identify 281 new sp3 carbon crystals. First-principles optimization of these structures shows that most of them are dynamically and mechanically stable and are energetically comparable to those previously proposed. Some of the new structures can be considered as candidates to explain the experimental cold compression of graphite.
Social inertia and diversity in collaboration networks
NASA Astrophysics Data System (ADS)
Ramasco, J. J.
2007-04-01
Random graphs are useful tools to study social interactions. In particular, the use of weighted random graphs makes it possible to handle a high level of information concerning which agents interact and to what degree the interactions take place. Taking advantage of this representation, we recently defined a magnitude, the Social Inertia, that measures the eagerness of agents to keep ties with previous partners. To study this magnitude, we used collaboration networks, which are especially appropriate for obtaining valid statistical results due to the large size of publicly available databases. In this work, I study the Social Inertia in two of these empirical networks, the IMDB movie database and condmat. More specifically, I focus on how the Inertia relates to other properties of the graphs, and show that the Inertia provides information on how the weights of neighboring edges correlate. A social interpretation of this effect is also offered.
ERIC Educational Resources Information Center
Smith, David Arthur
2010-01-01
Much recent work in natural language processing treats linguistic analysis as an inference problem over graphs. This development opens up useful connections between machine learning, graph theory, and linguistics. The first part of this dissertation formulates syntactic dependency parsing as a dynamic Markov random field with the novel…
ERIC Educational Resources Information Center
Kyer, Ben L.; Maggs, Gary E.
1995-01-01
Utilizes two-dimensional price and output graphs to demonstrate the way that the price-level elasticity of aggregate demand affects alternative monetary policy rules designed to cope with random aggregate supply shocks. Includes graphs illustrating price-level, real Gross Domestic Product (GDP), nominal GDP, and nominal money supply targeting.…
Xu, Long; Zhao, Hua; Xu, Caixia; Zhang, Siqi; Zou, Yingyin K; Zhang, Jingwen
2014-02-01
A broadband optical amplification was observed and investigated in Er3+-doped electrostrictive ceramics of lanthanum-modified lead zirconate titanate under a corona atmosphere. Ceramic structure changes caused by UV light and electric field, as well as random walks originating from the diffusive process in intrinsically disordered materials, may all contribute to the optical amplification and the associated energy storage. A discussion based on optical energy storage and diffusive equations is given to explain the findings. The experiments performed made it possible to study random walks and optical amplification in transparent ceramic materials.
Giannelli, Marco; Formigli, Lucia; Bani, Daniele
2014-04-01
The use of lasers in periodontology is a matter of debate, mainly because of the lack of consensual therapeutic protocols. In this randomized, split-mouth trial, the clinical efficacy of two different photoablative dental lasers, erbium:yttrium-aluminum-garnet (Er:YAG) and diode, for the treatment of gingival hyperpigmentation is compared. Twenty-one patients requiring treatment for mild-to-severe gingival hyperpigmentation were enrolled. Maxillary or mandibular left or right quadrants were randomly subjected to photoablative deepithelialization with either Er:YAG or diode laser. Masked clinical assessments of each laser quadrant were made at admission and days 7, 30, and 180 postoperatively by an independent observer. Histologic examination was performed before and soon after treatment and 6 months after irradiation. Patients also compiled a subjective evaluation questionnaire. Both diode and Er:YAG lasers gave excellent results in gingival hyperpigmentation. However, Er:YAG laser induced deeper gingival tissue injury than diode laser, as judged by bleeding at surgery, delayed healing, and histopathologic analysis. The use of diode laser showed additional advantages compared to Er:YAG in terms of less postoperative discomfort and pain. This study highlights the efficacy of diode laser for photoablative deepithelialization of hyperpigmented gingiva. It is suggested that this laser can represent an effective and safe therapeutic option for gingival photoablation.
Broccoletti, Roberto; Cafaro, Adriana; Gambino, Alessio; Romagnoli, Ercole; Arduino, Paolo Giacomo
2015-12-01
The aim of this prospective study was to estimate the effects of erbium-substituted yttrium aluminium garnet (Er:YAG) laser surgery, compared with the traditional scalpel, on the early postoperative sequelae of nondysplastic oral lesion removal. There is limited evidence that laser surgery exhibits advantages over the scalpel in oral mucosal surgery. The investigators studied a cohort of 344 patients; 394 lesions were randomized and treated. Statistically evaluated outcome variables were: age, gender, the site and size of the investigated lesions, visual analogue score (VAS) of pain, the Oral Health Impact Profile questionnaire (OHIP-14) and the Quality of Life test (QOL), and the number of analgesics taken in the first week after surgery. Significant differences were found for surgical time, VAS, and the QOL and OHIP-14 questionnaires; regarding those data, the Er:YAG laser appeared to be faster and less painful than the traditional scalpel (p < 0.05). For larger lesions, patients took significantly more painkillers if they had undergone traditional surgery. Considering the site of the treated lesions, the Er:YAG laser was less painful, especially in the gingiva and palate (p < 0.05). This is the first randomized controlled surgical trial reported for the management of nondysplastic oral lesions with the use of an Er:YAG laser. With many limitations, the present report identifies a significant difference in the immediate postoperative surgical period between the two treatments, meaning that the Er:YAG laser seemed to be less painful, and better accepted by patients, than the traditional scalpel.
Suter, Valerie G A; Altermatt, Hans Jörg; Bornstein, Michael M
2017-04-01
This study was conducted in order to compare clinical and histopathological outcomes for excisional biopsies when using a pulsed CO2 laser versus an Er:YAG laser. Patients (n = 32) with a fibrous hyperplasia in the buccal mucosa were randomly allocated to the CO2 (140 Hz, 400 μs, 33 mJ) or the Er:YAG laser (35 Hz, 297 μs, 200 mJ) group. The duration of excision, intraoperative bleeding and methods to stop the bleeding, postoperative pain (VAS; ranging 0-100), the use of analgesics, and the width of the thermal damage zone (μm) were recorded and compared between the two groups. The median duration of the intervention was 209 s, and there was no significant difference between the two methods. Intraoperative bleeding occurred in 100% of the excisions with the Er:YAG and 56% with the CO2 laser (p = 0.007). The median thermal damage zone was 74.9 μm for the CO2 and 34.0 μm for the Er:YAG laser (p < 0.0001). The median VAS score on the evening after surgery was 5 for the CO2 laser group and 3 for the Er:YAG group. To excise oral soft tissue lesions, CO2 and Er:YAG lasers are both valuable tools with a short time of intervention and low postoperative pain. More bleeding occurs with the Er:YAG than the CO2 laser, but the lower thermal effect of the Er:YAG laser seems advantageous for histopathological evaluation.
Mizuno, Yoshikuni; Yamamoto, Mitsutoshi; Kuno, Sadako; Hasegawa, Kazuko; Hattori, Nobutaka; Kagimura, Tatsuro; Sarashina, Akiko; Rascol, Olivier; Schapira, Anthony H V; Barone, Paolo; Hauser, Robert A; Poewe, Werner
2012-01-01
To compare the efficacy, safety, tolerability, and trough plasma levels of pramipexole extended-release (ER) and pramipexole immediate-release (IR), and to assess the effects of overnight switching from an IR to an ER formulation, in L-dopa-treated patients with Parkinson disease (PD). After a 1- to 4-week screening/enrollment, 112 patients who had exhibited L-dopa-related problems or were receiving suboptimal L-dopa dosage were randomized in double-blind, double-dummy, 1:1 fashion to pramipexole ER once daily or pramipexole IR 2 to 3 times daily for 12 weeks, both titrated to a maximum daily dose of 4.5 mg. Successful completers of double-blind treatment were switched to open-label pramipexole ER, beginning with a 4-week dose-adjustment phase. Among the double-blind treatment patients (n = 56 in each group), Unified Parkinson's Disease Rating Scale Parts II+III total scores decreased significantly from baseline and to a similar degree with pramipexole ER and IR formulations. In each group, 47 double-blind patients (83.9%) reported adverse events (AEs), requiring withdrawal of 3 ER patients (5.4%) and 2 IR patients (3.6%). Trough plasma levels at steady state (at the same doses and dose-normalized concentrations) were also similar with both formulations. Among open-label treatment patients (n = 53 from IR to ER), 83% were successfully switched (no worsening of PD symptoms) to pramipexole ER. In L-dopa-treated patients, pramipexole ER and pramipexole IR demonstrated similar efficacy, safety, tolerability, and trough plasma levels. Patients can be safely switched overnight from pramipexole IR to pramipexole ER with no impact on efficacy.
Lane, Hannah; Porter, Kathleen J; Hecht, Erin; Harris, Priscilla; Kraak, Vivica; Zoellner, Jamie
2017-01-01
To test the feasibility of Kids SIP smartER, a school-based intervention to reduce consumption of sugar-sweetened beverages (SSBs). Matched-contact randomized crossover study with mixed-methods analysis. One middle school in rural, Appalachian Virginia. Seventy-four sixth and seventh graders (5 classrooms) received Kids SIP smartER in random order over 2 intervention periods. Feasibility outcomes were assessed among 2 teachers. Kids SIP smartER consisted of 6 lessons grounded in the Theory of Planned Behavior, media literacy, and public health literacy and aimed to improve individual SSB behaviors and understanding of media literacy and prevalent regional disparities. The matched-contact intervention promoted physical activity. Beverage Intake Questionnaire-15 (SSB consumption), validated theory questionnaires, feasibility questionnaires (student and teacher), student focus groups, teacher interviews, and process data (eg, attendance). Repeated measures analysis of variances across 3 time points, descriptive statistics, and deductive analysis of qualitative data. During the first intervention period, students receiving Kids SIP smartER (n = 43) significantly reduced SSBs by 11 ounces/day ( P = .01) and improved media ( P < .001) and public health literacy ( P < .01) understanding; however, only media literacy showed between-group differences ( P < .01). Students and teachers found Kids SIP smartER acceptable, in-demand, practical, and implementable within existing resources. Kids SIP smartER is feasible in an underresourced, rural school setting. Results will inform further development and large-scale testing of Kids SIP smartER to reduce SSBs among rural adolescents.
Dowding, Dawn; Merrill, Jacqueline A; Onorato, Nicole; Barrón, Yolanda; Rosati, Robert J; Russell, David
2018-02-01
To explore home care nurses' numeracy and graph literacy and their relationship to comprehension of visualized data. A multifactorial experimental design using online survey software. Nurses were recruited from 2 Medicare-certified home health agencies. Numeracy and graph literacy were measured using validated scales. Nurses were randomized to 1 of 4 experimental conditions. Each condition displayed data for 1 of 4 quality indicators, in 1 of 4 different visualized formats (bar graph, line graph, spider graph, table). A mixed linear model measured the impact of numeracy, graph literacy, and display format on data understanding. In all, 195 nurses took part in the study. They were slightly more numerate and graph literate than the general population. Overall, nurses understood information presented in bar graphs most easily (88% correct), followed by tables (81% correct), line graphs (77% correct), and spider graphs (41% correct). Individuals with low numeracy and low graph literacy had poorer comprehension of information displayed across all formats. High graph literacy appeared to enhance comprehension of data regardless of numeracy capabilities. Clinical dashboards are increasingly used to provide information to clinicians in visualized format, under the assumption that visual display reduces cognitive workload. Results of this study suggest that nurses' comprehension of visualized information is influenced by their numeracy, graph literacy, and the display format of the data. Individual differences in numeracy and graph literacy skills need to be taken into account when designing dashboard technology.
Crist, Michele R.; Knick, Steven T.; Hanser, Steven E.
2017-01-01
The delineation of priority areas in western North America for managing Greater Sage-Grouse (Centrocercus urophasianus) represents a broad-scale experiment in conservation biology. The strategy of limiting spatial disturbance and focusing conservation actions within delineated areas may benefit the greatest proportion of Greater Sage-Grouse. However, land use under normal restrictions outside priority areas potentially limits dispersal and gene flow, which can isolate priority areas and lead to spatially disjunct populations. We used graph theory, representing priority areas as spatially distributed nodes interconnected by movement corridors, to understand the capacity of priority areas to function as connected networks in the Bi-State, Central, and Washington regions of the Greater Sage-Grouse range. The Bi-State and Central networks were highly centralized; the dominant pathways and shortest linkages primarily connected a small number of large and centrally located priority areas. These priority areas are likely strongholds for Greater Sage-Grouse populations and might also function as refugia and sources. Priority areas in the Central network were more connected than those in the Bi-State and Washington networks. Almost 90% of the priority areas in the Central network had ≥2 pathways to other priority areas when movement through the landscape was set at an upper threshold (effective resistance, ER12). At a lower threshold (ER4), 83 of 123 priority areas in the Central network were clustered in 9 interconnected subgroups. The current conservation strategy has risks; 45 of 61 priority areas in the Bi-State network, 68 of 123 in the Central network, and all 4 priority areas in the Washington network had ≤1 connection to another priority area at the lower ER4 threshold. Priority areas with few linkages also averaged greater environmental resistance to movement along connecting pathways. Without maintaining corridors to larger priority areas or a clustered group, isolation of small priority areas could lead to regional loss of Greater Sage-Grouse.
NASA Astrophysics Data System (ADS)
Zhou, Hang
Quantum walks are the quantum mechanical analogue of classical random walks. Discrete-time quantum walks have been introduced and studied mostly on the line Z or the higher dimensional space Zd, but rarely defined on graphs with fractal dimensions, because the coin operator depends on the position and the Fourier transform on fractals is not defined. Inspired by the nature of classical walks, different quantum walks will be defined by choosing different shift and coin operators. When the coin operator is uniform, the results of classical walks will be obtained upon measurement at each step. Moreover, with measurement at each step, our results reveal more information about the classical random walks. In this dissertation, two graphs with fractal dimensions will be considered. The first is the Sierpinski gasket, a degree-4 regular graph with Hausdorff dimension df = ln 3/ln 2. The second is the Cantor graph, derived like the Cantor set, with Hausdorff dimension df = ln 2/ln 3. The definitions and amplitude functions of the quantum walks will be introduced. The main part of this dissertation is to derive a recursive formula to compute the amplitude Green function. The exit probability will be computed and compared with the classical results. When the generation of the graphs goes to infinity, the recursion of the walks will be investigated and the convergence rates will be obtained and compared with the classical counterparts.
NASA Astrophysics Data System (ADS)
Zhou, Lifan; Chai, Dengfeng; Xia, Yu; Ma, Peifeng; Lin, Hui
2018-01-01
Phase unwrapping (PU) is one of the key processes in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) data. It is known that two-dimensional (2-D) PU problems can be formulated as maximum a posteriori estimation of Markov random fields (MRFs). However, because the traditional MRF algorithm is usually defined on a rectangular grid, it fails easily if large parts of the wrapped data are dominated by noise caused by large low-coherence areas or rapid topography variation. A PU solution based on a sparse MRF is presented to extend the traditional MRF algorithm to deal with sparse data, which allows the unwrapping of InSAR data dominated by high phase noise. To speed up the graph cuts algorithm for the sparse MRF, we designed dual elementary graphs and merged them to obtain the Delaunay triangle graph, which is used to minimize the energy function efficiently. The experiments on simulated and real data, compared with other existing algorithms, both confirm the effectiveness of the proposed MRF approach, which suffers less from decorrelation effects caused by large low-coherence areas or rapid topography variation.
Learning of Multimodal Representations With Random Walks on the Click Graph.
Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting
2016-02-01
In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from the users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationship between the vertices in the click graph. By minimizing both the truncated random walk loss as well as the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model which is named multimodal random walk neural network (MRW-NN) can be applied to not only learn robust representation of the existing multimodal data in the click graph, but also deal with the unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set Clickture and further show that MRW-NN achieves much better cross-modal retrieval performance on the unseen queries/images than the other state-of-the-art methods.
Automatic Nanodesign Using Evolutionary Techniques
NASA Technical Reports Server (NTRS)
Globus, Al; Saini, Subhash (Technical Monitor)
1998-01-01
Many problems associated with the development of nanotechnology require custom-designed molecules. We use genetic graph software, a new development, to automatically evolve molecules of interest when only the requirements are known. Genetic graph software designs molecules, and potentially nanoelectronic circuits, given a fitness function that determines which of two molecules is better. A set of molecules, the first generation, is generated at random and then tested with the fitness function. Subsequent generations are created by randomly choosing two parent molecules with a bias towards high-scoring molecules, tearing each molecule in two at random, and mating parts from the mother and father to create two children. This procedure is repeated until a satisfactory molecule is found. An atom-pair similarity test is currently used as the fitness function to evolve molecules similar to existing pharmaceuticals.
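The atom-pair fitness and molecular representation are specific to the original software; the sketch below is only a generic version of the evolutionary loop just described, with character strings standing in for molecules and a toy similarity fitness (alphabet, target, population size and generation count are all assumptions).

```python
# Generic sketch of the evolutionary loop described above, using character
# strings as stand-ins for molecular graphs (the real system evolves molecules
# with an atom-pair similarity fitness; everything here is illustrative).
import random

rng = random.Random(42)
ALPHABET = "CHNOS"             # toy "atom" symbols, an assumption
TARGET = "CCONHSCHON"          # stand-in for the similarity target

def fitness(mol):
    # Toy fitness: similarity to the target string (higher is better).
    return sum(a == b for a, b in zip(mol, TARGET)) - abs(len(mol) - len(TARGET))

def random_molecule():
    return "".join(rng.choice(ALPHABET) for _ in range(rng.randint(5, 15)))

def select(population):
    # Bias towards high-scoring individuals via tournament selection.
    return max(rng.sample(population, 3), key=fitness)

def mate(mother, father):
    # Tear each parent in two at random and recombine the pieces.
    i, j = rng.randint(1, len(mother) - 1), rng.randint(1, len(father) - 1)
    return mother[:i] + father[j:], father[:j] + mother[i:]

population = [random_molecule() for _ in range(50)]   # the first generation
for generation in range(100):
    best = max(population, key=fitness)
    if fitness(best) >= len(TARGET):                  # a "satisfactory" molecule
        break
    children = []
    while len(children) < len(population):
        children.extend(mate(select(population), select(population)))
    population = children[: len(population)]
print("generation", generation, "best", best, "fitness", fitness(best))
```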
Dalfampridine in Parkinson's disease related gait dysfunction: A randomized double blind trial.
Luca, Corneliu C; Nadayil, Gloria; Dong, Chuanhui; Nahab, Fatta B; Field-Fote, Edelle; Singer, Carlos
2017-08-15
Disease-related gait dysfunction causes extensive disability for persons with Parkinson's disease (PD), with no effective therapies currently available. The potassium channel blocker dalfampridine has been used in multiple neurological conditions and improves walking in persons with multiple sclerosis. We aimed to evaluate the effect of dalfampridine extended release (D-ER) 10 mg tablets twice daily on different domains of walking in participants with PD. Twenty-two participants with PD and gait dysfunction were randomized to receive D-ER 10 mg twice daily or placebo for 4 weeks in a crossover design with a 2-week washout period. The primary outcomes were change in gait velocity and stride length. At 4 weeks, gait velocity was not significantly different between the D-ER (0.89 m/s ± 0.33) and placebo (0.93 m/s ± 0.27) conditions. The stride length was also similar between conditions: 0.96 m ± 0.38 for D-ER versus 1.06 m ± 0.33 for placebo. D-ER was generally well tolerated, with the most frequent side effects being dizziness, nausea and balance problems. D-ER is well tolerated in PD patients; however, it did not show significant benefit for gait impairment.
NASA Astrophysics Data System (ADS)
Tkačik, Gašper
2016-07-01
The article by O. Martin and colleagues provides a much needed systematic review of a body of work that relates the topological structure of genetic regulatory networks to evolutionary selection for function. This connection is very important. Using the current wealth of genomic data, statistical features of regulatory networks (e.g., degree distributions, motif composition, etc.) can be quantified rather easily; it is, however, often unclear how to interpret the results. On a graph theoretic level the statistical significance of the results can be evaluated by comparing observed graphs to "randomized" ones (bravely ignoring the issue of how precisely to randomize!) and comparing the frequency of appearance of a particular network structure relative to a randomized null expectation. While this is a convenient operational test for statistical significance, its biological meaning is questionable. In contrast, an in-silico genotype-to-phenotype model makes explicit the assumptions about the network function, and thus clearly defines the expected network structures that can be compared to the case of no selection for function and, ultimately, to data.
A scale-free network with limiting on vertices
NASA Astrophysics Data System (ADS)
Tang, Lian; Wang, Bin
2010-05-01
We propose and analyze a random graph model which explains a phenomenon in economic company networks in which a company may not expand its business at some time because of limited money and capacity. The random graph process is defined as follows: at any time-step t, (i) with probability α(k) and independently of other time-steps, each vertex vi (i≤t-1) becomes inactive, meaning that it cannot be connected by more edges, where k is the degree of vi at time-step t; (ii) a new vertex vt is added along with m edges incident with vt, whose neighbors are chosen in the manner of preferential attachment. We prove that the degree distribution P(k) of this random graph process satisfies P(k) ∝ C1k if α(·) is a constant α0, and P(k) ∝ C2k^{-3} if α(ℓ)↓0 as ℓ↑∞, where C1 and C2 are two positive constants. The analytical result is found to be in good agreement with that obtained by numerical simulations. Furthermore, we obtain the degree distributions in this model with m-varying functions by simulation.
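As a rough check of the process just defined, the sketch below simulates it directly (with an illustrative constant α0, a small m, and a rejection-sampled preferential attachment pool; none of the numbers come from the paper) and prints the low-degree end of the resulting degree histogram.

```python
# Simulation sketch of the vertex-limiting process described above: at each
# step every active vertex becomes inactive with probability alpha(k), then a
# new vertex attaches m edges preferentially among the active vertices.
import random
from collections import Counter

def simulate(T=5000, m=2, alpha=lambda k: 0.01, seed=0):
    rng = random.Random(seed)
    degree = [m] * (m + 1)                    # complete graph on m+1 vertices as the seed
    is_active = [True] * (m + 1)
    active = set(range(m + 1))
    pool = [v for v in range(m + 1) for _ in range(m)]  # active vertices, degree-weighted
    stale = 0                                 # pool entries belonging to inactive vertices
    for t in range(m + 1, T):
        # (i) each active vertex becomes inactive with probability alpha(degree).
        for v in [v for v in active if rng.random() < alpha(degree[v])]:
            active.discard(v)
            is_active[v] = False
            stale += degree[v]
        if stale * 2 > len(pool):             # compact the pool when half its entries are stale
            pool = [v for v in pool if is_active[v]]
            stale = 0
        # (ii) the new vertex t attaches m edges preferentially among active vertices.
        chosen, tries = set(), 0
        while len(chosen) < m and tries < 100 * m and pool:
            v = rng.choice(pool)
            tries += 1
            if is_active[v]:
                chosen.add(v)
        degree.append(len(chosen))
        is_active.append(True)
        active.add(t)
        for v in chosen:
            degree[v] += 1
            pool.extend([v, t])
    return Counter(degree)

hist = simulate()
for k in sorted(hist)[:12]:                   # low-degree end of the histogram
    print(k, hist[k])
```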
Fast Decentralized Averaging via Multi-scale Gossip
NASA Astrophysics Data System (ADS)
Tsianos, Konstantinos I.; Rabbat, Michael G.
We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log(1/ε)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
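This is not the hierarchical Multi-scale Gossip algorithm itself, but a minimal sketch of the baseline it improves on: randomized pairwise gossip averaging on a random geometric graph (assuming networkx is available; the tolerance, graph size, and radius are illustrative assumptions).

    import random
    import networkx as nx

    def pairwise_gossip(G, values, tol=1e-6, seed=0):
        """Randomized pairwise gossip: repeatedly pick a random edge and replace
        both endpoint values by their average. On a connected graph this converges
        to the global average; the true average is used here only to test convergence."""
        rng = random.Random(seed)
        x = dict(values)
        target = sum(x.values()) / len(x)
        edges = list(G.edges())
        rounds = 0
        while max(abs(v - target) for v in x.values()) > tol:
            u, v = rng.choice(edges)
            x[u] = x[v] = 0.5 * (x[u] + x[v])
            rounds += 1
        return x, rounds

    G = nx.random_geometric_graph(200, radius=0.15, seed=1)
    if nx.is_connected(G):
        x, rounds = pairwise_gossip(G, {i: random.random() for i in G})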
Faster quantum walk search on a weighted graph
NASA Astrophysics Data System (ADS)
Wong, Thomas G.
2015-09-01
A randomly walking quantum particle evolving by Schrödinger's equation searches for a unique marked vertex on the "simplex of complete graphs" in time Θ(N^{3/4}). We give a weighted version of this graph that preserves vertex transitivity, and we show that the time to search on it can be reduced to nearly Θ(√N). To prove this, we introduce two extensions to degenerate perturbation theory: an adjustment that distinguishes the weights of the edges and a method to determine how precisely the jumping rate of the quantum walk must be chosen.
Silva, Raul R; Brams, Matthew; McCague, Kevin; Pestreich, Linda; Muniz, Rafael
2013-01-01
This study aimed to compare the effects of dexmethylphenidate (D-MPH) extended-release (ER) 30 mg and D-MPH-ER 20 mg on attention, behavior, and performance in children with attention-deficit/hyperactivity disorder. In a randomized, double-blind, 3-period-by-3-treatment, crossover study, children aged 6 to 12 years with attention-deficit/hyperactivity disorder stabilized on methylphenidate (40-60 mg/d) or D-MPH (20-30 mg/d) received D-MPH-ER 20 mg/d, 30 mg/d, and placebo for 7 days each (final dose of each treatment period administered in a laboratory classroom). Swanson, Kotkin, Agler, M-Flynn, and Pelham (SKAMP) Combined (Attention and Deportment) rating scale and Permanent Product Measure of Performance (PERMP) math test assessments were conducted at baseline and 3, 6, 9, 10, 11, and 12 hours postdose. A total of 165 children (94 boys; mean age, 9.6 years) were randomized (162 included in intent-to-treat analyses). Significant improvements were noted for D-MPH-ER 30 mg over D-MPH-ER 20 mg at various late time points on the SKAMP scales (Combined scores at 9, 10, 11, and 12 hours postdose; Attention scores at 10, 11, and 12 hours postdose; deportment scores at 9 and 12 hours postdose). The PERMP math test-attempted and -correct scores (change from predose) were significantly higher with D-MPH-ER 30 mg than with D-MPH-ER 20 mg at 10, 11, and 12 hours postdose. Both D-MPH-ER doses were superior to placebo at all time points. D-MPH-ER 30 mg was superior to D-MPH-ER 20 mg at later time points in the day, suggesting that higher doses of D-MPH-ER may be more effective later in the day.
Chaotic Traversal (CHAT): Very Large Graphs Traversal Using Chaotic Dynamics
NASA Astrophysics Data System (ADS)
Changaival, Boonyarit; Rosalie, Martin; Danoy, Grégoire; Lavangnananda, Kittichai; Bouvry, Pascal
2017-12-01
Graph traversal algorithms find applications in various fields such as routing problems, natural language processing or even database querying. The exploration can be considered a first stepping stone toward knowledge extraction from the graph, which is now a popular topic. Classical solutions such as Breadth First Search (BFS) and Depth First Search (DFS) require huge amounts of memory for exploring very large graphs. In this research, we present a novel memoryless graph traversal algorithm, Chaotic Traversal (CHAT), which integrates chaotic dynamics to traverse large unknown graphs via the Lozi map and the Rössler system. To compare the effects of various dynamics on our algorithm, we present an original way to perform the exploration of a parameter space using a bifurcation diagram with respect to the topological structure of attractors. The resulting algorithm is efficient and undemanding of resources, and is therefore very suitable for partial traversal of very large and/or unknown environment graphs. CHAT performance using the Lozi map is shown to be superior to the commonly known random walk, in terms of number of nodes visited (coverage percentage) and computation time, where the environment is unknown and memory usage is restricted.
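A heavily hedged sketch of the general idea of steering a memoryless traversal with a chaotic map: iterate the Lozi map and let the current chaotic state pick the next neighbor. The state-to-neighbor rule used here (modular arithmetic on a discretized coordinate), the initial condition, and the parameters are assumptions for illustration only, not the CHAT algorithm as published.

    import networkx as nx

    def lozi_step(x, y, a=1.7, b=0.5):
        """One iteration of the Lozi map with classic chaotic parameter values."""
        return 1.0 - a * abs(x) + y, b * x

    def chaotic_traversal(G, start, steps=10000, a=1.7, b=0.5):
        """Memoryless walk: the chaotic state, not a stored visited-set, decides
        which neighbor to jump to. The visited set is kept only to report coverage."""
        x, y = 0.1, 0.1
        node = start
        visited = {start}
        for _ in range(steps):
            x, y = lozi_step(x, y, a, b)
            if abs(x) > 1e6:                     # guard in case the orbit escapes
                x, y = 0.1, 0.1
            nbrs = list(G.neighbors(node))
            if not nbrs:
                break
            node = nbrs[int(abs(x) * 1e6) % len(nbrs)]   # assumed state-to-neighbor rule
            visited.add(node)
        return visited

    G = nx.barabasi_albert_graph(2000, 3, seed=0)
    coverage = len(chaotic_traversal(G, 0)) / G.number_of_nodes()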
A simple method for finding the scattering coefficients of quantum graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cottrell, Seth S.
2015-09-15
Quantum walks are roughly analogous to classical random walks, and similar to classical walks they have been used to find new (quantum) algorithms. When studying the behavior of large graphs or combinations of graphs, it is useful to find the response of a subgraph to signals of different frequencies. In doing so, we can replace an entire subgraph with a single vertex with variable scattering coefficients. In this paper, a simple technique for quickly finding the scattering coefficients of any discrete-time quantum graph will be presented. These scattering coefficients can be expressed entirely in terms of the characteristic polynomial of the graph's time step operator. This is a marked improvement over previous techniques which have traditionally required finding eigenstates for a given eigenvalue, which is far more computationally costly. With the scattering coefficients we can easily derive the “impulse response” which is the key to predicting the response of a graph to any signal. This gives us a powerful set of tools for rapidly understanding the behavior of graphs or for reducing a large graph into its constituent subgraphs regardless of how they are connected.
Potential emerging treatment in vitiligo using Er:YAG in combination with 5FU and clobetasol.
Mokhtari, Fatemeh; Bostakian, Anis; Shahmoradi, Zabihollah; Jafari-Koshki, Tohid; Iraji, Fariba; Faghihi, Gita; Hosseini, Sayed Mohsen; Bafandeh, Behzad
2018-04-01
Vitiligo is a pigmentary disorder of the skin affecting at least 1% of the world population, in all races and both sexes. Its importance is mainly due to subsequent social and psychological problems rather than clinical complications. Various treatment choices are available for vitiligo; however, laser-based courses have been shown to give more acceptable results. The aim of this trial was to evaluate the efficacy of the Er:YAG laser as a supplement to topical 5FU and clobetasol in vitiligo patients. Two comparable vitiligo patches from 38 eligible patients were randomized to receive topical 5FU and clobetasol in the control group and additional Er:YAG laser in the intervention group. Major outcomes of interest were the size of the patch and the pigmentation score at randomization and at 2 and 4 months after therapy. The final sample included 18 (47%) male patients, with a mean age of 35.66±8.04 years. The performance of the Er:YAG group was superior at all sites. Reduction in the size of patches was greater in the Er:YAG group (p-value=.004). This group also showed higher pigmentation scores over the trial period than the control group (p-value<.001). Greater reduction in size and increase in pigmentation score were seen in the Er:YAG group especially for short periods after therapy, and repeating laser sessions may help improve final outcomes. Er:YAG could help in reducing complications of long-term topical treatments, achieving faster response, and improving patient adherence. © 2017 Wiley Periodicals, Inc.
Critical space-time networks and geometric phase transitions from frustrated edge antiferromagnetism
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-12-01
Recently I proposed a simple dynamical network model for discrete space-time that self-organizes as a graph with Hausdorff dimension dH=4 . The model has a geometric quantum phase transition with disorder parameter (dH-ds) , where ds is the spectral dimension of the dynamical graph. Self-organization in this network model is based on a competition between a ferromagnetic Ising model for vertices and an antiferromagnetic Ising model for edges. In this paper I solve a toy version of this model defined on a bipartite graph in the mean-field approximation. I show that the geometric phase transition corresponds exactly to the antiferromagnetic transition for edges, the dimensional disorder parameter of the former being mapped to the staggered magnetization order parameter of the latter. The model has a critical point with long-range correlations between edges, where a continuum random geometry can be defined, exactly as in Kazakov's famed 2D random lattice Ising model but now in any number of dimensions.
An In-Depth Analysis of the Chung-Lu Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winlaw, M.; DeSterck, H.; Sanders, G.
2015-10-28
In the classic Erdős–Rényi random graph model [5] each edge is chosen with uniform probability and the degree distribution is binomial, limiting the number of graphs that can be modeled using the Erdős–Rényi framework [10]. The Chung-Lu model [1, 2, 3] is an extension of the Erdős–Rényi model that allows for more general degree distributions. The probability of each edge is no longer uniform and is a function of a user-supplied degree sequence, which by design is the expected degree sequence of the model. This property makes it an easy model to work with theoretically, and since the Chung-Lu model is a special case of a random graph model with a given degree sequence, many of its properties are well known and have been studied extensively [2, 3, 13, 8, 9]. It is also an attractive null model for many real-world networks, particularly those with power-law degree distributions, and it is sometimes used as a benchmark for comparison with other graph generators despite some of its limitations [12, 11]. We know, for example, that the average clustering coefficient is too low relative to most real-world networks. As well, measures of affinity are also too low relative to most real-world networks of interest. However, despite these limitations or perhaps because of them, the Chung-Lu model provides a basis for comparing new graph models.
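A minimal sketch of the Chung-Lu construction described above, in which edge (i, j) is included independently with probability proportional to the product of the supplied expected degrees (pure Python; the weight sequence is an illustrative assumption). For large sparse graphs one would use a linear-time variant instead of looping over all pairs.

    import random
    from itertools import combinations

    def chung_lu(weights, seed=0):
        """Chung-Lu random graph: edge (i, j) is present independently with
        probability min(1, w_i * w_j / S), where S is the sum of all weights,
        so the expected degree of vertex i is approximately w_i."""
        rng = random.Random(seed)
        S = float(sum(weights))
        return [(i, j) for i, j in combinations(range(len(weights)), 2)
                if rng.random() < min(1.0, weights[i] * weights[j] / S)]

    # an illustrative heavy-tailed expected degree sequence
    w = [10.0 * (k + 1) ** -0.5 for k in range(500)]
    edges = chung_lu(w)

The same null model is also available in networkx as expected_degree_graph(w), which is convenient when the generated graph is to be analyzed further.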
Euclidean commute time distance embedding and its application to spectral anomaly detection
NASA Astrophysics Data System (ADS)
Albano, James A.; Messinger, David W.
2012-06-01
Spectral image analysis problems often begin by performing a preprocessing step composed of applying a transformation that generates an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed form solution exists for computing the average commute time distance that avoids running an iterative process and is found by simply performing an eigendecomposition on the graph Laplacian matrix. Contained in this paper is a discussion of the particular graph constructed on the spectral data from which the commute time distance is then calculated, an introduction of some important properties of the graph Laplacian matrix, and a subspace projection that approximately preserves the maximal variance of the square root commute time distance. Finally, RX anomaly detection and Topological Anomaly Detection (TAD) algorithms will be applied to the CTD subspace followed by a discussion of their results.
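The closed-form computation mentioned above can be sketched with the Moore-Penrose pseudoinverse of the graph Laplacian: the average commute time between nodes i and j is vol(G) * (L+_ii + L+_jj - 2 L+_ij), and the embedding places nodes so that Euclidean separation equals the square root of that quantity. A minimal sketch assuming numpy and networkx; the small random graph stands in for the graph built on the spectral pixels.

    import numpy as np
    import networkx as nx

    def commute_time_embedding(G):
        """Embed the nodes of a connected graph G so that pairwise Euclidean
        distances equal the square root of the average commute time distance."""
        nodes = list(G.nodes())
        L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
        Lp = np.linalg.pinv(L)                   # Moore-Penrose pseudoinverse of the Laplacian
        vol = sum(d for _, d in G.degree())      # volume = sum of degrees (2m for unweighted G)
        vals, vecs = np.linalg.eigh(Lp)          # Lp = V diag(vals) V^T
        vals = np.clip(vals, 0.0, None)
        X = np.sqrt(vol) * vecs * np.sqrt(vals)  # rows are the CTD embedding coordinates
        return nodes, X

    G = nx.gnp_random_graph(60, 0.15, seed=2)
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    nodes, X = commute_time_embedding(G)
    # squared distance between rows i and j equals vol * (Lp_ii + Lp_jj - 2 * Lp_ij)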
Dynamic graph cuts for efficient inference in Markov Random Fields.
Kohli, Pushmeet; Torr, Philip H S
2007-12-01
In this paper we present a fast new fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time taken by it is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. It should be noted that the application of our algorithm is not limited to this problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
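This is not the dynamic (flow-recycling) algorithm itself, but a minimal static sketch of the underlying s-t mincut/max-flow computation that MAP inference in such MRFs reduces to, using networkx. The node names and capacities are illustrative assumptions.

    import networkx as nx

    # tiny illustrative graph: source "s", sink "t", capacities on directed edges
    G = nx.DiGraph()
    G.add_edge("s", "a", capacity=3.0)
    G.add_edge("s", "b", capacity=2.0)
    G.add_edge("a", "b", capacity=1.0)
    G.add_edge("a", "t", capacity=2.0)
    G.add_edge("b", "t", capacity=3.0)

    flow_value, flow_dict = nx.maximum_flow(G, "s", "t")       # max-flow value and per-edge flows
    cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    assert flow_value == cut_value                             # max-flow min-cut theorem
    # in the segmentation reading, source_side / sink_side give the binary labeling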
Network Reliability: The effect of local network structure on diffusive processes
Youssef, Mina; Khorramzadeh, Yasamin; Eubank, Stephen
2014-01-01
This paper re-introduces the network reliability polynomial – introduced by Moore and Shannon in 1956 – for studying the effect of network structure on the spread of diseases. We exhibit a representation of the polynomial that is well-suited for estimation by distributed simulation. We describe a collection of graphs derived from Erdős-Rényi and scale-free-like random graphs in which we have manipulated assortativity-by-degree and the number of triangles. We evaluate the network reliability for all these graphs under a reliability rule that is related to the expected size of a connected component. Through these extensive simulations, we show that for positively or neutrally assortative graphs, swapping edges to increase the number of triangles does not increase the network reliability. Also, positively assortative graphs are more reliable than neutral or disassortative graphs with the same number of edges. Moreover, we show the combined effect of both assortativity-by-degree and the presence of triangles on the critical point and the size of the smallest subgraph that is reliable. PMID:24329321
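A minimal Monte Carlo sketch of estimating a reliability of the kind described above: keep each edge independently with probability p and record how often the surviving graph satisfies a reliability rule. The rule used here ("largest connected component covers at least half the vertices"), the graph, and the sample count are illustrative assumptions, not the paper's exact rule.

    import random
    import networkx as nx

    def reliability_estimate(G, p, rule, n_samples=1000, seed=0):
        """Monte Carlo estimate of R(p): the probability that the random subgraph
        obtained by keeping each edge independently with probability p satisfies `rule`."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            H = nx.Graph()
            H.add_nodes_from(G.nodes())
            H.add_edges_from(e for e in G.edges() if rng.random() < p)
            if rule(H):
                hits += 1
        return hits / n_samples

    def half_component_rule(H):
        """True if the largest connected component holds at least half of all vertices."""
        largest = max((len(c) for c in nx.connected_components(H)), default=0)
        return largest >= H.number_of_nodes() / 2

    G = nx.erdos_renyi_graph(200, 0.03, seed=1)
    curve = {p: reliability_estimate(G, p, half_component_rule) for p in (0.2, 0.4, 0.6, 0.8)}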
Ensembles of physical states and random quantum circuits on graphs
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo
2012-11-01
In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma et al. [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular here we focus on proxies of quantum entanglement such as purity and α-Rényi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, choice of probability measure over the local unitaries, and circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated to the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property of physical states (that is, it holds on average and the fluctuations around the average vanish for large systems). The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law arises, as is typical, when the evolution time scales like O(L).
LeWitt, Peter A.; Verhagen Metman, Leo; Rubens, Robert; Khanna, Sarita; Kell, Sherron; Gupta, Suneel
2018-01-01
Objectives Extended-release (ER) carbidopa-levodopa (CD-LD) (IPX066/RYTARY/NUMIENT) produces improvements in “off” time, “on” time without troublesome dyskinesia, and Unified Parkinson Disease Rating Scale scores compared with immediate-release (IR) CD-LD or IR CD-LD plus entacapone (CLE). Post hoc analyses of 2 ER CD-LD phase 3 trials evaluated whether the efficacy and safety of ER CD-LD relative to the respective active comparators were altered by concomitant medications (dopaminergic agonists, monoamine oxidase B [MAO-B] inhibitors, or amantadine). Methods ADVANCE-PD (n = 393) assessed safety and efficacy of ER CD-LD versus IR CD-LD. ASCEND-PD (n = 91) evaluated ER CD-LD versus CLE. In both studies, IR- and CLE-experienced patients underwent a 6-week, open-label dose-conversion period to ER CD-LD prior to randomization. For analysis, the randomized population was divided into 3 subgroups: dopaminergic agonists, rasagiline or selegiline, and amantadine. For each subgroup, changes from baseline in PD diary measures (“off” time and “on” time with and without troublesome dyskinesia), Unified Parkinson Disease Rating Scale Parts II + III scores, and adverse events were analyzed, comparing ER CD-LD with the active comparator. Results and Conclusions Concomitant dopaminergic agonist or MAO-B inhibitor use did not diminish the efficacy (improvement in “off” time and “on” time without troublesome dyskinesia) of ER CD-LD compared with IR CD-LD or CLE, whereas the improvement with concomitant amantadine failed to reach significance. Safety and tolerability were similar among the subgroups, and ER CD-LD did not increase troublesome dyskinesia. For patients on oral LD regimens and taking a dopaminergic agonist, and/or a MAO-B inhibitor, changing from an IR to an ER CD-LD formulation provides approximately an additional hour of “good” on time. PMID:29432286
van de Water, Willemien; Fontein, Duveken B Y; van Nes, Johanna G H; Bartlett, John M S; Hille, Elysée T M; Putter, Hein; Robson, Tammy; Liefers, Gerrit-Jan; Roumen, Rudi M H; Seynaeve, Caroline; Dirix, Luc Y; Paridaens, Robert; Kranenbarg, Elma Meershoek-Klein; Nortier, Johan W R; van de Velde, Cornelis J H
2013-01-01
Multiple studies suggest better efficacy of chemotherapy in invasive ductal breast carcinomas (IDC) than in invasive lobular breast carcinomas (ILC). However, data on the efficacy of adjuvant endocrine therapy regimens across histological subtypes are sparse. This study assessed endocrine therapy efficacy in IDC and ILC. The influence of semi-quantitative oestrogen receptor (ER) expression by Allred score was also investigated. Dutch and Belgian patients enrolled in the Tamoxifen Exemestane Adjuvant Multinational (TEAM) trial were randomized to exemestane (25 mg daily) alone or following tamoxifen (20 mg daily) for 5 years. Inclusion was restricted to IDC and ILC patients. Histological subtype was assessed locally; ER expression was centrally reviewed according to Allred score (ER-poor (<7; n=235); ER-rich (≥7; n=1789)). The primary end-point was relapse-free survival (RFS), defined as the time from randomization to disease relapse. Overall, 2140 (82%) IDC and 463 (18%) ILC patients were included. RFS was similar for both endocrine treatment regimens in IDC (hazard ratio (HR) for exemestane 0.83 (95% confidence interval (CI) 0.67-1.03)) and ILC (HR 0.69 (95% CI 0.45-1.06)). Irrespective of histological subtype, patients with ER-rich Allred scores allocated to exemestane alone had an improved RFS (multivariable HR 0.71 (95% CI 0.56-0.89)). In contrast, patients with ER-poor Allred scores allocated to exemestane had a worse RFS (multivariable HR 2.33 (95% CI 1.32-4.11)). Significant effect modification by ER Allred score was confirmed (multivariable p=0.003). Efficacy of endocrine therapy regimens was similar for IDC and ILC. However, ER-rich patients showed superior outcomes with upfront exemestane, while ER-poor patients had better outcomes with sequential therapy, irrespective of histological subtype, emphasising the relevance of quantification of ER expression. Copyright © 2012 Elsevier Ltd. All rights reserved.
Sarafidis, P A; Lazaridis, A A; Imprialos, K P; Georgianos, P I; Avranas, K A; Protogerou, A D; Doumas, M N; Athyros, V G; Karagiannis, A I
2016-12-01
Ambulatory blood pressure monitoring is an important tool in hypertension diagnosis and management. Although several ambulatory devices exist, comparative studies are scarce. This study aimed to compare for the first time brachial blood pressure levels of the Spacelabs 90217A and Mobil-O-Graph NG under static and ambulatory conditions. We examined 40 healthy individuals under static (study A) and ambulatory (study B) conditions. In study A, participants were randomized into two groups that included blood pressure measurements with a mercury sphygmomanometer and the Spacelabs and Mobil-O-Graph devices, with the order of recordings reversed between groups. In study B, simultaneous 6-h recordings with both devices were performed, with participants randomized into two sequences of device positioning with arm reversal at 3 h. Finally, all the participants filled in a questionnaire rating their overall preference for a device. In study A, brachial systolic blood pressure (117.2±10.3 vs 117.1±9.8 mm Hg, P=0.943) and diastolic blood pressure (73.3±9.4 mm Hg vs 74.1±9.4 mm Hg, P=0.611) did not differ between Spacelabs and Mobil-O-Graph or versus the sphygmomanometer (117.8±11.1 mm Hg, P=0.791 vs Spacelabs, P=0.753 vs Mobil-O-Graph). Similarly, no differences were found in ambulatory systolic blood pressure (117.9±11.4 vs 118.3±11.0 mm Hg, P=0.864), diastolic blood pressure (73.7±7.4 vs 74.7±8.0 mm Hg, P=0.571), mean blood pressure and heart rate between Spacelabs and Mobil-O-Graph. Correlation analyses and Bland-Altman plots showed agreement between the monitors. Overall, the participants showed a preference for the Mobil-O-Graph. Spacelabs 90217A and Mobil-O-Graph NG provide practically identical measurements during static and ambulatory conditions in healthy individuals and can be used interchangeably in clinical practice.
Summing Feynman graphs by Monte Carlo: Planar ϕ3-theory and dynamically triangulated random surfaces
NASA Astrophysics Data System (ADS)
Boulatov, D. V.; Kazakov, V. A.
1988-12-01
New combinatorial identities are suggested relating the ratio of the (n-1)th and nth orders of (planar) perturbation expansion for any quantity to an average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of the critical exponent γ_str (string susceptibility) in planar ϕ3-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D = 1 the exact critical properties of the theory are reproduced numerically.
Quantum Algorithms Based on Physical Processes
2013-12-03
quantum walks with hard-core bosons and the graph isomorphism problem,” American Physical Society March meeting, March 2011 Kenneth Rudinger, John...King Gamble, Mark Wellons, Mark Friesen, Dong Zhou, Eric Bach, Robert Joynt, and S.N. Coppersmith, “Quantum random walks of non-interacting bosons on...and noninteracting Bosons to distinguish nonisomorphic graphs. 1) We showed that quantum walks of two hard-core Bosons can distinguish all pairs of
Optimizing spread dynamics on graphs by message passing
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.
2013-09-01
Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
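A minimal sketch of the kind of irreversible threshold cascade discussed above, together with a greedy seed-selection baseline. The message-passing optimizer of the paper is not reproduced here; the threshold, graph, and seed budget are illustrative assumptions.

    import networkx as nx

    def run_cascade(G, seeds, theta=2):
        """Irreversible dynamics: an inactive node activates once at least
        `theta` of its neighbors are active. Returns the final active set."""
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in G:
                if v not in active and sum(1 for u in G.neighbors(v) if u in active) >= theta:
                    active.add(v)
                    changed = True
        return active

    def greedy_seeds(G, budget, theta=2):
        """Greedy baseline: repeatedly add the seed that maximizes the final spread."""
        seeds = set()
        for _ in range(budget):
            best, best_gain = None, -1
            for v in G:
                if v in seeds:
                    continue
                gain = len(run_cascade(G, seeds | {v}, theta))
                if gain > best_gain:
                    best, best_gain = v, gain
            seeds.add(best)
        return seeds

    G = nx.erdos_renyi_graph(300, 0.02, seed=3)
    seeds = greedy_seeds(G, budget=5)
    spread = len(run_cascade(G, seeds))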
Clustering in complex directed networks
NASA Astrophysics Data System (ADS)
Fagiolo, Giorgio
2007-08-01
Many empirical networks display an inherent tendency to cluster, i.e., to form circles of connected nodes. This feature is typically measured by the clustering coefficient (CC). The CC, originally introduced for binary, undirected graphs, has been recently generalized to weighted, undirected networks. Here we extend the CC to the case of (binary and weighted) directed networks and we compute its expected value for random graphs. We distinguish between CCs that count all directed triangles in the graph (independently of the direction of their edges) and CCs that only consider particular types of directed triangles (e.g., cycles). The main concepts are illustrated by employing empirical data on world-trade flows.
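A sketch of the all-triangle directed clustering coefficient in its adjacency-matrix form, counting directed triangles regardless of edge orientation (assuming numpy; the random adjacency matrix at the end is an illustrative stand-in for empirical trade-flow data).

    import numpy as np

    def directed_clustering_total(A):
        """All-triangle directed clustering coefficient of a binary adjacency
        matrix A with no self-loops: C_i = [(A+A^T)^3]_ii / (2*[d_i(d_i-1) - 2*b_i]),
        with d_i the total (in+out) degree and b_i the number of reciprocated links of i."""
        A = np.asarray(A, dtype=float)
        S = A + A.T
        t = np.diagonal(S @ S @ S) / 2.0          # directed triangles through each node
        d_tot = A.sum(axis=0) + A.sum(axis=1)     # in-degree + out-degree
        d_bi = np.diagonal(A @ A)                 # reciprocated (bilateral) links
        denom = d_tot * (d_tot - 1) - 2.0 * d_bi
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(denom > 0, t / denom, 0.0)

    rng = np.random.default_rng(0)
    A = (rng.random((100, 100)) < 0.05).astype(int)
    np.fill_diagonal(A, 0)
    mean_cc = directed_clustering_total(A).mean()

For a symmetric A this reduces to the usual undirected clustering coefficient, which is a convenient sanity check on the formula.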
The complex network of the Brazilian Popular Music
NASA Astrophysics Data System (ADS)
de Lima e Silva, D.; Medeiros Soares, M.; Henriques, M. V. C.; Schivani Alves, M. T.; de Aguiar, S. G.; de Carvalho, T. P.; Corso, G.; Lucena, L. S.
2004-02-01
We study Brazilian Popular Music from a network perspective. We call the Brazilian Popular Music Network (BPMN) the graph whose vertices are the songwriters and whose links are determined by the existence of at least one common singer. The degree distribution of this graph shows power-law and exponential regions. The exponent of the power law is compatible with the values obtained by the evolving network algorithms seen in the literature. The average path length of the BPMN is similar to that of the corresponding random graph; its clustering coefficient, however, is significantly larger. These results indicate that the BPMN forms a small-world network.
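The small-world comparison used above (clustering well above, path length comparable to, an equally dense random graph) can be sketched with networkx. The collaboration graph is replaced here by a Watts-Strogatz graph purely for illustration.

    import networkx as nx

    def small_world_summary(G, seed=0):
        """Compare clustering and average path length of G with an Erdos-Renyi
        graph of the same size and edge density."""
        n, m = G.number_of_nodes(), G.number_of_edges()
        p = 2.0 * m / (n * (n - 1))
        R = nx.gnp_random_graph(n, p, seed=seed)
        return {
            "C": nx.average_clustering(G),
            "C_random": nx.average_clustering(R),
            "L": nx.average_shortest_path_length(G) if nx.is_connected(G) else float("nan"),
            "L_random": nx.average_shortest_path_length(R) if nx.is_connected(R) else float("nan"),
        }

    # a small-world graph typically shows C >> C_random while L stays close to L_random
    print(small_world_summary(nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=1)))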
Graph-based analysis of kinetics on multidimensional potential-energy surfaces.
Okushima, T; Niiyama, T; Ikeda, K S; Shimizu, Y
2009-09-01
The aim of this paper is twofold: one is to give a detailed description of an alternative graph-based analysis method, which we call saddle connectivity graph, for analyzing the global topography and the dynamical properties of many-dimensional potential-energy landscapes and the other is to give examples of applications of this method in the analysis of the kinetics of realistic systems. A Dijkstra-type shortest path algorithm is proposed to extract dynamically dominant transition pathways by kinetically defining transition costs. The applicability of this approach is first confirmed by an illustrative example of a low-dimensional random potential. We then show that a coarse-graining procedure tailored for saddle connectivity graphs can be used to obtain the kinetic properties of 13- and 38-atom Lennard-Jones clusters. The coarse-graining method not only reduces the complexity of the graphs, but also, with iterative use, reveals a self-similar hierarchical structure in these clusters. We also propose that the self-similarity is common to many-atom Lennard-Jones clusters.
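The Dijkstra-type extraction of dominant pathways can be sketched as an ordinary shortest path search on a weighted graph whose nodes are minima and whose edge weights are kinetic transition costs. Taking the cost to be a saddle barrier height, as in the toy landscape below, is purely an illustrative assumption.

    import heapq

    def dijkstra_path(costs, source, target):
        """costs: dict mapping node -> list of (neighbor, nonnegative edge cost).
        Returns (total cost, path) of the cheapest path from source to target."""
        dist = {source: 0.0}
        prev = {}
        heap = [(0.0, source)]
        seen = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in seen:
                continue
            seen.add(u)
            if u == target:
                break
            for v, w in costs.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
        path, node = [], target
        while node in prev or node == source:
            path.append(node)
            if node == source:
                break
            node = prev[node]
        return dist.get(target, float("inf")), path[::-1]

    # toy landscape: minima A..D connected through saddles, edge cost = barrier height
    costs = {"A": [("B", 1.2), ("C", 3.0)], "B": [("A", 1.2), ("D", 0.7)],
             "C": [("A", 3.0), ("D", 0.4)], "D": [("B", 0.7), ("C", 0.4)]}
    print(dijkstra_path(costs, "A", "D"))   # -> (1.9, ['A', 'B', 'D'])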
Fallu, Angelo; Dabouz, Farida; Furtado, Melissa; Anand, Leena; Katzman, Martin A
2016-08-01
Attention-deficit/hyperactivity disorder (ADHD) is a common neurobehavioral disorder with onset during childhood. Multiple aspects of a child's development are hindered, in both home and school settings, with negative impacts on social, emotional, and cognitive functioning. If left untreated, ADHD is commonly associated with poor academic achievement and low occupational status, as well as increased risk of substance abuse and delinquency. The objective of this study was to evaluate adult ADHD subject-reported outcomes when switched from a stable dose of CONCERTA(®) to the same dose of generic Novo-Methylphenidate ER-C(®). This randomized, double-blind, cross-over, phase IV trial consisted of two phases in which participants with a primary diagnosis of ADHD were randomized in a 1:1 ratio to 3 weeks of treatment with CONCERTA or generic Novo-Methylphenidate ER-C. Following 3 weeks of treatment, participants were crossed over to receive the other treatment for an additional 3 weeks. Primary efficacy was assessed through the use of the Treatment Satisfaction Questionnaire for Medication, Version II (TSQM-II). Participants with ADHD treated with CONCERTA were more satisfied in terms of efficacy and side effects compared to those receiving an equivalent dose of generic Novo-Methylphenidate ER-C. All participants chose to continue with CONCERTA treatment at the conclusion of the study. Although CONCERTA and generic Novo-Methylphenidate ER-C have been deemed bioequivalent, the present findings demonstrate clinically and statistically significant differences between generic and branded CONCERTA. Further investigation of these differences is warranted.
Connectivity is a Poor Indicator of Fast Quantum Search
NASA Astrophysics Data System (ADS)
Meyer, David A.; Wong, Thomas G.
2015-03-01
A randomly walking quantum particle evolving by Schrödinger's equation searches on d-dimensional cubic lattices in O(√N) time when d ≥ 5, and with progressively slower runtime as d decreases. This suggests that graph connectivity (including vertex, edge, algebraic, and normalized algebraic connectivities) is an indicator of fast quantum search, a belief supported by fast quantum search on complete graphs, strongly regular graphs, and hypercubes, all of which are highly connected. In this Letter, we show this intuition to be false by giving two examples of graphs for which the opposite holds true: one with low connectivity but fast search, and one with high connectivity but slow search. The second example is a novel two-stage quantum walk algorithm in which the walking rate must be adjusted to yield high search probability.
Phase transitions in the quadratic contact process on complex networks
NASA Astrophysics Data System (ADS)
Varghese, Chris; Durrett, Rick
2013-06-01
The quadratic contact process (QCP) is a natural extension of the well-studied linear contact process, where infected (1) individuals infect susceptible (0) neighbors at rate λ and infected individuals recover (1→0) at rate 1. In the QCP, a combination of two 1's is required to effect a 0→1 change. We extend the study of the QCP, which so far has been limited to lattices, to complex networks. We define two versions of the QCP: vertex-centered (VQCP) and edge-centered (EQCP) with birth events 1-0-1→1-1-1 and 1-1-0→1-1-1, respectively, where “-” represents an edge. We investigate the effects of network topology by considering the QCP on random regular, Erdős-Rényi, and power-law random graphs. We perform mean-field calculations as well as simulations to find the steady-state fraction of occupied vertices as a function of the birth rate. We find that on the random regular and Erdős-Rényi graphs, there is a discontinuous phase transition with a region of bistability, whereas on the heavy-tailed power-law graph, the transition is continuous. The critical birth rate is found to be positive in the former but zero in the latter.
Parallel Algorithms for Switching Edges in Heterogeneous Graphs
Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav
2017-01-01
An edge switch is an operation on a graph (or network) where two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors. PMID:28757680
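The serial building block of the parallel algorithms above is a single edge switch that preserves degrees and keeps the graph simple. A minimal sketch in pure Python (the adjacency-dict representation and the retry limit are illustrative choices):

    import random

    def edge_switches(adj, nswitch, seed=0):
        """Perform up to `nswitch` degree-preserving edge switches on a simple
        undirected graph given as an adjacency dict {v: set(neighbors)}."""
        rng = random.Random(seed)
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        done = tries = 0
        while done < nswitch and tries < 100 * nswitch:
            tries += 1
            i, j = rng.sample(range(len(edges)), 2)
            (a, b), (c, d) = edges[i], edges[j]
            if len({a, b, c, d}) < 4:
                continue                        # would create a self-loop
            if d in adj[a] or b in adj[c]:
                continue                        # would create a parallel edge
            adj[a].discard(b); adj[b].discard(a)
            adj[c].discard(d); adj[d].discard(c)
            adj[a].add(d); adj[d].add(a)
            adj[c].add(b); adj[b].add(c)
            edges[i] = tuple(sorted((a, d)))
            edges[j] = tuple(sorted((c, b)))
            done += 1
        return done

    # usage on a 4-cycle: degrees stay (2, 2, 2, 2) after every switch
    adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
    edge_switches(adj, 10)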
Information Graph Flow: A Geometric Approximation of Quantum and Statistical Systems
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2018-05-01
Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions) we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (we call information graphs) with vertices representing subsystems (e.g., qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs to that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science where the task of graph sparsification is of a central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick
We analyze component evolution in general random intersection graphs (RIGs) and give conditions for the existence and uniqueness of the giant component. Our techniques generalize the existing methods for analysis of component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdős-Rényi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step during the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
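A minimal generator for the binomial random intersection graph model analyzed above: each of n nodes independently selects each of m attributes with probability p, and two nodes are joined whenever they share at least one attribute (pure Python; the parameter values are illustrative assumptions).

    import random
    from itertools import combinations

    def random_intersection_graph(n, m, p, seed=0):
        """Return (attribute sets, edge list) of a binomial random intersection graph."""
        rng = random.Random(seed)
        attrs = [{a for a in range(m) if rng.random() < p} for _ in range(n)]
        edges = [(i, j) for i, j in combinations(range(n), 2) if attrs[i] & attrs[j]]
        return attrs, edges

    attrs, edges = random_intersection_graph(n=500, m=100, p=0.03)
    mean_degree = 2 * len(edges) / 500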
Jack, Darby W; Asante, Kwaku Poku; Wylie, Blair J; Chillrud, Steve N; Whyatt, Robin M; Ae-Ngibise, Kenneth A; Quinn, Ashlinn K; Yawson, Abena Konadu; Boamah, Ellen Abrafi; Agyei, Oscar; Mujtaba, Mohammed; Kaali, Seyram; Kinney, Patrick; Owusu-Agyei, Seth
2015-09-22
Household air pollution exposure is a major health risk, but validated interventions remain elusive. The Ghana Randomized Air Pollution and Health Study (GRAPHS) is a cluster-randomized trial that evaluates the efficacy of clean fuels (liquefied petroleum gas, or LPG) and efficient biomass cookstoves in the Brong-Ahafo region of central Ghana. We recruit pregnant women into LPG, efficient cookstove, and control arms and track birth weight and physician-assessed severe pneumonia incidence in the first year of life. A woman is eligible to participate if she is in the first or second trimester of pregnancy and carrying a live singleton fetus, if she is the primary cook, and if she does not smoke. We hypothesize that babies born to intervention mothers will weigh more and will have fewer cases of physician-assessed severe pneumonia in the first year of life. Additionally, an extensive personal air pollution exposure monitoring effort opens the way for exposure-response analyses, which we will present alongside intention-to-treat analyses. Major funding was provided by the National Institute of Environmental Health Sciences, The Thrasher Research Fund, and the Global Alliance for Clean Cookstoves. Household air pollution exposure is a major health risk that requires well-tested interventions. GRAPHS will provide important new evidence on the efficacy of both efficient biomass cookstoves and LPG, and will thus help inform health and energy policies in developing countries. The trial was registered with clinicaltrials.gov on 13 April 2011 with the identifier NCT01335490 .
A Weighted Configuration Model and Inhomogeneous Epidemics
NASA Astrophysics Data System (ADS)
Britton, Tom; Deijfen, Maria; Liljeros, Fredrik
2011-12-01
A random graph model with prescribed degree distribution and degree dependent edge weights is introduced. Each vertex is independently equipped with a random number of half-edges and each half-edge is assigned an integer valued weight according to a distribution that is allowed to depend on the degree of its vertex. Half-edges with the same weight are then paired randomly to create edges. An expression for the threshold for the appearance of a giant component in the resulting graph is derived using results on multi-type branching processes. The same technique also gives an expression for the basic reproduction number for an epidemic on the graph where the probability that a certain edge is used for transmission is a function of the edge weight (reflecting how closely `connected' the corresponding vertices are). It is demonstrated that, if vertices with large degree tend to have large (small) weights on their edges and if the transmission probability increases with the edge weight, then it is easier (harder) for the epidemic to take off compared to a randomized epidemic with the same degree and weight distribution. A recipe for calculating the probability of a large outbreak in the epidemic and the size of such an outbreak is also given. Finally, the model is fitted to three empirical weighted networks of importance for the spread of contagious diseases and it is shown that R 0 can be substantially over- or underestimated if the correlation between degree and weight is not taken into account.
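A minimal sketch of the weighted configuration model described above: each vertex receives a random number of half-edges, each half-edge gets an integer weight drawn from a distribution that may depend on the vertex degree, and half-edges of equal weight are paired uniformly at random. The degree and weight samplers below are illustrative assumptions, and any unmatched half-edge within a weight class is simply dropped.

    import random
    from collections import defaultdict

    def weighted_configuration_model(n, degree_sampler, weight_sampler, seed=0):
        """degree_sampler(rng) -> degree of a vertex;
        weight_sampler(rng, degree) -> weight of one half-edge of a vertex with that degree.
        Returns a list of weighted edges (u, v, w), pairing half-edges within weight classes."""
        rng = random.Random(seed)
        stubs_by_weight = defaultdict(list)
        for v in range(n):
            d = degree_sampler(rng)
            for _ in range(d):
                stubs_by_weight[weight_sampler(rng, d)].append(v)
        edges = []
        for w, stubs in stubs_by_weight.items():
            rng.shuffle(stubs)
            if len(stubs) % 2:
                stubs.pop()                      # drop one unmatched half-edge in this class
            for k in range(0, len(stubs), 2):
                edges.append((stubs[k], stubs[k + 1], w))
        return edges

    # illustrative choices: degrees uniform on 1..6, half-edge weight increasing with degree
    edges = weighted_configuration_model(
        n=1000,
        degree_sampler=lambda rng: rng.randint(1, 6),
        weight_sampler=lambda rng, d: rng.randint(1, d),
    )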
Graph drawing using tabu search coupled with path relinking.
Dib, Fadi K; Rodgers, Peter
2018-01-01
Graph drawing, or the automatic layout of graphs, is a challenging problem. There are several search based methods for graph drawing which are based on optimizing an objective function which is formed from a weighted sum of multiple criteria. In this paper, we propose a new neighbourhood search method which uses a tabu search coupled with path relinking to optimize such objective functions for general graph layouts with undirected straight lines. To our knowledge, before our work, neither of these methods have been previously used in general multi-criteria graph drawing. Tabu search uses a memory list to speed up searching by avoiding previously tested solutions, while the path relinking method generates new solutions by exploring paths that connect high quality solutions. We use path relinking periodically within the tabu search procedure to speed up the identification of good solutions. We have evaluated our new method against the commonly used neighbourhood search optimization techniques: hill climbing and simulated annealing. Our evaluation examines the quality of the graph layout (objective function's value) and the speed of layout in terms of the number of evaluated solutions required to draw a graph. We also examine the relative scalability of each method. Our experimental results were applied to both random graphs and a real-world dataset. We show that our method outperforms both hill climbing and simulated annealing by producing a better layout in a lower number of evaluated solutions. In addition, we demonstrate that our method has greater scalability as it can layout larger graphs than the state-of-the-art neighbourhood search methods. Finally, we show that similar results can be produced in a real world setting by testing our method against a standard public graph dataset.
Ramos, Thaysa Monteiro; Ramos-Oliveira, Thayanne Monteiro; Moretto, Simone Gonçalves; de Freitas, Patricia Moreira; Esteves-Oliveira, Marcella; de Paula Eduardo, Carlos
2014-03-01
The aim of this in vitro study was to evaluate the effect of different surface treatments (control, diamond bur, erbium-doped yttrium aluminum garnet (Er:YAG) laser, and erbium, chromium:yttrium-scandium-gallium-garnet (Er,Cr:YSGG) laser) on sound dentin surface morphology and on microtensile bond strength (μTBS). Sixteen dentin fragments were randomly divided into four groups (n = 4), and different surface treatments were analyzed by scanning electron microscopy. Ninety-six third molars were randomly divided into eight groups (n = 12) according to type of surface treatment and adhesive system: G1 = Control + Clearfil SE Bond (SE); G2 = Control + Single Bond (SB); G3 = diamond bur (DB) + SE; G4 = DB + SB, G5 = Er:YAG laser (2.94 μm, 60 mJ, 2 Hz, 0.12 W, 19.3 J/cm(2)) + SE; G6 = Er:YAG + SB, G7 = Er,Cr:YSGG laser (2.78 μm, 50 mJ, 30 Hz, 1.5 W, 4.5 J/cm(2)) + SE; and G8 = Er,Cr:YSGG + SB. Composite blocks were bonded to the samples, and after 24-h storage in distilled/deionized water (37 °C), stick-shaped samples were obtained and submitted to μTBS test. Bond strength values (in megapascal) were analyzed by two-way ANOVA and Tukey tests (α = 0.05). G1 (54.69 ± 7.8 MPa) showed the highest mean, which was statistically significantly higher than all the other groups (p < 0.05). For all treatments, SE showed higher bond strength than SB, except only for Er,Cr:YSGG treatment, in which the systems did not differ statistically from each other. Based on the irradiation parameters considered in this study, it can be concluded that Er:YAG and Er,Cr:YSGG irradiation presented lower values than the control group; however, their association with self-etching adhesive does not have a significantly negative effect on sound dentin (μTBS values of >20 MPa).
Combinatorial Statistics on Trees and Networks
2010-09-29
interaction graph is drawn from the Erdős–Rényi model G(n,p), where each edge is present independently with probability p. For this model we establish a double...special interest is the behavior of Gibbs sampling on the Erdős–Rényi random graph G(n, d/n), where each edge is chosen independently with...which have no counterparts in the coloring setting. Our proof presented here exploits in novel ways the local treelike structure of Erdős–Rényi
ERIC Educational Resources Information Center
Miller, Gloria I.; Jaciw, Andrew; Hoshiko, Brandon; Wei, Xin
2007-01-01
Texas Instruments has undertaken a research program with the goal of producing scientifically-based evidence of the effectiveness of graphing calculators and the "TI-Navigator"[TM] classroom networking system in the context of a professional development and curriculum framework. The program includes a two-year longitudinal study. The…
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about certain property of an object that can be present if and only if it adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure), however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to “efficiently” solve the graph 3-coloring problem; an NP-complete problem. The proposed method provides certificates (proofs) in both cases: present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient) however parametric. The only requirement is sufficient computational power, which is controlled by the parameter . Nevertheless, here it is proved that the probability of requiring a value of to obtain a solution for a random graph decreases exponentially: , making tractable almost all problem instances. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
Webster, Lynn R.; Lawler, John; Lindhardt, Karsten; Dayno, Jeffrey M.
2017-01-01
Objective. To compare the relative human abuse potential of intact and manipulated morphine abuse-deterrent, extended-release injection-molded tablets (morphine-ADER-IMT) with that of marketed morphine sulfate ER tablets. Methods. This randomized, double-blind, triple-dummy, active- and placebo-controlled, 4-way crossover, single-center study included adult volunteers who were experienced, nondependent, recreational opioid users. Participants were randomized 1:1:1:1 to placebo, morphine-ADER-IMT (60 mg, intact), morphine-ADER-IMT (60 mg, manipulated), and morphine ER (60 mg, manipulated) and received 1 dose of each oral agent in crossover fashion, separated by ≥5 days. Pharmacodynamic and pharmacokinetic endpoints were assessed, including the primary endpoint of peak effect of Drug Liking (Emax) via Drug Liking Visual Analog Scale (VAS) score and the secondary endpoints of time to Emax (TEmax) and mean abuse quotient (AQ; a pharmacokinetic parameter associated with drug liking). Results. Thirty-eight participants completed the study. Median Drug Liking VAS Emax was significantly lower after treatment with manipulated morphine-ADER-IMT (67) compared with manipulated morphine ER (74; P = 0.007). TEmax was significantly shorter after treatment with manipulated morphine ER compared with intact (P < 0.0001) or manipulated (P = 0.004) morphine-ADER-IMT. Mean AQ was lower after treatment with intact (5.7) or manipulated (16.4) morphine-ADER-IMT compared with manipulated morphine ER (45.9). Conclusions. Manipulated morphine-ADER-IMT demonstrated significantly lower Drug Liking Emax compared with manipulated morphine ER when administered orally. Morphine-ADER-IMT would be an important new AD, ER morphine product with lower potential for unintentional misuse by chewing or intentional manipulation for oral abuse than currently available non-AD morphine ER products. PMID:27633773
DISTRIBUTION OF MAGNETIC BIPOLES ON THE SUN OVER THREE SOLAR CYCLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tlatov, Andrey G.; Vasil'eva, Valerya V.; Pevtsov, Alexei A., E-mail: tlatov@mail.r, E-mail: apevtsov@nso.ed
We employ synoptic full disk longitudinal magnetograms to study latitudinal distribution and orientation (tilt) of magnetic bipoles in the course of sunspot activity during cycles 21, 22, and 23. The data set includes daily observations from the National Solar Observatory at Kitt Peak (1975-2002) and Michelson Doppler Imager on board the Solar and Heliospheric Observatory (MDI/SOHO, 1996-2009). Bipole pairs were selected on the basis of proximity and flux balance of two neighboring flux elements of opposite polarity. Using the area of the bipoles, we have separated them into small quiet-Sun bipoles (QSBs), ephemeral regions (ERs), and active regions (ARs). We find that in their orientation, ERs and ARs follow Hale-Nicholson polarity rule. As expected, AR tilts follow Joy's law. ERs, however, show significantly larger tilts of opposite sign for a given hemisphere. QSBs are randomly oriented. Unlike ARs, ERs also show a preference in their orientation depending on the polarity of the large-scale magnetic field. These orientation properties may indicate that some ERs may form at or near the photosphere via the random encounter of opposite polarity elements, while others may originate in the convection zone at about the same location as ARs. The combined latitudinal distribution of ERs and ARs exhibits a clear presence of Spoerer's butterfly diagram (equatorward drift in the course of a solar cycle). ERs extend the ARs' 'wing' of the butterfly diagram to higher latitudes. This high latitude extension of ERs suggests an extended solar cycle with the first magnetic elements of the next cycle developing shortly after the maximum of the previous cycle. The polarity orientation and tilt of ERs may suggest the presence of poloidal fields of two configurations (new cycle and old cycle) in the convection zone at the declining phase of the sunspot cycle.
Detecting false positives in multielement designs: implications for brief assessments.
Bartlett, Sara M; Rapp, John T; Henrickson, Marissa L
2011-11-01
The authors assessed the extent to which multielement designs produced false positives using continuous duration recording (CDR) and interval recording with 10-s and 1-min interval sizes. Specifically, they created 6,000 graphs with multielement designs that varied in the number of data paths, and the number of data points per data path, using a random number generator. In Experiment 1, the authors visually analyzed the graphs for the occurrence of false positives. Results indicated that graphs depicting only two sessions for each condition (e.g., a control condition plotted with multiple test conditions) produced the highest percentage of false positives for CDR and interval recording with 10-s and 1-min intervals. Conversely, graphs with four or five sessions for each condition produced the lowest percentage of false positives for each method. In Experiment 2, they applied two new rules, which were intended to decrease false positives, to each graph that depicted a false positive in Experiment 1. Results showed that application of new rules decreased false positives to less than 5% for all of the graphs except for those with two data paths and two data points per data path. Implications for brief assessments are discussed.
Feedback topology and XOR-dynamics in Boolean networks with varying input structure
NASA Astrophysics Data System (ADS)
Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.
2009-08-01
We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ . We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with a XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ , this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
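The parallel XOR update described above lends itself to a very small simulation. The following Python sketch (not the authors' code; the network size, in-degree k and γ value are illustrative assumptions) builds a fixed in-degree network in which only a fraction γ of nodes receive inputs, iterates the synchronous XOR rule, and reports the period of the attractor it reaches.

```python
import numpy as np

def xor_network_period(n=200, k=2, gamma=0.7, t_max=10_000, seed=0):
    """Toy parallel-XOR Boolean network: a fraction `gamma` of nodes each
    receives k random inputs; the remaining nodes are frozen constants."""
    rng = np.random.default_rng(seed)
    receives_input = rng.random(n) < gamma
    inputs = [rng.choice(n, size=k, replace=False) if r else None
              for r in receives_input]
    state = rng.integers(0, 2, size=n)

    seen = {}
    for t in range(t_max):
        key = state.tobytes()
        if key in seen:                 # configuration revisited
            return t - seen[key]        # -> period of the attractor
        seen[key] = t
        new_state = state.copy()
        for i, inp in enumerate(inputs):
            if inp is not None:         # XOR of the k inputs
                new_state[i] = state[inp].sum() % 2
        state = new_state
    return None                         # no cycle found within t_max

if __name__ == "__main__":
    print(xor_network_period())
```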
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, cardinality of matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200× speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs where our algorithms show good scaling on up to 16,384 cores.
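For readers who want a concrete baseline, the sketch below implements the ordinary sequential greedy maximal matching that the matrix-algebraic algorithms parallelize. It is a minimal Python/NetworkX illustration, not the authors' distributed-memory implementation, and the random bipartite test graph is an assumption.

```python
import networkx as nx
from networkx.algorithms import bipartite

def greedy_maximal_matching(B, left):
    """Sequential greedy maximal (not maximum) matching: each unmatched
    left vertex grabs the first free neighbour it sees."""
    match_of = {}                        # vertex -> partner
    for u in left:
        for v in B.neighbors(u):
            if v not in match_of:
                match_of[u] = v
                match_of[v] = u
                break
    return {u: v for u, v in match_of.items() if u in left}

if __name__ == "__main__":
    # random bipartite graph: 100 + 100 vertices, edge probability 0.05
    B = bipartite.random_graph(100, 100, 0.05, seed=1)
    left = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
    print("matched pairs:", len(greedy_maximal_matching(B, left)))
```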
A general stochastic model for studying time evolution of transition networks
NASA Astrophysics Data System (ADS)
Zhan, Choujun; Tse, Chi K.; Small, Michael
2016-12-01
We consider a class of complex networks whose nodes assume one of several possible states at any time and may change their states from time to time. Such networks represent practical networks of rumor spreading, disease spreading, language evolution, and so on. Here, we derive a model describing the dynamics of this kind of network and a simulation algorithm for studying the network evolutionary behavior. This model, derived at a microscopic level, can reveal the transition dynamics of every node. A numerical simulation is taken as an 'experiment' or 'realization' of the model. We use this model to study the disease propagation dynamics in four different prototypical networks, namely, the regular nearest-neighbor (RN) network, the classical Erdős-Rényi (ER) random graph, the Watts-Strogatz small-world (SW) network, and the Barabási-Albert (BA) scale-free network. We find that the disease propagation dynamics in these four networks generally have different properties but they do share some common features. Furthermore, we utilize the transition network model to predict user growth in the Facebook network. Simulation shows that our model agrees with the historical data. The study can provide a useful tool for a more thorough understanding of the dynamics of networks.
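A minimal way to reproduce the qualitative comparison across the four prototypical topologies is a discrete-time susceptible-infected sweep; the sketch below (illustrative parameters, not the paper's microscopic model) uses NetworkX generators for the RN, ER, SW, and BA graphs.

```python
import random
import networkx as nx

def si_prevalence(G, beta=0.05, steps=50, seed=7):
    """Discrete-time SI spread: each infected node infects each susceptible
    neighbour with probability beta per step; returns prevalence history."""
    rng = random.Random(seed)
    infected = {rng.choice(list(G))}
    history = []
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
        history.append(len(infected) / G.number_of_nodes())
    return history

if __name__ == "__main__":
    n, k = 1000, 6
    nets = {
        "RN": nx.watts_strogatz_graph(n, k, 0.0, seed=1),   # regular ring
        "ER": nx.gnp_random_graph(n, k / n, seed=1),
        "SW": nx.watts_strogatz_graph(n, k, 0.1, seed=1),
        "BA": nx.barabasi_albert_graph(n, k // 2, seed=1),
    }
    for name, G in nets.items():
        print(name, round(si_prevalence(G)[-1], 3))
```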
Spierings, Egilius L H; Volkerts, Edmund R; Heitland, Ivo; Thomson, Heather
2014-02-01
The maximum plasma concentration (Cmax ) of oxymorphone extended release (ER) 20 mg and 40 mg is approximately 50% higher in fed than in fasted subjects, with most of the difference in area-under-the-curve (AUC) occurring in the first 4 hours post-dose. Hence, the US FDA recommends in the approved labeling that oxymorphone ER is taken at least 1 hour before or 2 hours after eating. In order to determine the potential impact on cognitive performance of the increased absorption of oxymorphone ER, fed versus fasting, we conducted a randomized, rater-blinded, crossover study in 30 opioid-tolerant subjects, using tests from the Cambridge Neuropsychological Test Automated Battery (CANTAB). The subjects randomly received 40 mg oxymorphone ER after a high-fat meal of approximately 1,010 kCal or after fasting for 8-12 hours, and were tested 1 hour and 3 hours post-dose. The CANTAB tests, Spatial Recognition Memory (SRM) and Spatial Working Memory (SWM), showed no statistically significant differences between the fed and fasting conditions. However, sustained attention, as measured by the Rapid Visual Information Processing (RVP) CANTAB test, showed a statistically significant interaction of fed versus fasting and post-dose time of testing (F[1,28] = 6.88, P = 0.01), suggesting that 40 mg oxymorphone ER after a high-fat meal versus fasting mitigates the learning effect in this particular cognition domain from 1 hour to 3 hours post-dose. Oxymorphone 40 mg ER affected cognitive performance similarly within 3 hours post-dose, whether given on an empty stomach or after a high-fat meal, suggesting that the effect of food on plasma concentration may not be relevant in the medication's impact on cognition. Wiley Periodicals, Inc.
Cengiz, Esra; Yilmaz, Hasan Guney
2016-03-01
The purpose of this randomized clinical study was to evaluate the efficiency of erbium, chromium-doped:yttrium, scandium, gallium, and garnet (Er,Cr:YSGG) laser irradiation combined with a resin-based tricalcium silicate material and calcium hydroxide in direct pulp capping for a 6-month follow-up period. A total of 60 teeth of 60 patients between the ages of 18 and 41 years were recruited for this study. Sixty permanent vital teeth without symptoms and radiographic changes were randomly assigned to the following 4 groups (n = 15): Gr CH, the exposed area was sealed with calcium hydroxide (CH) paste; Gr laser CH, the treated area was sealed with CH paste after Er,Cr:YSGG laser irradiation at an energy level of 0.5 W without water and with 45% air; Gr TheraCal, TheraCal LC (Bisco, Schaumburg, IL) was applied directly to the exposed pulp; and Gr Laser TheraCal, TheraCal LC was applied after irradiation with an Er,Cr:YSGG laser. At the 1-week and 1-, 3-, and 6-month recall examinations, the loss of vitality, spontaneous pain, reactions to thermal stimuli and percussion, and radiographic changes were considered as failure. The success rates in the CH and TheraCal groups were 73.3% and 66.6%, respectively. These rates did not reveal any significant difference. In both laser groups, success rates were 100%. The Er,Cr:YSGG laser-irradiated TheraCal and Er,Cr:YSGG laser-irradiated CH groups showed statistically higher success rates than the TheraCal and CH groups, respectively. Er,Cr:YSGG laser irradiation at 0.5 W without water combined with pulp capping agents can be recommended for direct pulp therapy. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Modeling and optimization of Quality of Service routing in Mobile Ad hoc Networks
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Fatemidokht, Hamideh; Balas, Valentina Emilia
2016-01-01
Mobile ad hoc networks (MANETs) are a group of mobile nodes that are connected without using a fixed infrastructure. In these networks, nodes communicate with each other by forming a single-hop or multi-hop network. To design effective mobile ad hoc networks, it is important to evaluate the performance of multi-hop paths. In this paper, we present a mathematical model for a routing protocol under energy consumption and packet delivery ratio of multi-hop paths. In this model, we use geometric random graphs rather than random graphs. Our proposed model finds effective paths that minimize the energy consumption and maximize the packet delivery ratio of the network. Validation of the mathematical model is performed through simulation.
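As a toy illustration of using a geometric random graph to evaluate multi-hop paths, the following sketch builds such a graph with NetworkX and extracts a minimum-energy route, with transmission energy approximated as distance raised to a path-loss exponent alpha. The radius, alpha and node count are assumptions, not values from the paper.

```python
import math
import networkx as nx

def energy_aware_path(n=200, radius=0.12, alpha=2, seed=3):
    """Random geometric graph as a toy MANET; edge cost ~ distance**alpha
    is used as a crude proxy for transmission energy."""
    G = nx.random_geometric_graph(n, radius, seed=seed)
    pos = nx.get_node_attributes(G, "pos")
    for u, v in G.edges():
        G[u][v]["energy"] = math.dist(pos[u], pos[v]) ** alpha
    src, dst = 0, n - 1
    if nx.has_path(G, src, dst):
        path = nx.shortest_path(G, src, dst, weight="energy")
        cost = nx.path_weight(G, path, weight="energy")
        return path, cost
    return None, math.inf

if __name__ == "__main__":
    path, cost = energy_aware_path()
    print("hops:", len(path) - 1 if path else 0, "energy:", cost)
```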
Dynamics of tax evasion through an epidemic-like model
NASA Astrophysics Data System (ADS)
Brum, Rafael M.; Crokidakis, Nuno
In this work, we study a model of tax evasion. We considered a fixed population divided in three compartments, namely honest tax payers, tax evaders and a third class between the mentioned two, which we call susceptibles to become evaders. The transitions among those compartments are ruled by probabilities, similarly to a model of epidemic spreading. These probabilities model social interactions among the individuals, as well as the government’s fiscalization. We simulate the model on fully-connected graphs, as well as on scale-free and random complex networks. For the fully-connected and random graph cases, we observe that the emergence of tax evaders in the population is associated with an active-absorbing nonequilibrium phase transition, that is absent in scale-free networks.
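The compartment dynamics can be mocked up with a few lines of Monte Carlo; in the sketch below the state labels and transition probabilities are illustrative assumptions rather than the authors' parameterization.

```python
import random
import networkx as nx

def tax_evasion_mc(G, p_tempt=0.3, p_evade=0.2, p_audit=0.05,
                   steps=200, seed=11):
    """Toy epidemic-like dynamics with states H (honest), S (susceptible),
    E (evader): contact with an evader tempts honest agents, auditing
    turns evaders honest again."""
    rng = random.Random(seed)
    state = {v: "H" for v in G}
    for v in rng.sample(list(G), max(1, len(G) // 100)):
        state[v] = "E"                      # small initial seed of evaders
    for _ in range(steps):
        nxt = dict(state)
        for v in G:
            nbr_evader = any(state[u] == "E" for u in G.neighbors(v))
            if state[v] == "H" and nbr_evader and rng.random() < p_tempt:
                nxt[v] = "S"
            elif state[v] == "S" and rng.random() < p_evade:
                nxt[v] = "E"
            elif state[v] == "E" and rng.random() < p_audit:
                nxt[v] = "H"
        state = nxt
    return sum(s == "E" for s in state.values()) / len(state)

if __name__ == "__main__":
    print(tax_evasion_mc(nx.gnp_random_graph(2000, 0.004, seed=1)))
```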
Vertices cannot be hidden from quantum spatial search for almost all random graphs
NASA Astrophysics Data System (ADS)
Glos, Adam; Krawiec, Aleksandra; Kukulski, Ryszard; Puchała, Zbigniew
2018-04-01
In this paper, we show that all nodes can be found optimally for almost all random Erdős-Rényi G(n,p) graphs using the continuous-time quantum spatial search procedure. This works for both adjacency and Laplacian matrices, though under different conditions. The first one requires p = ω(log^8(n)/n), while the second requires p ≥ (1+ε)log(n)/n, where ε > 0. The proof was made by analyzing the convergence of eigenvectors corresponding to outlying eigenvalues in the ‖·‖∞ norm. At the same time, for p < (1-ε)log(n)/n, the property does not hold for any matrix, due to the connectivity issues. Hence, our derivation concerning the Laplacian matrix is tight.
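The p ≥ (1+ε)log(n)/n versus p < (1-ε)log(n)/n boundary quoted above is the classical connectivity threshold of G(n,p); a quick numerical check of that regime change (of connectivity only, not of the quantum search itself) can be done as follows, with n, ε and the number of trials chosen for illustration.

```python
import math
import networkx as nx

def connectivity_rate(n=500, eps=0.2, trials=50, seed=0):
    """Fraction of G(n, p) samples that are connected just above and just
    below the p = log(n)/n threshold mentioned in the abstract."""
    rates = {}
    for label, factor in [("above", 1 + eps), ("below", 1 - eps)]:
        p = factor * math.log(n) / n
        hits = sum(
            nx.is_connected(nx.gnp_random_graph(n, p, seed=seed + t))
            for t in range(trials)
        )
        rates[label] = hits / trials
    return rates

if __name__ == "__main__":
    # 'above' should be close to 1, 'below' noticeably smaller
    print(connectivity_rate())
```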
Loi, Sherene; Dafni, Urania; Karlis, Dimitris; Polydoropoulou, Varvara; Young, Brandon M; Willis, Scooter; Long, Bradley; de Azambuja, Evandro; Sotiriou, Christos; Viale, Giuseppe; Rüschoff, Josef; Piccart, Martine J; Dowsett, Mitch; Michiels, Stefan; Leyland-Jones, Brian
2016-08-01
A number of studies suggest that response to antihuman epidermal growth factor receptor-2 (currently known as ERBB2, but referred to as HER2 in this study) agents differs by estrogen receptor (ER) level status. The clinical relevance of this is unknown. To determine the magnitude of trastuzumab benefit according to quantitative levels of ER and HER2 in the HERceptin Adjuvant (HERA) trial. The HERA trial was an international, multicenter, randomized trial that included 5099 patients with early-stage HER2-positive breast cancer, randomized between 2001 and 2005 to receive either no trastuzumab or trastuzumab, after adjuvant chemotherapy. This is a secondary analysis of the HERA study. Local ER immunohistochemical (IHC) analyses, HER2 fluorescence in situ hybridization (FISH) ratio, and copy number results were available for 3037 patients (59.6%) randomized to observation and trastuzumab (1 or 2 years) (cohort 1). Transcript levels of ESR1 and HER2 genes were available for 615 patients (12.1%) (cohort 2). Patients were randomized to receive either no trastuzumab or 1 year vs 2 years of trastuzumab. Endocrine therapy was given to patients with hormone receptor-positive disease as per local guidelines. Disease-free survival (DFS) and overall survival (OS) were the primary and secondary end points in the intent-to-treat population (ITT). Analyses adjusting for crossover (censored and inverse probability weighted [IPW]) were also performed. Interactions among treatment, ER status, and HER2 amplification using predefined cutoffs were assessed in Cox proportional hazards regression models. Median follow-up time was 8 years. Levels of FISH and HER2 copy numbers were significantly higher in ER-negative patients (P < .001). In cohort 1, for DFS and OS, a significant treatment effect was found for all ER, IHC, and FISH levels, except for the ER-positive/HER2 low FISH ratio (≥2 to <5) group (DFS: 3-way ITT P value for interaction = .07; censored = .02; IPW = .03; OS ITT P value for interaction = .007; censored = .04; IPW = .03). In cohort 2, consistent with cohort 1, a significant predictive effect of the ESR1 gene for both end points was also observed (DFS P value for interaction = .06; OS = .02), indicating that breast cancers with higher ESR1 levels also derive less benefit from trastuzumab. Patients with HER2-positive breast cancers that are ER-positive by IHC analyses with low FISH ratio (≥2 to <5), or with higher ESR1 levels derive significantly less benefit from adjuvant trastuzumab after chemotherapy. These data may explain heterogeneity in response to anti-HER2 agents in HER2-positive, ER-positive breast cancers as some may be more luminal-like than HER2 driven. clinicaltrials.gov Identifier: NCT00045032.
Pantaleon, Carmela; Iverson, Matthew; Smith, Michael D.; Kinzler, Eric R.; Aigner, Stefan
2018-01-01
Objective To investigate the pharmacokinetics (PK) of Morphine ARER, an extended-release (ER), abuse-deterrent formulation of morphine sulfate after oral and intranasal administration. Methods This randomized, double-blind, double-dummy, placebo-controlled, four-way crossover study assessed the PK of morphine and its active metabolite, M6G, from crushed intranasal Morphine ARER and intact oral Morphine ARER compared with crushed intranasal ER morphine following administration to nondependent, recreational opioid users. The correlation between morphine PK and the pharmacodynamic parameter of drug liking, a measure of abuse potential, was also evaluated. Results Mean maximum observed plasma concentration (Cmax) for morphine was lower with crushed intranasal Morphine ARER (26.2 ng/mL) and intact oral Morphine ARER (18.6 ng/mL), compared with crushed intranasal ER morphine (49.5 ng/mL). The time to Cmax (Tmax) was the same for intact oral and crushed intranasal Morphine ARER (1.6 hours) and shorter for crushed intranasal morphine ER (1.1 hours). Higher mean maximum morphine Cmax, Tmax, and abuse quotient (Cmax/Tmax) were positively correlated with maximum effect for drug liking (R2 ≥ 0.9795). Conclusion These data suggest that Morphine ARER maintains its ER profile despite physical manipulation and intranasal administration, which may be predictive of a lower intranasal abuse potential compared with ER morphine.
Finding the Optimal Nets for Self-Folding Kirigami
NASA Astrophysics Data System (ADS)
Araújo, N. A. M.; da Costa, R. A.; Dorogovtsev, S. N.; Mendes, J. F. F.
2018-05-01
Three-dimensional shells can be synthesized from the spontaneous self-folding of two-dimensional templates of interconnected panels, called nets. However, some nets are more likely to self-fold into the desired shell under random movements. The optimal nets are the ones that maximize the number of vertex connections, i.e., vertices that have only two of its faces cut away from each other in the net. Previous methods for finding such nets are based on random search, and thus, they do not guarantee the optimal solution. Here, we propose a deterministic procedure. We map the connectivity of the shell into a shell graph, where the nodes and links of the graph represent the vertices and edges of the shell, respectively. Identifying the nets that maximize the number of vertex connections corresponds to finding the set of maximum leaf spanning trees of the shell graph. This method allows us not only to design the self-assembly of much larger shell structures but also to apply additional design criteria, as a complete catalog of the maximum leaf spanning trees is obtained.
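For very small shells, the maximum leaf spanning trees can simply be enumerated by brute force. The sketch below does this for the cube graph with NetworkX and is meant only to illustrate the objective (maximize the number of degree-1 nodes), not the deterministic procedure of the paper, which scales to much larger shells.

```python
from itertools import combinations
import networkx as nx

def max_leaf_spanning_trees(G):
    """Brute-force all spanning trees of a small shell graph and keep those
    with the maximum number of leaves (degree-1 nodes)."""
    n = G.number_of_nodes()
    best, best_leaves = [], -1
    for edge_subset in combinations(G.edges(), n - 1):
        T = nx.Graph()
        T.add_edges_from(edge_subset)
        if T.number_of_nodes() == n and nx.is_tree(T):
            leaves = sum(1 for _, d in T.degree() if d == 1)
            if leaves > best_leaves:
                best, best_leaves = [T], leaves
            elif leaves == best_leaves:
                best.append(T)
    return best_leaves, best

if __name__ == "__main__":
    cube = nx.hypercube_graph(3)            # shell graph of the cube
    leaves, trees = max_leaf_spanning_trees(cube)
    print("max leaves:", leaves, "optimal nets:", len(trees))
```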
Understanding spatial connectivity of individuals with non-uniform population density.
Wang, Pu; González, Marta C
2009-08-28
We construct a two-dimensional geometric graph connecting individuals placed in space within a given contact distance. The individuals are distributed using a measured country's density of population. We observe that while large clusters (group of individuals connected) emerge within some regions, they are trapped in detached urban areas owing to the low population density of the regions bordering them. To understand the emergence of a giant cluster that connects the entire population, we compare the empirical geometric graph with the one generated by placing the same number of individuals randomly in space. We find that, for small contact distances, the empirical distribution of population dominates the growth of connected components, but no critical percolation transition is observed in contrast to the graph generated by a random distribution of population. Our results show that contact distances from real-world situations as for WIFI and Bluetooth connections drop in a zone where a fully connected cluster is not observed, hinting that human mobility must play a crucial role in contact-based diseases and wireless viruses' large-scale spreading.
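A rough numerical analogue of the comparison made above contrasts the largest-cluster fraction of a geometric contact graph built on clustered points with one built on uniformly random points; the clustered "town" surrogate below is an assumption standing in for the measured population density, and the contact distances are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def giant_fraction(points, r):
    """Fraction of individuals in the largest cluster of the geometric
    graph linking any two points closer than contact distance r."""
    tree = cKDTree(points)
    pairs = np.array(list(tree.query_pairs(r)))
    n = len(points)
    if len(pairs) == 0:
        return 1.0 / n
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels).max() / n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    uniform = rng.random((n, 2))
    # crude "urban" surrogate: points clustered around a few town centres
    centres = rng.random((10, 2))
    towns = centres[rng.integers(0, 10, n)] + 0.02 * rng.standard_normal((n, 2))
    for r in (0.01, 0.02, 0.04):
        print(r, round(giant_fraction(uniform, r), 2),
              round(giant_fraction(towns, r), 2))
```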
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
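A generic geometric random-walk kernel between two adjacency matrices can be written in a few lines of NumPy. This is a plain direct-product-graph kernel for illustration, not the localized GRAPE construction of the paper, and the decay parameter lam is an assumption (it must keep the geometric series convergent).

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Geometric random-walk kernel between two graphs given by adjacency
    matrices A1, A2: counts common walks via the direct-product graph."""
    Ax = np.kron(A1, A2)                   # direct (tensor) product graph
    n = Ax.shape[0]
    # sum over walks of all lengths, length-k walks weighted by lam**k
    M = np.linalg.inv(np.eye(n) - lam * Ax)
    return M.sum()

if __name__ == "__main__":
    triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
    path3    = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    print(random_walk_kernel(triangle, triangle),
          random_walk_kernel(triangle, path3))
```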
NASA Astrophysics Data System (ADS)
Van Mieghem, P.; van de Bovenkamp, R.
2013-03-01
Most studies on susceptible-infected-susceptible epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterize and precisely compute the epidemic threshold in susceptible-infected-susceptible Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs and lattices, the average steady-state fraction of infected nodes is simulated from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold significantly increases with the power exponents α. Hence, real epidemics that violate the exponential or Markovian assumption can behave seriously differently than anticipated based on Markov theory.
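A crude event-driven sketch of SIS dynamics with Weibull infection times and exponential curing is given below. It schedules one infection attempt per neighbour per infection episode and discards stale events, which is a simplification of the model simulated in the paper, and all parameter values are illustrative.

```python
import heapq
import random
import networkx as nx

def sis_weibull(G, alpha=2.0, scale=1.0, cure_rate=1.0, t_end=50.0, seed=0):
    """Event-driven SIS with Weibull-distributed infection attempt times
    (shape alpha) and exponential curing; returns the infected fraction at
    the end of the run. Crude sketch: one attempt per neighbour per episode."""
    rng = random.Random(seed)
    infected = {next(iter(G))}
    version = {v: 0 for v in G}            # bump to invalidate old events
    events = []                            # (time, kind, node, target, ver)

    def schedule(u, t):
        heapq.heappush(events, (t + rng.expovariate(cure_rate),
                                "cure", u, None, version[u]))
        for v in G.neighbors(u):
            dt = rng.weibullvariate(scale, alpha)   # (scale, shape)
            heapq.heappush(events, (t + dt, "infect", u, v, version[u]))

    schedule(next(iter(infected)), 0.0)
    while events:
        t, kind, u, v, ver = heapq.heappop(events)
        if t > t_end:
            break
        if ver != version[u] or u not in infected:
            continue                       # event from an old infection episode
        if kind == "cure":
            infected.discard(u)
            version[u] += 1
        elif v not in infected:
            infected.add(v)
            version[v] += 1
            schedule(v, t)
    return len(infected) / G.number_of_nodes()

if __name__ == "__main__":
    G = nx.gnp_random_graph(500, 8 / 500, seed=1)
    for a in (0.5, 1.0, 2.0):              # alpha = 1 recovers the Markov case
        print(a, sis_weibull(G, alpha=a))
```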
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived, an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavare (1996), computational estimation of expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus depending on a mutation-selection-recombination balance.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of numbers of links outgoing any node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of connectivity degree distribution that can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the procedure of visibility graph that, connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive for detecting wide "depressions" in input time series.
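The connectivity series studied above comes directly from the natural visibility criterion; a minimal (unoptimized) construction is sketched below, with a Gaussian test series standing in for the Ito-generated data.

```python
import numpy as np

def visibility_degrees(x):
    """Natural visibility graph of a time series: nodes i, j are linked if
    the straight line between (i, x[i]) and (j, x[j]) stays above every
    intermediate sample; returns the connectivity (degree) series."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
            if len(k) == 0 or np.all(x[k] < line):
                deg[i] += 1
                deg[j] += 1
    return deg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = rng.standard_normal(300)
    print(visibility_degrees(series)[:10])
```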
Krohn-Dale, Ivar; Bøe, Olav E; Enersen, Morten; Leknes, Knut N
2012-08-01
The objective of this randomized, controlled clinical trial was to compare the clinical and microbiological effects of pocket debridement using erbium-doped: yttrium, aluminium and garnet (Er:YAG) laser with conventional debridement in maintenance patients. Fifteen patients, all smokers, having at least four teeth with residual probing depth (PD) ≥ 5 mm were recruited. Two pockets in two jaw quadrants were randomly assigned to subgingival debridement using an Er:YAG laser (test) or ultrasonic scaler/curette (control) at 3-month intervals. Relative attachment level (RAL), PD, bleeding on probing and dental plaque were recorded at baseline and at 6 and 12 months. Microbiological subgingival samples were taken at the same time points and analysed using a checkerboard DNA-DNA hybridization technique. A significant decrease in PD took place in both treatments from baseline to 12 months (p < 0.01). In the control, the mean initial PD decreased from 5.4 to 4.0 mm at 12 months. For the test, a similar decrease occurred. No significant between-treatment differences were shown at any time point. The mean RAL showed no overall significant inter- or intra-treatment differences (p > 0.05). No significant between-treatment differences were observed in subgingival microbiological composition or total pathogens. The results failed to support that an Er:YAG laser may be superior to conventional debridement in the treatment of smokers with recurring chronic inflammation. This appears to be the first time that repeated Er-YAG laser instrumentation has been compared with mechanical instrumentation of periodontal sites with recurring chronic inflammation over a clinically relevant time period. © 2012 John Wiley & Sons A/S.
Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph
2014-07-01
distribution of the random walk. This process can also be applied to other models, incomplete graphs, or to multiple dimensions. An advantage of this...since any multiple of an eigenvector remains an eigenvector. Without any loss, let b_k = 1. Now we can ascertain the explicit solution for b_j when k < j...this bound is valid for all initial probability distributions. However, without detailed information about the eigenvectors, we cannot extract more
The hypergraph regularity method and its applications
Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.
2005-01-01
Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821
Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph
NASA Astrophysics Data System (ADS)
Xue, Xiaofeng
2017-11-01
In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices through independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ on each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
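A small discrete-time approximation of this weighted SIR dynamics is easy to simulate and can be compared against the law-of-large-numbers limit. In the sketch below the weight distribution (exponential), time step, and rates are illustrative assumptions, not choices made in the paper.

```python
import numpy as np
import networkx as nx

def weighted_sir(n=2000, p=0.004, lam=0.4, mu=1.0, theta=0.01,
                 dt=0.05, t_end=30.0, seed=0):
    """Discrete-time sketch of SIR with i.i.d. vertex weights rho on G(n, p):
    an infective u infects a susceptible neighbour v at rate lam*rho_u*rho_v
    and becomes removed at rate mu (probabilities approximated as rate*dt)."""
    rng = np.random.default_rng(seed)
    G = nx.gnp_random_graph(n, p, seed=seed)
    rho = rng.exponential(1.0, n)                    # positive vertex weights
    state = np.where(rng.random(n) < theta, 1, 0)    # 0 = S, 1 = I, 2 = R
    t = 0.0
    while t < t_end and (state == 1).any():
        new_state = state.copy()
        for u in np.flatnonzero(state == 1):
            for v in G.neighbors(u):
                if state[v] == 0 and rng.random() < lam * rho[u] * rho[v] * dt:
                    new_state[v] = 1
            if rng.random() < mu * dt:
                new_state[u] = 2
        state = new_state
        t += dt
    return np.mean(state == 0), np.mean(state == 2)  # final S and R fractions

if __name__ == "__main__":
    print(weighted_sir())
```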
SpectralNET--an application for spectral graph analysis and visualization.
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-10-19
Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.
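The Laplacian-eigenvector view mentioned above can be reproduced with standard tools; the sketch below computes a two-dimensional spectral embedding from the combinatorial Laplacian with NumPy/NetworkX. It is only a minimal stand-in for SpectralNET's richer per-component reporting.

```python
import numpy as np
import networkx as nx

def laplacian_layout(G):
    """Spectral embedding in the spirit of the SpectralNET visualization:
    place nodes using the Laplacian eigenvectors belonging to the two
    smallest nonzero eigenvalues (graph assumed connected)."""
    nodes = list(G)
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    vals, vecs = np.linalg.eigh(L)
    coords = vecs[:, 1:3]                 # skip the trivial constant vector
    return dict(zip(nodes, coords))

if __name__ == "__main__":
    G = nx.gnp_random_graph(100, 0.05, seed=2)
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    pos = laplacian_layout(G)
    print(len(pos), next(iter(pos.items())))
```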
Emotion regulation and mania risk: Differential responses to implicit and explicit cues to regulate.
Ajaya, Yatrika; Peckham, Andrew D; Johnson, Sheri L
2016-03-01
People prone to mania use emotion regulation (ER) strategies well when explicitly coached to do so in laboratory settings, but they find these strategies ineffective in daily life. We hypothesized that, compared with control participants, mania-prone people would show ER deficits when they received implicit, but not explicit, cues to use ER. Undergraduates (N = 66) completed the Hypomanic Personality Scale (HPS) and were randomly assigned to one of three experimental conditions: automatic ER (scrambled sentence primes), deliberate ER (verbal instructions), or control (no priming or instructions to use ER). Then, participants played a videogame designed to evoke anger. Emotion responses were measured with a multi-modal assessment of self-reported affect, psychophysiology, and facial expressions. Respiratory sinus arrhythmia (RSA) was used to index ER. The videogame effectively elicited subjective anger, angry facial expressions, and heart rate increases when keys malfunctioned. As hypothesized, persons who were more mania prone showed greater RSA increases in the deliberate ER condition than in the automatic or control conditions. One potential limitation is the use of an analog sample. Findings suggest that those at risk for mania require more explicit instruction to engage ER effectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gev, Tali; Rosenan, Ruthie; Golan, Ofer
2017-05-01
Emotion recognition (ER) and understanding deficits are characteristic of autism spectrum disorder (ASD). The Transporters (TT) animated series has shown promising results in teaching children with ASD to recognize emotions, with mixed findings about generalization and maintenance of effects. This study aimed to evaluate the unique role of TT and of parental support in the acquisition, generalization, and maintenance of acquired ER skills in children with ASD. 77 Israeli children with high functioning ASD, aged 4-7 were randomly assigned into four groups according to a 2 × 2 design of the factors Series (TT, control series) and Parental Support (with/without). Thirty typically developing children, matched to the ASD groups on mental age, were tested with no intervention. Participants' ER (on three generalization levels) and emotional vocabulary (EV) were tested pre and post 8 weeks of intervention, and at 3 months' follow-up. Compared to the control series, watching TT significantly improved children's ER skills at all generalization levels, with good skill maintenance. All groups improved equally on EV. The amount of parental support given, in the groups that had received it, contributed to the generalization and maintenance of ER skills. Autism severity negatively correlated with ER improvement. The current study provides evidence to the unique role of TT in ER skill acquisition, generalization, and maintenance in children with high functioning ASD. In addition, this study provides evidence for a successful cultural adaptation of TT to a non-English speaking culture. Autism Res 2017, 10: 993-1003. © 2016 International Society for Autism Research, Wiley Periodicals, Inc. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Pattern formations and optimal packing.
Mityushev, Vladimir
2016-04-01
Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, hence, the number of optimal packings is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.
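The graph classes referred to above are built from the Delaunay triangulation of the disk centres; a minimal construction with SciPy and NetworkX is sketched below, using uniformly random points as a stand-in for an actual packing.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def delaunay_graph(points):
    """Graph of the Delaunay triangulation of a set of disk centres;
    isomorphism classes of such graphs index the packing classes."""
    tri = Delaunay(points)
    G = nx.Graph()
    for simplex in tri.simplices:          # each triangle contributes 3 edges
        a, b, c = simplex
        G.add_edges_from([(a, b), (b, c), (c, a)])
    return G

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.random((50, 2))              # stand-in for disk centres
    G = delaunay_graph(pts)
    print(G.number_of_nodes(), G.number_of_edges())
```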
Breaking of Ensemble Equivalence in Networks
NASA Astrophysics Data System (ADS)
Squartini, Tiziano; de Mol, Joey; den Hollander, Frank; Garlaschelli, Diego
2015-12-01
It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can manifest itself also in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity, (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general.
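For the constraint that is reported to be ensemble-equivalent (a fixed total number of links), the relative entropy between the microcanonical and canonical descriptions has a closed form, because the canonical G(n,p) probability is constant on the set of graphs with exactly m links; the sketch below evaluates it and shows the per-node value shrinking with n. This is only an illustration of the equivalence criterion under that reading of the setup, not a reproduction of the paper's degree-sequence calculation.

```python
import numpy as np
from scipy.special import gammaln

def rel_entropy_fixed_links(n, link_density=0.1):
    """Relative entropy S(P_mic || P_can) for the constraint 'number of
    links = m': P_mic is uniform on graphs with m links, P_can is the
    matching G(n, p) with p = m / N, where N = n(n-1)/2 possible edges."""
    N = n * (n - 1) // 2
    m = int(link_density * N)
    p = m / N
    log_binom = gammaln(N + 1) - gammaln(m + 1) - gammaln(N - m + 1)
    log_pcan = m * np.log(p) + (N - m) * np.log(1 - p)
    # S = log P_mic(G*) - log P_can(G*), identical for every allowed G*
    return -log_binom - log_pcan

if __name__ == "__main__":
    for n in (50, 200, 800, 3200):
        print(n, rel_entropy_fixed_links(n) / n)   # per-node value -> 0
```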
Naming games in two-dimensional and small-world-connected random geometric networks.
Lu, Qiming; Korniss, G; Szymanski, B K
2008-01-01
We investigate a prototypical agent-based model, the naming game, on two-dimensional random geometric networks. The naming game [Baronchelli, J. Stat. Mech.: Theory Exp. (2006) P06014] is a minimal model, employing local communications, that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the naming game with local broadcasts on random geometric graphs serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially embedded autonomous agents. Among the relevant observables capturing the temporal properties of the agreement process, we investigate the cluster-size distribution and the distribution of the agreement times, both exhibiting dynamic scaling. We also present results for the case when a small density of long-range communication links are added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement. We construct a finite-size scaling analysis for the agreement times in this case.
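A pairwise (speaker-hearer) naming game on a random geometric graph can be simulated in a few lines; the sketch below is a simplification of the local-broadcast variant studied above, and the graph size, radius and number of steps are assumptions.

```python
import random
import networkx as nx

def naming_game(G, steps=200_000, seed=4):
    """Minimal pairwise naming game: a random speaker utters one of its
    words to a random neighbour; on success both collapse their
    inventories to that word, on failure the hearer adds it."""
    rng = random.Random(seed)
    vocab = {v: set() for v in G}
    nodes = list(G)
    next_word = 0
    for _ in range(steps):
        speaker = rng.choice(nodes)
        nbrs = list(G.neighbors(speaker))
        if not nbrs:
            continue
        hearer = rng.choice(nbrs)
        if not vocab[speaker]:                 # invent a new word
            next_word += 1
            vocab[speaker].add(next_word)
        word = rng.choice(tuple(vocab[speaker]))
        if word in vocab[hearer]:              # success: both agree
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:                                  # failure: hearer learns it
            vocab[hearer].add(word)
    return len({w for s in vocab.values() for w in s})

if __name__ == "__main__":
    G = nx.random_geometric_graph(500, 0.08, seed=1)
    print("distinct words after run:", naming_game(G))
```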
Data and graph interpretation practices among preservice science teachers
NASA Astrophysics Data System (ADS)
Bowen, G. Michael; Roth, Wolff-Michael
2005-12-01
The interpretation of data and construction and interpretation of graphs are central practices in science, which, according to recent reform documents, science and mathematics teachers are expected to foster in their classrooms. However, are (preservice) science teachers prepared to teach inquiry with the purpose of transforming and analyzing data, and interpreting graphical representations? That is, are preservice science teachers prepared to teach data analysis and graph interpretation practices that scientists use by default in their everyday work? The present study was designed to answer these and related questions. We investigated the responses of preservice elementary and secondary science teachers to data and graph interpretation tasks. Our investigation shows that, despite considerable preparation, and for many, despite bachelor of science degrees, preservice teachers do not enact the (authentic) practices that scientists routinely do when asked to interpret data or graphs. Detailed analyses are provided of what data and graph interpretation practices actually were enacted. We conclude that traditional schooling emphasizes particular beliefs in the mathematical nature of the universe that make it difficult for many individuals to deal with data possessing the random variation found in measurements of natural phenomena. The results suggest that preservice teachers need more experience in engaging in data and graph interpretation practices originating in activities that provide the degree of variation in and complexity of data present in realistic investigations.
Magnetic moment arrangement in amorphous Fe0.66Er0.19B0.15
NASA Astrophysics Data System (ADS)
Szymański, K.; Kalska, B.; Satuła, D.; Dobrzyński, L.; Broddefalk, A.; Wäppling, R.; Nordblad, P.
2002-11-01
Magnetization measurements and Mössbauer spectroscopy with and without a monochromatic circularly polarized Mössbauer source (MCPMS) have been performed in order to determine the magnetic properties of the amorphous alloy Fe0.66Er0.19B0.15. The system is found to order ferrimagnetically at TC = 330 K and to show a compensation temperature (Tcomp) at 120 K. A reorientation of the magnetic moments of iron and erbium during sample cooling through the compensation point in magnetic field is clearly displayed in the MCPMS data. The orientation of the net magnetic moment is due to the orientation of Fe moments above Tcomp and to Er moments at low temperatures. The results are compatible with a model of predominantly antiferromagnetic Fe-Er coupling accompanied by random local anisotropy acting on the Er moments.
Yoon, Seonghae; Lee, Howard; Kim, Tae-Eun; Lee, SeungHwan; Chee, Dong-Hyun; Cho, Joo-Youn; Yu, Kyung-Sang; Jang, In-Jin
2014-01-01
This study was conducted to compare the oral bioavailability of an itopride extended-release (ER) formulation with that of the reference immediate-release (IR) formulation in the fasting state. The effect of food on the bioavailability of itopride ER was also assessed. A single-center, open-label, randomized, multiple-dose, three-treatment, three-sequence, crossover study was performed in 24 healthy male subjects, aged 22-48 years, who randomly received one of the following treatments for 4 days in each period: itopride 150 mg ER once daily under fasting or fed conditions, or itopride 50 mg IR three times daily in the fasting state. Steady-state pharmacokinetic parameters of itopride, including peak plasma concentration (Cmax) and area under the plasma concentration versus time curve over 24 hours after dosing (AUC(0-24h)), were determined by noncompartmental analysis. The geometric mean ratio of the pharmacokinetic parameters was derived using an analysis of variance model. A total of 24 healthy Korean subjects participated, 23 of whom completed the study. The geometric mean ratio and its 90% confidence interval of once-daily ER itopride versus IR itopride three times a day for AUC(0-24h) were contained within the conventional bioequivalence range of 0.80-1.25 (0.94 [0.88-1.01]), although Cmax was reached more slowly and was lower for itopride ER than for the IR formulation. Food delayed the time taken to reach Cmax for itopride ER, but AUC(0-24h) was not affected. There were no serious adverse events and both formulations were generally well tolerated. At steady state, once-daily itopride ER at 150 mg has a bioavailability comparable with that of itopride IR at 50 mg given three times a day under fasting conditions. Food delayed the absorption of itopride ER, with no marked change in its oral bioavailability.
Listing All Maximal Cliques in Sparse Graphs in Near-optimal Time
2011-01-01
[Table fragment: graph statistics for biological test networks — Arabidopsis thaliana 1745 3098 71 12; Drosophila melanogaster 7282 24894 176 12; Homo sapiens 9527 31182 308 12; Schizosaccharomyces pombe 2031 ...] ...clusters of actors [6,14,28,40] and may be used as features in exponential random graph models for statistical analysis of social networks [17,19,20,44,49]... 29. R. Horaud and T. Skordas. Stereo correspondence through feature grouping and maximal cliques. IEEE Trans. Patt. An. Mach. Int. 11(11):1168-1180
Osman, Mai Abdel Raouf; Kassab, Ahmed Nazmi
2017-08-01
A verrucous epidermal nevus (VEN) is a skin disorder that has been treated using different treatment modalities with varying results. Ablative lasers such as the carbon dioxide (CO2) laser and erbium:yttrium-aluminum-garnet (Er:YAG) laser have been considered as the gold standard for the treatment of epidermal nevi. To evaluate and compare the efficacy, postoperative wound healing and side effects of pulsed CO2 laser and Er:YAG laser for the treatment of verrucous epidermal nevi. Twenty patients with localized VEN were randomly divided into two groups. Group 1 was administered CO2 laser and group 2 underwent Er:YAG laser treatment. A blinded physician evaluated the photographs and dermoscopic photomicrographs for the efficacy and possible side effects. All patients received one treatment session and were followed up over a 6-month period. Both lasers induced noticeable clinical improvement, but there were no significant differences between the two lasers in treatment response, patient satisfaction, duration of erythema and side effects. The average time to re-epithelialization was 13.5 days with CO2 and 7.9 days with Er:YAG laser (p < .0005). No scarring was observed in the Er:YAG laser group and no lesional recurrence was detected in the CO2 laser group since treatment. Apart from re-epithelialization, both lasers showed equivalent outcomes with respect to treatment response, patient satisfaction, side effects and complications.
Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2018-06-05
The change-point detection has been carried out in terms of the Euclidean minimum spanning tree (MST) and shortest Hamiltonian path (SHP), with successful applications in the determination of authorship of a classic novel, the detection of change in a network over time, the detection of cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. The simulation shows that the test is more powerful than both Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.
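The flavour of a Euclidean MST-based change-point statistic can be conveyed with a short SciPy sketch: build the MST of all observations and, for each candidate split, count the edges crossing it; a pronounced dip marks a likely change. This illustrates the Euclidean graph-based tests being improved upon, not the authors' non-Euclidean SHP test, and the simulated two-regime data are an assumption.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_crossing_counts(X):
    """For each candidate change point tau, count MST edges that join an
    observation with index <= tau to one with index > tau; unusually few
    crossings suggests a distribution change near tau."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).tocoo()
    edges = np.array([mst.row, mst.col]).T
    n = len(X)
    counts = []
    for tau in range(1, n - 1):
        cross = np.sum((edges.min(axis=1) <= tau) & (edges.max(axis=1) > tau))
        counts.append(cross)
    return np.array(counts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)),      # regime 1
                   rng.normal(2, 1, (100, 2))])     # regime 2 after index 99
    counts = mst_crossing_counts(X)
    print("minimum crossings near index", int(np.argmin(counts)) + 1)
```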
Biondo, Alessio Emanuele; Giarlotta, Alfio; Pluchino, Alessandro; Rapisarda, Andrea
2016-01-01
We present a graph-theoretic model of consumer choice, where final decisions are shown to be influenced by information and knowledge, in the form of individual awareness, discriminating ability, and perception of market structure. Building upon the distance-based Hotelling's differentiation idea, we describe the behavioral experience of several prototypes of consumers, who walk a hypothetical cognitive path in an attempt to maximize their satisfaction. Our simulations show that even consumers endowed with a small amount of information and knowledge may reach a very high level of utility. On the other hand, complete ignorance negatively affects the whole consumption process. In addition, rather unexpectedly, a random walk on the graph reveals to be a winning strategy, below a minimal threshold of information and knowledge.
Social capital calculations in economic systems: Experimental study
NASA Astrophysics Data System (ADS)
Chepurov, E. G.; Berg, D. B.; Zvereva, O. M.; Nazarova, Yu. Yu.; Chekmarev, I. V.
2017-11-01
The paper describes the social capital study for a system where actors are engaged in an economic activity. The focus is on the analysis of the structural parameters of communications (transactions) between the actors. Comparison between the transaction network graph structure and the structure of a random Bernoulli graph of the same dimension and density allows revealing specific structural features of the economic system under study. Structural analysis is based on SNA methodology (SNA - Social Network Analysis). It is shown that the structural parameter values of the graph formed by agent relationship links may well characterize different aspects of the social capital structure. The research advocates that it is useful to distinguish between each agent's social capital and the whole system's social capital.
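Comparing an observed interaction graph against Bernoulli random graphs of the same dimension and density is straightforward with NetworkX; the sketch below uses average clustering as the structural parameter and a small-world generator as a stand-in for a real transaction network.

```python
import numpy as np
import networkx as nx

def vs_bernoulli_baseline(G, samples=100, seed=0):
    """Compare the average clustering of an observed (undirected) graph
    with Bernoulli (Erdos-Renyi) graphs of the same size and density."""
    n = G.number_of_nodes()
    p = nx.density(G)
    observed = nx.average_clustering(G)
    rng = np.random.default_rng(seed)
    null = [nx.average_clustering(
                nx.gnp_random_graph(n, p, seed=int(rng.integers(1 << 30))))
            for _ in range(samples)]
    return observed, float(np.mean(null)), float(np.std(null))

if __name__ == "__main__":
    # stand-in for a real transaction network
    G = nx.watts_strogatz_graph(200, 6, 0.1, seed=3)
    print(vs_bernoulli_baseline(G))
```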
The Full Ward-Takahashi Identity for Colored Tensor Models
NASA Astrophysics Data System (ADS)
Pérez-Sánchez, Carlos I.
2018-03-01
Colored tensor models (CTM) is a random geometrical approach to quantum gravity. We scrutinize the structure of the connected correlation functions of general CTM interactions and organize them by boundaries of Feynman graphs. For rank-D interactions including, but not restricted to, all melonic φ^4-vertices—to wit, solely those quartic vertices that can lead to dominant spherical contributions in the large-N expansion—the aforementioned boundary graphs are shown to be precisely all (possibly disconnected) vertex-bipartite regularly edge-D-colored graphs. The concept of CTM-compatible boundary-graph automorphism is introduced and an auxiliary graph calculus is developed. With the aid of these constructs, a certain U(∞)-invariance of the path integral measure is fully exploited in order to derive a strong Ward-Takahashi identity for CTMs with a symmetry-breaking kinetic term. For the rank-3 φ^4-theory, we get the exact integral-like equation for the 2-point function. Similarly, exact equations for higher multipoint functions can be readily obtained departing from this full Ward-Takahashi identity. Our results hold for some Group Field Theories as well. Altogether, our non-perturbative approach trades some graph theoretical methods for analytical ones. We believe that these tools can be extended to tensorial SYK-models.
NASA Astrophysics Data System (ADS)
Zhang, Yali; Wang, Jun
2017-09-01
In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model, the multitype range-intensity contact (MRIC) financial model, is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are simulated by the spreading of viruses. Two new random visibility graph (VG) based analyses and the Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of return time series and the corresponding randomly sorted series. The VG method comes from complex network theory, and the LZC is a non-parametric measure of complexity reflecting the rate of new pattern generation in a series. In this work, real stock market indices are studied comparatively with the simulation data of the proposed model. The numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.
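As a rough illustration of the two tools named above, here is a hedged Python sketch (not the paper's implementation): a natural visibility graph built from a synthetic return series, and a simplified Lempel-Ziv phrase count of its sign sequence. The series, seed, and parameters are made up.

```python
# Sketch: visibility graph and a simplified Lempel-Ziv complexity of a
# synthetic return series (a stand-in for real market returns).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
returns = rng.normal(size=200)

def visibility_graph(x):
    g = nx.Graph()
    g.add_nodes_from(range(len(x)))
    for a in range(len(x)):
        for b in range(a + 1, len(x)):
            # (a, x[a]) sees (b, x[b]) if every point in between lies below the line
            if all(x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                g.add_edge(a, b)
    return g

def lempel_ziv_complexity(bits):
    # counts distinct phrases in a greedy left-to-right parsing (LZ78-style)
    s, phrases, i = "".join(map(str, bits)), set(), 0
    while i < len(s):
        j = i + 1
        while s[i:j] in phrases and j <= len(s):
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)

vg = visibility_graph(returns)
print("mean degree:", 2 * vg.number_of_edges() / vg.number_of_nodes())
print("LZ complexity of sign series:", lempel_ziv_complexity((returns > 0).astype(int)))
```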
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
Educational network comparative analysis of small groups: Short- and long-term communications
NASA Astrophysics Data System (ADS)
Berg, D. B.; Zvereva, O. M.; Nazarova, Yu. Yu.; Chepurov, E. G.; Kokovin, A. V.; Ranyuk, S. V.
2017-11-01
The present study is devoted to the discussion of small group communication network structures. These communications were observed in student groups whose actors were united by a regular educational activity. The comparative analysis was carried out for networks of short-term (1 hour) and long-term (4 weeks) communications; it was based on seven structural parameters and consisted of two stages. At the first stage, differences between the network graphs were examined and the corresponding random Bernoulli graphs were built. At the second stage, the revealed differences were compared. Calculations were performed using the UCINET software framework. It was found that networks of long-term and short-term communications are quite different: the structure of a short-term communication network is close to a random one, whereas most of the long-term communication network parameters differ from the corresponding random ones by more than 30%. This difference can be explained by the strong "noisiness" of a short-term communication network and the lack of social in it.
Galantamine-ER for the treatment of mild-to-moderate Alzheimer’s disease
Seltzer, Ben
2010-01-01
An extended-release form of the cholinesterase inhibitor (ChEI) drug galantamine (galantamine-ER) was developed, chiefly to increase adherence to medication regimens in patients with mild-to-moderate Alzheimer's disease (AD). Except for predicted differences in Cmax and tmax, comparable doses of once-daily galantamine-ER and regular, immediate-release galantamine (galantamine-IR) are pharmacologically equivalent. A 24-week randomized, double-blind, placebo- and active-controlled, multicenter phase III trial, which compared galantamine-IR, galantamine-ER and placebo in subjects with mild to moderate AD (mini-mental state examination [MMSE] score range, 10 to 24), showed that both formulations of galantamine were significantly better than placebo in terms of cognition, although not with regard to global change. There was no difference in drug-related adverse events between galantamine-ER and galantamine-IR. Since its release onto the market, galantamine-ER has enjoyed wide popularity, and a recent surveillance study suggests that it has the highest 1-year persistence rate of all the ChEIs. PMID:20169037
Wang, Yu; Sun, Ying; Guo, Xin; Fu, Yao; Long, Jie; Wei, Cheng-Xi; Zhao, Ming
2018-06-01
Inhibiting endoplasmic reticulum stress (ERS)-induced apoptosis may be a new therapeutic target in cardiovascular diseases. Creatine phosphate disodium salt (CP) has been reported to have a cardiovascular protective effect, but its effects on ERS are unknown. The aim of this study was to identify the mechanism by which CP exerts its cardioprotection in doxorubicin (Dox)-induced cardiomyocyte injury. In our study, neonatal rat cardiomyocytes (NRC) were randomly divided into a control group, a model group, and a treatment group. Cell viability and apoptosis were detected, and grp78, grp94, and calumenin were monitored in each group. To investigate the role of calumenin, Dox-induced ERS was compared in control and calumenin down-regulated cardiomyocytes. Our results showed that CP decreased Dox-induced apoptosis and relieved ERS. We found that calumenin increased in Dox-induced apoptosis with CP. The ERS effector C/EBP homologous protein was down-regulated by CP and was influenced by calumenin. CP could protect NRC by inhibiting ERS; this mechanism may be associated with its increasing of calumenin.
Linear game non-contextuality and Bell inequalities—a graph-theoretic approach
NASA Astrophysics Data System (ADS)
Rosicka, M.; Ramanathan, R.; Gnaciński, P.; Horodecki, K.; Horodecki, M.; Horodecki, P.; Severini, S.
2016-04-01
We study the classical and quantum values of a class of one- and two-party unique games that generalizes the well-known XOR games to the case of non-binary outcomes. In the bipartite case the generalized XOR (XOR-d) games we study are a subclass of the well-known linear games. We introduce a ‘constraint graph’ associated to such a game, with the constraints defining the game represented by an edge-coloring of the graph. We use the graph-theoretic characterization to relate the task of finding equivalent games to the notion of signed graphs and switching equivalence from graph theory. We relate the problem of computing the classical value of single-party anti-correlation XOR games to finding the edge bipartization number of a graph, which is known to be MaxSNP hard, and connect the computation of the classical value of XOR-d games to the identification of specific cycles in the graph. We construct an orthogonality graph of the game from the constraint graph and study its Lovász theta number as a general upper bound on the quantum value, even in the case of single-party contextual XOR-d games. XOR-d games possess appealing properties for use in device-independent applications, such as randomness of the local correlated outcomes in the optimal quantum strategy. We study the possibility of obtaining quantum algebraic violation of these games, and show that no finite XOR-d game possesses the property of pseudo-telepathy, leaving the frequently used chained Bell inequalities as the natural candidates for such applications. We also show this lack of pseudo-telepathy for multi-party XOR-type inequalities involving two-body correlation functions.
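As a concrete illustration of the Lovász theta bound mentioned above, the following sketch computes θ via one standard SDP formulation (maximize the sum of entries of a positive semidefinite, unit-trace matrix that vanishes on edges), using cvxpy. The 5-cycle stands in for an orthogonality graph and is purely illustrative; it is not a graph from the paper.

```python
# Sketch: Lovasz theta number from the SDP
#   maximize sum_ij X_ij  s.t.  X PSD, trace(X) = 1, X_ij = 0 for every edge ij.
import cvxpy as cp
import networkx as nx

g = nx.cycle_graph(5)                      # hypothetical orthogonality graph
n = g.number_of_nodes()
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
constraints += [X[i, j] == 0 for i, j in g.edges()]
problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
problem.solve()
print("Lovasz theta of C5 ~", problem.value)   # ~ sqrt(5) = 2.236
```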
The nodal count {0,1,2,3,…} implies the graph is a tree
Band, Ram
2014-01-01
Sturm's oscillation theorem states that the nth eigenfunction of a Sturm–Liouville operator on the interval has n−1 zeros (nodes) (Sturm 1836 J. Math. Pures Appl. 1, 106–186; 373–444). This result was generalized for all metric tree graphs (Pokornyĭ et al. 1996 Mat. Zametki 60, 468–470 (doi:10.1007/BF02320380); Schapotschnikow 2006 Waves Random Complex Media 16, 167–178 (doi:10.1080/1745530600702535)) and an analogous theorem was proved for discrete tree graphs (Berkolaiko 2007 Commun. Math. Phys. 278, 803–819 (doi:10.1007/S00220-007-0391-3); Dhar & Ramaswamy 1985 Phys. Rev. Lett. 54, 1346–1349 (doi:10.1103/PhysRevLett.54.1346); Fiedler 1975 Czechoslovak Math. J. 25, 607–618). We prove the converse theorems for both discrete and metric graphs. Namely if for all n, the nth eigenfunction of the graph has n−1 zeros, then the graph is a tree. Our proofs use a recently obtained connection between the graph's nodal count and the magnetic stability of its eigenvalues (Berkolaiko 2013 Anal. PDE 6, 1213–1233 (doi:10.2140/apde.2013.6.1213); Berkolaiko & Weyand 2014 Phil. Trans. R. Soc. A 372, 20120522 (doi:10.1098/rsta.2012.0522); Colin de Verdière 2013 Anal. PDE 6, 1235–1242 (doi:10.2140/apde.2013.6.1235)). In the course of the proof, we show that it is not possible for all (or even almost all, in the metric case) the eigenvalues to exhibit a diamagnetic behaviour. In addition, we develop a notion of ‘discretized’ versions of a metric graph and prove that their nodal counts are related to those of the metric graph. PMID:24344337
Cevallos, F. Alex; Stolze, Karoline; Cava, Robert J.
2018-03-23
The single crystal growth, structure, and basic magnetic properties of ErMgGaO4 are reported. The structure consists of triangular layers of magnetic ErO6 octahedra separated by a double layer of randomly occupied non-magnetic (Ga,Mg)O5 bipyramids. The Er atoms are positionally disordered. Magnetic measurements parallel and perpendicular to the c axis of a single crystal reveal dominantly antiferromagnetic interactions, with a small degree of magnetic anisotropy. A weighted average of the directional data suggests an antiferromagnetic Curie-Weiss temperature of approximately -30 K. Below 10 K the temperature dependences of the inverse susceptibilities in the in-plane and perpendicular-to-plane directions are parallel, indicative of an isotropic magnetic moment at low temperatures. In conclusion, no sign of magnetic ordering is observed above 1.8 K, suggesting that ErMgGaO4 is a geometrically frustrated magnet.
Fruit and vegetable intake and risk of breast cancer by hormone receptor status.
Jung, Seungyoun; Spiegelman, Donna; Baglietto, Laura; Bernstein, Leslie; Boggs, Deborah A; van den Brandt, Piet A; Buring, Julie E; Cerhan, James R; Gaudet, Mia M; Giles, Graham G; Goodman, Gary; Hakansson, Niclas; Hankinson, Susan E; Helzlsouer, Kathy; Horn-Ross, Pamela L; Inoue, Manami; Krogh, Vittorio; Lof, Marie; McCullough, Marjorie L; Miller, Anthony B; Neuhouser, Marian L; Palmer, Julie R; Park, Yikyung; Robien, Kim; Rohan, Thomas E; Scarmo, Stephanie; Schairer, Catherine; Schouten, Leo J; Shikany, James M; Sieri, Sabina; Tsugane, Schoichiro; Visvanathan, Kala; Weiderpass, Elisabete; Willett, Walter C; Wolk, Alicja; Zeleniuch-Jacquotte, Anne; Zhang, Shumin M; Zhang, Xuehong; Ziegler, Regina G; Smith-Warner, Stephanie A
2013-02-06
Estrogen receptor-negative (ER(-)) breast cancer has few known or modifiable risk factors. Because ER(-) tumors account for only 15% to 20% of breast cancers, large pooled analyses are necessary to evaluate precisely the suspected inverse association between fruit and vegetable intake and risk of ER(-) breast cancer. Among 993,466 women followed for 11 to 20 years in 20 cohort studies, we documented 19,869 estrogen receptor positive (ER(+)) and 4821 ER(-) breast cancers. We calculated study-specific multivariable relative risks (RRs) and 95% confidence intervals (CIs) using Cox proportional hazards regression analyses and then combined them using a random-effects model. All statistical tests were two-sided. Total fruit and vegetable intake was statistically significantly inversely associated with risk of ER(-) breast cancer but not with risk of breast cancer overall or of ER(+) tumors. The inverse association for ER(-) tumors was observed primarily for vegetable consumption. The pooled relative risks comparing the highest vs lowest quintile of total vegetable consumption were 0.82 (95% CI = 0.74 to 0.90) for ER(-) breast cancer and 1.04 (95% CI = 0.97 to 1.11) for ER(+) breast cancer (P common-effects by ER status < .001). Total fruit consumption was non-statistically significantly associated with risk of ER(-) breast cancer (pooled multivariable RR comparing the highest vs lowest quintile = 0.94, 95% CI = 0.85 to 1.04). We observed no association between total fruit and vegetable intake and risk of overall breast cancer. However, vegetable consumption was inversely associated with risk of ER(-) breast cancer in our large pooled analyses.
Topological structure of dictionary graphs
NASA Astrophysics Data System (ADS)
Fukś, Henryk; Krzemiński, Mark
2009-09-01
We investigate the topological structure of the subgraphs of dictionary graphs constructed from WordNet and Moby thesaurus data. In the process of learning a foreign language, the learner knows only a subset of all words of the language, corresponding to a subgraph of a dictionary graph. When this subgraph grows with time, its topological properties change. We introduce the notion of the pseudocore and argue that the growth of the vocabulary roughly follows decreasing pseudocore numbers—that is, one first learns words with a high pseudocore number followed by smaller pseudocores. We also propose an alternative strategy for vocabulary growth, involving decreasing core numbers as opposed to pseudocore numbers. We find that as the core or pseudocore grows in size, the clustering coefficient first decreases, then reaches a minimum and starts increasing again. The minimum occurs when the vocabulary reaches a size between 10^3 and 10^4. A simple model exhibiting similar behavior is proposed. The model is based on a generalized geometric random graph. Possible implications for language learning are discussed.
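For intuition, a minimal Python sketch of the growth analysis described above, with the ordinary k-core number standing in for the paper's pseudocore and a synthetic random graph standing in for the WordNet/Moby data.

```python
# Sketch: clustering coefficient of growing "vocabulary" subgraphs, ordered
# from high to low core number. The graph and sizes are illustrative only.
import networkx as nx

dictionary_graph = nx.barabasi_albert_graph(2000, 3, seed=1)
core_number = nx.core_number(dictionary_graph)

# grow the known vocabulary from high-core to low-core words
order = sorted(dictionary_graph, key=lambda v: -core_number[v])
for size in (200, 500, 1000, 2000):
    sub = dictionary_graph.subgraph(order[:size])
    print(size, "avg clustering = %.3f" % nx.average_clustering(sub))
```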
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
2017-03-21
We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
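A minimal sketch of a generic geometric random-walk graph kernel on adjacency matrices, in the spirit of (but not identical to) the GRAPE similarity measure; the matrices and the decay parameter lam are illustrative assumptions.

```python
# Sketch: geometric random-walk kernel k(A, B) = 1^T (I - lam * A(x)B)^(-1) 1,
# where (x) is the Kronecker (direct) product of the two adjacency matrices.
import numpy as np

def random_walk_kernel(A, B, lam=0.05):
    W = np.kron(A, B)                  # direct product graph counts common walks
    n = W.shape[0]
    M = np.eye(n) - lam * W            # geometric series over walk lengths
    return np.linalg.solve(M, np.ones(n)).sum()

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # triangle environment
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # path environment
print(random_walk_kernel(A, A), random_walk_kernel(A, B))
```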
Long-Term Effectiveness and Safety of Dexmethylphenidate Extended-Release Capsules in Adult ADHD
ERIC Educational Resources Information Center
Adler, Lenard A.; Spencer, Thomas; McGough, James J.; Jiang, Hai; Muniz, Rafael
2009-01-01
Objective: This study evaluates dexmethylphenidate extended release (d-MPH-ER) in adults with ADHD. Method: Following a 5-week, randomized, controlled, fixed-dose study of d-MPH-ER 20 to 40 mg/d, 170 adults entered a 6-month open-label extension (OLE) to assess long-term safety, with flexible dosing of 20 to 40 mg/d. Exploratory effectiveness…
Carling, Cheryl L L; Kristoffersen, Doris Tove; Flottorp, Signe; Fretheim, Atle; Oxman, Andrew D; Schünemann, Holger J; Akl, Elie A; Herrin, Jeph; MacKenzie, Thomas D; Montori, Victor M
2009-08-01
We conducted an Internet-based randomized trial comparing four graphical displays of the benefits of antibiotics for people with sore throat who must decide whether to go to the doctor to seek treatment. Our objective was to determine which display resulted in choices most consistent with participants' values. This was the first of a series of televised trials undertaken in cooperation with the Norwegian Broadcasting Company. We recruited adult volunteers in Norway through a nationally televised weekly health program. Participants went to our Web site and rated the relative importance of the consequences of treatment using visual analogue scales (VAS). They viewed the graphical display (or no information) to which they were randomized and were asked to decide whether to go to the doctor for an antibiotic prescription. We compared four presentations: face icons (happy/sad) or a bar graph showing the proportion of people with symptoms on day three with and without treatment, a bar graph of the average duration of symptoms, and a bar graph of proportion with symptoms on both days three and seven. Before completing the study, all participants were shown all the displays and detailed patient information about the treatment of sore throat and were asked to decide again. We calculated a relative importance score (RIS) by subtracting the VAS scores for the undesirable consequences of antibiotics from the VAS score for the benefit of symptom relief. We used logistic regression to determine the association between participants' RIS and their choice. 1,760 participants completed the study. There were statistically significant differences in the likelihood of choosing to go to the doctor in relation to different values (RIS). Of the four presentations, the bar graph of duration of symptoms resulted in decisions that were most consistent with the more fully informed second decision. Most participants also preferred this presentation (38%) and found it easiest to understand (37%). Participants shown the other three presentations were more likely to decide to go to the doctor based on their first decision than everyone based on the second decision. Participants preferred the graph using faces the least (14.4%). For decisions about going to the doctor to get antibiotics for sore throat, treatment effects presented by a bar graph showing the duration of symptoms helped people make decisions more consistent with their values than treatment effects presented as graphical displays of proportions of people with sore throat following treatment. ISRCTN58507086.
Laurora, Irene; An, Robert
2016-01-01
To evaluate the efficacy of a novel formulation of extended-release/immediate-release (ER) naproxen sodium over 24 h in a dental pain model. Two randomized, double-blind, placebo-controlled trials in moderate to severe pain after extraction of one or two impacted third molars (at least one partial mandibular bony impaction). Treatment comprised oral ER naproxen sodium 660 mg (single dose), placebo (both studies) or immediate-release (IR) naproxen sodium 220 mg tid (study 2). Primary efficacy endpoint: 24-h summed pain intensity difference (SPID). Secondary variables included total pain relief (TOTPAR), use of rescue medication. All treatment-emergent adverse events were recorded. NCT00720057 (study 1), NCT01389284 (study 2). Primary efficacy analyses: pain intensity was significantly lower over 24 h with ER naproxen sodium vs. placebo (p < 0.001), with significant relief from 15 min (study 2). In study 2, ER naproxen sodium was non-inferior to IR naproxen sodium, reducing pain intensity to a comparable extent over 24 h. TOTPAR was significantly greater with ER and IR naproxen sodium vs. placebo at all time points, with generally comparable differences between active treatments. Significantly more placebo patients required rescue medication vs. ER and IR naproxen sodium from 2-24 h post-dose. Once daily ER naproxen sodium was generally safe and well tolerated, with a similar safety profile to IR naproxen sodium tid. The studies were single dose, with limited ability to assess efficacy or safety of multiple doses over time. As the imputed pain score meant that estimated treatment differences may have been biased in favor of ER naproxen sodium, a post hoc analysis evaluated the robustness of the results for pain relief. A single dose of ER naproxen sodium 660 mg significantly reduced moderate to severe dental pain vs. placebo and was comparable to IR naproxen sodium 220 mg tid. Significant pain relief was experienced from 15 min and sustained over 24 h, resulting in a reduced need for rescue medication. ER naproxen sodium 660 mg once daily is a convenient and effective therapy providing 24 h relief of pain.
Finding Maximum Cliques on the D-Wave Quantum Annealer
Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg; ...
2018-05-03
This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.
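For reference, a hedged sketch of the standard QUBO penalty formulation of maximum clique mentioned above, solved here by classical brute force rather than on the annealer; the graph, penalty weight, and seed are arbitrary.

```python
# Sketch: maximum clique via the QUBO
#   H(x) = -sum_i x_i + P * sum_{(i,j) not an edge} x_i x_j,  with P > 1,
# minimized by exhaustive enumeration on a small random graph.
import itertools
import networkx as nx

g = nx.gnp_random_graph(10, 0.5, seed=3)
non_edges = list(nx.non_edges(g))
P = 2.0

best_energy, best_set = float("inf"), ()
for bits in itertools.product((0, 1), repeat=g.number_of_nodes()):
    energy = -sum(bits) + P * sum(bits[i] * bits[j] for i, j in non_edges)
    if energy < best_energy:
        best_energy, best_set = energy, tuple(v for v in g if bits[v])
print("max clique found:", best_set)
```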
Information extraction and knowledge graph construction from geoscience literature
NASA Astrophysics Data System (ADS)
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen
2018-03-01
Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there are limited works on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
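A toy sketch of the last two steps of the workflow described above (content words to a bigram co-occurrence graph). Whitespace tokenisation stands in for the CRF-based Chinese word segmentation, and the example sentence and stop-word list are made up.

```python
# Sketch: build a bigram co-occurrence "knowledge graph" from content words.
import networkx as nx

text = "granite intrudes the basin and copper deposits occur in the basin near the granite contact"
stop_words = {"the", "and", "in", "near"}
words = [w for w in text.split() if w not in stop_words]

kg = nx.Graph()
for a, b in zip(words, words[1:]):            # adjacent content words form an edge
    weight = kg.get_edge_data(a, b, {"weight": 0})["weight"] + 1
    kg.add_edge(a, b, weight=weight)

print(sorted(kg.edges(data="weight")))
```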
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
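A minimal sketch of the three components listed above, using plain numpy arrays as factors for a toy two-variable "rain/runoff" model; the variables and numbers are invented for illustration, not a hydrological model.

```python
# Sketch: factors as arrays, their product as the joint distribution, and
# conditioning on an observation to get a posterior.
import numpy as np

p_rain = np.array([0.7, 0.3])                       # P(rain = no, yes)
p_runoff_given_rain = np.array([[0.9, 0.1],         # P(runoff | rain = no)
                                [0.2, 0.8]])        # P(runoff | rain = yes)

joint = p_rain[:, None] * p_runoff_given_rain       # product of the local factors
posterior_rain = joint[:, 1] / joint[:, 1].sum()    # condition on runoff = high
print("P(rain | runoff = high) =", posterior_rain[1])
```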
1/n Expansion for the Number of Matchings on Regular Graphs and Monomer-Dimer Entropy
NASA Astrophysics Data System (ADS)
Pernici, Mario
2017-08-01
Using a 1/n expansion, that is, an expansion in descending powers of n, for the number of matchings in regular graphs with 2n vertices, we study the monomer-dimer entropy for two classes of graphs. We study the difference between the extensive monomer-dimer entropy of a random r-regular graph G (bipartite or not) with 2n vertices and the average extensive entropy of r-regular graphs with 2n vertices, in the limit n → ∞. We find a series expansion for it in the numbers of cycles; with probability 1 it converges for dimer density p < 1 and, for G bipartite, it diverges as |ln(1-p)| for p → 1. In the case of regular lattices, we similarly expand the difference between the specific monomer-dimer entropy on a lattice and the one on the Bethe lattice; we write down its Taylor expansion in powers of p through the order 10, expressed in terms of the number of totally reducible walks which are not tree-like. We prove through order 6 that its expansion coefficients in powers of p are non-negative.
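As a brute-force counterpoint to the 1/n expansion, the following sketch counts all matchings of a small regular graph with the standard edge recursion M(G) = M(G - e) + M(G - {u, v}); the 6-cycle example is arbitrary and only meant to make the quantity concrete.

```python
# Sketch: count all matchings (including the empty one) of a small graph.
import networkx as nx

def count_matchings(g):
    if g.number_of_edges() == 0:
        return 1
    u, v = next(iter(g.edges()))
    without_e = g.copy()
    without_e.remove_edge(u, v)          # matchings that do not use edge (u, v)
    using_e = g.copy()
    using_e.remove_nodes_from([u, v])    # matchings that do use edge (u, v)
    return count_matchings(without_e) + count_matchings(using_e)

cycle6 = nx.cycle_graph(6)               # a 2-regular graph with 2n = 6 vertices
print(count_matchings(cycle6))           # 18 matchings, 2 of them perfect
```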
Consensus pursuit of heterogeneous multi-agent systems under a directed acyclic graph
NASA Astrophysics Data System (ADS)
Yan, Jing; Guan, Xin-Ping; Luo, Xiao-Yuan
2011-04-01
This paper is concerned with the cooperative target pursuit problem for multiple agents based on a directed acyclic graph. The target appears at a random location and moves only when sensed by the agents, and agents will pursue the target once they detect its existence. Since the ability of each agent may be different, we consider heterogeneous multi-agent systems. According to the topology of the multi-agent systems, a novel consensus-based control law is proposed, where the target and agents are modeled as a leader and followers, respectively. Based on Mason's rule and signal flow graph analysis, convergence conditions are provided to show that the agents can catch the target in a finite time. Finally, simulation studies are provided to verify the effectiveness of the proposed approach.
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to solve these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture the long-range correlations in Bag-of-Words by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we explore a parameter learning algorithm based on the gradient descent and Loopy Sum-Product algorithms for the factor graph model. Experimental results on the Graz 02 dataset show that the recognition performance of our method, in precision and recall, is better than that of a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs
NASA Astrophysics Data System (ADS)
Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa
2016-11-01
We study the critical behavior for inhomogeneous versions of the Curie-Weiss model, where the coupling constant J_ij(β) for the edge ij on the complete graph is given by J_ij(β) = β w_i w_j / (∑_{k∈[N]} w_k). We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises (with inverse temperature β replaced by sinh(β)) from the annealed Ising model on the generalized random graph. We assume that the vertex weights (w_i)_{i∈[N]} are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, then the critical behavior is the same as on the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights converges to infinity, and the weights satisfy an asymptotic power law with exponent τ with τ ∈ (3,5), then the critical exponents depend sensitively on τ. In addition, at criticality, the total spin S_N satisfies that S_N/N^{(τ-2)/(τ-1)} converges in law to some limiting random variable whose distribution we explicitly characterize.
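A hedged Monte Carlo sketch of the rank-1 couplings defined above, J_ij(β) = β w_i w_j / ∑_k w_k, with a made-up heavy-tailed weight sequence and an arbitrary β; it is a direct Metropolis simulation, not the paper's analysis.

```python
# Sketch: Metropolis dynamics of the rank-1 inhomogeneous Curie-Weiss model.
import numpy as np

rng = np.random.default_rng(1)
N, beta = 500, 1.2
w = (1.0 - rng.random(N)) ** (-1.0 / 3.0)      # heavy-tailed vertex weights (illustrative)
w_sum = w.sum()
spins = rng.choice([-1, 1], size=N)

for sweep in range(200):
    for _ in range(N):
        i = rng.integers(N)
        # effective energy change of flipping spin i under the rank-1 couplings
        local_field = beta * w[i] * (w @ spins - w[i] * spins[i]) / w_sum
        dE = 2.0 * spins[i] * local_field
        if dE <= 0 or rng.random() < np.exp(-dE):
            spins[i] = -spins[i]
print("magnetisation per spin:", spins.mean())
```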
Theory of rumour spreading in complex social networks
NASA Astrophysics Data System (ADS)
Nekovee, M.; Moreno, Y.; Bianconi, G.; Marsili, M.
2007-01-01
We introduce a general stochastic model for the spread of rumours, and derive mean-field equations that describe the dynamics of the model on complex social networks (in particular, those mediated by the Internet). We use analytical and numerical solutions of these equations to examine the threshold behaviour and dynamics of the model on several models of such networks: random graphs, uncorrelated scale-free networks and scale-free networks with assortative degree correlations. We show that in both homogeneous networks and random graphs the model exhibits a critical threshold in the rumour spreading rate below which a rumour cannot propagate in the system. In the case of scale-free networks, on the other hand, this threshold becomes vanishingly small in the limit of infinite system size. We find that the initial rate at which a rumour spreads is much higher in scale-free networks than in random graphs, and that the rate at which the spreading proceeds on scale-free networks is further increased when assortative degree correlations are introduced. The impact of degree correlations on the final fraction of nodes that ever hears a rumour, however, depends on the interplay between network topology and the rumour spreading rate. Our results show that scale-free social networks are prone to the spreading of rumours, just as they are to the spreading of infections. They are relevant to the spreading dynamics of chain emails, viral advertising and large-scale information dissemination algorithms on the Internet.
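For intuition, a crude discrete-time simulation of an ignorant/spreader/stifler rumour dynamics on an Erdős-Rényi graph; the update rule, rates, and graph parameters are simplifying assumptions, not the paper's mean-field equations.

```python
# Sketch: Maki-Thompson-style rumour spreading on a random graph.
import random
import networkx as nx

random.seed(0)
g = nx.gnp_random_graph(2000, 0.004, seed=0)
state = {v: "ignorant" for v in g}               # ignorant / spreader / stifler
state[0] = "spreader"
lam, alpha = 0.8, 0.3                            # spreading / stifling probabilities

for _ in range(200):                             # fixed number of rounds
    for v in [v for v, s in state.items() if s == "spreader"]:
        neighbours = list(g[v])
        if not neighbours:
            continue
        u = random.choice(neighbours)
        if state[u] == "ignorant" and random.random() < lam:
            state[u] = "spreader"                # the rumour is passed on
        elif state[u] != "ignorant" and random.random() < alpha:
            state[v] = "stifler"                 # the spreader loses interest

reach = sum(s != "ignorant" for s in state.values()) / g.number_of_nodes()
print("fraction that ever heard the rumour:", reach)
```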
Jiao, Can; Wang, Ting; Liu, Jianxin; Wu, Huanjie; Cui, Fang; Peng, Xiaozhe
2017-01-01
The influences of peer relationships on adolescent subjective well-being were investigated within the framework of social network analysis, using exponential random graph models as a methodological tool. The participants in the study were 1,279 students (678 boys and 601 girls) from nine junior middle schools in Shenzhen, China. The initial stage of the research used a peer nomination questionnaire and a subjective well-being scale (used in previous studies) to collect data on the peer relationship networks and the subjective well-being of the students. Exponential random graph models were then used to explore the relationships between students with the aim of clarifying the character of the peer relationship networks and the influence of peer relationships on subjective well being. The results showed that all the adolescent peer relationship networks in our investigation had positive reciprocal effects, positive transitivity effects and negative expansiveness effects. However, none of the relationship networks had obvious receiver effects or leaders. The adolescents in partial peer relationship networks presented similar levels of subjective well-being on three dimensions (satisfaction with life, positive affects and negative affects) though not all network friends presented these similarities. The study shows that peer networks can affect an individual's subjective well-being. However, whether similarities among adolescents are the result of social influences or social choices needs further exploration, including longitudinal studies that investigate the potential processes of subjective well-being similarities among adolescents.
Motifs in triadic random graphs based on Steiner triple systems
NASA Astrophysics Data System (ADS)
Winkler, Marco; Reichardt, Jörg
2013-08-01
Conventionally, pairwise relationships between nodes are considered to be the fundamental building blocks of complex networks. However, over the last decade, the overabundance of certain subnetwork patterns, i.e., the so-called motifs, has attracted much attention. It has been hypothesized that these motifs, instead of links, serve as the building blocks of network structures. Although the relation between a network's topology and the general properties of the system, such as its function, its robustness against perturbations, or its efficiency in spreading information, is the central theme of network science, there is still a lack of sound generative models needed for testing the functional role of subgraph motifs. Our work aims to overcome this limitation. We employ the framework of exponential random graph models (ERGMs) to define models based on triadic substructures. The fact that only a small portion of triads can actually be set independently poses a challenge for the formulation of such models. To overcome this obstacle, we use Steiner triple systems (STSs). These are partitions of sets of nodes into pair-disjoint triads, which thus can be specified independently. Combining the concepts of ERGMs and STSs, we suggest generative models capable of generating ensembles of networks with nontrivial triadic Z-score profiles. Further, we discover inevitable correlations between the abundance of triad patterns, which occur solely for statistical reasons and need to be taken into account when discussing the functional implications of motif statistics. Moreover, we calculate the degree distributions of our triadic random graphs analytically.
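As a simpler stand-in for the triadic Z-score profiles discussed above, the following sketch computes a triangle Z-score against degree-preserving rewirings of a test network; it does not implement the Steiner-triple-system ERGM itself, and the graph and ensemble size are arbitrary.

```python
# Sketch: triangle-count Z-score versus a degree-preserving null ensemble.
import networkx as nx
import numpy as np

g = nx.watts_strogatz_graph(200, 6, 0.1, seed=0)
observed = sum(nx.triangles(g).values()) // 3

null_counts = []
for seed in range(50):
    r = g.copy()
    nx.double_edge_swap(r, nswap=4 * r.number_of_edges(), max_tries=10**5, seed=seed)
    null_counts.append(sum(nx.triangles(r).values()) // 3)

z = (observed - np.mean(null_counts)) / np.std(null_counts)
print("triangle Z-score:", round(z, 2))
```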
Lofwall, Michelle R.; Babalonis, Shanna; Nuzzo, Paul A.; Siegel, Anthony; Campbell, Charles; Walsh, Sharon L.
2013-01-01
Background Tramadol is an atypical analgesic with monoamine and modest mu opioid agonist activity. The purpose of this study was to evaluate: 1) the efficacy of extended-release (ER) tramadol in treating prescription opioid withdrawal and 2) whether cessation of ER tramadol produces opioid withdrawal. Methods Prescription opioid users with current opioid dependence and observed withdrawal participated in this inpatient, two-phase double blind, randomized placebo-controlled trial. In Phase 1 (days 1-7), participants were randomly assigned to matched oral placebo or ER tramadol (200 or 600 mg daily). In Phase 2 (days 8-13), all participants underwent double blind crossover to placebo. Breakthrough withdrawal medications were available for all subjects. Enrollment continued until 12 completers/group was achieved. Results Use of breakthrough withdrawal medication differed significantly (p<0.05) among groups in both phases; the 200 mg group received the least amount in Phase 1, and the 600 mg group received the most in both phases. In Phase 1, tramadol 200 mg produced significantly lower peak ratings than placebo on ratings of insomnia, lacrimation, muscular tension, and sneezing. Only tramadol 600 mg produced miosis in Phase 1. In Phase 2, tramadol 600 mg produced higher peak ratings of rhinorrhea, irritable, depressed, heavy/sluggish, and hot/cold flashes than placebo. There were no serious adverse events and no signal of abuse liability for tramadol. Conclusions ER tramadol 200 mg modestly attenuated opioid withdrawal. Mild opioid withdrawal occurred after cessation of treatment with 600 mg tramadol. These data support the continued investigation of tramadol as a treatment for opioid withdrawal. PMID:23755929
NASA Astrophysics Data System (ADS)
Perugini, G.; Ricci-Tersenghi, F.
2018-01-01
We first present an empirical study of the Belief Propagation (BP) algorithm, when run on the random field Ising model defined on random regular graphs in the zero temperature limit. We introduce the notion of extremal solutions for the BP equations, and we use them to fix a fraction of spins in their ground state configuration. At the phase transition point the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly non trivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions we design a new and very easy to implement BP scheme which is able to output a large number of stable fixed points. On one hand this new algorithm is able to provide the minimum energy configuration with high probability in a competitive time. On the other hand we found that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new relevant questions about the physics of this class of models.
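A hedged sketch of the standard zero-temperature cavity (belief propagation) update for the random field Ising model on a random regular graph, h_{i→j} = H_i + ∑_{k∈∂i\j} sign(h_{k→i}) min(J, |h_{k→i}|); the damping, field strength, and graph size are arbitrary, and this is not the extremal-solution scheme proposed in the paper.

```python
# Sketch: damped zero-temperature BP for the RFIM on a 3-regular random graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
g = nx.random_regular_graph(3, 100, seed=2)
J = 1.0
H = 0.6 * rng.choice([-1.0, 1.0], size=100)            # bimodal random fields

msg = {(i, j): 0.0 for i in g for j in g[i]}            # cavity fields h_{i->j}
for _ in range(200):
    new = {}
    for (i, j) in msg:
        incoming = sum(np.sign(msg[(k, i)]) * min(J, abs(msg[(k, i)]))
                       for k in g[i] if k != j)
        new[(i, j)] = 0.5 * msg[(i, j)] + 0.5 * (H[i] + incoming)   # damping
    msg = new

local = {i: H[i] + sum(np.sign(msg[(k, i)]) * min(J, abs(msg[(k, i)])) for k in g[i])
         for i in g}
spins = {i: 1 if local[i] >= 0 else -1 for i in g}
print("BP ground-state magnetisation:", sum(spins.values()) / len(spins))
```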
Localisation in a Growth Model with Interaction
NASA Astrophysics Data System (ADS)
Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.
2018-05-01
This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.
Karsai, Syrus; Czarnecka, Agnieszka; Jünger, Michael; Raulin, Christian
2010-02-01
Ablative fractional lasers were introduced for treating facial rhytides in an attempt to achieve results comparable to traditional ablative resurfacing but with fewer side effects. However, there is conflicting evidence on how well this goal has generally been achieved as well as on the comparative value of fractional CO(2) and Er:YAG lasers. The present study compares these modalities in a randomized controlled double-blind split-face study design. Twenty-eight patients were enrolled and completed the entire study. Patients were randomly assigned to receive a single treatment on each side of the peri-orbital region, one with a fractional CO(2) and one with a fractional Er:YAG laser. The evaluation included the profilometric measurement of wrinkle depth, the Fitzpatrick wrinkle score (both before and 3 months after treatment) as well as the assessment of side effects and patient satisfaction (1, 3, 6 days and 3 months after treatment). Both modalities showed a roughly equivalent effect. Wrinkle depth and Fitzpatrick score were reduced by approximately 20% and 10%, respectively, with no appreciable difference between lasers. Side effects and discomfort were slightly more pronounced after Er:YAG treatment in the first few days, but in the later course there were more complaints following CO(2) laser treatment. Patient satisfaction was fair and the majority of patients would have undergone the treatment again without a clear preference for either method. According to the present study, a single ablative fractional treatment session has an appreciable yet limited effect on peri-orbital rhytides. When fractional CO(2) and Er:YAG lasers are used in such a manner that there are comparable post-operative healing periods, comparable cosmetic improvement occurs. Multiple sessions may be required for full effect, which cancels out the proposed advantage of fractional methods, that is, fewer side effects and less down time.
Webster, Lynn R.; Smith, Michael D.; Lawler, John; Lindhardt, Karsten; Dayno, Jeffrey M.
2017-01-01
Objective. To compare the relative human abuse potential after insufflation of manipulated morphine abuse-deterrent, extended-release injection-molded tablets (morphine-ADER-IMT) with that of marketed morphine ER tablets. Methods. A randomized, double-blind, double-dummy, active- and placebo-controlled five-way crossover study was performed with adult volunteers who were experienced, nondependent, recreational opioid users. After intranasal (IN) administration of manipulated high-volume (HV) morphine-ADER-IMT (60 mg), participants were randomized (1:1:1:1) to receive IN manipulated low-volume (LV) morphine ER (60 mg), IN manipulated LV morphine-ADER-IMT, intact oral morphine-ADER-IMT (60 mg), and placebo in crossover fashion. Pharmacodynamic and pharmacokinetic assessments included peak effect of drug liking (Emax; primary endpoint) using drug liking visual analog scale (VAS) scores, Emax using overall drug liking and take drug again (TDA) VAS scores, and mean abuse quotient (AQ), a pharmacokinetic parameter associated with drug liking. Results. Forty-six participants completed the study. After insufflation of HV morphine-ADER-IMT and LV morphine-ADER-IMT, drug liking Emax was significantly lower (P < 0.0001) compared with IN morphine ER. Overall drug liking and TDA Emax values were significantly lower (P < 0.0001) after insufflation of HV morphine-ADER-IMT and LV morphine-ADER-IMT compared with IN morphine ER. Mean AQ was lower after insufflation of HV (9.2) and LV (2.3) morphine-ADER-IMT or ingestion of oral morphine-ADER-IMT (5.5) compared with insufflation of LV morphine ER (37.2). Conclusions. All drug liking, take drug again, and abuse quotient endpoints support a significantly lower abuse potential with insufflation of manipulated morphine-ADER-IMT compared with manipulated and insufflated non-AD ER morphine. PMID:27651510
ER stress and ER stress-induced apoptosis are activated in gastric SMCs in diabetic rats
Chen, Xia; Fu, Xiang-Sheng; Li, Chang-Ping; Zhao, Hong-Xian
2014-01-01
AIM: To investigate the gastric muscle injury caused by endoplasmic reticulum (ER) stress in rats with diabetic gastroparesis. METHODS: Forty rats were randomly divided into two groups: a control group and a diabetic group. Diabetes was induced by intraperitoneal injection of 60 mg/kg of streptozotocin. Gastric emptying was determined at the 4th and 12th week. The ultrastructural changes in gastric smooth muscle cells (SMCs) were investigated by transmission electron microscopy. TdT-mediated dUTP nick end labeling (TUNEL) assay was performed to assess apoptosis of SMCs. Expression of the ER stress marker, glucose-regulated protein 78 (GRP78), and the ER-specific apoptosis mediator, caspase-12 protein, was determined by immunohistochemistry. RESULTS: Gastric emptying was significantly lower in the diabetic rats than in the control rats at the 12th wk (40.71% ± 2.50%, control rats vs 54.65% ± 5.22%, diabetic rats; P < 0.05). Swollen and distended ER with an irregular shape was observed in gastric SMCs in diabetic rats. Apoptosis of gastric SMCs increased in the diabetic rats in addition to increased expression of GRP78 and caspase-12 proteins. CONCLUSION: ER stress and ER stress-mediated apoptosis are activated in gastric SMCs in diabetic rats with gastroparesis. PMID:25009401
Dynamics of Nearest-Neighbour Competitions on Graphs
NASA Astrophysics Data System (ADS)
Rador, Tonguç
2017-10-01
Considering a collection of agents representing the vertices of a graph endowed with integer points, we study the asymptotic dynamics of the rate of increase of their points according to a very simple rule: we randomly pick an edge from the graph, which unambiguously defines two agents, and give a point to the agent with the larger number of points with probability p and to the lagger with probability q, such that p+q=1. The model we present is the most general version of the nearest-neighbour competition model introduced by Ben-Naim, Vazquez and Redner. We show that the model combines aspects of hyperbolic partial differential equations (such as a conservation law), graph colouring and hyperplane arrangements. We discuss the properties of the model for general graphs but confine the in-depth study to d-dimensional tori. We present a detailed study for the ring graph, which includes a chemical potential approximation to calculate all its statistics that gives rather accurate results. The two-dimensional torus, not studied in as much depth as the ring, is shown to possess critical behaviour in that the asymptotic speeds arrange themselves in two-coloured islands separated by borders of three other colours, and the sizes of the islands obey a power-law distribution. We also show that in the large d limit the d-dimensional torus exhibits an inverse sine law for the distribution of asymptotic speeds.
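A direct simulation sketch of the rule described above on the ring graph; the values of N, p, and the number of steps are arbitrary, and ties are broken in favour of the first endpoint of the chosen edge.

```python
# Sketch: nearest-neighbour competition on a ring; each step rewards the
# current leader of a random edge with probability p, the lagger otherwise.
import random

random.seed(7)
N, p, steps = 50, 0.7, 200000
points = [0] * N
edges = [(i, (i + 1) % N) for i in range(N)]     # cycle / ring graph

for _ in range(steps):
    i, j = random.choice(edges)
    leader, lagger = (i, j) if points[i] >= points[j] else (j, i)
    winner = leader if random.random() < p else lagger
    points[winner] += 1

rates = [round(c / steps, 3) for c in points]    # empirical asymptotic speeds
print(rates)
```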
Analyzing cross-college course enrollments via contextual graph mining.
Wang, Yongzhen; Liu, Xiaozhong; Chen, Yan
2017-01-01
The ability to predict what courses a student may enroll in the coming semester plays a pivotal role in the allocation of learning resources, which is a hot topic in the domain of educational data mining. In this study, we propose an innovative approach to characterize students' cross-college course enrollments by leveraging a novel contextual graph. Specifically, different kinds of variables, such as students, courses, colleges and diplomas, as well as various types of variable relations, are utilized to depict the context of each variable, and then a representation learning algorithm node2vec is applied to extracting sophisticated graph-based features for the enrollment analysis. In this manner, the relations between any pair of variables can be measured quantitatively, which enables the variable type to transform from nominal to ratio. These graph-based features are examined by the random forest algorithm, and experiments on 24,663 students, 1,674 courses and 417,590 enrollment records demonstrate that the contextual graph can successfully improve analyzing the cross-college course enrollments, where three of the graph-based features have significantly stronger impacts on prediction accuracy than the others. Besides, the empirical results also indicate that the student's course preference is the most important factor in predicting future course enrollments, which is consistent to the previous studies that acknowledge the course interest is a key point for course recommendations.
Solving a Hamiltonian Path Problem with a bacterial computer
Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T
2009-01-01
Background: The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results: We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion: We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node directed graph. This proof-of-concept experiment demonstrates that bacterial computing is a new way to address NP-complete problems using the inherent advantages of genetic systems. The results of our experiments also validate synthetic biology as a valuable approach to biological engineering. We designed and constructed basic parts, devices, and systems using synthetic biology principles of standardization and abstraction. PMID:19630940
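For comparison, a minimal in-silico analogue of the exhaustive search the bacterial populations carry out: a brute-force check for Hamiltonian paths in a small directed graph. The three-node graph below is a hypothetical example, not the exact graph encoded in the plasmids.

```python
from itertools import permutations

def hamiltonian_paths(nodes, edges):
    # A path is Hamiltonian if it visits every node once along existing directed edges.
    edge_set = set(edges)
    return [p for p in permutations(nodes)
            if all((p[i], p[i + 1]) in edge_set for i in range(len(p) - 1))]

nodes = ["A", "B", "C"]
edges = [("A", "B"), ("B", "C"), ("A", "C")]
print(hamiltonian_paths(nodes, edges))   # prints [('A', 'B', 'C')]
```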
Finite plateau in spectral gap of polychromatic constrained random networks
NASA Astrophysics Data System (ADS)
Avetisov, V.; Gorsky, A.; Nechaev, S.; Valba, O.
2017-12-01
We consider critical behavior in the ensemble of polychromatic Erdős-Rényi networks and regular random graphs, where network vertices are painted in different colors. Links can be randomly removed and added to the network subject to the condition of vertex degree conservation. In these constrained graphs we run the Metropolis procedure, which favors connected unicolor triads of nodes. Changing the chemical potential μ of such triads, we find that over a wide region of μ a finite plateau forms in the number of intercolor links, which exactly matches the finite plateau in the network algebraic connectivity (the value of the first nonvanishing eigenvalue of the Laplacian matrix, λ2). We claim that at the plateau the spontaneously broken Z2 symmetry is restored by the mechanism of mode collectivization in clusters of different colors. The phenomenon of finite-plateau formation also holds for polychromatic networks with M ≥ 2 colors. The behavior of polychromatic networks is analyzed via the spectral properties of their adjacency and Laplacian matrices.
High Productivity Computing Systems Analysis and Performance
2005-07-01
[Extraction residue from a benchmark table (kernels: Discrete Math, Graph Analysis, Linear Solvers, Signal Processing) omitted.] One of the HPCchallenge codes, RandomAccess, which reports Global Updates per second (GUP/S), is derived from the HPCS discrete math benchmarks that we released; further details can be found at the web site.
Venous tree separation in the liver: graph partitioning using a non-ising model.
O'Donnell, Thomas; Kaftan, Jens N; Schuh, Andreas; Tietjen, Christian; Soza, Grzegorz; Aach, Til
2011-01-01
Entangled tree-like vascular systems are commonly found in the body (e.g., in the peripheries and lungs). Separation of these systems in medical images may be formulated as a graph partitioning problem given an imperfect segmentation and specification of the tree roots. In this work, we show that the ubiquitous Ising-model approaches (e.g., Graph Cuts, Random Walker) are not appropriate for tackling this problem and propose a novel method based on recursive minimal paths for doing so. To motivate our method, we focus on the intertwined portal and hepatic venous systems in the liver. Separation of these systems is critical for liver intervention planning, in particular when resection is involved. We apply our method to 34 clinical datasets, each containing well over a hundred vessel branches, demonstrating its effectiveness.
Threshold-based epidemic dynamics in systems with memory
NASA Astrophysics Data System (ADS)
Bodych, Marcin; Ganguly, Niloy; Krueger, Tyll; Mukherjee, Animesh; Siegmund-Schultze, Rainer; Sikdar, Sandipan
2016-11-01
In this article we analyze an epidemic dynamics model (SI) in which we assume that there are k susceptible states, that is, a node requires multiple (k) contacts before it becomes infected. Specifically, we provide a theoretical framework for studying the diffusion rate in complete graphs and d-regular trees, with extensions to dense random graphs. We observe that, irrespective of the topology, the diffusion process can be divided into two distinct phases: (i) the initial phase, where diffusion is slow, followed by (ii) the residual phase, where the diffusion rate increases manifold. In fact, the initial phase acts as an indicator for the total diffusion time in dense graphs. The most remarkable lesson from this investigation is that such a diffusion process can be controlled, and even contained, if acted upon within its initial phase.
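A simulation sketch of one plausible discrete-time reading of the k-contact SI dynamics (graph size, density, k and the contact rule are illustrative assumptions, not the paper's exact setup):

```python
import random
import networkx as nx

def si_with_threshold(G, k=3, seed=0):
    # SI spreading in which a node must accumulate k contacts from infected
    # neighbours before it becomes infected; returns the total diffusion time.
    rng = random.Random(seed)
    contacts = {v: 0 for v in G}              # infectious contacts received so far
    infected = {rng.choice(list(G))}          # single random initial infective
    t = 0
    while len(infected) < G.number_of_nodes():
        t += 1
        u = rng.choice(list(infected))        # an infected node contacts...
        v = rng.choice(list(G[u]))            # ...a uniformly random neighbour
        if v not in infected:
            contacts[v] += 1
            if contacts[v] >= k:              # k-th contact triggers infection
                infected.add(v)
    return t

G = nx.gnp_random_graph(200, 0.1, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()  # ensure connectivity
print(si_with_threshold(G, k=3))
```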
Solving Set Cover with Pairs Problem using Quantum Annealing
NASA Astrophysics Data System (ADS)
Cao, Yudong; Jiang, Shuxian; Perouli, Debbie; Kais, Sabre
2016-09-01
Here we consider using quantum annealing to solve Set Cover with Pairs (SCP), an NP-hard combinatorial optimization problem that plays an important role in networking, computational biology, and biochemistry. We show an explicit construction of Ising Hamiltonians whose ground states encode the solution of SCP instances. We numerically simulate the time-dependent Schrödinger equation in order to test the performance of quantum annealing for random instances and compare with that of simulated annealing. We also discuss explicit embedding strategies for realizing our Hamiltonian construction on the D-wave type restricted Ising Hamiltonian based on Chimera graphs. Our embedding on the Chimera graph preserves the structure of the original SCP instance and, in particular, the embedding for general complete bipartite graphs and logical disjunctions may be of broader use than the specific problem we deal with.
On a phase diagram for random neural networks with embedded spike timing dependent plasticity.
Turova, Tatyana S; Villa, Alessandro E P
2007-01-01
This paper presents an original mathematical framework based on graph theory which is a first attempt to investigate the dynamics of a model of neural networks with embedded spike timing dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to inhibitory and excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike. The number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on the graph. Several results are compared with computational simulations, and new data are presented for identifying critical parameters of the model.
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach to the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
Emergence of cooperation in non-scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Yichao; Aziz-Alaoui, M. A.; Bertelle, Cyrille; Zhou, Shi; Wang, Wenting
2014-06-01
Evolutionary game theory is one of the key paradigms behind many scientific disciplines, from the natural sciences to engineering. Previous studies proposed a strategy updating mechanism which successfully demonstrated that scale-free networks can provide a framework for the emergence of cooperation; under this updating rule, however, individuals in random graphs and small-world networks do not favor cooperation. Moreover, a recent empirical result shows that heterogeneous networks do not promote cooperation when humans play a prisoner's dilemma. In this paper, we propose a strategy updating rule with payoff memory. We observe that random graphs and small-world networks can provide even better frameworks for cooperation than scale-free networks in this scenario. Our observations suggest that degree heterogeneity may be neither a sufficient nor a necessary condition for widespread cooperation in complex networks. Moreover, topological structure alone does not suffice to determine the level of cooperation in complex networks.
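A schematic sketch of the kind of dynamics described above (not the authors' exact update rule; payoff matrix, memory length and graph parameters are illustrative assumptions): a prisoner's dilemma on a random graph where imitation is based on payoffs accumulated over a memory window rather than on the last round alone.

```python
import random
import networkx as nx

def play(G, rounds=200, b=1.5, memory=10, seed=0):
    rng = random.Random(seed)
    strat = {v: rng.random() < 0.5 for v in G}          # True = cooperate
    history = {v: [] for v in G}                        # recent payoffs (the "memory")
    for _ in range(rounds):
        for v in G:
            # Weak PD payoffs: R=1, T=b, S=0, P=0, summed over all neighbours.
            pay = sum(1.0 if strat[v] and strat[u]
                      else (b if (not strat[v]) and strat[u] else 0.0)
                      for u in G[v])
            history[v] = (history[v] + [pay])[-memory:]
        new = dict(strat)
        for v in G:
            neigh = list(G[v])
            if not neigh:
                continue
            u = rng.choice(neigh)                       # compare with a random neighbour
            if sum(history[u]) > sum(history[v]):       # imitate if memory-payoff is higher
                new[v] = strat[u]
        strat = new
    return sum(strat.values()) / len(strat)             # final cooperation frequency

G = nx.gnp_random_graph(100, 0.08, seed=1)
print(play(G))
```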
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test to initialize the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result of the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF corresponds to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF into an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study the graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
Benchmarking Measures of Network Controllability on Canonical Graph Models
NASA Astrophysics Data System (ADS)
Wu-Yan, Elena; Betzel, Richard F.; Tang, Evelyn; Gu, Shi; Pasqualetti, Fabio; Bassett, Danielle S.
2018-03-01
The control of networked dynamical systems opens the possibility for new discoveries and therapies in systems biology and neuroscience. Recent theoretical advances provide candidate mechanisms by which a system can be driven from one pre-specified state to another, and computational approaches provide tools to test those mechanisms in real-world systems. Despite already having been applied to study network systems in biology and neuroscience, the practical performance of these tools and associated measures on simple networks with pre-specified structure has yet to be assessed. Here, we study the behavior of four control metrics (global, average, modal, and boundary controllability) on eight canonical graphs (including Erdős-Rényi, regular, small-world, random geometric, Barabási-Albert preferential attachment, and several modular networks) with different edge weighting schemes (Gaussian, power-law, and two nonparametric distributions from brain networks, as examples of real-world systems). We observe that differences in global controllability across graph models are more salient when edge weight distributions are heavy-tailed as opposed to normal. In contrast, differences in average, modal, and boundary controllability across graph models (as well as across nodes in the graph) are more salient when edge weight distributions are less heavy-tailed. Across graph models and edge weighting schemes, average and modal controllability are negatively correlated with one another across nodes; yet, across graph instances, the relation between average and modal controllability can be positive, negative, or nonsignificant. Collectively, these findings demonstrate that controllability statistics (and their relations) differ across graphs with different topologies and that these differences can be muted or accentuated by differences in the edge weight distributions. More generally, our numerical studies motivate future analytical efforts to better understand the mathematical underpinnings of the relationship between graph topology and control, as well as efforts to design networks with specific control profiles.
Evolutionary games on cycles with strong selection
NASA Astrophysics Data System (ADS)
Altrock, P. M.; Traulsen, A.; Nowak, M. A.
2017-02-01
Evolutionary games on graphs describe how strategic interactions and population structure determine evolutionary success, quantified by the probability that a single mutant takes over a population. Graph structures, compared to the well-mixed case, can act as amplifiers or suppressors of selection by increasing or decreasing the fixation probability of a beneficial mutant. Properties of the associated mean fixation times can be more intricate, especially when selection is strong. The intuition is that fixation of a beneficial mutant happens fast in a dominance game, that fixation takes very long in a coexistence game, and that strong selection eliminates demographic noise. Here we show that these intuitions can be misleading in structured populations. We analyze mean fixation times on the cycle graph under strong frequency-dependent selection for two different microscopic evolutionary update rules (death-birth and birth-death). We establish exact analytical results for fixation times under strong selection and show that there are coexistence games in which fixation occurs in time polynomial in population size. Depending on the underlying game, we observe the persistence of demographic noise even under strong selection if the process is driven by random death before selection for birth of an offspring (death-birth update). In contrast, if selection for an offspring occurs before random removal (birth-death update), then strong selection can remove demographic noise almost entirely.
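A simulation sketch of death-birth updating on a cycle for a 2x2 game, measuring the time until the mutant strategy fixates or goes extinct. The payoff matrix, population size, selection strength and the exponential payoff-to-fitness mapping are illustrative assumptions, not the paper's analytical setup.

```python
import math
import random

def fixation_time_on_cycle(n=50, a=3.0, b=1.0, c=4.0, d=2.0, w=5.0, seed=0):
    rng = random.Random(seed)
    state = [0] * n
    state[rng.randrange(n)] = 1                      # single A-mutant (1) among B (0)

    def payoff(i):
        left, right = state[(i - 1) % n], state[(i + 1) % n]
        row = (a, b) if state[i] == 1 else (c, d)    # (payoff vs A, payoff vs B)
        return row[1 - left] + row[1 - right]

    t = 0
    while 0 < sum(state) < n:
        t += 1
        i = rng.randrange(n)                         # random death
        nbrs = [(i - 1) % n, (i + 1) % n]
        fit = [math.exp(w * payoff(j)) for j in nbrs]  # strong selection via exp(w * payoff)
        state[i] = state[rng.choices(nbrs, weights=fit)[0]]
    return t, sum(state) == n                        # (time, did A fixate?)

print(fixation_time_on_cycle())
```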
A comparison of two ambulatory blood pressure monitors worn at the same time.
Kallem, Radhakrishna R; Meyers, Kevin E C; Sawinski, Deirdre L; Townsend, Raymond R
2013-05-01
There are limited data in the literature comparing two simultaneously worn ambulatory blood pressure (BP) monitoring (ABPM) devices. The authors compared BPs from two monitors (Mobil-O-Graph [I.E.M., Stolberg, Germany] and Spacelabs 90207 [Spacelabs Medical, Issaquah, WA]). In the nonrandomized component of the study, simultaneous 8-hour BP and heart rate data were measured by the Mobil-O-Graph, consistently applied to the nondominant arm, and the Spacelabs to the dominant arm on 12 untreated adults. Simultaneous 8-hour BP and heart rate data were obtained by the same monitors randomly assigned to a dominant or nondominant arm on 12 other untreated adults. Oscillometric BP profiles were obtained in the dominant and nondominant arms of the above 24 patients using an Accutorr (Datascope, Mahwah, NJ) device. The Spacelabs monitor recorded a 10.2-mm Hg higher systolic pressure in the nonrandomized (P=.0016) and a 7.9-mm Hg higher systolic pressure in the randomized studies (P=.00008) compared with the Mobil-O-Graph. The mean arterial pressures were 1 mm Hg to 2 mm Hg different between monitors in the two studies, and heart rates were nearly identical. Our observations, if confirmed in larger cohorts, support the recommendation that ABPM device manufacturers consider developing normative databases for their devices. ©2013 Wiley Periodicals, Inc.
Takeover times for a simple model of network infection.
Ottino-Löffler, Bertrand; Scott, Jacob G; Strogatz, Steven H
2017-07-01
We study a stochastic model of infection spreading on a network. At each time step a node is chosen at random, along with one of its neighbors. If the node is infected and the neighbor is susceptible, the neighbor becomes infected. How many time steps T does it take to completely infect a network of N nodes, starting from a single infected node? An analogy to the classic "coupon collector" problem of probability theory reveals that the takeover time T is dominated by extremal behavior, either when there are only a few infected nodes near the start of the process or a few susceptible nodes near the end. We show that for N≫1, the takeover time T is distributed as a Gumbel distribution for the star graph, as the convolution of two Gumbel distributions for a complete graph and an Erdős-Rényi random graph, as a normal for a one-dimensional ring and a two-dimensional lattice, and as a family of intermediate skewed distributions for d-dimensional lattices with d≥3 (these distributions approach the convolution of two Gumbel distributions as d approaches infinity). Connections to evolutionary dynamics, cancer, incubation periods of infectious diseases, first-passage percolation, and other spreading phenomena in biology and physics are discussed.
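A quick numerical check of the takeover-time picture described above, on a complete graph (the size and number of runs are illustrative): at each step a random node and a random neighbour are chosen, and infection passes only from an infected node to a susceptible one.

```python
import random

def takeover_time_complete_graph(n=200, seed=0):
    rng = random.Random(seed)
    infected = [False] * n
    infected[0] = True                     # single initially infected node
    t = 0
    while not all(infected):
        t += 1
        u = rng.randrange(n)
        v = rng.randrange(n - 1)
        v = v if v < u else v + 1          # uniform neighbour != u on the complete graph
        if infected[u] and not infected[v]:
            infected[v] = True
    return t

samples = [takeover_time_complete_graph(seed=s) for s in range(20)]
print(sum(samples) / len(samples))         # mean takeover time over 20 runs
```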
Entanglement guarantees emergence of cooperation in quantum prisoner's dilemma games on networks.
Li, Angsheng; Yong, Xi
2014-09-05
It was known that cooperation in evolutionary prisoner's dilemma games fails to emerge in homogeneous networks such as random graphs. Here we propose a quantum prisoner's dilemma game. The game consists of two players, each of whom has three choices of strategy: cooperator (C), defector (D) and super cooperator (denoted by Q). We found that quantum entanglement guarantees the emergence of a new form of cooperation, the super cooperation of quantum prisoner's dilemma games, and that entanglement is the mechanism behind the guaranteed emergence of cooperation in evolutionary prisoner's dilemma games on networks. We showed that for a game with temptation b, there exists a threshold arccos(√b/b) for a measure of entanglement beyond which (super) cooperation of evolutionary quantum prisoner's dilemma games is guaranteed to emerge quickly, giving rise to stochastic convergence of the cooperation; that if the entanglement degree γ is less than the threshold arccos(√b/b), then the equilibrium frequency of cooperation of the games is positively correlated with the entanglement degree γ; and that if γ is less than arccos(√b/b) and b is beyond some boundary, then the equilibrium frequency of cooperation of the games on random graphs decreases as the average degree of the graphs increases.
Generating subtour elimination constraints for the TSP from pure integer solutions.
Pferschy, Ulrich; Staněk, Rostislav
2017-01-01
The traveling salesman problem (TSP) is one of the most prominent combinatorial optimization problems. Given a complete graph [Formula: see text] and non-negative distances d for every edge, the TSP asks for a shortest tour through all vertices with respect to the distances d. The method of choice for solving the TSP to optimality is a branch-and-cut approach. Usually the integrality constraints are relaxed first and all separation processes to identify violated inequalities are done on fractional solutions. In our approach we try to exploit the impressive performance of current ILP-solvers and work only with integer solutions, without ever interfering with fractional solutions. We stick to a very simple ILP model and relax the subtour elimination constraints only. The resulting problem is solved to integer optimality, violated constraints (which are trivial to find) are added and the process is repeated until a feasible solution is found. In order to speed up the algorithm we pursue several attempts to find as many relevant subtours as possible. These attempts are based on the clustering of vertices, with additional insights gained from empirical observations and random graph theory. Computational results are reported for test instances taken from TSPLIB95 and for random Euclidean graphs.
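A sketch of the separation step described above (illustrative, not the authors' implementation): given the edges selected in an integer solution, the subtours are simply the connected components of the selected edge set, and each component S with fewer than n vertices yields a violated subtour elimination constraint (sum of x_e over edges inside S <= |S| - 1).

```python
import networkx as nx

def violated_subtours(n, selected_edges):
    # Build the support graph of the integer solution and return every
    # connected component that is a proper subtour (size < n).
    H = nx.Graph()
    H.add_nodes_from(range(n))
    H.add_edges_from(selected_edges)
    return [set(c) for c in nx.connected_components(H) if len(c) < n]

# Hypothetical integer "solution" with two subtours on 6 cities:
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
for S in violated_subtours(6, edges):
    print("add subtour elimination constraint over", sorted(S))
```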
The influence of graphic format on breast cancer risk communication.
Schapira, Marilyn M; Nattinger, Ann B; McAuliffe, Timothy L
2006-09-01
Graphic displays can enhance quantitative risk communication. However, empirical data regarding the effect of graphic format on risk perception are lacking. We evaluated the effect of graphic format elements on perceptions of risk magnitude and the perceived truth of data. Preferences for format were also assessed. Participants (254 female primary care patients) viewed a series of hypothetical risk communications regarding the lifetime risk of breast cancer. Identical numeric risk information was presented using different graphic formats. Risk was perceived to be of lower magnitude when communicated with a bar graph as compared with a pictorial display (p < 0.0001), or with consecutively versus randomly highlighted symbols in a pictorial display (p = 0.0001). Data were perceived to be more true when presented with random versus consecutive highlights in a pictorial display (p < 0.01). A pictorial display was preferred to a bar graph format for the presentation of breast cancer risk estimates alone (p = 0.001). When considering breast cancer risk in comparison to heart disease, stroke, and osteoporosis, however, bar graphs were preferred to pictorial displays (p < 0.001). In conclusion, the elements of graphic format used to convey quantitative risk information affect key domains of risk perception. One must be cognizant of these effects when designing risk communication strategies.
NeAT: a toolbox for the analysis of biological networks, clusters, classes and pathways.
Brohée, Sylvain; Faust, Karoline; Lima-Mendez, Gipsi; Sand, Olivier; Janky, Rekin's; Vanderstocken, Gilles; Deville, Yves; van Helden, Jacques
2008-07-01
The network analysis tools (NeAT) (http://rsat.ulb.ac.be/neat/) provide a user-friendly web access to a collection of modular tools for the analysis of networks (graphs) and clusters (e.g. microarray clusters, functional classes, etc.). A first set of tools supports basic operations on graphs (comparison between two graphs, neighborhood of a set of input nodes, path finding and graph randomization). Another set of programs makes the connection between networks and clusters (graph-based clustering, clique discovery and mapping of clusters onto a network). The toolbox also includes programs for detecting significant intersections between clusters/classes (e.g. clusters of co-expression versus functional classes of genes). NeAT are designed to cope with large datasets and provide a flexible toolbox for analyzing biological networks stored in various databases (protein interactions, regulation and metabolism) or obtained from high-throughput experiments (two-hybrid, mass-spectrometry and microarrays). The web interface interconnects the programs in predefined analysis flows, enabling users to address a series of questions about networks of interest. Each tool can also be used separately by entering custom data for a specific analysis. NeAT can also be used as web services (SOAP/WSDL interface), in order to design programmatic workflows and integrate them with other available resources.
Experimental quantum annealing: case study involving the graph isomorphism problem.
Zick, Kenneth M; Shehab, Omar; French, Matthew
2015-06-08
Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N² to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.
Zhang, Qin
2015-07-01
Probabilistic graphical models (PGMs) such as the Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise conditional independence would not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. An incomplete DUCG, as a part of a complete DUCG, may represent a part of the JPD. Examples are provided to illustrate the methodology.
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
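As a point of reference for the scheduling problem described above, here is an illustrative greedy baseline (not TMSCRO itself): list scheduling of a DAG onto heterogeneous processors, assigning each task in topological order to the processor that finishes it earliest. Task costs and the example DAG are hypothetical, and communication costs are ignored.

```python
import networkx as nx

def list_schedule(dag, cost):
    # cost[task] = list of execution times of that task on each processor.
    n_proc = len(next(iter(cost.values())))
    proc_free = [0.0] * n_proc                       # time each processor becomes free
    finish = {}
    for task in nx.topological_sort(dag):
        ready = max((finish[p] for p in dag.predecessors(task)), default=0.0)
        # Pick the processor giving the earliest finish time for this task.
        best = min(range(n_proc),
                   key=lambda q: max(ready, proc_free[q]) + cost[task][q])
        start = max(ready, proc_free[best])
        finish[task] = proc_free[best] = start + cost[task][best]
    return max(finish.values())                      # makespan

dag = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
cost = {"a": [2, 3], "b": [4, 2], "c": [3, 3], "d": [1, 2]}
print(list_schedule(dag, cost))                      # prints 6.0 for this toy instance
```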
Quantum Optimization of Fully Connected Spin Glasses
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim
2015-07-01
Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures.
Benson, Austin R; Gleich, David F; Leskovec, Jure
2015-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
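For contrast with the tensor method described above, a minimal example of the standard first-order spectral approach it generalizes: bisection based on the Fiedler vector of the graph Laplacian. The benchmark graph and its parameters are illustrative.

```python
import networkx as nx
import numpy as np

def spectral_bisection(G):
    # Partition nodes by the sign of the eigenvector associated with the
    # second-smallest Laplacian eigenvalue (the Fiedler vector).
    nodes = sorted(G)
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return ({nodes[i] for i in range(len(nodes)) if fiedler[i] >= 0},
            {nodes[i] for i in range(len(nodes)) if fiedler[i] < 0})

G = nx.planted_partition_graph(2, 20, 0.5, 0.05, seed=1)
A, B = spectral_bisection(G)
print(len(A), len(B))
```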
Interrelations between random walks on diagrams (graphs) with and without cycles.
Hill, T L
1988-05-01
Three topics are discussed. A discrete-state, continuous-time random walk with one or more absorption states can be studied by a presumably new method: some mean properties, including the mean time to absorption, can be found from a modified diagram (graph) in which each absorption state is replaced by a one-way cycle back to the starting state. The second problem is a random walk on a diagram (graph) with cycles. The walk terminates on completion of the first cycle. This walk can be replaced by an equivalent walk on a modified diagram with absorption. This absorption diagram can in turn be replaced by another modified diagram with one-way cycles back to the starting state, just as in the first problem. The third problem, important in biophysics, relates to a long-time continuous walk on a diagram with cycles. This diagram can be transformed (in two steps) to a modified, more-detailed, diagram with one-way cycles only. Thus, the one-way cycle fluxes of the original diagram can be found from the state probabilities of the modified diagram. These probabilities can themselves be obtained by simple matrix inversion (the probabilities are determined by linear algebraic steady-state equations). Thus, a simple method is now available to find one-way cycle fluxes exactly (previously Monte Carlo simulation was required to find these fluxes, with attendant fluctuations, for diagrams of any complexity). An incidental benefit of the above procedure is that it provides a simple proof of the one-way cycle flux relation J_n^± = Π_n^± Σ_n/Σ, where n is any cycle of the original diagram.
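A small worked example of the "simple matrix inversion" viewpoint mentioned above: for a discrete-state, continuous-time random walk with rate matrix Q, the mean times to absorption solve a linear system restricted to the transient states. The 3-state chain below (state 2 absorbing) is hypothetical.

```python
import numpy as np

# Generator (rate) matrix: rows sum to zero; state 2 is absorbing.
Q = np.array([[-2.0,  1.0, 1.0],
              [ 1.5, -3.0, 1.5],
              [ 0.0,  0.0, 0.0]])

transient = [0, 1]
Qt = Q[np.ix_(transient, transient)]          # generator restricted to transient states
# Mean absorption times t satisfy Qt @ t = -1 (vector of ones).
mean_absorption_times = np.linalg.solve(Qt, -np.ones(len(transient)))
print(mean_absorption_times)                  # expected time to absorption from states 0 and 1
```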
Image analysis of oronasal fistulas in cleft palate patients acquired with an intraoral camera.
Murphy, Tania C; Willmot, Derrick R
2005-01-01
The aim of this study was to examine the clinical technique of using an intraoral camera to monitor the size of residual oronasal fistulas in cleft lip-cleft palate patients, to assess its repeatability on study casts and patients, and to compare its use with other methods. Seventeen plaster study casts of cleft palate patients with oronasal fistulas, obtained from a 5-year series of 160 patients, were used. For the clinical study, 13 patients presenting in a clinic prospectively over a 1-year period were imaged twice by the camera. The area of each fistula on each study cast was measured in the laboratory, first using a previously described graph paper and caliper technique and second with the intraoral camera. Images were imported into a computer and subjected to image enhancement and area measurement. The camera was calibrated by imaging a standard periodontal probe within the fistula area. The measurements were repeated using a double-blind technique on randomly renumbered casts to assess the repeatability of measurement of the methods. The clinical images were randomly and blindly numbered and subjected to image enhancement and processing in the same way as for the study casts. Area measurements were computed. Statistical analysis of repeatability of measurement using a paired sample t test showed no significant difference between measurements, indicating a lack of systematic error. An intraclass correlation coefficient of 0.97 for the graph paper method and 0.84 for the camera method showed acceptable random error between the repeated records for each of the two methods. The graph paper method remained slightly more repeatable. The mean fistula area of the study casts between the two methods was not statistically different when compared with a paired samples t test (p = 0.08). The methods were compared using the limits-of-agreement technique, which showed clinically acceptable repeatability. The clinical study of repeated measures showed no systematic differences when subjected to a t test (p = 0.109) and little random error, with an intraclass correlation coefficient of 0.98. The fistula area seen in the clinical study ranged from 18.54 to 271.55 mm². Direct measurements subsequently taken on 13 patients in the clinic without study models showed a wide variation in the size of residual fistulas presenting in a multidisciplinary clinic. It was concluded that an intraoral camera method could be used in place of the previous graph paper method and could be developed for clinical and scientific purposes. This technique may offer advantages over the graph paper method, as it facilitates easy visualization of oronasal fistulas and objective fistula size determination and permits easy storage of data in clinical records.
Chung, Dongjun; Kim, Hang J; Zhao, Hongyu
2017-02-01
Genome-wide association studies (GWAS) have identified tens of thousands of genetic variants associated with hundreds of phenotypes and diseases, which have provided clinical and medical benefits to patients with novel biomarkers and therapeutic targets. However, identification of risk variants associated with complex diseases remains challenging as they are often affected by many genetic variants with small or moderate effects. There has been accumulating evidence suggesting that different complex traits share a common risk basis, namely pleiotropy. Recently, several statistical methods have been developed to improve statistical power to identify risk variants for complex traits through a joint analysis of multiple GWAS datasets by leveraging pleiotropy. While these methods were shown to improve statistical power for association mapping compared to separate analyses, they are still limited in the number of phenotypes that can be integrated. In order to address this challenge, in this paper, we propose a novel statistical framework, graph-GPA, to integrate a large number of GWAS datasets for multiple phenotypes using a hidden Markov random field approach. Application of graph-GPA to a joint analysis of GWAS datasets for 12 phenotypes shows that graph-GPA improves statistical power to identify risk variants compared to statistical methods based on a smaller number of GWAS datasets. In addition, graph-GPA also promotes better understanding of genetic mechanisms shared among phenotypes, which can potentially be useful for the development of improved diagnosis and therapeutics. The R implementation of graph-GPA is currently available at https://dongjunchung.github.io/GGPA/.
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
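A simulation sketch of the encounter-rate idea described above (grid size, agent count and walk length are illustrative assumptions): agents perform independent random walks on a torus grid and estimate the global density from how often they share a cell with another agent.

```python
import random

def estimate_density(side=50, n_agents=250, steps=400, seed=0):
    rng = random.Random(seed)
    pos = [(rng.randrange(side), rng.randrange(side)) for _ in range(n_agents)]
    encounters = [0] * n_agents
    for _ in range(steps):
        occupancy = {}
        for i, (x, y) in enumerate(pos):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            pos[i] = ((x + dx) % side, (y + dy) % side)   # step on the torus
            occupancy.setdefault(pos[i], []).append(i)
        for agents in occupancy.values():
            if len(agents) > 1:                           # co-located agents collide
                for i in agents:
                    encounters[i] += len(agents) - 1
    true_density = n_agents / side ** 2
    estimates = [e / steps for e in encounters]           # per-step encounter rate
    return true_density, sum(estimates) / len(estimates)  # rate approximates density

print(estimate_density())
```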
Park, Yong-Beom; Ha, Chul-Won; Cho, Sung-Do; Lee, Myung-Chul; Lee, Ju-Hong; Seo, Seung-Suk; Kang, Seung-Baik; Kyung, Hee-Soo; Choi, Choong-Hyeok; Chang, NaYoon; Rhim, Hyou Young Helen; Bin, Seong-Il
2015-01-01
To evaluate the relative efficacy and safety of extended-release tramadol HCl 75 mg/acetaminophen 650 mg (TA-ER) and immediate-release tramadol HCl 37.5 mg/acetaminophen 325 mg (TA-IR) for the treatment of moderate to severe acute pain following total knee replacement. This phase III, double-blind, placebo-controlled, parallel-group study randomized 320 patients with moderate to severe pain (≥4 intensity on an 11 point numeric rating scale) following total knee replacement arthroplasty to receive oral TA-ER (every 12 hours) or TA-IR (every 6 hours) over a period of 48 hours. In the primary analysis, TA-ER was evaluated for efficacy non-inferior to that of TA-IR based on the sum of pain intensity difference (SPID) at 48 hours after the first dose of study drug (SPID48). Secondary endpoints included SPID at additional time points, total pain relief at all on-therapy time points (TOTPAR), sum of SPID and TOTPAR at all on-therapy time points (SPID + TOTPAR), use of rescue medication, subjective pain assessment (PGIC, Patient Global Impression of Change), and adverse events (AEs). Analysis of the primary efficacy endpoint (SPID48) could not establish the non-inferiority of TA-ER to TA-IR. However, a post hoc analysis with a re-defined non-inferiority margin did demonstrate the non-inferiority of TA-ER to TA-IR. No statistically significant difference in SPID at 6, 12, or 24 hours was observed between the TA-ER and TA-IR groups. Similarly, analysis of TOTPAR showed that there were no significant differences between groups at any on-therapy time point, and SPID + TOTPAR at 6 and 48 hours were similar among groups. There was no difference in the mean frequency or dosage of rescue medication required by both groups, and the majority of patients in both the TA-ER and TA-IR groups rated their pain improvement as 'much' or 'somewhat better'. The overall incidence of ≥1 AEs was similar among the TA-ER (88.8%) and TA-IR (89.5%) groups. The most commonly reported AEs by patients treated with TA-ER and TA-IR included nausea (49.7% vs 44.4%), vomiting (28.0% vs 24.2%), and decreased hemoglobin (23.6% vs 26.1%). This study is limited by the lack of placebo control, and the invalidity of the initial non-inferiority margin. This study demonstrated that the analgesic effect of TA-ER is non-inferior to TA-IR, and supports TA-ER as an effective and safe treatment for moderate to severe acute pain post total knee replacement. Clinicaltrials.gov, NCT01814878.
Effects of Self-Image on Anxiety, Judgement Bias and Emotion Regulation in Social Anxiety Disorder.
Lee, Hannah; Ahn, Jung-Kwang; Kwon, Jung-Hye
2018-04-25
Research to date has focused on the detrimental effects of negative self-images for individuals with social anxiety disorder (SAD), but the benefits of positive self-images have been neglected. The present study examined the effect of holding a positive versus negative self-image in mind on anxiety, judgement bias and emotion regulation (ER) in individuals with SAD. Forty-two individuals who met the diagnostic criteria for SAD were randomly assigned to either a positive or a negative self-image group. Participants were assessed twice with a week's interval in between using the Reactivity and Regulation Situation Task, which measures social anxiety, discomfort, judgement bias and ER, prior to and after the inducement of a positive or negative self-image. Individuals in the positive self-image group reported less social anxiety, discomfort and distress from social cost when compared with their pre-induction state. They also used more adaptive ER strategies and experienced less anxiety and discomfort after using ER. In contrast, individuals in the negative self-image group showed no significant differences in anxiety, judgement bias or ER strategies before and after the induction. This study highlights the beneficial effects of positive self-images on social anxiety and ER.
Evaluation of the bond strength of resin cements used to lute ceramics on laser-etched dentin.
Giray, Figen Eren; Duzdar, Lale; Oksuz, Mustafa; Tanboga, Ilknur
2014-07-01
The purpose of this study was to investigate the shear bond strength (SBS) of two different adhesive resin cements used to lute ceramics onto laser-etched dentin. Erbium, chromium: yttrium, scandium, gallium, garnet (Er,Cr:YSGG) laser irradiation has been claimed to improve the adhesive properties of dentin, but results to date have been controversial, and its compatibility with existing adhesive resin cements has not been conclusively determined. Two adhesive cements, one "etch-and-rinse" [Variolink II (V)] and one "self-etch" [Clearfil Esthetic Cement (C)] luting cement, were used to lute ceramic blocks (Vita Celay Blanks, Vita) onto dentin surfaces. In total, 80 dentin specimens were distributed randomly into eight experimental groups according to the dentin surface-etching technique used (Er,Cr:YSGG laser or Er:YAG laser): (1) 37% orthophosphoric acid+V (control group), (2) Er,Cr:YSGG laser+V, (3) Er,Cr:YSGG laser+acid+V, (4) Er:YAG laser+V, (5) Er:YAG laser+acid+V, (6) C, (7) Er,Cr:YSGG laser+C, and (8) Er:YAG laser+C. Following these applications, the ceramic discs were bonded to the prepared surfaces and were shear loaded in a universal testing machine until fracture. SBS was recorded for each group in MPa. Shear test values were evaluated statistically using the Mann-Whitney U test. No statistically significant differences were evident between the control group and the other groups (p>0.05). The Er,Cr:YSGG laser+acid+V group demonstrated significantly higher SBS than did the Er,Cr:YSGG laser+V group (p=0.034). The Er,Cr:YSGG laser+C and Er:YAG laser+C groups demonstrated significantly lower SBS than did the C group (p<0.05). Dentin surfaces prepared with lasers may provide comparable ceramic bond strengths, depending upon the adhesive cement used.
Automatic lung nodule graph cuts segmentation with deep learning false positive reduction
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei
2017-03-01
To automatically detect lung nodules from CT images, we designed a two-stage computer aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnostic consensus among all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. With false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
González-Durruthy, Michael; Monserrat, Jose M; Rasulev, Bakhtiyor; Casañola-Martín, Gerardo M; Barreiro Sorrivas, José María; Paraíso-Medina, Sergio; Maojo, Víctor; González-Díaz, Humberto; Pazos, Alejandro; Munteanu, Cristian R
2017-11-11
This study presents the impact of carbon nanotubes (CNTs) on mitochondrial oxygen mass flux (Jm) under three experimental conditions. New experimental results and a new methodology are reported for the first time and they are based on CNT Raman spectra star graph transform (spectral moments) and perturbation theory. The experimental measures of Jm showed that no tested CNT family can inhibit the oxygen consumption profiles of mitochondria. The best model for the prediction of Jm for other CNTs was provided by random forest using eight features, obtaining test R-squared (R²) of 0.863 and test root-mean-square error (RMSE) of 0.0461. The results demonstrate the capability of encoding CNT information into spectral moments of the Raman star graph (SG) transform with a potential applicability as predictive tools in nanotechnology and material risk assessments.
Efficient quantum walk on a quantum processor
Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.
2016-01-01
The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471
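A classical reference computation for the walks described above (illustrative only, not the photonic circuit): the output distribution of a continuous-time quantum walk exp(-iAt) on a small circulant graph, which the quantum circuits sample from efficiently. The graph size, jump set and walk time are assumptions.

```python
import numpy as np
from scipy.linalg import expm

def circulant_adjacency(n, connections):
    # Symmetric "jumps" defining a circulant graph on n vertices.
    A = np.zeros((n, n))
    for j in connections:
        for v in range(n):
            A[v, (v + j) % n] = A[v, (v - j) % n] = 1
    return A

n, t = 8, 1.0
A = circulant_adjacency(n, connections=[1, 2])
U = expm(-1j * A * t)                         # continuous-time quantum walk operator
probs = np.abs(U[:, 0]) ** 2                  # walk started at vertex 0
print(probs, probs.sum())                     # probabilities sum to 1 up to numerical error
```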
NASA Astrophysics Data System (ADS)
Matsutani, Shigeki; Sato, Iwao
2017-09-01
In a previous report (Matsutani and Suzuki, 2000 [21]), by proposing a mechanism in which electric conductivity is caused by activational hopping conduction with the Wigner surmise of level statistics, the temperature dependence of the electronic conductivity of a highly disordered carbon system was evaluated, including an apparent metal-insulator transition. Since the system consists of small pieces of graphite, it was assumed that the level statistics appear because of quantum-chaotic behavior in each graphite granule. In this article, we revise that assumption and show another origin of the Wigner surmise, which is more natural for the carbon system, based on a recent investigation of the graph zeta function in graph theory. Our method can be applied to the statistical treatment of the electronic properties of randomized molecular systems in general.
Lerner, Debra; Chang, Hong; Rogers, William H; Benson, Carmela; Chow, Wing; Kim, Myoung S; Biondi, David
2012-08-01
To determine the impact of tapentadol extended release (ER) versus placebo or oxycodone controlled release (CR) on the work productivity of adults with chronic moderate to severe knee osteoarthritis pain. Using clinical trial data on pain outcomes, a validated methodology imputed treatment group differences in at-work productivity and associated differences in productivity costs (assuming a $100,000 annual salary per participant). Imputed improvements in at-work productivity were significantly greater for tapentadol ER compared with either placebo (mean, 1.96% vs 1.51%; P = 0.001) or oxycodone CR (mean, 1.96% vs 1.40%; P < 0.001). Mean net savings per participant were $450 (P < 0.01) for tapentadol ER versus placebo and $560 (P = 0.001) for tapentadol ER versus oxycodone CR. Effective osteoarthritis pain treatment also may help employees to function better at work and reduce their employers' productivity costs.
Contact replacement for NMR resonance assignment.
Xiong, Fei; Pandurangan, Gopal; Bailey-Kellogg, Chris
2008-07-01
Complementing its traditional role in structural studies of proteins, nuclear magnetic resonance (NMR) spectroscopy is playing an increasingly important role in functional studies. NMR dynamics experiments characterize motions involved in target recognition, ligand binding, etc., while NMR chemical shift perturbation experiments identify and localize protein-protein and protein-ligand interactions. The key bottleneck in these studies is to determine the backbone resonance assignment, which allows spectral peaks to be mapped to specific atoms. This article develops a novel approach to address that bottleneck, exploiting an available X-ray structure or homology model to assign the entire backbone from a set of relatively fast and cheap NMR experiments. We formulate contact replacement for resonance assignment as the problem of computing correspondences between a contact graph representing the structure and an NMR graph representing the data; the NMR graph is a significantly corrupted, ambiguous version of the contact graph. We first show that by combining connectivity and amino acid type information, and exploiting the random structure of the noise, one can provably determine unique correspondences in polynomial time with high probability, even in the presence of significant noise (a constant number of noisy edges per vertex). We then detail an efficient randomized algorithm and show that, over a variety of experimental and synthetic datasets, it is robust to typical levels of structural variation (1-2 Å), noise (250-600%) and missing data (10-40%). Our algorithm achieves very good overall assignment accuracy, above 80% in alpha-helices, 70% in beta-sheets and 60% in loop regions. Our contact replacement algorithm is implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
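As a toy illustration of the correspondence problem (the exact, noise-free case only, not the paper's randomized algorithm), a small "NMR graph" labelled with amino acid types can be matched onto a contact graph with a standard subgraph-isomorphism matcher; the residue names and connectivities below are invented.

```python
# Match a tiny type-labelled NMR graph onto a contact graph with networkx's VF2 matcher.
import networkx as nx
from networkx.algorithms import isomorphism

contact = nx.Graph()
contact.add_nodes_from([(1, {"aa": "ALA"}), (2, {"aa": "GLY"}),
                        (3, {"aa": "LEU"}), (4, {"aa": "ALA"})])
contact.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1)])

nmr = nx.Graph()                            # observed connectivities with type information
nmr.add_nodes_from([("a", {"aa": "GLY"}), ("b", {"aa": "LEU"}), ("c", {"aa": "ALA"})])
nmr.add_edges_from([("a", "b"), ("b", "c")])

gm = isomorphism.GraphMatcher(contact, nmr,
                              node_match=lambda u, v: u["aa"] == v["aa"])
print(gm.subgraph_is_isomorphic())              # True: an assignment exists
print(next(gm.subgraph_isomorphisms_iter()))    # one contact-residue -> peak mapping
```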
The correlation of metrics in complex networks with applications in functional brain networks
NASA Astrophysics Data System (ADS)
Li, C.; Wang, H.; de Haan, W.; Stam, C. J.; Van Mieghem, P.
2011-11-01
An increasing number of network metrics have been applied in network analysis. If metric relations were known better, we could more effectively characterize networks by a small set of metrics to discover the association between network properties/metrics and network functioning. In this paper, we investigate the linear correlation coefficients between widely studied network metrics in three network models (Barabási-Albert graphs, Erdös-Rényi random graphs and Watts-Strogatz small-world graphs) as well as in functional brain networks of healthy subjects. The metric correlations, which we have observed and theoretically explained, motivate us to propose a small representative set of metrics by including only one metric from each subset of mutually strongly dependent metrics. The following contributions are considered important. (a) A network with a given degree distribution can indeed be characterized by a small representative set of metrics. (b) Unweighted networks, which are obtained from weighted functional brain networks with a fixed threshold, and Erdös-Rényi random graphs follow a similar degree distribution. Moreover, their metric correlations and the resultant representative metrics are similar as well. This verifies the influence of degree distribution on metric correlations. (c) Most metric correlations can be explained analytically. (d) Interestingly, the most studied metrics so far, the average shortest path length and the clustering coefficient, are strongly correlated and, thus, redundant, whereas spectral metrics, though only recently studied in the context of complex networks, seem to be essential in network characterization. This representative set of metrics tends to both sufficiently and effectively characterize networks with a given degree distribution. In the study of a specific network, however, we have to at least consider the representative set so that important network properties will not be neglected.
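The kind of metric correlation examined above can be reproduced in miniature: compute two metrics over an ensemble of model graphs and take their Pearson correlation. The model, sizes and parameters below are arbitrary illustrative choices.

```python
# Correlate average clustering with average shortest path length over ER model graphs.
import numpy as np
import networkx as nx

cc, pl = [], []
for seed in range(30):
    G = nx.erdos_renyi_graph(200, 0.05, seed=seed)   # BA or WS generators work the same way
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    cc.append(nx.average_clustering(G))
    pl.append(nx.average_shortest_path_length(G))

print(np.corrcoef(cc, pl)[0, 1])                     # linear correlation between the two metrics
```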
Choi, S H; Kim, K H; Song, K-H
2015-07-01
Early identification and treatment of actinic cheilitis (AC) is recommended. Although photodynamic therapy (PDT) is an attractive therapeutic option for AC, PDT for AC does not result in the same satisfactory outcomes as in actinic keratosis (AK). The aim of our study was to compare efficacy, recurrence rate, cosmetic outcome and safety between erbium:yttrium-aluminium-garnet ablative fractional laser-assisted methyl aminolaevulinate-PDT (Er:YAG AFL MAL-PDT) and standard MAL-PDT. Thirty-three patients with histologically confirmed AC randomly received either one session of Er:YAG AFL MAL-PDT or two sessions of MAL-PDT. In the MAL-PDT group, the second session of MAL-PDT was administered 7 days later. Patients were followed up at 1 week and 3 and 12 months, and biopsies were taken from all patients at 3 and 12 months after the last treatment session. At the final 12-month follow-up, cosmetic outcomes were assessed. Adverse events were assessed at week 1 of the treatment phase and every subsequent follow-up visit. In the per-protocol (PP) population, Er:YAG AFL MAL-PDT was significantly more effective (92% complete response rate) than MAL-PDT (59%; P = 0.040) at the 3-month follow-up, and differences in efficacy remained significant at the 12-month follow-up (85% in Er:YAG AFL MAL-PDT and 29% in MAL-PDT). The recurrence rate was significantly lower in the Er:YAG AFL MAL-PDT group (8%) than in the MAL-PDT group (50%) at 12 months (P = 0.029). No significant difference in cosmetic outcome or safety was observed between Er:YAG AFL MAL-PDT and MAL-PDT. Ablative fractional laser pretreatment has significant benefit for the treatment of AC with PDT. © 2014 British Association of Dermatologists.
Watanabe, Yoshinori; Asami, Yuko; Hirano, Yoko; Kuribayashi, Kazuhiko; Itamura, Rio; Imaeda, Takayuki
2018-01-01
Purpose To explore the potential factors impacting the efficacy of venlafaxine extended release (ER) and treatment differences between 75 mg/day and 75–225 mg/day dose in patients with major depressive disorder (MDD). Methods We performed exploratory post hoc subgroup analyses of a randomized, double-blind, placebo-controlled study conducted in Japan. A total of 538 outpatients aged 20 years or older with a primary diagnosis of MDD who experienced single or recurrent episodes were randomized into three groups: fixed-dose, flexible-dose, or placebo. Venlafaxine ER was initiated at 37.5 mg/day and titrated to 75 mg/day for both fixed-dose and flexible-dose group, and to 225 mg/day for flexible-dose group (if well tolerated). Efficacy endpoints were changes from baseline at Week 8 using the Hamilton Rating Scale for Depression–17 items (HAM-D17) total score, Hamilton Rating Scale for Depression–6 items score, and Montgomery–Asberg Depression Rating Scale total score. The following factors were considered in the subgroup analyses: sex, age, HAM-D17 total score at baseline, duration of MDD, duration of current depressive episode, history of previous depressive episodes, history of previous medications for MDD, and CYP2D6 phenotype. For each subgroup, an analysis of covariance model was fitted and the adjusted mean of the treatment effect and corresponding 95% CI were computed. Due to the exploratory nature of the investigation, no statistical hypothesis testing was used. Results Venlafaxine ER improved symptoms of MDD compared with placebo in most subgroups. The subgroup with a long duration of MDD (>22 months) consistently showed greater treatment benefits in the flexible-dose group than in the fixed-dose group. Conclusion These results suggest that a greater treatment response to venlafaxine ER (up to 225 mg/day) can be seen in patients with a longer duration of MDD. Further investigations are needed to identify additional factors impacting the efficacy of venlafaxine ER. PMID:29844674
Argov, Zohar; Caraco, Yoseph; Lau, Heather; Pestronk, Alan; Shieh, Perry B; Skrinar, Alison; Koutsoukos, Tony; Ahmed, Ruhi; Martinisi, Julia; Kakkis, Emil
2016-03-03
GNE Myopathy (GNEM) is a progressive adult-onset myopathy likely caused by deficiency of sialic acid (SA) biosynthesis. Evaluate the safety and efficacy of SA (delivered by aceneuramic acid extended-release [Ace-ER]) as treatment for GNEM. A Phase 2, randomized, double-blind, placebo-controlled study evaluating Ace-ER 3 g/day or 6 g/day versus placebo was conducted in GNEM subjects (n = 47). After the first 24 weeks, placebo subjects crossed over to 3 g/day or 6 g/day for 24 additional weeks (dose pre-assigned during initial randomization). Assessments included serum SA, muscle strength by dynamometry, functional assessments, clinician- and patient-reported outcomes, and safety. Dose-dependent increases in serum SA levels were observed. Supplementation with Ace-ER resulted in maintenance of muscle strength in an upper extremity composite (UEC) score at 6 g/day compared with placebo at Week 24 (LS mean difference +2.33 kg, p = 0.040), and larger in a pre-specified subgroup able to walk ≥200 m at Screening (+3.10 kg, p = 0.040). After cross-over, a combined 6 g/day group showed significantly better UEC strength than a combined 3 g/day group (+3.46 kg, p = 0.0031). A similar dose-dependent response was demonstrated within the lower extremity composite score, but was not significant (+1.06 kg, p = 0.61). The GNEM-Functional Activity Scale demonstrated a trend improvement in UE function and mobility in a combined 6 g/day group compared with a combined 3 g/day group. Patients receiving Ace-ER tablets had predominantly mild-to-moderate AEs and no serious adverse events. This is the first clinical study to provide evidence that supplementation with SA delivered by Ace-ER may stabilize muscle strength in individuals with GNEM and initiating treatment earlier in the disease course may lead to better outcomes.
2014-01-01
Background Integrating and analyzing heterogeneous genome-scale data is a huge algorithmic challenge for modern systems biology. Bipartite graphs can be useful for representing relationships across pairs of disparate data types, with the interpretation of these relationships accomplished through an enumeration of maximal bicliques. Most previously-known techniques are generally ill-suited to this foundational task, because they are relatively inefficient and without effective scaling. In this paper, a powerful new algorithm is described that produces all maximal bicliques in a bipartite graph. Unlike most previous approaches, the new method neither places undue restrictions on its input nor inflates the problem size. Efficiency is achieved through an innovative exploitation of bipartite graph structure, and through computational reductions that rapidly eliminate non-maximal candidates from the search space. An iterative selection of vertices for consideration based on non-decreasing common neighborhood sizes boosts efficiency and leads to more balanced recursion trees. Results The new technique is implemented and compared to previously published approaches from graph theory and data mining. Formal time and space bounds are derived. Experiments are performed on both random graphs and graphs constructed from functional genomics data. It is shown that the new method substantially outperforms the best previous alternatives. Conclusions The new method is streamlined, efficient, and particularly well-suited to the study of huge and diverse biological data. A robust implementation has been incorporated into GeneWeaver, an online tool for integrating and analyzing functional genomics experiments, available at http://geneweaver.org. The enormous increase in scalability it provides empowers users to study complex and previously unassailable gene-set associations between genes and their biological functions in a hierarchical fashion and on a genome-wide scale. This practical computational resource is adaptable to almost any applications environment in which bipartite graphs can be used to model relationships between pairs of heterogeneous entities. PMID:24731198
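For very small inputs, the task itself (not the paper's scalable algorithm) can be sketched by a classical reduction: make each side of the bipartite graph a clique, and every maximal clique of the augmented graph then corresponds to a maximal biclique of the original. The toy vertices and edges below are invented.

```python
# Enumerate maximal bicliques of a tiny bipartite graph via maximal-clique enumeration.
import itertools
import networkx as nx

U, V = ['u1', 'u2', 'u3'], ['v1', 'v2']
edges = [('u1', 'v1'), ('u1', 'v2'), ('u2', 'v1'), ('u3', 'v2')]

B = nx.Graph(edges)
B.add_edges_from(itertools.combinations(U, 2))   # turn side U into a clique
B.add_edges_from(itertools.combinations(V, 2))   # turn side V into a clique

for clique in nx.find_cliques(B):
    left = sorted(n for n in clique if n in U)
    right = sorted(n for n in clique if n in V)
    print(left, right)                           # each line is a maximal biclique (one side may be empty)
```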
Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong
2018-02-12
Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
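The texture-map ingredient can be sketched generically: filter an image with a small Gabor bank and stack the responses with the raw intensities to form per-pixel feature vectors. The image, frequency and orientations below are arbitrary stand-ins, not the authors' settings.

```python
# Build a simple per-pixel feature stack: intensity plus Gabor magnitude responses.
import numpy as np
from skimage import data
from skimage.filters import gabor

img = data.camera().astype(float) / 255.0        # stand-in for a kidney ultrasound image
features = [img]
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = gabor(img, frequency=0.2, theta=theta)
    features.append(np.sqrt(real ** 2 + imag ** 2))   # Gabor magnitude response

feature_stack = np.stack(features, axis=-1)      # shape: (rows, cols, 1 + n_orientations)
print(feature_stack.shape)
```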
Crichton, Gamal; Guo, Yufan; Pyysalo, Sampo; Korhonen, Anna
2018-05-21
Link prediction in biomedical graphs has several important applications, including predicting Drug-Target Interactions (DTI), Protein-Protein Interaction (PPI) prediction and Literature-Based Discovery (LBD). It can be done using a classifier to output the probability of link formation between nodes. Recently several works have used neural networks to create node representations which allow rich inputs to neural classifiers. Preliminary work on this approach reported promising results. However, these studies did not use realistic settings such as time-slicing, evaluate performance with comprehensive metrics, or explain when or why neural network methods outperform. We investigated how inputs from four node representation algorithms affect the performance of a neural link predictor on random- and time-sliced biomedical graphs of real-world sizes (∼ 6 million edges) containing information relevant to DTI, PPI and LBD. We compared the performance of the neural link predictor to those of established baselines and report performance across five metrics. In random- and time-sliced experiments, when the neural network methods were able to learn good node representations and there was a negligible number of disconnected nodes, those approaches outperformed the baselines. In the smallest graph (∼ 15,000 edges) and in larger graphs with approximately 14% disconnected nodes, baselines such as Common Neighbours proved a justifiable choice for link prediction. At low recall levels (∼ 0.3) the approaches were mostly equal, but at higher recall levels across all nodes, and in average performance at individual nodes, neural network approaches were superior. Analysis showed that neural network methods performed well on links between nodes with no previous common neighbours, which are potentially the most interesting links. Additionally, while neural network methods benefit from large amounts of data, they require considerable amounts of computational resources to utilise them. Our results indicate that when there is enough data for the neural network methods to use and there is a negligible number of disconnected nodes, those approaches outperform the baselines. At low recall levels the approaches are mostly equal, but at higher recall levels and in average performance at individual nodes, neural network approaches are superior. Performance at nodes without common neighbours, which indicate more unexpected and perhaps more useful links, accounts for this.
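One of the baselines named above, Common Neighbours, is simple enough to sketch directly: score each unconnected pair by the number of shared neighbours and rank candidate links. The graph is a toy stand-in, not a biomedical graph.

```python
# Common-neighbours link prediction on a small example graph.
import itertools
import networkx as nx

G = nx.karate_club_graph()                      # placeholder for a DTI/PPI/LBD graph
candidates = [(u, v) for u, v in itertools.combinations(G, 2) if not G.has_edge(u, v)]
scores = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in candidates}
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top)                                      # highest-scoring predicted links
```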
Suzuki, Reiko; Orsini, Nicola; Saji, Shigehira; Key, Timothy J; Wolk, Alicja
2009-02-01
Epidemiological evidence indicates that the association between body weight and breast cancer risk may differ across menopausal status as well as the estrogen receptor (ER) and progesterone receptor (PR) tumor status. To date, no meta-analysis has been conducted to assess the association between body weight and ER/PR defined breast cancer risk, taking into account menopausal status and study design. We searched MEDLINE for relevant studies published from January 1, 1970 through December 31, 2007. Summarized risk estimates with 95% confidence intervals (CIs) were calculated using a random-effects model. The summarized results of 9 cohorts and 22 case-control studies comparing the highest versus the reference categories of relative body weight showed that the risk for ER+PR+ tumors was 20% lower (95% CI=-30% to -8%) among premenopausal (2,643 cases) and 82% higher (95% CI=55-114%) among postmenopausal (5,469 cases) women. The dose-response meta-analysis of ER+PR+ tumors showed that each 5-unit increase in body mass index (BMI, kg/m2) was associated with a 33% increased risk among postmenopausal women (95% CI=20-48%) and 10% decreased risk among premenopausal women (95% CI=-18% to -1%). No associations were observed for ER-PR- or ER+PR- tumors. For discordant tumors, ER+PR- (pre) and ER-PR+ (pre/post), the number of cases was too small (<200) to interpret results. The relation between body weight and breast cancer risk is critically dependent on the tumor's ER/PR status and the woman's menopausal status. Body weight control is an effective strategy for preventing ER+PR+ tumors after menopause. Copyright (c) 2008 Wiley-Liss, Inc.
Schwartz, Ann G; Wenzlaff, Angela S; Prysak, Geoffrey M; Murphy, Valerie; Cote, Michele L; Brooks, Sam C; Skafar, Debra F; Lonardo, Fulvio
2007-12-20
Estrogen receptor (ER) expression in lung tumors suggests that estrogens may play a role in the development of lung cancer. We evaluated the role of hormone-related factors in determining risk of non-small-cell lung cancer (NSCLC) in women. We also evaluated whether risk factors were differentially associated with cytoplasmic ER-alpha and/or nuclear ER-beta expression-defined NSCLC in postmenopausal women. Population-based participants included women aged 18 to 74 years diagnosed with NSCLC in metropolitan Detroit between November 1, 2001 and October 31, 2005. Population-based controls were identified through random digit dialing, matched to patient cases on race and 5-year age group. Interview data were analyzed for 488 patient cases (241 with tumor ER results) and 498 controls. Increased duration of hormone replacement therapy (HRT) use in quartiles was associated with decreased risk of NSCLC in postmenopausal women (odds ratio = 0.88; 95% CI, 0.78 to 1.00; P = .04), adjusting for age, race, pack-years, education, family history of lung cancer, current body mass index, years exposed to second-hand smoke in the workplace, and obstructive lung disease history. Among postmenopausal women, ever using HRT, increasing HRT duration of use in quartiles, and increasing quartiles of estrogen use were significant predictors of reduced risk of NSCLC characterized as ER-alpha and/or ER-beta positive. None of the hormone-related variables were associated with nuclear ER-alpha- or ER-beta-negative NSCLC. These findings suggest that postmenopausal hormone exposures are associated with reduced risk of ER-alpha- and ER-beta-expressing NSCLC. Understanding tumor characteristics may direct development of targeted treatment for this disease.
Petty, Amanda J; Melanson, Kathleen J; Greene, Geoffrey W
2013-04-01
Methodological differences may be responsible for variable results from eating rate (ER) studies. It is unknown whether self-reported, lab-measured, and free-living ERs align. This study was the first to explore relationships among self-reported, laboratory-measured and free-living ERs. We investigated this relationship in 60 randomly selected male and female college students who were stratified by self-reported eating rate (SRER) (Slow, Medium, and Fast) from 1110 on-line survey respondents. On the test day, subjects ate a prescribed breakfast (∼400kcal) at home, recording meal duration (MD); 4h later they individually ate an ad libitum laboratory pasta lunch at their own (natural) pace; for the remainder of the day they recorded free-living intake and MD. As expected, the three self-reported ER categories aligned with lab ER (Fast=83.9±5.5, Medium=63.1±5.2, Slow=53.0±5.4kcals/min). In all ER categories at all meals, men ate faster than women (Men=80.6±30.7kcals/min; Women=52.0±21.6kcals/min). A difference in lab-measured ER by SRER was found, F(2, 58)=7.677; post hoc Tukey analysis found that fast differed from medium and slow. The three free-living meal ERs did not align with the self-report categories. Findings suggest various methods of measuring ER may yield differing results, at least in this population, but results support the use of SRER as a valid measure. Copyright © 2012 Elsevier Ltd. All rights reserved.
Synchronisation of networked Kuramoto oscillators under stable Lévy noise
NASA Astrophysics Data System (ADS)
Kalloniatis, Alexander C.; Roberts, Dale O.
2017-01-01
We study the Kuramoto model on several classes of network topologies, examining the dynamics under the influence of Lévy noise. Such noise exhibits heavier tails than Gaussian noise and allows us to understand how 'shocks' influence the individual oscillator and collective system behaviour. Skewed α-stable Lévy noise, equivalent to fractional diffusion perturbations, is considered. We perform numerical simulations for Erdős-Rényi (ER) and Barabási-Albert (BA) scale free networks of size N = 1000 while varying the Lévy index α for the noise. We find that synchrony now assumes a surprising variety of forms, not seen for Gaussian-type noise, and changing with α: a noise-generated drift, a smooth α dependence of the point of cross-over of ER and BA networks in the degree of synchronisation, and a severe loss of synchronisation at low values of α. We also show that this robustness of the BA network across most values of α can also be understood as a consequence of the Laplacian of the graph working within the fractional Fokker-Planck equation of the linearised system, close to synchrony, with both eigenvalues and eigenvectors alternately contributing in different regimes of α.
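A rough numerical sketch of this setup is an Euler-type integration of the Kuramoto model on an ER graph driven by α-stable noise; the coupling strength, noise scale, α and the dt**(1/α) noise scaling are assumptions for illustration, not the paper's exact scheme.

```python
# Kuramoto oscillators on an ER graph with alpha-stable (Levy) noise; prints the order parameter.
import numpy as np
import networkx as nx
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
N, K, alpha, sigma, dt, steps = 200, 2.0, 1.5, 0.1, 0.01, 2000

A = nx.to_numpy_array(nx.erdos_renyi_graph(N, 0.05, seed=1))
omega = rng.normal(size=N)                         # natural frequencies
theta = rng.uniform(0, 2 * np.pi, size=N)

for _ in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    noise = levy_stable.rvs(alpha, 0.0, size=N, random_state=rng)
    theta = theta + dt * (omega + K * coupling) + sigma * dt ** (1 / alpha) * noise

r = abs(np.exp(1j * theta).mean())
print(r)                                           # Kuramoto order parameter (1 = full synchrony)
```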
Robati, Reza M; Asadi, Elmira
2017-02-01
Ablative fractional lasers were introduced for treating facial rhytides. Few studies have compared fractional CO2 and Er:YAG lasers for cutaneous photodamage in a split-face trial. The aim of the present study was to compare these modalities in a randomized controlled double-blind split-face design with multiple sessions and a larger sample size compared to previous studies. Forty patients with facial wrinkles were enrolled. Patients were randomly assigned to receive three monthly treatments on each side of the face, one with a fractional CO2 and one with a fractional Er:YAG laser. The evaluations included clinical outcome determined by two independent dermatologists not enrolled in the treatment, along with measurement of the skin biomechanical properties of the cheeks using a sensitive biometrologic device with assessment of cutaneous resonance running time (CRRT). Moreover, possible side effects and patients' satisfaction were recorded at baseline, 1 month after each treatment, and 3 months after the last treatment session. Clinical assessment showed both modalities significantly reduce facial wrinkles (p value < 0.05), with no appreciable difference between the two lasers. Mean CRRT values also decreased significantly after the laser treatment compared to the baseline in both laser groups. There was no serious long-standing adverse effect after either laser treatment, but the discomfort reported by participants was more pronounced after CO2 laser treatment. According to the present study, both fractional CO2 and fractional Er:YAG lasers show considerable clinical improvement of facial skin wrinkles with no serious adverse effects, but post-treatment discomfort seems to be lower with the Er:YAG laser.
Robati, Reza M; Asadi, Elmira; Shafiee, Anoosh; Namazi, Nastaran; Talebi, Atefeh
2018-04-01
There are different modalities for hand rejuvenation. Fractional Er:YAG laser and long pulse Nd:YAG laser were introduced for treating hand wrinkles. We plan to compare fractional Er:YAG laser and long pulse Nd:YAG laser in a randomized controlled double-blind design with multiple sessions and larger sample size in comparison with previous studies. Thirty-three participants with hand wrinkles entered this study. They were randomly allocated to undergo three monthly laser treatments on each hand, one with a fractional Er:YAG laser and the other with a long pulse Nd:YAG laser. The evaluations included assessment of clinical improvement determined by two independent dermatologists not enrolled in the treatment along with measuring skin biomechanical property of hands using a sensitive biometrologic device with the assessment of cutaneous resonance running time (CRRT). Moreover, potential side effects and patients' satisfaction have been documented at baseline, 1 month after each treatment, and 3 months after the final treatment session. Clinical evaluation revealed both modalities significantly reduce hand wrinkles (p value < 0.05), with no significant difference between two lasers. Mean CRRT values also decreased significantly after the laser treatment compared to those of the baseline in both laser groups. There was no serious persistent side effect after both laser treatments. Both fractional Er:YAG and long pulse Nd:YAG lasers show substantial clinical improvement of hand skin wrinkles with no serious side effects. However, combination treatment by these lasers along with the other modalities such as fat transfer could lead to better outcomes in hand rejuvenation. IRCT2016032020468N4.
Effect of Er:YAG Laser and Sandblasting in Recycling of Ceramic Brackets.
Yassaei, Soghra; Aghili, Hossein; Hosseinzadeh Firouzabadi, Azadeh; Meshkani, Hamidreza
2017-01-01
Introduction: This study was performed to determine the shear bond strength of rebonded mechanically retentive ceramic brackets after recycling with Erbium-Doped Yttrium Aluminum Garnet (Er:YAG) laser or sandblasting. Methods: Twenty-eight debonded ceramic brackets plus 14 intact new ceramic brackets were used in this study. Debonded brackets were randomly divided into 2 groups of 14. One group was treated by Er:YAG laser and the other with sandblasting. All the specimens were randomly bonded to 42 intact human upper premolars. The shear bond strength of all specimens was determined with a universal testing machine at a crosshead speed of 0.5 mm/min until bond failure occurred. The recycled bracket base surfaces were observed under a scanning electron microscope (SEM). Analysis of variance (ANOVA) and Tukey tests were used to compare the shear bond strength of the 3 groups. Fisher exact test was used to evaluate the differences in adhesive remnant index (ARI) scores. Results: The highest bond strength belonged to brackets recycled by Sandblasting (16.83 MPa). There was no significant difference between the shear bond strength of laser and control groups. SEM photographs showed differences in 2 recycling methods. The laser recycled bracket appeared to have as well-cleaned base as the new bracket. Although the sandblasted bracket photographs showed no remnant adhesives, remarkable micro-roughening of the base of the bracket was apparent. Conclusion: According to the results of this study, both Er:YAG laser and sandblasting were efficient to mechanically recondition retentive ceramic brackets. Also, Er:YAG laser did not change the design of bracket base while removing the remnant adhesives which might encourage its application in clinical practice.
Tewary, S; Arun, I; Ahmed, R; Chatterjee, S; Chakraborty, C
2017-11-01
In the prognostic evaluation of breast cancer, immunohistochemical (IHC) markers, namely oestrogen receptor (ER) and progesterone receptor (PR), are widely used. The expert pathologist qualitatively investigates the stained tissue slide under the microscope to provide the Allred score, which is clinically used for therapeutic decision making. Such qualitative judgment is time-consuming, tedious and often suffers from interobserver variability. As a result, it leads to imprecise IHC scores for ER and PR. To overcome this, there is an urgent need to develop a reliable and efficient IHC quantifier for high-throughput decision making. In view of this, our study aims at developing an automated IHC profiler for quantitative assessment of ER and PR molecular expression from stained tissue images. We propose here to use the CMYK colour space for positively and negatively stained cell extraction for the proportion score. Colour features are also used for quantitative assessment of intensity scoring among the positively stained cells. Five different machine learning models, namely artificial neural network, Naïve Bayes, K-nearest neighbours, decision tree and random forest, are considered for learning the colour features using average red, green and blue pixel values of positively stained cell patches. Fifty cases of ER- and PR-stained tissues have been evaluated for validation against the expert pathologist's score. All five models perform adequately, with random forest showing the best correlation with the expert's score (Pearson's correlation coefficient = 0.9192). In the proposed approach the average variation of diaminobenzidine (DAB) to nuclear area from the expert's score is found to be 7.58%, compared to 27.83% for the state-of-the-art ImmunoRatio software. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
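Two ingredients of the pipeline can be sketched in a hedged way: a standard RGB-to-CMYK conversion (the authors may use a different variant) and a random-forest model fitted on mean colour features; the patch data and target scores below are synthetic placeholders.

```python
# RGB -> CMYK conversion plus a random forest fitted on mean-colour features of cell patches.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rgb_to_cmyk(rgb):
    """rgb in [0, 1]; returns (c, m, y, k) channel arrays."""
    k = 1.0 - rgb.max(axis=-1)
    denom = np.clip(1.0 - k, 1e-6, None)
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return c, m, y, k

rng = np.random.default_rng(0)
patch_rgb = rng.uniform(size=(200, 3))                       # mean R, G, B per patch (synthetic)
score = patch_rgb @ np.array([0.2, -0.5, 0.1]) + rng.normal(0, 0.05, 200)  # placeholder target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(patch_rgb, score)
print(model.score(patch_rgb, score))                         # in-sample R^2 only; real use needs held-out cases

c, m, y, k = rgb_to_cmyk(rng.uniform(size=(4, 4, 3)))        # per-pixel CMYK channels of a tiny image
print(k.shape)
```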
Alizadeh Oskoee, Parnian; Savadi Oskoee, Siavash; Rikhtegaran, Sahand; Pournaghi-Azar, Fatemeh; Gholizadeh, Sarah; Aleyasin, Yasaman; Kasrae, Shahin
2017-01-01
Introduction: Successful repair of composite restorations depends on a strong bond between the old composite and the repair composite. This study sought to assess the repair shear bond strength of aged silorane-based composite following surface treatment with Nd:YAG, Er,Cr:YSGG and CO2 lasers. Methods: Seventy-six Filtek silorane composite cylinders were fabricated and aged by 2 months of water storage at 37°C. The samples were randomly divided into 4 groups (n=19) of no surface treatment (group 1) and surface treatment with Er,Cr:YSGG (group 2), Nd:YAG (group 3) and CO2 (group 4) lasers. The repair composite was applied and the shear bond strength was measured. The data were analyzed using one-way analysis of variance (ANOVA) and Tukey post hoc test. Prior to the application of the repair composite, 2 samples were randomly selected from each group and topographic changes on their surfaces following laser irradiation were studied using a scanning electron microscope (SEM). Seventeen other samples were also fabricated for assessment of cohesive strength of composite. Results: The highest and the lowest mean bond strength values were 8.99 MPa and 6.69 MPa for Er,Cr:YSGG and control groups, respectively. The difference in the repair bond strength was statistically significant between the Er,Cr:YSGG and other groups. Bond strength of the control, Nd:YAG and CO2 groups was not significantly different. The SEM micrographs revealed variable degrees of ablation and surface roughness in laser-treated groups. Conclusion: Surface treatment with Er,Cr:YSGG laser significantly increases the repair bond strength of aged silorane-based composite resin. PMID:29071025
Shahabi, Sima; Chiniforush, Nasim; Bahramian, Hoda; Monzavi, Abbas; Baghalian, Ali; Kharazifard, Mohammad Javad
2013-01-01
The purpose of this study was to evaluate the effect of Er:YAG and Er,Cr:YSGG lasers on the tensile bond strength of composite resin to dentine in comparison with bur-prepared cavities. Fifteen extracted caries-free human third molars were selected. The teeth were cut at a level below the occlusal pit and fissure plane and randomly divided into three groups. Five cavities were prepared by diamond bur, five by Er:YAG laser, and the remaining five by Er,Cr:YSGG laser. Then, all the cavities were restored by composite resin. The teeth were sectioned longitudinally with Isomet and the specimens were prepared in a dumbbell shape (n = 36). The samples were attached to special jigs, and the tensile bond strength of the three groups was measured by a universal testing machine at a speed of 0.5 mm/min. The results of the three groups were analyzed with one-way ANOVA and Tamhane test. The means and standard deviations of tensile bond strength of bur-cut, Er:YAG laser-ablated, and Er,Cr:YSGG laser-ablated dentine were 5.04 ± 0.93, 13.37 ± 3.87, and 4.85 ± 0.93 MPa, respectively. There is little difference in tensile bond strength of composite resin in Er,Cr:YSGG laser-prepared cavities in comparison with bur-prepared cavities, but the Er:YAG laser group showed higher bond strength than the other groups.
The random fractional matching problem
NASA Astrophysics Data System (ADS)
Lucibello, Carlo; Malatesta, Enrico M.; Parisi, Giorgio; Sicuro, Gabriele
2018-05-01
We consider two formulations of the random-link fractional matching problem, a relaxed version of the more standard random-link (integer) matching problem. In one formulation, we allow each node to be linked to itself in the optimal matching configuration. In the other one, on the contrary, such a link is forbidden. Both problems have the same asymptotic average optimal cost as the random-link matching problem on the complete graph. Using a replica approach and previous results of Wästlund (2010 Acta Mathematica 204 91–150), we analytically derive the finite-size corrections to the asymptotic optimal cost. We compare our results with numerical simulations and we discuss the main differences between random-link fractional matching problems and the random-link matching problem.
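The optimisation problem itself (as opposed to the replica analysis) can be written down as a small linear program: minimise the total cost of fractional edge weights subject to each node receiving total weight one. The instance below uses i.i.d. exponential link costs and no self-links, as one illustrative convention.

```python
# Random-link fractional matching as a linear program on a small complete graph.
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 8
edges = list(itertools.combinations(range(n), 2))
cost = rng.exponential(size=len(edges))          # random link costs

A_eq = np.zeros((n, len(edges)))                 # node i: sum of x_e over incident edges = 1
for idx, (i, j) in enumerate(edges):
    A_eq[i, idx] = A_eq[j, idx] = 1.0

res = linprog(cost, A_eq=A_eq, b_eq=np.ones(n), bounds=(0, 1), method="highs")
print(res.fun)                                   # optimal fractional matching cost
```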
Resistance and Security Index of Networks: Structural Information Perspective of Network Security
NASA Astrophysics Data System (ADS)
Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng
2016-06-01
Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G, written R(G), as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is R(G) = H¹(G) - H²(G), where H¹(G) and H²(G) are the one- and two-dimensional structure entropies of G, respectively. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G, that is, R(G)/H¹(G). We show that the resistance and security index are both well-defined measures for the security of the networks.
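Of the two quantities entering the resistance law, the one-dimensional structure entropy has a simple closed form, namely the entropy of the stationary distribution of an unbiased random walk, H¹(G) = -Σ_i (d_i/2m) log₂(d_i/2m); the sketch below computes only this term, since the two-dimensional entropy additionally requires an optimal partition.

```python
# One-dimensional structure entropy H1(G) of a small example graph.
import math
import networkx as nx

G = nx.karate_club_graph()
two_m = 2 * G.number_of_edges()
H1 = -sum((d / two_m) * math.log2(d / two_m) for _, d in G.degree())
print(H1)
```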
Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual, appendix 2
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
The FORTRAN programs RANDOM3 and RANDOM4 are documented. They are based on fatigue strength reduction, using a probabilistic constitutive model. They predict the random lifetime of an engine component to reach a given fatigue strength. Included in this user manual are details regarding the theoretical backgrounds of RANDOM3 and RANDOM4. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendices B and C include photocopies of the actual computer printout corresponding to the sample problems. Appendices D and E detail the IMSL, Version 10(1), subroutines and functions called by RANDOM3 and RANDOM4 and the SAS/GRAPH(2) programs that can be used to plot both the probability density functions (p.d.f.) and the cumulative distribution functions (c.d.f.).
Chiu, Bernard; Chen, Weifu; Cheng, Jieyu
2016-12-01
Rapid progression in total plaque area and volume measured from ultrasound images has been shown to be associated with an elevated risk of cardiovascular events. Since atherosclerosis is focal and occurs predominantly at the bifurcation, biomarkers that are able to quantify the spatial distribution of vessel-wall-plus-plaque thickness (VWT) change may allow for more sensitive detection of treatment effect. The goal of this paper is to develop simple and sensitive biomarkers to quantify the responsiveness to therapies based on the spatial distribution of VWT-Change on the entire 2D carotid standardized map previously described. Point-wise VWT-Changes computed for each patient were reordered lexicographically to a high-dimensional data node in a graph. A graph-based random walk framework was applied, with a novel Weighted Cosine (WCos) similarity function introduced that is tailored for quantification of responsiveness to therapy. The converging probability of each data node to the VWT regression template in the random walk process served as a scalar descriptor for VWT responsiveness to treatment. The WCos-based biomarker was 14 times more sensitive than the mean VWT-Change in discriminating responsive and unresponsive subjects based on the p-values obtained in T-tests. The proposed framework was extended to quantify where VWT-Change occurred by including multiple VWT-Change distribution templates representing focal changes at different regions. Experimental results show that the framework was effective in classifying carotid arteries with focal VWT-Change at different locations and may facilitate future investigations to correlate risk of cardiovascular events with the location where focal VWT-Change occurs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Blood pressure variability of two ambulatory blood pressure monitors.
Kallem, Radhakrishna R; Meyers, Kevin E C; Cucchiara, Andrew J; Sawinski, Deirdre L; Townsend, Raymond R
2014-04-01
There are no data on the evaluation of blood pressure (BP) variability comparing two ambulatory blood pressure monitoring monitors worn at the same time. Hence, this study was carried out to compare variability of BP in healthy untreated adults using two ambulatory BP monitors worn at the same time over an 8-h period. An Accutorr device was used to measure office BP in the dominant and nondominant arms of 24 participants. Simultaneous 8-h BP and heart rate data were measured in 24 untreated adult volunteers by Mobil-O-Graph (worn for an additional 16 h after removing the Spacelabs monitor) and Spacelabs with both random (N=12) and nonrandom (N=12) assignment of each device to the dominant arm. Average real variability (ARV), SD, coefficient of variation, and variation independent of mean were calculated for systolic blood pressure, diastolic blood pressure, mean arterial pressure, and pulse pressure (PP). Whether the Mobil-O-Graph was applied to the dominant or the nondominant arm, the ARV of mean systolic (P=0.003 nonrandomized; P=0.010 randomized) and PP (P=0.009 nonrandomized; P=0.005 randomized) remained significantly higher than the Spacelabs device, whereas the ARV of the mean arterial pressure was not significantly different. The average BP readings and ARVs for systolic blood pressure and PP obtained by the Mobil-O-Graph were considerably higher for the daytime than the night-time. Given the emerging interest in the effect of BP variability on health outcomes, the accuracy of its measurement is important. Our study raises concerns about the accuracy of pooling international ambulatory blood pressure monitoring variability data using different devices.
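The variability indices named above are short formulas; for a single series of systolic readings, ARV is the mean absolute successive difference. The readings below are synthetic.

```python
# Average real variability (ARV), SD and coefficient of variation for one SBP series.
import numpy as np

sbp = np.array([118, 124, 121, 130, 127, 119, 125, 122], dtype=float)   # mmHg, illustrative

arv = np.mean(np.abs(np.diff(sbp)))      # average real variability
sd = sbp.std(ddof=1)                     # standard deviation
cv = 100 * sd / sbp.mean()               # coefficient of variation (%)
print(round(arv, 2), round(sd, 2), round(cv, 2))
```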
Akama, Hiroyuki; Miyake, Maki; Jung, Jaeyoung; Murphy, Brian
2015-01-01
In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves the conventional Jaccard and/or Simpson indices, and reconciles both the geodesic information (random walk) and co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients through the application of linguistic graph information to neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
Machine learning in a graph framework for subcortical segmentation
NASA Astrophysics Data System (ADS)
Guo, Zhihui; Kashyap, Satyananda; Sonka, Milan; Oguz, Ipek
2017-02-01
Automated and reliable segmentation of subcortical structures from human brain magnetic resonance images is of great importance for volumetric and shape analyses in quantitative neuroimaging studies. However, poor boundary contrast and the variable shapes of these structures make automated segmentation a tough task. We propose a 3D graph-based machine learning method, called LOGISMOS-RF, to segment the caudate and the putamen from brain MRI scans in a robust and accurate way. An atlas-based tissue classification and bias-field correction method is applied to the images to generate an initial segmentation for each structure. Then a 3D graph framework is utilized to construct a geometric graph for each initial segmentation. A locally trained random forest classifier is used to assign a cost to each graph node. The max-flow algorithm is applied to solve the segmentation problem. Evaluation was performed on a dataset of T1-weighted MRIs of 62 subjects, with 42 images used for training and 20 images for testing. For comparison, FreeSurfer, FSL and BRAINSCut approaches were also evaluated using the same dataset. Dice overlap coefficients and surface-to-surface distances between the automated segmentation and expert manual segmentations indicate that the results of our method are statistically significantly more accurate than the three other methods, for both the caudate (Dice: 0.89 +/- 0.03) and the putamen (0.89 +/- 0.03).
Min, Yu-Sun; Chang, Yongmin; Park, Jang Woo; Lee, Jong-Min; Cha, Jungho; Yang, Jin-Ju; Kim, Chul-Hyun; Hwang, Jong-Moon; Yoo, Ji-Na; Jung, Tae-Du
2015-06-01
To investigate the global functional reorganization of the brain following spinal cord injury with a graph theory based approach, by creating whole-brain functional connectivity networks from resting-state functional magnetic resonance imaging (rs-fMRI), characterizing the reorganization of these networks using graph theoretical metrics, and comparing these metrics between patients with spinal cord injury (SCI) and age-matched controls. Twenty patients with incomplete cervical SCI (14 males, 6 females; age, 55±14.1 years) and 20 healthy subjects (10 males, 10 females; age, 52.9±13.6 years) participated in this study. To analyze the characteristics of the whole brain network constructed with functional connectivity using rs-fMRI, graph theoretical measures were calculated, including clustering coefficient, characteristic path length, global efficiency and small-worldness. Clustering coefficient, global efficiency and small-worldness did not show any difference between controls and patients with SCI across all density ranges. The characteristic path length normalized to a random network was higher in SCI patients than in controls and reached statistical significance at 12%-13% of density (p<0.05, uncorrected). The graph theoretical approach to brain functional connectivity might be helpful in revealing information processing after SCI. These findings imply that patients with SCI can build on preserved competent brain control. Further analyses, such as topological rearrangement and hub region identification, will be needed for a better understanding of neuroplasticity in patients with SCI.
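The graph metrics listed above are all available off the shelf; the sketch below computes them on a toy connectivity graph and normalises the path length against a degree-preserving random rewiring, which is one common (assumed) choice of reference.

```python
# Clustering, characteristic path length, global efficiency and a normalised path length.
import networkx as nx

G = nx.connected_watts_strogatz_graph(90, 6, 0.1, seed=0)   # stand-in for a brain network
C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)
E_glob = nx.global_efficiency(G)

R = nx.random_reference(G, niter=5, seed=0)                 # degree-preserving random rewiring
L_rand = nx.average_shortest_path_length(R)
print(C, L, E_glob, L / L_rand)                             # last value: normalised characteristic path length
```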
The use of control charts by laypeople and hospital decision-makers for guiding decision making.
Schmidtke, K A; Watson, D G; Vlaev, I
2017-07-01
Graphs presenting healthcare data are increasingly available to support laypeople and hospital staff's decision making. When making these decisions, hospital staff should consider the role of chance-that is, random variation. Given random variation, decision-makers must distinguish signals (sometimes called special-cause data) from noise (common-cause data). Unfortunately, many graphs do not facilitate the statistical reasoning necessary to make such distinctions. Control charts are a less commonly used type of graph that support statistical thinking by including reference lines that separate data more likely to be signals from those more likely to be noise. The current work demonstrates for whom (laypeople and hospital staff) and when (treatment and investigative decisions) control charts strengthen data-driven decision making. We present two experiments that compare people's use of control and non-control charts to make decisions between hospitals (funnel charts vs. league tables) and to monitor changes across time (run charts with control lines vs. run charts without control lines). As expected, participants more accurately identified the outlying data using a control chart than using a non-control chart, but their ability to then apply that information to more complicated questions (e.g., where should I go for treatment?, and should I investigate?) was limited. The discussion highlights some common concerns about using control charts in hospital settings.
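The signal-versus-noise logic of a control chart can be sketched with an individuals (XmR) chart: estimate the common-cause band from the average moving range and flag points outside the 3-sigma limits. The monthly counts below are synthetic.

```python
# Individuals (XmR) control chart: flag points outside moving-range-based 3-sigma limits.
import numpy as np

counts = np.array([12, 14, 11, 13, 15, 12, 10, 14, 13, 28, 12, 11], dtype=float)

centre = counts.mean()
sigma_hat = np.mean(np.abs(np.diff(counts))) / 1.128   # moving-range estimate of sigma
ucl, lcl = centre + 3 * sigma_hat, centre - 3 * sigma_hat

signals = [(i, x) for i, x in enumerate(counts) if x > ucl or x < lcl]
print(round(lcl, 1), round(centre, 1), round(ucl, 1), signals)   # the month-9 spike is flagged
```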
Intrapulpal temperature variation during Er,Cr:YSGG enamel irradiation on caries prevention
de Freitas, Patrícia Moreira; Soares-Geraldo, Débora; Biella-Silva, Ana Cristina; Silva, Amanda Verna; da Silveira, Bruno Lopes; Eduardo, Carlos de Paula
2008-01-01
Studies have shown the cariostatic effect of Er,Cr:YSGG (2.78 μm) laser irradiation on human enamel and have suggested its use for caries prevention. However, there are still no reports on the intrapulpal temperature increase during enamel irradiation using parameters for caries prevention. The aim of this in vitro study was to evaluate the temperature variation in the pulp chamber during human enamel irradiation with the Er,Cr:YSGG laser at different energy densities. Fifteen enamel blocks obtained from third molars (3 x 3 x 3 mm) were randomly assigned to 3 groups (n=5): G1 – Er,Cr:YSGG laser 0.25 W, 20 Hz, 2.84 J/cm2; G2 – Er,Cr:YSGG laser 0.50 W, 20 Hz, 5.68 J/cm2; G3 – Er,Cr:YSGG laser 0.75 W, 20 Hz, 8.52 J/cm2. During enamel irradiation, two thermocouples were fixed to the inner surface of the specimens with a thermally conductive paste. One-way ANOVA did not show a statistically significant difference among the experimental groups (α=0.05). The intrapulpal temperature variation was ≤0.1°C for all irradiation parameters. In conclusion, under the tested conditions, the use of the Er,Cr:YSGG laser with parameters set for caries prevention led to an acceptable temperature increase in the pulp chamber. PMID:19089198
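For orientation, the reported settings are mutually consistent under the usual relation fluence = pulse energy / spot area; the implied spot diameter below is only a back-of-envelope check under that assumption, not a value from the paper.

```python
import math

power_w, rep_rate_hz, fluence_j_cm2 = 0.25, 20, 2.84   # group G1 settings
pulse_energy_j = power_w / rep_rate_hz                  # 0.0125 J = 12.5 mJ per pulse
spot_area_cm2 = pulse_energy_j / fluence_j_cm2          # ~4.4e-3 cm^2 implied
spot_diameter_um = 2 * math.sqrt(spot_area_cm2 / math.pi) * 1e4
print(round(spot_diameter_um))                          # ~750 um beam diameter implied
```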
Hidaka, Brandon H; Kimler, Bruce F; Fabian, Carol J; Carlson, Susan E
2017-02-01
We reported an association between cytologic atypia, a reversible biomarker of breast cancer risk, and a lower omega-3/omega-6 fatty acid ratio in blood and breast tissue. Our goal was to develop and validate a dietary pattern index in this high-risk sample of U.S. women, and to test its capacity to predict incidence in a nested case-control cohort of Canadian women from a randomized trial of a low-fat dietary intervention for primary prevention of breast cancer. Food intake was measured by food frequency questionnaire in the U.S. sample (n = 65) and by multiple dietary recalls in the Canadian sample (n = 220 cases; 440 controls). Principal component analysis identified a dietary pattern associated with atypia. We measured differences among dietary pattern tertiles in (a) fatty acid composition in blood lipids and breast tissue in the U.S. sample, and (b) risk of breast cancer subtypes in the Canadian cohort. Registered under ClinicalTrials.gov Identifier: NCT00148057. A Modern diet was characterized by higher consumption of grains, dairy, and sugar and lower consumption of vegetables, fish, and poultry; these women had lower tissue omega-3 fatty acids and higher omega-6 and trans fatty acids. The low-fat intervention increased the likelihood of a Modern diet after randomization. A Modern diet at baseline and post-randomization was associated with estrogen receptor-negative (ER-) breast cancer risk among women at least 160 cm tall. A Traditional diet (the reciprocal of Modern) at baseline was associated with lower ER-positive (ER+) risk in the comparison group, but not in the low-fat intervention group. A Modern diet (high in grains, dairy, and sugar and low in vegetables, fish, and poultry) is associated with ER- breast cancer risk among taller women. Recommending dietary fat reduction may have untoward effects on breast cancer risk. Copyright © 2016 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
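A minimal numerical sketch of the quantities involved: the steady-state distribution of a toy row-stochastic transition matrix, its Shannon entropy, and a Kullback-Leibler divergence to a reference distribution. The 3-site matrix below is invented for illustration, not taken from the autopsy data.

```python
import numpy as np

# toy 3-site transition matrix (rows = current site, columns = next site)
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])

# steady state = left eigenvector of P with eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

H = -(pi * np.log2(pi)).sum()            # Shannon entropy of the steady-state distribution

q = np.full_like(pi, 1 / len(pi))        # reference distribution (uniform here)
kl = (pi * np.log2(pi / q)).sum()        # Kullback-Leibler divergence D(pi || q)
print(pi, H, kl)
```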
Dimitriadis, S I; Laskaris, N A; Tzelepi, A; Economou, G
2012-05-01
There is growing interest in studying the association of functional connectivity patterns with particular cognitive tasks. The ability of graphs to encapsulate relational data has been exploited in many related studies, where functional networks (sketched by different neural synchrony estimators) are characterized by a rich repertoire of graph-related metrics. We introduce commute times (CTs) as an alternative way to capture the true interplay between the nodes of a functional connectivity graph (FCG). CT is a measure of the time taken for a random walk to set out and return between a pair of nodes on a graph. Its computation is considered here as a robust and accurate integration, over the FCG, of the individual pairwise measurements of functional coupling. To demonstrate the benefits of our approach, we attempted the characterization of time-evolving connectivity patterns derived from EEG signals recorded while the subject was engaged in an eye-movement task. Compared with the standard approaches currently employed to characterize connectivity, an improved detection of event-related dynamical changes is noticeable. CTs appear to be a promising technique for deriving temporal fingerprints of the brain's dynamic functional organization.
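Commute times have a closed form in terms of the graph Laplacian pseudoinverse, CT(i, j) = vol(G) (L+_ii + L+_jj - 2 L+_ij). A small unweighted toy example is sketched below; the real FCG in the paper is weighted by a synchrony estimator, so this is only an illustration of the quantity.

```python
import numpy as np
import networkx as nx

# toy unweighted stand-in for a functional connectivity graph
G = nx.erdos_renyi_graph(30, 0.2, seed=1)
L = nx.laplacian_matrix(G).toarray().astype(float)
L_pinv = np.linalg.pinv(L)                 # Moore-Penrose pseudoinverse of the Laplacian
vol = 2 * G.number_of_edges()              # sum of node degrees

def commute_time(i, j):
    # expected number of steps for a random walk to go from i to j and back
    return vol * (L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j])

print(commute_time(0, 5))
```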
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, an area that has recently expanded considerably. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in the correlation between phenotypes and predicted breeding values and in the accuracy of predicted breeding values were found. Our models account for correlation of marker effects and accommodate general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow biological information to be incorporated in the prediction process through the construction of graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
Artistic image analysis using graph-based learning approaches.
Carneiro, Gustavo
2013-08-01
We introduce a new methodology for the problem of artistic image analysis, which, among other tasks, involves the automatic identification of visual classes present in an artwork. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation, a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results than the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to more efficient inference and training procedures. The experiments are run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we report the inference and training running times and quantitative comparisons with respect to several retrieval and annotation performance measures.
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
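The planarity-constrained part of such a greedy scheme can be sketched with networkx's planarity test: rank candidate edges by |correlation| and keep an edge only if the graph remains planar. This omits the Ising parameter fitting and exact inference that the paper builds on, and the data below are random placeholders.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
X = np.sign(rng.normal(size=(500, 8)))           # toy +/-1 samples for 8 binary variables
C = np.corrcoef(X, rowvar=False)

# greedy edge selection under a planarity constraint
pairs = sorted(((abs(C[i, j]), i, j) for i in range(8) for j in range(i + 1, 8)),
               reverse=True)
G = nx.empty_graph(8)
for weight, i, j in pairs:
    G.add_edge(i, j)
    is_planar, _ = nx.check_planarity(G)
    if not is_planar:
        G.remove_edge(i, j)                      # reject edges that break planarity

print(G.number_of_edges())                       # at most 3*8 - 6 = 18 edges for a planar graph
```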
Charting the Replica Symmetric Phase
NASA Astrophysics Data System (ADS)
Coja-Oghlan, Amin; Efthymiou, Charilaos; Jaafari, Nor; Kang, Mihyun; Kapetanopoulos, Tobias
2018-02-01
Diluted mean-field models are spin systems whose geometry of interactions is induced by a sparse random graph or hypergraph. Such models play an eminent role in the statistical mechanics of disordered systems as well as in combinatorics and computer science. In a path-breaking paper based on the non-rigorous `cavity method', physicists predicted not only the existence of a replica symmetry breaking phase transition in such models but also sketched a detailed picture of the evolution of the Gibbs measure within the replica symmetric phase and its impact on important problems in combinatorics, computer science and physics (Krzakala et al. in Proc Natl Acad Sci 104:10318-10323, 2007). In this paper we rigorise this picture completely for a broad class of models, encompassing the Potts antiferromagnet on the random graph, the k-XORSAT model and the diluted k-spin model for even k. We also prove a conjecture about the detection problem in the stochastic block model that has received considerable attention (Decelle et al. in Phys Rev E 84:066106, 2011).
Naming Game with Multiple Hearers
NASA Astrophysics Data System (ADS)
Li, Bing; Chen, Guanrong; Chow, Tommy W. S.
2013-05-01
A new model called Naming Game with Multiple Hearers (NGMH) is proposed in this paper. A naming game over a population of individuals aims to reach consensus on the name of an object through pairwise local interactions among all the individuals. The proposed NGMH model describes the learning process of a new word in a population with one speaker and multiple hearers at each interaction, towards convergence. The characteristics of NGMH are examined on three types of network topologies, namely the ER random-graph network, the WS small-world network, and the BA scale-free network. Comparative analysis of the convergence time reveals that the topology with a larger average (node) degree reaches consensus faster than the others over the same population. It is found that, for a homogeneous network, the average degree is the limiting value of the number of hearers, which reduces the individual ability of learning new words, consequently decreasing the convergence time; for a scale-free network, this limiting value is the deviation of the average degree. It is also found that a network with a larger clustering coefficient takes longer to converge; in particular, a small-world network with the smallest rewiring probability takes the longest time to reach convergence. As more new nodes are added to scale-free networks with different degree distributions, their convergence time appears to be robust against the network-size variation. Most new findings reported in this paper differ from those of the single-speaker/single-hearer naming games documented in the literature.
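A minimal simulation sketch of a one-speaker/multiple-hearer update on an ER random graph is given below; the exact NGMH success rule in the paper may differ in detail, so the collapse rule here is an assumed variant for illustration only.

```python
import random
import networkx as nx

def naming_game_multi_hearer(G, max_steps=100_000, seed=0):
    random.seed(seed)
    vocab = {v: set() for v in G}
    next_word = 0
    nodes = list(G)
    for step in range(max_steps):
        speaker = random.choice(nodes)
        hearers = list(G[speaker])            # all neighbours act as hearers
        if not hearers:
            continue
        if not vocab[speaker]:
            vocab[speaker].add(next_word)     # invent a new word if the speaker knows none
            next_word += 1
        word = random.choice(tuple(vocab[speaker]))
        knew = [word in vocab[h] for h in hearers]
        for h, k in zip(hearers, knew):
            if k:
                vocab[h] = {word}             # success: hearer collapses to the word
            else:
                vocab[h].add(word)            # failure: hearer learns the word
        if all(knew):
            vocab[speaker] = {word}           # assumed rule: speaker collapses if all hearers knew it
        if all(vocab[v] == {word} for v in nodes):
            return step                       # global consensus reached
    return None

G = nx.erdos_renyi_graph(100, 0.08, seed=1)
print(naming_game_multi_hearer(G))
```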
Mean-field equations for neuronal networks with arbitrary degree distributions.
Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex
2017-04-01
The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
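A network with correlated in- and out-degrees drawn from a joint distribution can be sampled with a directed configuration model, as sketched below; the joint distribution (a shared Poisson component) is an assumption for illustration, not the cortical connectivity statistics considered in the paper.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 500
shared = rng.poisson(3, n)                  # shared component correlates in- and out-degrees
k_in = shared + rng.poisson(2, n)
k_out = shared + rng.poisson(2, n)

# the configuration model needs equal numbers of in- and out-stubs
diff = int(k_in.sum() - k_out.sum())
if diff > 0:
    k_out[0] += diff
else:
    k_in[0] += -diff

# MultiDiGraph with the prescribed joint degree sequence (self-loops/multi-edges possible)
G = nx.directed_configuration_model(k_in.tolist(), k_out.tolist(), seed=0)
print(G.number_of_nodes(), G.number_of_edges())
```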
Insulin protects against hepatic damage postburn.
Jeschke, Marc G; Kraft, Robert; Song, Juquan; Gauglitz, Gerd G; Cox, Robert A; Brooks, Natasha C; Finnerty, Celeste C; Kulp, Gabriela A; Herndon, David N; Boehning, Darren
2011-01-01
Burn injury causes hepatic dysfunction associated with endoplasmic reticulum (ER) stress and induction of the unfolded protein response (UPR). ER stress/UPR leads to hepatic apoptosis and activation of the Jun-N-terminal kinase (JNK) signaling pathway, leading to vast metabolic alterations. Insulin has been shown to attenuate hepatic damage and to improve liver function. We therefore hypothesized that insulin administration exerts its effects by attenuating postburn hepatic ER stress and subsequent apoptosis. Male Sprague Dawley rats received a 60% total body surface area (TBSA) burn injury. Animals were randomized to receive saline (controls) or insulin (2.5 IU/kg q. 24 h) and euthanized at 24 and 48 h postburn. Burn injury induced dramatic changes in liver structure and function, including induction of the ER stress response, mitochondrial dysfunction, hepatocyte apoptosis, and up-regulation of inflammatory mediators. Insulin significantly decreased hepatocyte caspase-3 activation and apoptosis at 24 and 48 h postburn. Furthermore, insulin administration significantly decreased ER stress and reversed structural and functional changes in hepatocyte mitochondria. Finally, insulin attenuated the expression of the inflammatory mediators IL-6, MCP-1, and CINC-1. Insulin alleviates burn-induced ER stress, hepatocyte apoptosis, mitochondrial abnormalities, and inflammation, leading to significantly improved hepatic structure and function. These results support the use of insulin therapy after traumatic injury to improve patient outcomes.
Influence of microgravity on root-cap regeneration and the structure of columella cells in Zea mays
NASA Technical Reports Server (NTRS)
Moore, R.; McClelen, C. E.; Fondren, W. M.; Wang, C. L.
1987-01-01
We launched imbibed seeds and seedlings of Zea mays into outer space aboard the space shuttle Columbia to determine the influence of microgravity on 1) root-cap regeneration, and 2) the distribution of amyloplasts and endoplasmic reticulum (ER) in the putative statocytes (i.e., columella cells) of roots. Decapped roots grown on Earth completely regenerated their caps within 4.8 days after decapping, while those grown in microgravity did not regenerate caps. In Earth-grown seedlings, the ER was localized primarily along the periphery of columella cells, and amyloplasts sedimented in response to gravity to the lower sides of the cells. Seeds germinated on Earth and subsequently launched into outer space had a distribution of ER in columella cells similar to that of Earth-grown controls, but amyloplasts were distributed throughout the cells. Seeds germinated in outer space were characterized by the presence of spherical and ellipsoidal masses of ER and randomly distributed amyloplasts in their columella cells. These results indicate that 1) gravity is necessary for regeneration of the root cap, 2) columella cells can maintain their characteristic distribution of ER in microgravity only if they are exposed previously to gravity, and 3) gravity is necessary to distribute the ER in columella cells of this cultivar of Z. mays.
The effect of choir formation on the acoustical attributes of the singing voice
NASA Astrophysics Data System (ADS)
Atkinson, Debra Sue
Research shows that many things can influence choral tone and choral blend. Some of these are vowel uniformity, vibrato, choral formation, strategic placement of singers, and spacing between singers. This study sought to determine the effect that changes in choral formation and spacing between singers would have on four randomly selected voices of an ensemble as revealed through long-term average spectra (LTAS) of the individual singers. All members of the ensemble were given the opportunity to express their preferences for each of the choral formations and the four randomly selected choristers were asked specific questions regarding the differences between choral singing and solo singing. The results indicated that experienced singers preferred singing in a mixed-spread choral formation. However, the graphs of the choral excerpts as compared to the solo recordings revealed that the choral graphs for the soprano and bass were very similar to the graphs of their solos, but the graphs of the tenor and the alto were different from their solo graphs. It is obvious from the results of this study that the four selected singers did sing with slightly different techniques in the choral formations than they did while singing their solos. The members of this ensemble were accustomed to singing in many different formations. Therefore, it was easy for them to consciously think about how they sang in each of the four formations (mixed-close, mixed-spread, sectional-close, and sectional-spread) and answer the questionnaire accordingly. This would not be as easy for a group that never changed choral formations. Therefore, the results of this study cannot be generalized to choirs who only sing in sectional formation. As researchers learn more about choral acoustics and the effects of choral singing on the voice, choral conductors will be able to make better decisions about the methods used to achieve their desired choral blend. It is up to the choral conductors to glean the knowledge from the research that is taking place and use it for the betterment of choral music.
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E[R^m] is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
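A crude numerical sketch of the quantity of interest: simulate a Gaussian AR(1) surrogate stress process, pick out its local extrema, and estimate stress-range moments E[R^m] from the ranges between successive extrema. This is not the paper's autoregressive extrema model or a full rainflow count; the AR coefficient and lengths are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi = 200_000, 0.8                      # assumed AR(1) surrogate for a stress process
x = np.zeros(n)
noise = rng.normal(size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

# local extrema: points where the sign of the increment changes
d = np.diff(x)
extrema = x[1:-1][d[:-1] * d[1:] < 0]

R = np.abs(np.diff(extrema))               # ranges between successive extrema
for m in (2, 3, 4):
    print(m, (R ** m).mean())              # crude estimate of the stress-range moment E[R^m]
```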
Ant-inspired density estimation via random walks
Musco, Cameron; Su, Hsin-Hao
2017-01-01
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146
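A toy version of the estimator on a periodic grid: each agent random-walks, counts how many others share its cell each step, and uses its encounter rate as a density estimate. The grid size, agent count, and collision convention below are assumptions, and the constants differ from those in the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
side, n_agents, steps = 100, 400, 2000       # true density = 400 / 100^2 = 0.04
pos = rng.integers(0, side, size=(n_agents, 2))
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
encounters = np.zeros(n_agents)

for _ in range(steps):
    pos = (pos + moves[rng.integers(0, 4, n_agents)]) % side   # nearest-neighbour step on a torus
    cell = pos[:, 0] * side + pos[:, 1]
    counts = np.bincount(cell, minlength=side * side)
    encounters += counts[cell] - 1           # other agents currently sharing my cell

estimate = encounters / steps                # per-agent encounter-rate estimate of density
print(estimate.mean(), n_agents / side ** 2)
```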
Intratumor Heterogeneity of the Estrogen Receptor and the Long-term Risk of Fatal Breast Cancer.
Lindström, Linda S; Yau, Christina; Czene, Kamila; Thompson, Carlie K; Hoadley, Katherine A; Van't Veer, Laura J; Balassanian, Ron; Bishop, John W; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Hasteh, Farnaz; Krings, Gregor; Lin, Fritz; Zhang, Yanhong; Nordenskjöld, Bo; Stål, Olle; Benz, Christopher C; Fornander, Tommy; Borowsky, Alexander D; Esserman, Laura J
2018-01-19
Breast cancer patients with estrogen receptor (ER)-positive disease have a continuous long-term risk for fatal breast cancer, but the biological factors influencing this risk are unknown. We aimed to determine whether high intratumor heterogeneity of ER predicts an increased long-term risk (25 years) of fatal breast cancer. The STO-3 trial enrolled 1780 postmenopausal lymph node-negative breast cancer patients randomly assigned to receive adjuvant tamoxifen or no adjuvant treatment. The fraction of cancer cells for each ER intensity level was scored by breast cancer pathologists, and intratumor heterogeneity of ER was calculated using Rao's quadratic entropy and categorized into high and low heterogeneity using a predefined cutoff at the second tertile (67%). Long-term breast cancer-specific survival analyses by intratumor heterogeneity of ER were performed using Kaplan-Meier and multivariable Cox proportional hazards modeling adjusting for patient and tumor characteristics. A statistically significant difference in long-term survival by high vs low intratumor heterogeneity of ER was seen for all ER-positive patients (P < .001) and for patients with luminal A subtype tumors (P = .01). In multivariable analyses, patients with high intratumor heterogeneity of ER had a twofold increased long-term risk compared with patients with low intratumor heterogeneity (ER-positive: hazard ratio [HR] = 1.98, 95% confidence interval [CI] = 1.31 to 3.00; luminal A subtype tumors: HR = 2.43, 95% CI = 1.18 to 4.99). Patients with high intratumor heterogeneity of ER had an increased long-term risk of fatal breast cancer. Interestingly, a similar long-term risk increase was seen in patients with luminal A subtype tumors. Our findings suggest that intratumor heterogeneity of ER is an independent long-term prognosticator with potential to change clinical management, especially for patients with luminal A tumors. © The Author(s) 2018. Published by Oxford University Press.
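Rao's quadratic entropy for the scored intensity fractions is Q = sum_ij d_ij p_i p_j. A minimal sketch is given below; the fractions and the level-distance matrix are invented assumptions, since the paper's exact scoring and normalization are not reproduced here.

```python
import numpy as np

# assumed fractions of cancer cells at four ER staining-intensity levels (0-3)
p = np.array([0.10, 0.15, 0.25, 0.50])
levels = np.arange(len(p))
D = np.abs(levels[:, None] - levels[None, :])   # assumed distance between intensity levels

Q = p @ D @ p                                   # Rao's quadratic entropy
print(Q)
```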
Evaluation of Dalfampridine Extended Release 5 and 10 mg in Multiple Sclerosis
Yapundich, Robert; Applebee, Angela; Bethoux, Francois; Goldman, Myla D.; Hutton, George J.; Mass, Michele; Pardo, Gabriel; Klingler, Michael; Henney, Herbert R.; Carrazana, Enrique J.
2015-01-01
Background: Dalfampridine extended-release (ER) tablets, 10 mg twice daily, have been shown to improve walking in people with multiple sclerosis. We evaluated the safety and efficacy of dalfampridine-ER 5 mg compared with 10 mg. Methods: Patients were randomized to double-blind treatment with twice-daily dalfampridine-ER tablets, 5 mg (n = 144) or 10 mg (n = 143), or placebo (n = 143) for 4 weeks. Primary efficacy endpoint was change from baseline walking speed by the Timed 25-Foot Walk 3 to 4 hours after the last dose. At 40% of sites, 2-week change from baseline walking distance was measured by the 6-Minute Walk test. Results: At 4 weeks, walking speed changes from baseline were 0.363, 0.423, and 0.478 ft/s (placebo, dalfampridine-ER 5 mg, and dalfampridine-ER 10 mg, respectively [P = NS]). Post hoc analysis of average changes between pretreatment and on-treatment showed that relative to placebo, only dalfampridine-ER 10 mg demonstrated a significant increase in walking speed (mean ± SE): 0.443 ± 0.042 ft/s versus 0.303 ± 0.038 ft/s (P = .014). Improvement in 6-Minute Walk distance was significantly greater with dalfampridine-ER 10 mg (128.6 ft, P = .014) but not with 5 mg (76.8 ft, P = .308) relative to placebo (41.7 ft). Adverse events were consistent with previous studies. No seizures were reported. Conclusions: Dalfampridine-ER 5 and 10 mg twice daily did not demonstrate efficacy on the planned endpoint. Post hoc analyses demonstrated significant increases in walking speed relative to placebo with dalfampridine-ER 10 mg. No new safety signals were observed. PMID:26052259
Shen, Chuanan; Li, Dawei; Wang, Xiaoteng
2017-01-01
Severe burns are typically followed by hypermetabolism characterized by significant muscle wasting, which causes considerable morbidity and mortality. The aim of the present study was to explore the underlying mechanisms of skeletal muscle damage/wasting post-burn. Rats were randomized to the sham, sham+4-phenylbutyrate (4-PBA, a pharmacological chaperone promoting endoplasmic reticulum (ER) protein folding/trafficking, commonly considered an inhibitor of ER stress), burn (30% total body surface area), and burn+4-PBA groups, and sacrificed at 1, 4, 7, and 14 days after the burn injury. Tibial anterior (TA) muscle was harvested for transmission electron microscopy, calcium imaging, gene expression and protein analysis of ER stress, the ubiquitin-proteasome system, and autophagy, and calpain activity measurement. The results showed that ER stress markers were increased in the burn group compared with the sham group, especially at post-burn days 4 and 7, which might consequently elevate cytoplasmic calcium concentration, promote calpain production and activation, and cause skeletal muscle damage/wasting of TA muscle after severe burn injury. Interestingly, treatment with 4-PBA prevented burn-induced ER swelling and the changes in protein expression of ER stress markers and calcium release, attenuating calpain activation and skeletal muscle damage/wasting after severe burn injury. Atrogin-1 and the LC3-II/LC3-I ratio were also increased in the burn group compared with the sham group, while MuRF-1 remained unchanged; 4-PBA decreased atrogin-1 in the burn group. Taken together, these findings suggest that severe burn injury induces ER stress, which in turn causes calpain activation. ER stress and subsequently activated calpain play a critical role in skeletal muscle damage/wasting in burned rats. PMID:29028830
Swift, Sibyl N; Swift, Joshua M; Bloomfield, Susan A
2014-12-01
Estrogen receptor-α (ER-α) is an important mediator of the bone response to mechanical loading. We sought to determine whether restricting dietary energy intake by 40% limits the bone formation rate (BFR) response to mechanical loading (LOAD) by downregulating ER-α-expressing osteocytes, or osteoblasts, or both. Female rats (n = 48, 7 mo old) were randomized to ADLIB-SHAM and ADLIB-LOAD groups fed AIN-93M purified diet ad libitum or to ER40-SHAM and ER40-LOAD groups fed modified AIN-93M with 40% less energy (100% of all other nutrients). After 12 wk, LOAD rats were subjected to a muscle contraction protocol three times every third day. ER40 produced lower proximal tibia bone volume (-22%), trabecular thickness (-14%), and higher trabecular separation (+127%) in SHAM but not LOAD rats. ER40 rats exhibited reductions in mineral apposition rate, but not in percent mineralizing surface or BFR. LOAD induced similar relative increases in these kinetic measures of osteoblast activity/recruitment in both diet groups, but absolute values for ER40-LOAD rats were lower vs. ADLIB-LOAD. There were fourfold and eightfold increases in the proportion of estrogen receptor-α protein-positive osteoblasts and osteocytes, respectively, in LOAD vs. SHAM rats, with no effect of ER40. These data suggest that a brief period of mechanical loading significantly affects estrogen receptor-α in cancellous bone osteoblasts and osteocytes. Chronic energy restriction does result in lower absolute values in indices of osteoblast activity after mechanical loading, but not by a smaller increment relative to unloaded bones; this change is not explained by an associated downregulation of ER-α in osteoblasts or osteocytes.
Dalai, Shebani Sethi; Adler, Sarah; Najarian, Thomas; Safer, Debra Lynn
2018-01-01
Bulimia nervosa (BN) and binge eating disorder (BED) are associated with severe psychological and medical consequences. Current therapies are limited, leaving up to 50% of patients symptomatic despite treatment, underscoring the need for additional treatment options. Qsymia, an FDA-approved medication for obesity, combines phentermine and topiramate ER. Topiramate has demonstrated efficacy for both BED and BN, but limited tolerability. Phentermine is FDA-approved for weight loss. A rationale for combined phentermine/topiramate for BED and BN is improved tolerability and efficacy. While a prior case series exploring Qsymia for BED showed promise, randomized studies are needed to evaluate Qsymia's safety and efficacy when re-purposed for eating disorders. We present a study protocol for a Phase I/IIa single-center, prospective, double-blinded, randomized, crossover trial examining the safety and preliminary efficacy of Qsymia for BED and BN. Adults with BED (n=15) or BN (n=15) are randomized 1:1 to receive 12 weeks of Qsymia (phentermine/topiramate ER, 3.75 mg/23 mg to 15 mg/92 mg) or placebo, followed by a 2-week washout and a 12-week crossover, in which those on Qsymia receive placebo and vice versa. Subsequently, participants receive 8 weeks of follow-up off study medications. The primary outcome is the number of binge days/week measured by the Eating Disorder Examination (EDE). Secondary outcomes include the average number of binge episodes, percentage abstinence from binge eating, and changes in weight/vitals, eating psychopathology, and mood. To our knowledge this is the first randomized, double-blind protocol investigating the safety and efficacy of phentermine/topiramate in BED and BN. We highlight the background and rationale for this study, including the advantages of a crossover design. Clinicaltrials.gov identifier NCT02553824, registered on 9/17/2015. https://clinicaltrials.gov/ct2/show/NCT02553824. Copyright © 2017 Elsevier Inc. All rights reserved.
The RNASeq-er API-a gateway to systematically updated analysis of public RNA-seq data.
Petryszak, Robert; Fonseca, Nuno A; Füllgrabe, Anja; Huerta, Laura; Keays, Maria; Tang, Y Amy; Brazma, Alvis
2017-07-15
The exponential growth of publicly available RNA-sequencing (RNA-Seq) data poses an increasing challenge to researchers wishing to discover, analyse and store such data, particularly those based in institutions with limited computational resources. EMBL-EBI is in an ideal position to address these challenges and to allow the scientific community easy access to not just raw, but also processed RNA-Seq data. We present a Web service, using Representational State Transfer, to access the results of a systematically and continually updated standardized alignment as well as gene and exon expression quantification of all public bulk (and in the near future also single-cell) RNA-Seq runs in 264 species in the European Nucleotide Archive (ENA). The RNASeq-er API (Application Programming Interface) enables ontology-powered search for and retrieval of CRAM, bigwig and bedGraph files, gene and exon expression quantification matrices (Fragments Per Kilobase of Exon Per Million Fragments Mapped, Transcripts Per Million, raw counts) as well as sample attributes annotated with ontology terms. To date over 270 000 RNA-Seq runs in nearly 10 000 studies (1 PB of raw FASTQ data) in 264 species in ENA have been processed and made available via the API. The RNASeq-er API can be accessed at http://www.ebi.ac.uk/fg/rnaseq/api. The commands used to analyse the data are available in the supplementary materials and at https://github.com/nunofonseca/irap/wiki/iRAP-single-library. rnaseq@ebi.ac.uk; rpetry@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
The Value of Information in Distributed Decision Networks
2016-03-04
formulation, and then we describe the various results attained. 1 Mathematical description of Distributed Decision Network under Information...Constraints. We now define a mathematical framework for networks. Let G = (V, E) be an undirected random network (graph) drawn from a known distribution pG.
Cambell, L M; Ross, J R; Goves, J R; Lees, C T; McCullagh, A; Barnes, P; Timerick, S J; Richardson, P D
1989-12-01
Hypertensive patients received a beta-blocker plus placebo once daily for 4 weeks. If their diastolic blood pressure (DBP) was then 95-115 mm Hg, they were randomized to receive, in addition to the beta-blocker, placebo (n = 36), felodipine extended-release (ER) 10 mg (n = 36), or felodipine-ER 20 mg (n = 37) in a 4-week double-blind parallel-group trial. All medication was administered once daily and, when BP was measured 24 h after the last dose, felodipine-ER 10 mg reduced DBP by 14 ± 9 mm Hg (mean ± SD) from a mean of 103 mm Hg and felodipine-ER 20 mg reduced DBP by 18 ± 9 mm Hg from 101 mm Hg. The reductions in DBP with both doses of felodipine were greater than the reductions with placebo (5 ± 8 mm Hg, from 102 mm Hg; both p less than 0.001). At the end of the study, 21% of patients receiving placebo had a DBP less than or equal to 90 mm Hg. In contrast, 69% of patients receiving felodipine-ER 10 mg and 82% receiving 20 mg attained this level. More than 90% of patients receiving 10 mg felodipine-ER once daily had a reduction in DBP greater than 5 mm Hg 24 h postdose. Felodipine-ER was well tolerated. Felodipine-ER once daily is an effective antihypertensive drug for patients who require therapy in addition to a beta-blocker; the tolerability in this study was good, and a starting dose greater than 10 mg once daily is not indicated.
Increasing the Elective Endovascular to Open Repair Ratio of Popliteal Artery Aneurysm.
Wrede, Axel; Wiberg, Frans; Acosta, Stefan
2018-02-01
Open repair (OR) for popliteal artery aneurysm (PAA) has recently been challenged by endovascular repair (ER) as the primary choice of treatment. The aim of the present study was to evaluate time trends in treatment modality and to compare outcomes between OR and ER among electively operated patients after the start of screening in 2010 for abdominal aortic aneurysm (AAA), a disease highly associated with PAA. Between January 1, 2009, and April 30, 2017, 102 procedures (36 acute and 66 elective repairs) for PAA were identified. Over time, a trend (P = .089) toward an increasing elective to acute repair ratio of PAA and an increase in the elective ER to OR ratio (P = .003) were found. Among electively repaired PAAs, the ER group was older (P = .047) and had a higher ankle-brachial index (ABI; P = .044). The ER group had fewer wound infections (P = .003), fewer major bleeding complications (P = .046), and a shorter in-hospital stay (P < .001). After 1 year of follow-up, the ER group had a higher rate of major amputations (P = .037). Amputation-free survival at the end of follow-up did not differ between groups (P = .68). Among the 17 patients with PAA eligible for AAA screening, 4 (24%) were diagnosed with PAA through the AAA screening program. The epidemiology of elective repair of PAA has changed toward increased ER, although ER showed a higher rate of major amputations at 1 year. Confounding was considerable, and a randomized trial is needed to evaluate the best therapeutic option.
Cognitive Emotion Regulation and Written Exposure Therapy for Posttraumatic Stress Disorder
Wisco, Blair E.; Sloan, Denise M.; Marx, Brian P.
2014-01-01
We examined the extent to which cognitive emotion-regulation (ER) strategies moderated posttraumatic stress disorder (PTSD) treatment outcome among 40 motor vehicle accident survivors. Participants were randomly assigned to either a brief written exposure therapy (WET) condition or a waitlist condition and were assessed pre- and posttreatment and at a 3-month follow-up. Positive-reappraisal and putting-into-perspective strategies at baseline interacted with condition to predict symptom change over time. Both strategies predicted greater reductions in PTSD in the waitlist group, suggesting facilitation of natural recovery. However, positive reappraisal was associated with smaller reductions in PTSD in the WET group, suggesting that this strategy may interfere with treatment. Treatment also reduced use of the maladaptive ER strategy of rumination. These results provide evidence that putting-into-perspective and positive-reappraisal strategies are beneficial in the absence of treatment and that certain types of ER strategies may reduce response to WET, highlighting the importance of future research examining ER during treatment. PMID:24482755
Aman, Michael G; Findling, Robert L; Hardan, Antonio Y; Hendren, Robert L; Melmed, Raun D; Kehinde-Nelson, Ola; Hsu, Hai-An; Trugman, Joel M; Palmer, Robert H; Graham, Stephen M; Gage, Allyson T; Perhach, James L; Katz, Ephraim
2017-06-01
Abnormal glutamatergic neurotransmission is implicated in the pathophysiology of autism spectrum disorder (ASD). In this study, the safety, tolerability, and efficacy of the glutamatergic N-methyl-d-aspartate (NMDA) receptor antagonist memantine (once-daily extended-release [ER]) were investigated in children with autism in a randomized, placebo-controlled, 12 week trial and a 48 week open-label extension. A total of 121 children 6-12 years of age with Diagnostic and Statistical Manual of Mental Disorders, 4th ed., Text Revision (DSM-IV-TR)-defined autistic disorder were randomized (1:1) to placebo or memantine ER for 12 weeks; 104 children entered the subsequent extension trial. Maximum memantine doses were determined by body weight and ranged from 3 to 15 mg/day. There was one serious adverse event (SAE) (affective disorder, with memantine) in the 12 week study and one SAE (lobar pneumonia) in the 48 week extension; both were deemed unrelated to treatment. Other AEs were considered mild or moderate and most were deemed not related to treatment. No clinically significant changes occurred in clinical laboratory values, vital signs, or electrocardiogram (ECG). There was no significant between-group difference on the primary efficacy outcome of caregiver/parent ratings on the Social Responsiveness Scale (SRS), although an improvement over baseline at Week 12 was observed in both groups. A trend for improvement at the end of the 48 week extension was observed. No improvements in the active group were observed on any of the secondary end-points, with one communication measure showing significant worsening with memantine compared with placebo (p = 0.02) after 12 weeks. This trial did not demonstrate clinical efficacy of memantine ER in autism; however, the tolerability and safety data were reassuring. Our results could inform future trial design in this population and may facilitate the investigation of memantine ER for other clinical applications.
Zoellner, Jamie; Cook, Emily; Chen, Yvonnes; You, Wen; Davy, Brenda; Estabrooks, Paul
2013-02-01
Excessive sugar-sweetened beverage (SSB) consumption and low health literacy skills have emerged as two public health concerns in the United States (US); however, there is limited research on how to effectively address these issues among adults. Guided by health literacy concepts and the Theory of Planned Behavior (TPB), this randomized controlled pilot trial applied the RE-AIM framework and a mixed-methods approach to examine an SSB intervention (SipSmartER), as compared to a matched-contact control intervention targeting physical activity (MoveMore). Both 5-week interventions included two interactive group sessions and three support telephone calls. Executing a patient-centered developmental process, the primary aim of this paper was to evaluate patient feedback on intervention content and structure. The secondary aim was to understand the potential reach (i.e., proportion enrolled, representativeness) and effectiveness (i.e., health behaviors, theorized mediating variables, quality of life) of SipSmartER. Twenty-five participants were randomized to SipSmartER (n=14) or MoveMore (n=11). Participants' intervention feedback was positive, ranging from 4.2-5.0 on a 5-point scale. Qualitative assessments revealed several opportunities to improve the clarity of learning materials, enhance instructions and communication, and refine research protocols. Although SSB consumption decreased more among the SipSmartER participants (-256.9 ± 622.6 kcal), there were no significant group differences compared to control participants (-199.7 ± 404.6 kcal). Across both groups, there were significant improvements in SSB attitudes, SSB behavioral intentions, and two media literacy constructs. The value of using a patient-centered approach in the developmental phases of this intervention was apparent, and pilot findings suggest decreased SSB consumption may be achieved through targeted health literacy and TPB strategies. Future efforts are needed to examine the potential public health impact of a large-scale trial to address health literacy and reduce SSB consumption.
Visual texture perception via graph-based semi-supervised learning
NASA Astrophysics Data System (ADS)
Zhang, Qin; Dong, Junyu; Zhong, Guoqiang
2018-04-01
Perceptual features, for example direction, contrast and repetitiveness, are important visual factors for humans to perceive a texture. However, quantifying the scale of these perceptual features requires psychophysical experiments, which demand a large amount of human labor and time. This paper focuses on the task of obtaining the perceptual-feature scales of textures from a small number of textures whose scales were obtained through a rating psychophysical experiment (what we call labeled textures) together with a mass of unlabeled textures. This is a scenario for which semi-supervised learning is naturally suited; it is meaningful for texture perception research and helpful for expanding perceptual texture databases. A graph-based semi-supervised learning method called random multi-graphs, RMG for short, is proposed to deal with this task. We evaluate different kinds of features, including LBP, Gabor, and a kind of unsupervised deep feature extracted by a PCA-based deep network. The experimental results show that our method achieves satisfactory results regardless of which texture features are used.
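Generic graph-based semi-supervised learning (not the RMG method itself) can be illustrated with scikit-learn's LabelSpreading, treating coarse perceptual-scale bins as labels; the features and class structure below are synthetic assumptions.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
# synthetic texture features: two coarse perceptual-scale bins in a 10-D feature space
X = np.vstack([rng.normal(0.0, 1.0, (100, 10)),
               rng.normal(2.0, 1.0, (100, 10))])
y = np.full(200, -1)            # -1 marks unlabeled textures
y[:5], y[100:105] = 0, 1        # a handful of textures rated in the psychophysical experiment

model = LabelSpreading(kernel='knn', n_neighbors=7)   # k-NN graph over the feature space
model.fit(X, y)
predicted_bins = model.transduction_                  # labels propagated to all textures
print((predicted_bins[:100] == 0).mean(), (predicted_bins[100:] == 1).mean())
```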
Information Retrieval and Graph Analysis Approaches for Book Recommendation.
Benkoussas, Chahinez; Bellot, Patrice
2015-01-01
A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: a probabilistic model, InL2 (from the Divergence from Randomness framework), and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a related-document network built from social links. We call this network, constructed from documents and the social information provided with each of them, the Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
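A minimal sketch of combining a link-analysis score with a retrieval score over a toy directed graph of documents is given below; the interpolation weight and the retrieval scores are invented, and the paper's actual combination of InL2/language-model scores with the DGD differs.

```python
import networkx as nx

# toy Directed Graph of Documents (DGD): edges are social/citation links
D = nx.DiGraph([("d1", "d2"), ("d2", "d3"), ("d3", "d1"), ("d4", "d1"), ("d4", "d3")])
pagerank = nx.pagerank(D, alpha=0.85)

# invented retrieval scores (e.g., from InL2 or a language model), min-max normalized
retrieval = {"d1": 2.1, "d2": 1.4, "d3": 0.9, "d4": 0.3}
lo, hi = min(retrieval.values()), max(retrieval.values())
retrieval = {d: (s - lo) / (hi - lo) for d, s in retrieval.items()}

lam = 0.7                                   # assumed interpolation weight
combined = {d: lam * retrieval[d] + (1 - lam) * pagerank[d] for d in D}
print(sorted(combined, key=combined.get, reverse=True))
```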
Statistical properties of multi-theta polymer chains
NASA Astrophysics Data System (ADS)
Uehara, Erica; Deguchi, Tetsuo
2018-04-01
We study the statistical properties of polymer chains with complex structures whose chemical connectivities are expressed by graphs. The multi-theta curve of m subchains connecting two branch points is one of the simplest graphs among those having closed paths, i.e. loops. We denote it by θm; for m = 2 it reduces to a ring. We derive analytically the pair distribution function and the scattering function for θm-shaped polymer chains consisting of m Gaussian random walks of n steps each. Surprisingly, it is shown rigorously that the mean-square radius of gyration of the Gaussian θm-shaped polymer chain does not depend on the number m of subchains if each subchain has the same fixed number of steps. For m = 3 we show the Kratky plot for the theta-shaped polymer chain consisting of hard cylindrical segments, obtained by a Monte Carlo method including reflection at the trivalent vertices.
Continuum Limit of Total Variation on Point Clouds
NASA Astrophysics Data System (ADS)
García Trillos, Nicolás; Slepčev, Dejan
2016-04-01
We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Passage to the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
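A minimal numerical version of the graph functional in question: build an ε-neighborhood graph on a random point cloud and evaluate the weighted total variation of an indicator function. The kernel and normalization below are assumptions chosen so the quantity scales like a perimeter; they are not the paper's precise Γ-convergence setup.

```python
# Sketch of the discrete total variation (cut-type functional) on a random
# point cloud: connect points within a neighborhood radius eps, weight edges,
# and evaluate TV(u) = sum_ij w_ij |u_i - u_j|. Kernel and scaling are assumed.
import numpy as np

rng = np.random.default_rng(1)
n, eps = 400, 0.15
X = rng.uniform(size=(n, 2))                     # samples of a measure on [0,1]^2
u = (X[:, 0] > 0.5).astype(float)                # indicator of a half-plane

d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W = (d < eps).astype(float) / (n ** 2 * eps ** 3)   # assumed 2-D normalization
np.fill_diagonal(W, 0.0)

graph_tv = 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))
print("graph TV:", graph_tv)   # approximates, up to a kernel-dependent constant,
                               # the continuum perimeter of the interface {x = 0.5}
```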
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed-point equation solved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristic and the dhea solver, a branch-and-cut integer linear programming approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Mesoscopic description of random walks on combs
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
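The renormalization argument in the abstract can be compared against direct simulation. The sketch below runs a discrete-time random walk on a comb with finite teeth and estimates the mean-squared displacement along the backbone; the tooth length and step rules are illustrative assumptions.

```python
# Sketch: discrete-time random walk on a comb (a backbone along x with teeth
# along y) and the resulting mean-squared displacement along the backbone.
# Tooth length and step rules are illustrative assumptions.
import numpy as np

def walk_on_comb(T, tooth_len=50, rng=None):
    rng = rng or np.random.default_rng(0)
    x = y = 0
    xs = np.empty(T)
    for t in range(T):
        if y == 0:
            # On the backbone: move along x or enter a tooth, equal probability.
            move = rng.integers(4)
            if move == 0:   x += 1
            elif move == 1: x -= 1
            else:           y += 1 if move == 2 else -1
        else:
            # Inside a tooth: only up/down moves, reflecting at the tip.
            y += rng.choice((-1, 1))
            y = int(np.clip(y, -tooth_len, tooth_len))
        xs[t] = x
    return xs

T, walkers = 5000, 200
msd = np.zeros(T)
for w in range(walkers):
    msd += walk_on_comb(T, rng=np.random.default_rng(w)) ** 2
msd /= walkers
print(msd[::1000])   # grows sublinearly (~t^0.5 for infinite teeth), i.e. anomalous
```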
Comparative study of upper lip frenectomy with the CO2 laser versus the Er,Cr:YSGG laser
Pié-Sánchez, Jordi; España-Tost, Antonio J.; Arnabat-Domínguez, Josep
2012-01-01
Objectives: To compare upper lip frenulum reinsertion, bleeding, surgical time and surgical wound healing in frenectomies performed with the CO2 laser versus the Er,Cr:YSGG laser. Study design: A prospective study was carried out on 50 randomized pediatric patients who underwent rhomboidal resection of the upper lip frenulum with either the CO2 laser or the Er,Cr:YSGG laser. Twenty-five patients were assigned to each laser system. All patients were examined at 7, 14, 21 days and 4 months after the operation in order to assess the surgical wound healing. Results: Insertion of the frenulum, which was preoperatively located between the upper central incisors, migrated to the mucogingival junction as a result of using both laser systems in all patients. Only two patients required a single dose of 650 mg of paracetamol, one from each study group. The CO2 laser registered improved intraoperative bleeding control and shorter surgical times. On the other hand, the Er,Cr:YSGG laser achieved faster healing. Conclusions: Upper lip laser frenectomy is a simple technique that results in minimum or no postoperative swelling or pain, and which involves upper lip frenulum reinsertion at the mucogingival junction. The CO2 laser offers a bloodless field and shorter surgical times compared with the Er,Cr:YSGG laser. On the other hand, the Er,Cr:YSGG laser achieved faster wound healing. Key words: Frenectomy, upper lip frenulum, CO2 laser, Er,Cr:YSGG laser, laser. PMID:22143683
Endoplasmic Reticulum Chaperon Tauroursodeoxycholic Acid Attenuates Aldosterone-Infused Renal Injury
Guo, Honglei; Li, Hongmei; Ling, Lilu
2016-01-01
Aldosterone (Aldo) is critically involved in the development of renal injury via the production of reactive oxygen species and inflammation. Endoplasmic reticulum (ER) stress is also evoked in Aldo-induced renal injury. In the present study, we investigated the role of ER stress in inflammation-mediated renal injury in Aldo-infused mice. C57BL/6J mice were randomized to receive treatment for 4 weeks as follows: vehicle infusion, Aldo infusion, vehicle infusion plus tauroursodeoxycholic acid (TUDCA), and Aldo infusion plus TUDCA. The effect of TUDCA on the Aldo-infused inflammatory response and renal injury was investigated using periodic acid-Schiff staining, real-time PCR, Western blot, and ELISA. We demonstrate that Aldo leads to impaired renal function and that inhibition of ER stress by TUDCA attenuates renal fibrosis. This was indicated by decreased collagen I, collagen IV, fibronectin, and TGF-β expression, as well as the downregulation of the expression of Nlrp3 inflammasome markers, Nlrp3, ASC, IL-1β, and IL-18. This paper presents an important role for ER stress in the renal inflammatory response to Aldo. Additionally, the inhibition of ER stress by TUDCA negatively regulates the levels of these inflammatory molecules in the context of Aldo. PMID:27721575
Successful treatment of acne keloidalis nuchae with erbium:YAG laser: a comparative study.
Gamil, Hend D; Khater, Elsayed M; Khattab, Fathia M; Khalil, Mona A
2018-05-14
Acne keloidalis nuchae (AKN) is a chronic inflammatory disease involving hair follicles of the neck. It is a form of keloidal scarring alopecia that is often refractory to medical or surgical management. To evaluate the efficacy of the Er:YAG laser in the treatment of AKN as compared to the long-pulsed Nd:YAG laser. This study was conducted on 30 male patients with AKN. Their ages ranged from 19 to 47 years with a mean age of 36.87 ± 7.8 years. Patients were divided randomly into two groups of 15 patients, each receiving six sessions of either Er:YAG or long-pulsed Nd:YAG laser therapy. A statistically significant decrease in the number of papules was detected at the end of therapy in both groups, with a mean of 91.8% improvement in the Er:YAG group versus 88% in the Nd:YAG group. A significant decrease in plaque count was detected only in the Er:YAG group, while a significant decrease in plaque size and consistency was recorded in both groups. The Er:YAG laser proved to be a potentially effective and safe modality in both early and late AKN lesions.
Pagani, Olivia; Gelber, Shari; Simoncini, Edda; Castiglione-Gertsch, Monica; Price, Karen N; Gelber, Richard D; Holmberg, Stig B; Crivellari, Diana; Collins, John; Lindtner, Jurij; Thürlimann, Beat; Fey, Martin F; Murray, Elizabeth; Forbes, John F; Coates, Alan S; Goldhirsch, Aron
2009-08-01
To compare the efficacy of chemoendocrine treatment with that of endocrine treatment (ET) alone for postmenopausal women with highly endocrine responsive breast cancer. In the International Breast Cancer Study Group (IBCSG) Trials VII and 12-93, postmenopausal women with node-positive, estrogen receptor (ER)-positive or ER-negative, operable breast cancer were randomized to receive either chemotherapy or endocrine therapy or combined chemoendocrine treatment. Results were analyzed overall in the cohort of 893 patients with endocrine-responsive disease, and according to prospectively defined categories of ER, age and nodal status. STEPP analyses assessed chemotherapy effect. The median follow-up was 13 years. Adding chemotherapy reduced the relative risk of a disease-free survival event by 19% (P = 0.02) compared with ET alone. STEPP analyses showed little effect of chemotherapy for tumors with high levels of ER expression (P = 0.07), or for the cohort with one positive node (P = 0.03). Chemotherapy significantly improves disease-free survival for postmenopausal women with endocrine-responsive breast cancer, but the magnitude of the effect is substantially attenuated if ER levels are high.
Computational Models for Belief Revision, Group Decision-Making and Cultural Shifts
2010-10-25
34social" networks; the green numbers are pseudo-trees or artificial (non-social) constructions. The dashed blue line indicates the range of Erdos- Renyi ...non-social networks such as Erdos- Renyi random graphs or the more passive non-cognitive spreading of disease or information flow, As mentioned
Beat the Instructor: An Introductory Forecasting Game
ERIC Educational Resources Information Center
Snider, Brent R.; Eliasson, Janice B.
2013-01-01
This teaching brief describes a 30-minute game where student groups compete in-class in an introductory time-series forecasting exercise. The students are challenged to "beat the instructor" who competes using forecasting techniques that will be subsequently taught. All forecasts are graphed prior to revealing the randomly generated…
Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P.; Gerstein, Mark
2010-01-01
The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers’ continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems. PMID:20439753
Structure of a randomly grown 2-d network.
Ajazi, Fioralba; Napolitano, George M; Turova, Tatyana; Zaurbek, Izbassar
2015-10-01
We introduce a growing random network on a plane as a model of a growing neuronal network. The properties of the structure of the induced graph are derived. We compare our results with available data. In particular, it is shown that, depending on the parameters of the model, the system passes through different structural phases over time. We conclude with a possible explanation of some empirical data on the connections between neurons. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers
NASA Astrophysics Data System (ADS)
Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos
2017-01-01
We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γCPA and energy ECPA, for which a CPA occurs, are expressed in terms of the eigenmodes of the isolated cavity—thus carrying over the information about the chaotic nature of the target—and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.
Lee, Jae Hyup; Lee, Chong-Suh
2013-11-01
Chronic low back pain is a common condition that is often difficult to treat. The combination of tramadol hydrochloride and acetaminophen in an extended-release formulation has been shown to provide rapid and long-lasting analgesic effects resulting from the synergistic activity of these 2 active ingredients. The goal of this study was to evaluate the efficacy and safety of extended-release tramadol hydrochloride 75-mg/acetaminophen 650-mg fixed-dose combination tablets (TA-ER) for the treatment of chronic low back pain. This Phase III, double-blind, placebo-controlled, parallel-group study enrolled 245 patients with moderate to severe (≥4 cm on a 10-cm visual analog scale) chronic (≥3 months') low back pain insufficiently controlled by previous NSAIDs or cyclooxygenase-2-selective inhibitors and randomly assigned them to receive 4 weeks of either TA-ER or placebo. The primary efficacy end point was the percentage of patients with a pain intensity change rate ≥30% from baseline to final evaluation. Secondary end points included quality of life (Korean Short Form-36), functionality (Korean Oswestry Disability Index), and adverse events. The percentage of patients with a pain intensity change rate ≥30% was significantly higher (P < 0.05) in the TA-ER group than in the placebo group for both the full analysis set and the per-protocol population. Pain relief success rate from baseline was significantly higher with TA-ER versus placebo at days 8 and 15 but not at the final visit. Patients in the TA-ER group had significant improvements versus placebo in role-physical, general health, and reported health transition domains of the Korean Short Form-36 and significantly higher functional improvements in the personal care section of the Korean Oswestry Disability Index. Patient assessment of overall pain control as "very good" was also significantly higher with TA-ER than with placebo. Adverse events were reported more frequently with TA-ER than with placebo; the most common adverse events reported were nausea, dizziness, constipation, and vomiting. TA-ER was significantly more effective than placebo in providing pain relief, functional improvements, and improved quality of life. It exhibited a predictable safety profile in patients with chronic low back pain. ClinicalTrials.gov identifier: NCT01112267. © 2013 The Authors. Published by Elsevier HS Journals, Inc. All rights reserved.
Cheng, Xiaogang; Guan, Sumin; Lu, Hong; Zhao, Chunmiao; Chen, Xingxing; Li, Na; Bai, Qian; Tian, Yu; Yu, Qing
2012-12-01
In recent years, various laser systems have been introduced into the field of laser-assisted endodontic therapy. The aim of this study was to evaluate the bactericidal effect of Nd:YAG, Er:YAG, Er,Cr:YSGG laser radiation, and antimicrobial photodynamic therapy (aPDT) in experimentally infected root canals compared with the standard endodontic treatment of 5.25% sodium hypochlorite (NaClO) irrigation. Two hundred and twenty infected root canals from extracted human teeth (contaminated with Enterococcus faecalis ATCC 4083 for 4 weeks) were randomly divided into five experimental groups (Nd:YAG, Er:YAG + 5.25% NaClO + 0.9% normal saline + distilled water (Er:YAG/NaClO/NS/DW), Er:YAG + 0.9% normal saline + distilled water (Er:YAG/NS/DW), Er,Cr:YSGG, and aPDT) and two control groups (5.25% NaClO as positive control and 0.9% normal saline (NS) as negative control). The numbers of bacteria on the surface of root canal walls and at different depths inside dentinal tubules before and after treatment were analyzed by means of one-way analysis of variance (one-way ANOVA). The morphology of bacterial cells before and after treatment was examined by scanning electron microscopy (SEM). After treatment, the bacterial reductions in the experimental groups and the positive control group were significantly greater than that in the negative control group (P < 0.001). However, only the Er:YAG/NaClO/NS/DW group showed no bacterial growth (the bacterial reduction reached up to 100%) on the surface of root canal walls or at 100/200 µm inside the dentinal tubules. All the laser radiation protocols tested, especially Er:YAG/NaClO/NS/DW, have an effective bactericidal effect in experimentally infected root canals. Er:YAG/NaClO/NS/DW seems to be an ideal protocol for root canal disinfection during endodontic therapy. Copyright © 2012 Wiley Periodicals, Inc.
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
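For reference, continuous-time quantum walk evolution on a graph is commonly written as |ψ(t)⟩ = exp(-iAt)|ψ(0)⟩ with the adjacency matrix A taken as the Hamiltonian. The sketch below computes this directly for a small example graph; it does not reproduce the thesis' scattering or discontinuous-walk constructions.

```python
# Sketch of a continuous-time quantum walk on a graph: the walker's state
# evolves unitarily as |psi(t)> = exp(-i A t) |psi(0)>, with A the adjacency
# matrix used as the Hamiltonian (a common convention, assumed here).
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.cycle_graph(8)                      # example graph
A = nx.to_numpy_array(G)

psi0 = np.zeros(A.shape[0], dtype=complex)
psi0[0] = 1.0                              # walker starts at vertex 0

for t in (0.5, 1.0, 2.0):
    psi_t = expm(-1j * A * t) @ psi0
    probs = np.abs(psi_t) ** 2             # evolution is deterministic; outcomes
    print(t, np.round(probs, 3))           # are probabilistic only upon measurement
```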
Rexhepaj, Elton; Brennan, Donal J; Holloway, Peter; Kay, Elaine W; McCann, Amanda H; Landberg, Goran; Duffy, Michael J; Jirstrom, Karin; Gallagher, William M
2008-01-01
Manual interpretation of immunohistochemistry (IHC) is a subjective, time-consuming and variable process, with an inherent intra-observer and inter-observer variability. Automated image analysis approaches offer the possibility of developing rapid, uniform indicators of IHC staining. In the present article we describe the development of a novel approach for automatically quantifying oestrogen receptor (ER) and progesterone receptor (PR) protein expression assessed by IHC in primary breast cancer. Two cohorts of breast cancer patients (n = 743) were used in the study. Digital images of breast cancer tissue microarrays were captured using the Aperio ScanScope XT slide scanner (Aperio Technologies, Vista, CA, USA). Image analysis algorithms were developed using MatLab 7 (MathWorks, Apple Hill Drive, MA, USA). A fully automated nuclear algorithm was developed to discriminate tumour from normal tissue and to quantify ER and PR expression in both cohorts. Random forest clustering was employed to identify optimum thresholds for survival analysis. The accuracy of the nuclear algorithm was initially confirmed by a histopathologist, who validated the output in 18 representative images. In these 18 samples, an excellent correlation was evident between the results obtained by manual and automated analysis (Spearman's rho = 0.9, P < 0.001). Optimum thresholds for survival analysis were identified using random forest clustering. This revealed 7% positive tumour cells as the optimum threshold for the ER and 5% positive tumour cells for the PR. Moreover, a 7% cutoff level for the ER predicted a better response to tamoxifen than the currently used 10% threshold. Finally, linear regression was employed to demonstrate a more homogeneous pattern of expression for the ER (R = 0.860) than for the PR (R = 0.681). In summary, we present data on the automated quantification of the ER and the PR in 743 primary breast tumours using a novel unsupervised image analysis algorithm. This novel approach provides a useful tool for the quantification of biomarkers on tissue specimens, as well as for objective identification of appropriate cutoff thresholds for biomarker positivity. It also offers the potential to identify proteins with a homogeneous pattern of expression.
Meng, Lan; Li, Shu-Qin; Ji, Nan; Luo, Fang
2015-05-20
The optimal ventilation status under total intravenous or inhalation anesthesia in neurosurgical patients with a supratentorial tumor has not been ascertained. The purpose of this study was to intraoperatively compare the effects of moderate hyperventilation on the jugular bulb oxygen saturation (SjO2), cerebral oxygen extraction ratio (O2ER), mean arterial blood pressure (MAP), and heart rate (HR) in patients with a supratentorial tumor under different anesthetic regimens. Twenty adult patients suffering from supratentorial tumors were randomly assigned to receive a propofol infusion followed by isoflurane anesthesia after a 30-min stabilization period, or isoflurane followed by propofol. The patients were randomized to one of the following two treatment sequences: hyperventilation followed by normoventilation, or normoventilation followed by hyperventilation, during isoflurane or propofol anesthesia, respectively. The ventilation and end-tidal CO2 tension were maintained at a constant level for 20 min. Radial arterial and jugular bulb catheters were inserted for blood gas sampling. At the end of each study period, we measured the change in the arterial and jugular bulb blood gases. The mean value of the jugular bulb oxygen saturation (SjO2) significantly decreased, and the oxygen extraction ratio (O2ER) significantly increased, under isoflurane or propofol anesthesia during hyperventilation compared with those during normoventilation (SjO2: t = -2.728, P = 0.011 or t = -3.504, P = 0.001; O2ER: t = 2.484, P = 0.020 or t = 2.892, P = 0.009). The SjO2 significantly decreased, and the O2ER significantly increased, under propofol anesthesia compared with those values under isoflurane anesthesia during moderate hyperventilation (SjO2: t = -2.769, P = 0.012; O2ER: t = 2.719, P = 0.013). In the study, no significant changes in the SjO2 and the O2ER were observed under propofol compared with those values under isoflurane during normoventilation. Our results suggest that the optimal ventilation status under propofol or isoflurane anesthesia in neurosurgical patients varies. Hyperventilation under propofol anesthesia should be cautiously performed in neurosurgery to maintain an improved balance between cerebral oxygen supply and demand.
Tait, Alan R; Voepel-Lewis, Terri; Brennan-Martinez, Colleen; McGonegal, Maureen; Levine, Robert
2012-11-01
Conventional print materials for presenting risks and benefits of treatment are often difficult to understand. This study was undertaken to evaluate and compare subjects' understanding and perceptions of risks and benefits presented using animated computerized text and graphics. Adult subjects were randomized to receive identical risk/benefit information regarding taking statins that was presented on an iPad (Apple Corp, Cupertino, Calif) in 1 of 4 different animated formats: text/numbers, pie chart, bar graph, and pictograph. Subjects completed a questionnaire regarding their preferences and perceptions of the message delivery together with their understanding of the information. Health literacy, numeracy, and need for cognition were measured using validated instruments. There were no differences in subject understanding based on the different formats. However, significantly more subjects preferred graphs (82.5%) compared with text (17.5%, P<.001). Specifically, subjects preferred pictographs (32.0%) and bar graphs (31.0%) over pie charts (19.5%) and text (17.5%). Subjects whose preference for message delivery matched their randomly assigned format (preference match) had significantly greater understanding and satisfaction compared with those assigned to something other than their preference. Results showed that computer-animated depictions of risks and benefits offer an effective means to describe medical risk/benefit statistics. That understanding and satisfaction were significantly better when the format matched the individual's preference for message delivery is important and reinforces the value of "tailoring" information to the individual's needs and preferences. Copyright © 2012 Elsevier Inc. All rights reserved.
Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.
Jovanović, Stojan; Rotter, Stefan
2016-06-01
The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking, activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology, random networks of Erdős-Rényi type and networks with highly interconnected hubs, we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
NASA Astrophysics Data System (ADS)
Morone, Flaviano; Min, Byungjoon; Bo, Lin; Mari, Romain; Makse, Hernán A.
2016-07-01
We elaborate on a linear-time implementation of Collective-Influence (CI) algorithm introduced by Morone, Makse, Nature 524, 65 (2015) to find the minimal set of influencers in networks via optimal percolation. The computational complexity of CI is O(N log N) when removing nodes one-by-one, made possible through an appropriate data structure to process CI. We introduce two Belief-Propagation (BP) variants of CI that consider global optimization via message-passing: CI propagation (CIP) and Collective-Immunization-Belief-Propagation algorithm (CIBP) based on optimal immunization. Both identify a slightly smaller fraction of influencers than CI and, remarkably, reproduce the exact analytical optimal percolation threshold obtained in Random Struct. Alg. 21, 397 (2002) for cubic random regular graphs, leaving little room for improvement for random graphs. However, the small augmented performance comes at the expense of increasing running time to O(N²), rendering BP prohibitive for modern-day big-data. For instance, for big-data social networks of 200 million users (e.g., Twitter users sending 500 million tweets/day), CI finds influencers in 2.5 hours on a single CPU, while all BP algorithms (CIP, CIBP and BDP) would take more than 3,000 years to accomplish the same task.
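A compact illustration of the Collective Influence score CI_l(i) = (k_i - 1) * sum over the boundary of the ball of radius l around i of (k_j - 1), and of greedy influencer removal, follows; the brute-force loop is a simplification for clarity, not the paper's O(N log N) heap-based implementation.

```python
# Sketch of the Collective Influence score and a naive greedy removal loop,
# following the definition in Morone & Makse (2015). Graph and parameters are
# illustrative; this is not the linear-time implementation described above.
import networkx as nx

def collective_influence(G, node, l=2):
    k = G.degree(node)
    dist = nx.single_source_shortest_path_length(G, node, cutoff=l)
    frontier = [j for j, d in dist.items() if d == l]
    return (k - 1) * sum(G.degree(j) - 1 for j in frontier)

def greedy_influencers(G, n_remove, l=2):
    H = G.copy()
    removed = []
    for _ in range(n_remove):
        best = max(H.nodes, key=lambda v: collective_influence(H, v, l))
        removed.append(best)
        H.remove_node(best)
    return removed

G = nx.erdos_renyi_graph(500, 0.01, seed=0)
print(greedy_influencers(G, 5))
```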
GraphCrunch 2: Software tool for network modeling, alignment and clustering.
Kuchaiev, Oleksii; Stevanović, Aleksandar; Hayes, Wayne; Pržulj, Nataša
2011-01-19
Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarities far larger than any other existing tool. Finally, GraphCrunch 2 implements an algorithm for clustering nodes within a network based solely on their topological similarities. Using GraphCrunch 2, we demonstrate that eukaryotic and viral PPI networks may belong to different graph model families and show that topology-based clustering can reveal important functional similarities between proteins within yeast and human PPI networks. GraphCrunch 2 is a software tool that implements the latest research on biological network analysis. It parallelizes computationally intensive tasks to fully utilize the potential of modern multi-core CPUs. It is open-source and freely available for research use. It runs under the Windows and Linux platforms.
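The kind of model-versus-data comparison such tools automate can be sketched with a few standard network properties; the "data" network below is synthetic, and the chosen statistics are only a small illustrative subset of what GraphCrunch 2 computes.

```python
# Sketch of comparing candidate random models against a data network using
# simple "network properties" (degree statistics, clustering coefficient).
# The data network here is a synthetic stand-in for a real PPI network.
import networkx as nx
import numpy as np

data = nx.barabasi_albert_graph(1000, 3, seed=1)      # stand-in for a PPI network
models = {
    "ER": nx.gnm_random_graph(1000, data.number_of_edges(), seed=2),
    "BA": nx.barabasi_albert_graph(1000, 3, seed=3),
}

def summary(G):
    degs = np.array([d for _, d in G.degree()])
    return {"mean_deg": degs.mean(),
            "max_deg": degs.max(),
            "clustering": nx.average_clustering(G)}

print("data", summary(data))
for name, M in models.items():
    print(name, summary(M))
```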
A graph-theory framework for evaluating landscape connectivity and conservation planning.
Minor, Emily S; Urban, Dean L
2008-04-01
Connectivity of habitat patches is thought to be important for movement of genes, individuals, populations, and species over multiple temporal and spatial scales. We used graph theory to characterize multiple aspects of landscape connectivity in a habitat network in the North Carolina Piedmont (U.S.A). We compared this landscape with simulated networks with known topology, resistance to disturbance, and rate of movement. We introduced graph measures such as compartmentalization and clustering, which can be used to identify locations on the landscape that may be especially resilient to human development or areas that may be most suitable for conservation. Our analyses indicated that for songbirds the Piedmont habitat network was well connected. Furthermore, the habitat network had commonalities with planar networks, which exhibit slow movement, and scale-free networks, which are resistant to random disturbances. These results suggest that connectivity in the habitat network was high enough to prevent the negative consequences of isolation but not so high as to allow rapid spread of disease. Our graph-theory framework provided insight into regional and emergent global network properties in an intuitive and visual way and allowed us to make inferences about rates and paths of species movements and vulnerability to disturbance. This approach can be applied easily to assessing habitat connectivity in any fragmented or patchy landscape.
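A minimal sketch of the graph construction used in this kind of connectivity analysis: habitat patches become nodes, edges connect patches within a dispersal distance, and simple graph statistics flag patches whose loss would fragment the network. Coordinates and the distance threshold are hypothetical.

```python
# Sketch of a graph-theoretic habitat-connectivity analysis with hypothetical
# patch coordinates and an assumed dispersal distance.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, size=(60, 2))     # 60 habitat patches (km)
dmax = 15.0                                    # assumed dispersal distance (km)

G = nx.Graph()
G.add_nodes_from(range(len(coords)))
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        if np.linalg.norm(coords[i] - coords[j]) <= dmax:
            G.add_edge(i, j)

print("components:", nx.number_connected_components(G))
print("clustering:", round(nx.average_clustering(G), 3))
# Patches whose removal fragments the network (candidate conservation priorities):
print("cut vertices:", sorted(nx.articulation_points(G)))
```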
Entropy, complexity, and Markov diagrams for random walk cancer models
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-01-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
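The entropy calculation at the core of this framework is straightforward once a transition matrix is given. The sketch below uses a hypothetical 4-site matrix; the steady state is the left eigenvector for eigenvalue 1, and the entropy is the Shannon entropy of that distribution.

```python
# Sketch: steady-state distribution and Shannon entropy of a Markov chain over
# metastatic sites. The 4-site transition matrix below is hypothetical.
import numpy as np

P = np.array([                      # rows sum to 1: transition probabilities
    [0.10, 0.50, 0.30, 0.10],
    [0.20, 0.20, 0.40, 0.20],
    [0.25, 0.25, 0.25, 0.25],
    [0.30, 0.30, 0.20, 0.20],
])

# Steady state: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

entropy = -np.sum(pi * np.log2(pi))
print("steady state:", np.round(pi, 3), "entropy (bits):", round(entropy, 3))
```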
Evaluation of the MyWellness Key accelerometer.
Herrmann, S D; Hart, T L; Lee, C D; Ainsworth, B E
2011-02-01
To examine the concurrent validity of the Technogym MyWellness Key accelerometer against objective and subjective physical activity (PA) measures. Randomised, cross-sectional design with two phases. The laboratory phase compared the MyWellness Key with the ActiGraph GT1M and the Yamax SW200 Digiwalker pedometer during graded treadmill walking, increasing speed each minute. The free-living phase compared the MyWellness Key with the ActiGraph, Digiwalker, Bouchard Activity Record (BAR) and Global Physical Activity Questionnaire (GPAQ) for seven continuous days. Data were analysed using Spearman rank-order correlation coefficients for all comparisons. Laboratory and free-living phases. Sixteen participants were randomly stratified from 41 eligible respondents by sex (n = 8 men; n = 8 women) and PA level (n = 4 low, n = 8 middle and n = 4 high active). There was a strong association between the MyWellness Key and the ActiGraph accelerometer during controlled graded treadmill walking (r = 0.91, p < 0.01) and in free-living settings (r = 0.73-0.76 for light to vigorous PA, respectively, p < 0.01). No associations were observed between the MyWellness Key and the BAR and GPAQ (p > 0.05). The MyWellness Key has high concurrent validity with the ActiGraph accelerometer for detecting PA in both controlled laboratory and free-living settings.
Scale free effects in world currency exchange network
NASA Astrophysics Data System (ADS)
Górski, A. Z.; Drożdż, S.; Kwapień, J.
2008-11-01
A large collection of daily time series for the exchange rates of 60 world currencies is considered. The correlation matrices are calculated, and the corresponding Minimal Spanning Tree (MST) graphs are constructed with each of those currencies in turn used as reference for the remaining ones. It is shown that the multiplicity of the MST graphs' nodes develops, to a good approximation, a power-like, scale-free distribution with a scaling exponent similar to those found for several other complex systems studied so far. Furthermore, quantitative arguments in favor of the hierarchical organization of the world currency exchange network are provided by relating the structure of the above MST graphs and their scaling exponents to those derived from an exactly solvable hierarchical network model. The special status of the USD during the period considered can be attributed to departures of the MST features, when this currency (or another tied to it) is used as reference, from characteristics typical of such hierarchical clustering of nodes towards those corresponding to random graphs. Even though the basic structure of the MST is in general robust with respect to changing the reference currency, a trace of a systematic transition from a somewhat dispersed topology, as in the USD case, towards a more compact MST topology can be observed when correlations increase.
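The standard correlation-to-MST pipeline referenced above can be sketched as follows, with synthetic return series standing in for the 60 currencies; the metric d_ij = sqrt(2(1 - rho_ij)) is the usual choice for correlation-based trees.

```python
# Sketch of the correlation-based MST construction: correlations of log-returns
# are mapped to distances and a minimum spanning tree is built. The return
# series below are synthetic placeholders for real currency data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n_cur, n_days = 10, 500
common = rng.normal(size=n_days)                       # shared market factor
returns = 0.5 * common[None, :] + rng.normal(size=(n_cur, n_days))

rho = np.corrcoef(returns)
dist = np.sqrt(2.0 * (1.0 - rho))

G = nx.Graph()
for i in range(n_cur):
    for j in range(i + 1, n_cur):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)
degrees = sorted((d for _, d in mst.degree()), reverse=True)
print("MST node degrees (multiplicities):", degrees)   # input to the scaling analysis
```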
Improvement of Automated POST Case Success Rate Using Support Vector Machines
NASA Technical Reports Server (NTRS)
Zwack, Matthew R.; Dees, Patrick D.
2017-01-01
During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph theory based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately affects the ability of the random reps method to confidently approach the global optima for a given vehicle case in a reasonable amount of time. With only an 8% pass rate, tens or hundreds of thousands of reps may be needed to be confident that the best repetition is at least close to the global optima. However, typical design study time constraints require that fewer repetitions be attempted, sometimes resulting in seed points that have only a handful of successful completions. If a small number of successful repetitions are used to generate a seed point, the graph method may inherit some inaccuracies as it chains DOE cases from the non-global-optimal seed points. This creates inherent noise in the graph data, which can limit the accuracy of the resulting surrogate models. For this reason, the goal of this work is to improve the seed point generation method and ultimately the accuracy of the resulting POST surrogate model. The work focuses on increasing the case pass rate for seed point generation.
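The titular idea, using a support vector machine to predict which trajectory repetitions will converge before spending compute on them, can be sketched as follows; the features, labels, and "success" rule are synthetic stand-ins, not POST data.

```python
# Sketch: train an SVM to predict whether a random repetition will converge,
# so the expensive trajectory tool is only run on promising seed points.
# Features, labels, and the success rule below are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
X = rng.uniform(-1, 1, size=(2000, 6))          # e.g. normalized vehicle/trajectory inputs
# Hypothetical "success" rule: cases near the center of the design space converge.
y = (np.linalg.norm(X[:, :3], axis=1) < 0.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))

# Only run the expensive trajectory tool on candidates predicted to succeed.
candidates = rng.uniform(-1, 1, size=(10, 6))
print("predicted-to-converge candidates:", np.where(clf.predict(candidates) == 1)[0])
```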
Verchere, Cynthia; Durlacher, Kim; Bellows, Doria; Pike, Jeffrey; Bucevska, Marija
2014-06-01
Birth-related brachial plexus injury (BRBPI) occurs in 1.2/1,000 births in British Columbia. Even in children with "good" recovery, external rotation (ER) and supination (Sup) are often weaker, and permanent skeletal imbalance ensues. A preventive early infant shoulder passive repositioning program was created using primarily a novel custom splint holding the affected arm in full ER and Sup: the Sup-ER splint. The details of the splint and the shoulder repositioning program evolved with experience over several years. This study reviews the first 4 years. A retrospective review of BCCH patients managed with the Sup-ER protocol from 2008 to 2011 compared their recovery scores to matched historical controls selected from our database by two independent reviewers. The protocol was initiated in 18 children during the study period. Six were excluded due to insufficient data points, non-compliance, late splint initiation, or loss to follow-up. Of the 12 matches, the final Sup-ER group score at 2 years was better than that of controls by 1.18 active movement scale (AMS) points (p = 0.036) in Sup and by 0.96 AMS points in ER (not statistically significant, p = 0.13). Unexpectedly, but importantly, during the study period zero subjects met the active functional criteria indicating brachial plexus reconstruction, whereas previously we operated on 13%. Early application of passive shoulder repositioning into Sup and ER may improve arm function outcomes in infants with BRBPI. A North American multi-site randomized controlled trial has been approved and has started recruitment.
Jung, Seungyoun; Wang, Molin; Anderson, Kristin; Baglietto, Laura; Bergkvist, Leif; Bernstein, Leslie; van den Brandt, Piet A; Brinton, Louise; Buring, Julie E; Heather Eliassen, A; Falk, Roni; Gapstur, Susan M; Giles, Graham G; Goodman, Gary; Hoffman-Bolton, Judith; Horn-Ross, Pamela L; Inoue, Manami; Kolonel, Laurence N; Krogh, Vittorio; Lof, Marie; Maas, Paige; Miller, Anthony B; Neuhouser, Marian L; Park, Yikyung; Robien, Kim; Rohan, Thomas E; Scarmo, Stephanie; Schouten, Leo J; Sieri, Sabina; Stevens, Victoria L; Tsugane, Schoichiro; Visvanathan, Kala; Wilkens, Lynne R; Wolk, Alicja; Weiderpass, Elisabete; Willett, Walter C; Zeleniuch-Jacquotte, Anne; Zhang, Shumin M; Zhang, Xuehong; Ziegler, Regina G; Smith-Warner, Stephanie A
2016-01-01
Background: Breast cancer aetiology may differ by estrogen receptor (ER) status. Associations of alcohol and folate intakes with risk of breast cancer defined by ER status were examined in pooled analyses of the primary data from 20 cohorts. Methods: During a maximum of 6–18 years of follow-up of 1 089 273 women, 21 624 ER+ and 5113 ER− breast cancers were identified. Study-specific multivariable relative risks (RRs) were calculated using Cox proportional hazards regression models and then combined using a random-effects model. Results: Alcohol consumption was positively associated with risk of ER+ and ER− breast cancer. The pooled multivariable RRs (95% confidence intervals) comparing ≥30 g/day with 0 g/day of alcohol consumption were 1.35 (1.23-1.48) for ER+ and 1.28 (1.10-1.49) for ER− breast cancer (P-trend ≤ 0.001; P for common effects by ER status = 0.57). Associations were similar for alcohol intake from beer, wine and liquor. The associations with alcohol intake did not vary significantly by total (from foods and supplements) folate intake (P-interaction ≥ 0.26). Dietary (from foods only) and total folate intakes were not associated with risk of overall, ER+ and ER− breast cancer; pooled multivariable RRs ranged from 0.98 to 1.02 comparing extreme quintiles. Restricting follow-up of the US studies to the period before mandatory folic acid fortification did not change the results. The alcohol and folate associations did not vary by tumour subtypes defined by progesterone receptor status. Conclusions: Alcohol consumption was positively associated with risk of both ER+ and ER− breast cancer, even among women with high folate intake. Folate intake was not associated with breast cancer risk. PMID:26320033
Toothbrush abrasion, simulated tongue friction and attrition of eroded bovine enamel in vitro.
Vieira, A; Overweg, E; Ruben, J L; Huysmans, M C D N J M
2006-05-01
Enamel erosion results in the formation of a softened layer that is susceptible to disruption by mechanical factors such as brushing abrasion, tongue friction and attrition. The aim of this study was to investigate the individual contribution of those mechanical insults to the enamel loss caused by dental erosion. Forty-two bovine enamel samples were randomly divided into seven groups (n = 6 per group) that were submitted to three cycles of one of the following regimes: erosion and remineralization (er/remin); toothbrush abrasion and remineralization (abr/remin); erosion, toothbrush abrasion and remineralization (er/abr/remin); attrition and remineralization (at/remin); erosion, attrition and remineralization (er/at/remin); simulated tongue friction and remineralization (tg/remin); erosion, simulated tongue friction and remineralization (er/tg/remin). Erosion took place in a demineralization solution (50 mM citric acid, pH 3) for 10 min under agitation. Brushing abrasion, tongue friction and attrition were simulated for 1 min using a home-made wear device. Remineralization was carried out in artificial saliva for 2 h. Enamel loss was quantified using optical profilometry. One-way ANOVA indicated a significant difference between the amounts of enamel lost due to the different wear regimes (p
Basir, Mahshid Mohammadi; Rezvani, Mohammad Bagher; Chiniforush, Nasim; Moradi, Zohreh
2016-01-01
Tooth restoration immediately after bleaching is challenging due to the potential problems in achieving adequate bond strength. The aim of this study was to evaluate the effect of surface treatment with Er:YAG, Nd:YAG, and CO2 lasers and 10% sodium ascorbate solution on the immediate microtensile bond strength of composite resin to recently bleached enamel. Ninety sound molar teeth were randomly divided into three main groups (n = 30): NB (without bleaching), HB (bleached with 38% carbamide peroxide) and OB (bleached with Heydent bleaching gel assisted by diode laser). Each group was divided into five subgroups (n = 6): Si (without surface treatment), Er (Er:YAG laser), CO2 (CO2 laser), Nd (Nd:YAG laser) and As (immersion in 10% sodium ascorbate solution). The bonding system was then applied and composite build-ups were constructed. The teeth were sectioned with a low-speed saw to obtain enamel-resin sticks and submitted to microtensile bond testing. Statistical analyses were done using two-way ANOVA, Tukey and Tamhane tests. The µTBS of bleached teeth irradiated with the Nd:YAG laser was not significantly different from that of the NB-Nd group. The microtensile bond strength of the OB-Er group was higher than that of the NB-Er and HB-Er groups. The mean µTBS of the HB-CO2 group was higher than that of the NB-CO2 group; the average µTBS of the HB-As and OB-As groups was also higher than that of the NB-As group. Use of the Nd:YAG laser, the CO2 laser, and 10% sodium ascorbate solution could improve the bond strength in home-bleached specimens. Application of the Nd:YAG laser to nonbleached specimens and the Er:YAG laser to office-bleached specimens led to the highest µTBS in comparison with the other surface treatments in each main group.
The Impacts of Air Temperature on Accidental Casualties in Beijing, China.
Ma, Pan; Wang, Shigong; Fan, Xingang; Li, Tanshi
2016-11-02
Emergency room (ER) visits for accidental casualties, according to the International Classification of Diseases, 10th Revision, Chapters 19 and 20, include injury, poisoning, and external causes (IPEC). The annual distribution of 187,008 ER visits that took place between 2009 and 2011 in Beijing, China displayed regularity rather than randomness. The annual cycle from the Fourier series fitting of the number of ER visits was found to explain 63.2% of its total variance. In this study, the possible effect and regulation of meteorological conditions on these ER visits are investigated through the use of correlation analysis, as well as statistical modeling using the Distributed Lag Non-linear Model and the Generalized Additive Model. Correlation analysis indicated that meteorological variables that positively correlated with temperature have a positive relationship with the number of ER visits, and vice versa. The temperature metrics of maximum, minimum, and mean temperatures were found to have similar overall impacts, including both the direct impact on human mental/physical conditions and the indirect impact on human behavior. The lag analysis indicated that the overall impacts of temperatures higher than the 50th percentile on ER visits occur immediately, whereas low temperatures show protective effects in the first few days. Accidental casualties happen more frequently on warm days, when the mean temperature is higher than 14 °C, than on cold days. Mean temperatures of around 26 °C result in the greatest probability of ER visits for accidental casualties. In addition, males were found to face a higher risk of accidental casualties than females at high temperatures. Therefore, the IPEC-classified ER visits are not pure accidents; instead, they are associated closely with meteorological conditions, especially temperature.
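The annual-cycle figure quoted above comes from a Fourier-series fit. The sketch below performs the analogous least-squares fit of annual harmonics to a simulated daily count series and reports the fraction of variance explained; the data are synthetic, not the Beijing ER records.

```python
# Sketch: fit an annual (plus semi-annual) Fourier cycle to a daily count series
# by least squares and report the fraction of variance explained, analogous to
# the abstract's 63.2% figure. The daily ER-visit series below is simulated.
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(3 * 365)
true_cycle = 170 + 30 * np.sin(2 * np.pi * days / 365.25 - 1.0)
visits = rng.poisson(true_cycle)                   # synthetic daily visit counts

# Design matrix: intercept plus the first two annual harmonics.
w = 2 * np.pi * days / 365.25
X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w),
                     np.sin(2 * w), np.cos(2 * w)])
beta, *_ = np.linalg.lstsq(X, visits, rcond=None)
fit = X @ beta

explained = 1 - np.var(visits - fit) / np.var(visits)
print("variance explained by the annual cycle:", round(explained, 3))
```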
Properties of solar ephemeral regions at the emergence stage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shuhong; Zhang, Jun, E-mail: shuhongyang@nao.cas.cn, E-mail: zjun@nao.cas.cn
2014-01-20
For the first time, we statistically study the properties of ephemeral regions (ERs) and quantitatively determine their parameters at the emergence stage based on a sample of 2988 ERs observed by the Solar Dynamics Observatory. During the emergence process, there are three kinds of kinematic behavior, i.e., separation of dipolar patches, shift of the ER's magnetic centroid, and rotation of the ER's axis. The average emergence duration, flux emergence rate, separation velocity, shift velocity, and angular speed are 49.3 minutes, 2.6 × 10^15 Mx s^-1, 1.1 km s^-1, 0.9 km s^-1, and 0.6° minute^-1, respectively. At the end of emergence, the mean magnetic flux, separation distance, shift distance, and rotation angle are 9.3 × 10^18 Mx, 4.7 Mm, 1.1 Mm, and 12.9°, respectively. We also find that the higher the ER magnetic flux is, (1) the longer the emergence lasts, (2) the higher the flux emergence rate is, (3) the further the two polarities separate, (4) the lower the separation velocity is, (5) the larger the shift distance is, (6) the slower the ER shifts, and (7) the lower the rotation speed is. However, the rotation angle seems not to depend on the magnetic flux. Not only at the start time, but also at the end time, the ERs are randomly oriented in both the northern and the southern hemispheres. Finally, neither the anti-clockwise-rotated ERs nor the clockwise-rotated ones dominate the northern or the southern hemisphere.
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
Gao, Yu; Gui, Qinfang; Jin, Li; Yu, Pan; Wu, Lin; Cao, Liangbin; Wang, Qiang; Duan, Manlin
2017-02-15
Hydrogen-rich saline can selectively scavenge reactive oxygen species (ROS) and protect the brain against ischemia-reperfusion (I/R) injury. Endoplasmic reticulum stress (ERS) has been implicated in the pathological process of cerebral ischemia. However, very little is known about the role of hydrogen-rich saline in mediating pathophysiological reactions to ERS after the I/R injury caused by cardiac arrest. Rats were randomly divided into three groups: a sham group (n=30), an ischemia/reperfusion group (n=40), and a hydrogen-rich saline group (n=40). The rats in the experimental groups were subjected to 4 min of cardiac arrest followed by resuscitation. They were then randomized to receive 5 ml/kg of either hydrogen-rich saline or normal saline. Hydrogen-rich saline significantly improved survival rate and neurological function. The beneficial effects of hydrogen-rich saline were associated with decreased levels of oxidative products as well as increased levels of antioxidant enzymes. Furthermore, the protective effects of hydrogen-rich saline were accompanied by increased activity of glucose-regulated protein 78 (GRP78) and decreased activity of cysteinyl aspartate-specific proteinase-12 (caspase-12) and C/EBP homologous protein (CHOP). Hydrogen-rich saline may thus attenuate brain I/R injury by inhibiting hippocampal ERS after cardiac arrest in rats. Copyright © 2017 Elsevier B.V. All rights reserved.
Fatigue crack growth model RANDOM2 user manual, appendix 1
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
The FORTRAN program RANDOM2 is documented. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Included in this user manual are details regarding the theoretical background of RANDOM2, input data, instructions and a sample problem illustrating the use of RANDOM2. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendix B includes photocopies of the actual computer printout corresponding to the sample problem. Appendices C and D detail the IMSL, Ver. 10(1), subroutines and functions called by RANDOM2 and a SAS/GRAPH(2) program that can be used to plot both the probability density function (p.d.f.) and the cumulative distribution function (c.d.f.).
Cavity master equation for the continuous time dynamics of discrete-spin models.
Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R
2017-05-01
We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.
Relaxation dynamics of maximally clustered networks
NASA Astrophysics Data System (ADS)
Klaise, Janis; Johnson, Samuel
2018-01-01
We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.
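The double-edge swap move described in the abstract above can be made concrete with a short sketch. This is a generic degree-preserving rewiring routine in Python, not the authors' code; the rejection of self-loops and multi-edges, the example edge list, and the swap count are illustrative assumptions.

```python
import random

def double_edge_swap(edges, n_swaps, seed=0, max_tries=10000):
    """Degree-preserving randomization: pick two edges (a, b) and (c, d) at random
    and rewire them to (a, d) and (c, b), rejecting self-loops and multi-edges."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(edges)
    done = tries = 0
    while done < n_swaps and tries < max_tries:
        tries += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue                      # would create a self-loop
        new1, new2 = (a, d), (c, b)
        if {new1, new1[::-1], new2, new2[::-1]} & edge_set:
            continue                      # would create a multi-edge
        edge_set -= {edges[i], edges[j]}
        edges[i], edges[j] = new1, new2
        edge_set |= {new1, new2}
        done += 1
    return edges

# Example: rewire a small triangle-rich graph while keeping every node's degree fixed.
print(double_edge_swap([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)], n_swaps=3))
```

Single edge replacement, the second dynamics in the abstract, would instead delete a random edge and insert a uniformly random new one, which randomizes the degree sequence as well.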
Gorler, Oguzhan; Hubbezoglu, Ihsan; Ulgey, Melih; Zan, Recai; Guner, Kubra
2018-04-01
The aim of this study was to examine the shear bond strength (SBS) of ceromer and nanohybrid composite to direct laser sintered (DLS) Cr-Co and Ni-Cr-based metal infrastructures treated with erbium-doped yttrium aluminum garnet (Er:YAG), neodymium-doped yttrium aluminum garnet (Nd:YAG), and potassium titanyl phosphate (KTP) laser modalities in an in vitro setting. The experimental specimens comprised four sets (n = 32): two sets of DLS infrastructures with ceromer and nanohybrid composite superstructures and two sets of Ni-Cr-based infrastructures with ceromer and nanohybrid composite superstructures. Within each infrastructure set, the specimens were randomized into four treatment modalities (n = 8): no treatment (controls), Er:YAG, Nd:YAG, and KTP lasers. The infrastructures were prepared in final dimensions of 7 × 3 mm. Ceromer and nanohybrid composite were applied to the infrastructures after their surface treatments according to the randomization. The SBS of the specimens was measured to test the efficacy of the surface treatments. Representative scanning electron microscopy (SEM) images were obtained after the laser treatments. Overall, in the current experimental settings, the Nd:YAG, KTP, and Er:YAG lasers, in order of efficacy, were effective in improving the bonding of ceromer and nanohybrid composite to the DLS and Ni-Cr-based infrastructures (p < 0.05). The Nd:YAG laser was more effective in the DLS/ceromer infrastructures (p < 0.05). The KTP laser, the second most effective preparation, was also more effective in the DLS/ceromer infrastructures (p < 0.05). The SEM findings were in moderate accordance with these results. The results of this study support the bonding of ceromer and nanohybrid composite superstructures to DLS and Ni-Cr-based infrastructures, suggesting that laser modalities, in order of success Nd:YAG, KTP, and Er:YAG, are effective in increasing the bonding of these structures.
Anhøj, Jacob
2015-01-01
Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
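The two simple tests named in the abstract (an unusually long run on one side of the median and unusually few median crossings) are easy to compute. The sketch below is illustrative Python, not the author's code; the specific cut-offs (a longest-run limit of round(log2(n)) + 3 and a crossings limit taken as the 5th percentile of a Binomial(n - 1, 0.5) distribution) are assumptions in the spirit of the Anhoej rules, not the exact published values.

```python
import math
from statistics import median

def run_chart_signals(values):
    """Longest run on one side of the median and number of median crossings,
    compared with illustrative (assumed) limits for signalling non-random variation."""
    med = median(values)
    sides = [1 if v > med else -1 for v in values if v != med]  # drop points on the median
    n = len(sides)
    longest = run = 1
    crossings = 0
    for prev, cur in zip(sides, sides[1:]):
        if cur == prev:
            run += 1
            longest = max(longest, run)
        else:
            run = 1
            crossings += 1
    run_limit = round(math.log2(n)) + 3          # assumed upper limit for the longest run
    cum, k = 0.0, 0                              # assumed lower limit for crossings:
    while True:                                  # smallest k with P(Bin(n-1, 0.5) <= k) >= 0.05
        cum += math.comb(n - 1, k) * 0.5 ** (n - 1)
        if cum >= 0.05:
            break
        k += 1
    return {"longest_run": longest, "run_limit": run_limit,
            "crossings": crossings, "crossings_limit": k,
            "signal": longest > run_limit or crossings < k}

print(run_chart_signals([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 2, 1, 2, 1, 2, 1]))
```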
Elands, Rachel J J; Simons, Colinda C J M; Dongen, Martien van; Schouten, Leo J; Verhage, Bas A J; van den Brandt, Piet A; Weijenberg, Matty P
2016-01-01
In animal models, long-term moderate energy restriction (ER) is reported to decelerate carcinogenesis, whereas the effect of severe ER is inconsistent. The impact of early-life ER on cancer risk has never been reviewed systematically and quantitatively based on observational studies in humans. We conducted a systematic review of observational studies and a meta-(regression) analysis of cohort studies to clarify the association between early-life ER and organ site-specific cancer risk. PubMed and EMBASE (1982-August 2015) were searched for observational studies. Summary relative risks (RRs) were estimated using a random effects model when ≥3 studies were available. Twenty-four studies were included. Eleven publications, emanating from seven prospective cohort studies and some reporting on multiple cancer endpoints, met the inclusion criteria for quantitative analysis. Women exposed to early-life ER (ranging from 220-1660 kcal/day) had a higher breast cancer risk than those not exposed (RRRE all ages = 1.28, 95% CI: 1.05-1.56; RRRE for 10-20 years of age = 1.21, 95% CI: 1.09-1.34). Men exposed to early-life ER (ranging from 220-800 kcal/day) had a higher prostate cancer risk than those not exposed (RRRE = 1.16, 95% CI: 1.03-1.30). Summary relative risks were not computed for colorectal cancer, because of heterogeneity, or for stomach, pancreas, ovarian, and respiratory cancer because <3 studies were available. Longer duration of exposure to ER, after adjustment for severity, was positively associated with overall cancer risk in women (p = 0.02). Ecological studies suggest that less severe ER is generally associated with a reduced risk of cancer. Early-life transient severe ER seems to be associated with increased cancer risk in the breast (particularly ER exposure at adolescent age) and prostate. The duration, rather than the severity, of exposure to ER seems to positively influence relative risk estimates. This result should be interpreted with caution due to the limited number of studies and the difficulty in disentangling the duration, severity, and geographical setting of exposure.
Invasion Percolation and Global Optimization
NASA Astrophysics Data System (ADS)
Barabási, Albert-László
1996-05-01
Invasion bond percolation (IBP) is mapped exactly into Prim's algorithm for finding the shortest spanning tree of a weighted random graph. Exploring this mapping, which is valid for arbitrary dimensions and lattices, we introduce a new IBP model that belongs to the same universality class as IBP and generates the minimal energy tree spanning the IBP cluster.
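To make the stated mapping concrete, here is a minimal Prim-style sketch in Python (generic textbook code, not from the paper): the growing cluster always invades the lowest-weight bond on its boundary, which is exactly the greedy rule of invasion bond percolation without trapping. The complete graph and the i.i.d. bond weights below are assumptions for illustration.

```python
import heapq
import random

def prim_invasion(adj, start=0):
    """Prim's algorithm on a weighted graph: repeatedly invade the lowest-weight
    edge on the boundary of the grown cluster."""
    visited = {start}
    frontier = [(w, start, v) for v, w in adj[start].items()]
    heapq.heapify(frontier)
    tree = []
    while frontier and len(visited) < len(adj):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        for nxt, w2 in adj[v].items():
            if nxt not in visited:
                heapq.heappush(frontier, (w2, v, nxt))
    return tree

# Complete graph on 5 nodes with i.i.d. random bond weights (toy example).
rng = random.Random(42)
nodes = list(range(5))
adj = {u: {} for u in nodes}
for u in nodes:
    for v in nodes:
        if u < v:
            w = rng.random()
            adj[u][v] = w
            adj[v][u] = w
print(prim_invasion(adj))
```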
The Effects of Observation Errors on the Attack Vulnerability of Complex Networks
2012-11-01
more detail, to construct a true network we select a topology (Erdős–Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), small world...Efficiency of Scale-Free Networks: Error and Attack Tolerance. Physica A, Volume 320, pp. 622-642. 6. Erdős, P. & Rényi, A., 1959. On Random Graphs, I
ERIC Educational Resources Information Center
Emanouilidis, Emanuel
2005-01-01
Latin squares have existed for hundreds of years but it wasn't until rather recently that Latin squares were used in other areas such as statistics, graph theory, coding theory and the generation of random numbers as well as in the design and analysis of experiments. This note describes Latin and diagonal Latin squares, a method of constructing…
ERIC Educational Resources Information Center
Emanouilidis, Emanuel
2008-01-01
Latin squares were first introduced and studied by the famous mathematician Leonhard Euler in the 1700s. Through the years, Latin squares have been used in areas such as statistics, graph theory, coding theory, the generation of random numbers as well as in the design and analysis of experiments. Recently, with the international popularity of…
[Environmental Education Units].
ERIC Educational Resources Information Center
Minneapolis Independent School District 275, Minn.
Two of these three pamphlets describe methods of teaching young elementary school children the principles of sampling. Tiles of five colors are added to a tub and children sample these randomly; using the tiles as units for a graph, they draw a representation of the population. Pooling results leads to a more reliable sample. Practice is given in…
Collective dynamics of 'small-world' networks.
Watts, D J; Strogatz, S H
1998-06-04
Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
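A quick numerical check of the small-world property described above can be run with networkx; the parameter values (1000 nodes, 10 neighbours, a sweep of rewiring probabilities) are arbitrary choices for illustration, not the values used in the paper.

```python
import networkx as nx

# Sweep the rewiring probability p from a regular ring lattice (p=0) toward a
# random graph (p=1) and record clustering and characteristic path length.
for p in (0.0, 0.01, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=p, seed=42)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"p={p:<5} clustering={C:.3f}  avg path length={L:.2f}")
```

For small but nonzero p the path length is already close to the random-graph value while the clustering stays near the lattice value, which is the small-world regime the abstract describes.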
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
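The LP relaxation studied above can be written down directly: minimize the sum of the x_i subject to x_i + x_j ≥ 1 for every edge and 0 ≤ x_i ≤ 1. The sketch below sets this up with scipy on an Erdős–Rényi graph; the graph size, average degree, and solver choice are assumptions for illustration and do not reproduce the paper's ensemble averages.

```python
import numpy as np
import networkx as nx
from scipy.optimize import linprog

n, avg_deg = 200, 2.0                         # assumed size and average degree
G = nx.gnp_random_graph(n, avg_deg / n, seed=1)
edges = list(G.edges())

# Each edge (i, j) contributes the constraint x_i + x_j >= 1, written as -x_i - x_j <= -1.
A_ub = np.zeros((len(edges), n))
for row, (i, j) in enumerate(edges):
    A_ub[row, i] = A_ub[row, j] = -1.0
res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
              bounds=[(0.0, 1.0)] * n, method="highs")

print("relaxed cover size:", round(res.fun, 2))
print("half-integer fraction:", float(np.mean(np.isclose(res.x, 0.5))))
```

Optimal LP solutions of vertex cover are half-integral (each variable is 0, 1/2, or 1), so the printed fraction gives a rough sense of how far the relaxed solution is from an integral one.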
NASA Astrophysics Data System (ADS)
Moissinac, Henri; Maitre, Henri; Bloch, Isabelle
1995-11-01
An image interpretation method is presented for the automatic processing of aerial pictures of an urban landscape. In order to improve the picture analysis, some a priori knowledge extracted from a geographic map is introduced. A coherent graph-based model of the city is built, starting with the road network. A global uncertainty management scheme has been designed in order to evaluate the confidence we can have in the final results. This model and the uncertainty management tend to reflect the hierarchy of the available data and the interpretation levels. The symbolic relationships linking the different kinds of elements are taken into account while propagating and combining the confidence measures along the interpretation process.
Bahrololoomi, Zahra; Poursina, Farkhondeh; Birang, Reza; Foroughi, Elnaz; Yousefshahi, Hazhir
2017-01-01
Introduction: Successful root canal therapy depends on the complete elimination of microorganisms such as Enterococcus faecalis, which is impossible to achieve with traditional methods. Lasers have recently been introduced as a new method to address this problem. The present study was planned and performed to examine the antibacterial effect of the Er:YAG laser. Methods: Sixty extracted anterior primary teeth were prepared and sterilized. E. faecalis was cultured in the canals. Samples were randomly divided into two groups. The first group was disinfected with 5.25% NaOCl and the Er:YAG laser, and the second group with 5.25% NaOCl alone. Samples of the canal contents were cultured and colony counts were calculated. The results were analyzed statistically with SPSS software and the Mann-Whitney test. Results: There was no significant difference between colony counts in the two groups (P=0.142), but the number of colonies in the first group was lower than in the second group. Conclusion: Although the Er:YAG laser cannot completely eliminate E. faecalis, its simultaneous use with NaOCl decreases E. faecalis. PMID:29071021
Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sczyrba, Alex; Pratap, Abhishek; Canon, Shane
2011-03-22
Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets with different sizes, from small microbial and fungal genomes to the very large cow rumen metagenome. For genomes with references we will also present assembly quality comparisons between the two assemblers.
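For readers unfamiliar with the data structure mentioned above, a toy de Bruijn graph constructor is sketched below in plain Python. It is not Convey's FPGA implementation or Velvet, just the in-memory idea: nodes are (k-1)-mers and each k-mer observed in the reads adds an edge from its prefix to its suffix. The reads and the value of k are made up.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Toy de Bruijn graph: nodes are (k-1)-mers, edges connect the prefix and
    suffix of every k-mer observed in the reads (multiplicities kept as counts)."""
    graph = defaultdict(lambda: defaultdict(int))
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]][kmer[1:]] += 1
    return graph

reads = ["ACGTACGA", "CGTACGAT", "GTACGATT"]
for prefix, successors in de_bruijn_graph(reads, k=4).items():
    print(prefix, "->", dict(successors))
```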
Modeling Passive Propagation of Malwares on the WWW
NASA Astrophysics Data System (ADS)
Chunbo, Liu; Chunfu, Jia
Web-based malwares are hosted on fixed websites and are downloaded onto users' computers automatically while the users browse. This passive propagation pattern differs from that of traditional viruses and worms. A propagation model based on the reverse web graph is proposed. In this model, the propagation of malwares is analyzed by means of a random jump matrix that combines the order and randomness of user browsing behavior. Illustrative experiments with single and multiple propagation sources demonstrate the validity of the model. Using this model, one can evaluate the risk posed by specific websites and take corresponding countermeasures.
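A minimal sketch of the random-jump idea, not the paper's exact formulation: on a small reverse web graph, mix ordered link-following with a uniform random jump and iterate to the stationary distribution, which scores pages by how much browsing traffic (and hence passive exposure) they concentrate. The example graph, the damping factor 0.85, and the tolerance are assumed values.

```python
import numpy as np

# Reverse-graph edges u -> v (v links to u on the original web graph).
edges = [(0, 2), (0, 3), (1, 0), (2, 0), (2, 1), (3, 1), (3, 2)]
n, d = 4, 0.85

P = np.zeros((n, n))
for u, v in edges:
    P[v, u] += 1.0
P /= P.sum(axis=0, keepdims=True)   # column-stochastic: P[v, u] = Pr(move u -> v)
G = d * P + (1 - d) / n             # random jump matrix: ordered browsing + uniform jumps

rank = np.full(n, 1.0 / n)
for _ in range(100):                # power iteration to the stationary distribution
    new = G @ rank
    if np.abs(new - rank).sum() < 1e-10:
        break
    rank = new
print("stationary weights:", np.round(rank, 3))
```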
Yardley, Denise A.; Ismail-Khan, Roohi R.; Melichar, Bohuslav; Lichinitser, Mikhail; Munster, Pamela N.; Klein, Pamela M.; Cruickshank, Scott; Miller, Kathy D.; Lee, Min J.; Trepel, Jane B
2013-01-01
Purpose Entinostat is an oral isoform selective histone deacetylase inhibitor that targets resistance to hormonal therapies in estrogen receptor–positive (ER+) breast cancer. This randomized, placebo-controlled, phase II study evaluated entinostat combined with the aromatase inhibitor exemestane versus exemestane alone. Patients and Methods Postmenopausal women with ER+ advanced breast cancer progressing on a nonsteroidal aromatase inhibitor were randomly assigned to exemestane 25 mg daily plus entinostat 5 mg once per week (EE) or exemestane plus placebo (EP). The primary end point was progression-free survival (PFS). Blood was collected in a subset of patients for evaluation of protein lysine acetylation as a biomarker of entinostat activity. Results One hundred thirty patients were randomly assigned (EE group, n = 64; EP group, n = 66). Based on intent-to-treat analysis, treatment with EE improved median PFS to 4.3 months versus 2.3 months with EP (hazard ratio [HR], 0.73; 95% CI, 0.50 to 1.07; one-sided P = .055; two-sided P = .11 [predefined significance level of .10, one-sided]). Median overall survival was an exploratory end point and improved to 28.1 months with EE versus 19.8 months with EP (HR, 0.59; 95% CI, 0.36 to 0.97; P = .036). Fatigue and neutropenia were the most frequent grade 3/4 toxicities. Treatment discontinuation because of adverse events was higher in the EE group versus the EP group (11% v 2%). Protein lysine hyperacetylation in the EE biomarker subset was associated with prolonged PFS. Conclusion Entinostat added to exemestane is generally well tolerated and demonstrated activity in patients with ER+ advanced breast cancer in this signal-finding phase II study. Acetylation changes may provide an opportunity to maximize clinical benefit with entinostat. Plans for a confirmatory study are underway. PMID:23650416
Using environmental heterogeneity to plan for sea-level rise.
Hunter, Elizabeth A; Nibbelink, Nathan P
2017-12-01
Environmental heterogeneity is increasingly being used to select conservation areas that will provide for future biodiversity under a variety of climate scenarios. This approach, termed conserving nature's stage (CNS), assumes environmental features respond to climate change more slowly than biological communities, but will CNS be effective if the stage were to change as rapidly as the climate? We tested the effectiveness of using CNS to select sites in salt marshes for conservation in coastal Georgia (U.S.A.), where environmental features will change rapidly as sea level rises. We calculated species diversity based on distributions of 7 bird species with a variety of niches in Georgia salt marshes. Environmental heterogeneity was assessed across six landscape gradients (e.g., elevation, salinity, and patch area). We used 2 approaches to select sites with high environmental heterogeneity: site complementarity (environmental diversity [ED]) and local environmental heterogeneity (environmental richness [ER]). Sites selected based on ER predicted present-day species diversity better than randomly selected sites (up to an 8.1% improvement), were resilient to areal loss from SLR (1.0% average areal loss by 2050 compared with 0.9% loss of randomly selected sites), and provided habitat to a threatened species (0.63 average occupancy compared with 0.6 average occupancy of randomly selected sites). Sites selected based on ED predicted species diversity no better or worse than random and were not resilient to SLR (2.9% average areal loss by 2050). Despite the discrepancy between the 2 approaches, CNS is a viable strategy for conservation site selection in salt marshes because the ER approach was successful. It has potential for application in other coastal areas where SLR will affect environmental features, but its performance may depend on the magnitude of geological changes caused by SLR. Our results indicate that conservation planners that had heretofore excluded low-lying coasts from CNS planning could include coastal ecosystems in regional conservation strategies. © 2017 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Popov, S. M.; Butov, O. V.; Chamorovski, Y. K.; Isaev, V. A.; Mégret, P.; Korobko, D. A.; Zolotovskii, I. O.; Fotiadi, A. A.
2018-06-01
We report on random lasing observed with 100-m-long fiber comprising an array of weak FBGs inscribed in the fiber core and uniformly distributed over the fiber length. Extended fluctuation-free oscilloscope traces highlight power dynamics typical for lasing. An additional piece of Er-doped fiber included into the laser cavity enables a stable laser generation with a linewidth narrower than 10 kHz.
Prediction of Nucleotide Binding Peptides Using Star Graph Topological Indices.
Liu, Yong; Munteanu, Cristian R; Fernández Blanco, Enrique; Tan, Zhiliang; Santos Del Riego, Antonino; Pazos, Alejandro
2015-11-01
The nucleotide binding proteins are involved in many important cellular processes, such as the transmission of genetic information or energy transfer and storage. Therefore, the screening of new peptides for this biological function is an important research topic. The current study proposes a mixed methodology to obtain the first classification model able to predict new nucleotide binding peptides using only the amino acid sequence. The methodology uses a Star graph molecular descriptor of the peptide sequences and Machine Learning techniques to select the best classifier. The best model is a Random Forest classifier based on two features of the embedded and non-embedded graphs. The performance of the model is excellent compared with similar models in the field, with an Area Under the Receiver Operating Characteristic Curve (AUROC) value of 0.938 and a true positive rate (TPR) of 0.886 (test subset). The prediction of new nucleotide binding peptides with this model could be useful for drug target studies in drug development. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantum Graphical Models and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leifer, M.S.; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo Ont., N2L 2Y5; Poulin, D.
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
NASA Astrophysics Data System (ADS)
Vatutin, Eduard
2017-12-01
The article analyzes the effectiveness of heuristic methods that use limited depth-first search techniques to obtain decisions in the test problem of finding the shortest path in a graph. It briefly describes the group of methods, used to solve the problem, that are based on limiting the number of branches of the combinatorial search tree and the depth of the analyzed subtree. The methodology for comparing experimental data to estimate solution quality is considered; it is based on computational experiments, run on the BOINC platform, with samples of graphs of pseudo-random structure and selected numbers of vertices and arcs. The article also describes the experimental results, which identify the areas of preferable usage of the selected subset of heuristic methods depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective for the selected problem and is significantly inferior, in solution quality, to the ant colony optimization method and its modification with combinatorial returns.
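As a rough illustration of the class of methods being compared, the sketch below implements a depth- and branch-limited depth-first search for the cheapest path in a weighted digraph. It is generic Python written for this summary; the particular limits, the pruning rule, and the example graph are assumptions, not the article's algorithms.

```python
def limited_dfs_shortest_path(graph, start, goal, max_depth, max_branches):
    """Heuristic search: explore at most `max_branches` cheapest outgoing arcs per
    node and never go deeper than `max_depth`; return the best (cost, path) found."""
    best = [float("inf"), None]

    def visit(node, cost, path):
        if cost >= best[0]:
            return                      # prune: already worse than the best known path
        if node == goal:
            best[0], best[1] = cost, path
            return
        if len(path) > max_depth:
            return
        arcs = sorted(graph.get(node, {}).items(), key=lambda kv: kv[1])[:max_branches]
        for nxt, w in arcs:
            if nxt not in path:         # avoid cycles
                visit(nxt, cost + w, path + [nxt])

    visit(start, 0, [start])
    return best[0], best[1]

# Weighted digraph as an adjacency dict (hypothetical example).
g = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 7}, "C": {"D": 2}, "D": {}}
print(limited_dfs_shortest_path(g, "A", "D", max_depth=4, max_branches=2))
```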
Statistical mechanics of the vertex-cover problem
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2003-10-01
We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching problems to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. Besides, to get a robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its sub-modular property ensures a sub-problem optimality which is easy to perform in parallel; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; and local propagation and randomization can easily generate the initial solution without using external methods. Middlebury experiments show that our method can achieve higher accuracy than other MRF-based algorithms.
Modelling Chemical Reasoning to Predict and Invent Reactions.
Segler, Marwin H S; Waller, Mark P
2017-05-02
The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph and because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Improving graph-based OCT segmentation for severe pathology in retinitis pigmentosa patients
NASA Astrophysics Data System (ADS)
Lang, Andrew; Carass, Aaron; Bittner, Ava K.; Ying, Howard S.; Prince, Jerry L.
2017-03-01
Three dimensional segmentation of macular optical coherence tomography (OCT) data of subjects with retinitis pigmentosa (RP) is a challenging problem due to the disappearance of the photoreceptor layers, which causes algorithms developed for segmentation of healthy data to perform poorly on RP patients. In this work, we present enhancements to a previously developed graph-based OCT segmentation pipeline to enable processing of RP data. The algorithm segments eight retinal layers in RP data by relaxing constraints on the thickness and smoothness of each layer learned from healthy data. Following from prior work, a random forest classifier is first trained on the RP data to estimate boundary probabilities, which are used by a graph search algorithm to find the optimal set of nine surfaces that fit the data. Due to the intensity disparity between normal layers of healthy controls and layers in various stages of degeneration in RP patients, an additional intensity normalization step is introduced. Leave-one-out validation on data acquired from nine subjects showed an average overall boundary error of 4.22 μm as compared to 6.02 μm using the original algorithm.
Stark, Jeffrey G; Engelking, Dorothy; McMahen, Russ; Sikes, Carolyn
2016-09-01
In this pharmacokinetic (PK) study in healthy adults, we sought to: (1) compare the PK properties of a novel amphetamine extended-release orally disintegrating tablet formulation (Adzenys XR-ODT™ [AMP XR-ODT]) to a reference extended-release mixed amphetamine salts (MAS ER) formulation and (2) assess the effect of food on AMP XR-ODT. Forty-two adults were enrolled in a single-dose, open-label, 3-period, 3-treatment, randomized crossover study and received an 18.8-mg dose of AMP XR-ODT (fasted or fed) or an equivalent dose (30 mg) of MAS ER (fasted). Plasma samples were analyzed for d- and l-amphetamine. Maximum plasma concentration (Cmax), time to maximum plasma concentration (Tmax), elimination half-life (T1/2), area under the concentration-time curve from time zero to the last quantifiable concentration (AUClast), from time zero to infinity (AUCinf), relevant partial AUCs, and weight-normalized clearance (CL/F/kg) were assessed. The PK parameters were compared across treatments using an ANOVA. Safety was also assessed. A total of 39 adults completed the study. The geometric mean ratios (90% confidence interval [CI]) for AMP XR-ODT/MAS ER Cmax, AUC5-last, AUClast, and AUCinf were within 80%-125% for both d- and l-amphetamine. The 90% CIs for AUC0-5 were slightly below the 80%-125% range. When AMP XR-ODT was administered with food, there was a slight decrease in the d- and l-amphetamine Cmax and approximately a 2-hour delay in Tmax. The most common adverse events reported (>5% of participants) were dry mouth, palpitations, nausea, dizziness, headache, anxiety, and nasal congestion. AMP XR-ODT displayed a PK profile similar to MAS ER, and no clinically relevant food effect was observed.
Studies of the DIII-D disruption database using Machine Learning algorithms
NASA Astrophysics Data System (ADS)
Rea, Cristina; Granetz, Robert; Meneghini, Orso
2017-10-01
A Random Forests Machine Learning algorithm, trained on a large database of both disruptive and non-disruptive DIII-D discharges, predicts disruptive behavior in DIII-D with about 90% accuracy. Several algorithms have been tested and Random Forests was found to be superior in performance for this particular task. Over 40 plasma parameters are included in the database, with data for each of the parameters taken from 500k time slices. We focused on a subset of non-dimensional plasma parameters deemed to be good predictors based on physics considerations. Both binary (disruptive/non-disruptive) and multi-label (label based on the elapsed time before disruption) classification problems are investigated. The Random Forests algorithm provides insight into the available dataset by ranking the relative importance of the input features. It is found that q95 and the Greenwald density fraction (n/nG) are the most relevant parameters for discriminating between DIII-D disruptive and non-disruptive discharges. A comparison with the Gradient Boosted Trees algorithm is shown and the first results from the application of regression algorithms are presented. Work supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0014264 and DE-FG02-95ER54309.
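The classification workflow summarized above maps directly onto standard tooling. The sketch below uses scikit-learn on synthetic stand-in data; the feature layout, labels, and hyperparameters are placeholders, since the actual DIII-D database and its ~500k time slices are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for plasma parameters; in the real study the columns would be
# quantities such as q95 and the Greenwald density fraction.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))
y = ((X[:, 0] < -0.5) | (X[:, 1] > 1.0)).astype(int)   # toy "disruptive" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("feature importances:", np.round(clf.feature_importances_, 3))
```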
Park, Sung Kyun; Zhao, Zhangchen; Mukherjee, Bhramar
2017-09-26
There is growing concern about the health effects of exposure to pollutant mixtures. We initially proposed an Environmental Risk Score (ERS) as a summary measure to examine the risk of exposure to multiple pollutants in epidemiologic research, considering only pollutant main effects. We expand the ERS by considering pollutant-pollutant interactions using modern machine learning methods. We illustrate the multi-pollutant approaches by predicting a marker of oxidative stress (gamma-glutamyl transferase (GGT)), a common disease pathway linking environmental exposure and numerous health endpoints. We examined 20 metal biomarkers measured in urine or whole blood from 6 cycles of the National Health and Nutrition Examination Survey (NHANES 2003-2004 to 2013-2014, n = 9664). We randomly split the data evenly into training and testing sets and constructed ERSs of metal mixtures for GGT using adaptive elastic-net with main effects and pairwise interactions (AENET-I), Bayesian additive regression tree (BART), Bayesian kernel machine regression (BKMR), and Super Learner in the training set, and we evaluated their performance in the testing set. We also evaluated the associations between GGT-ERS and cardiovascular endpoints. The ERS based on AENET-I performed better than the other approaches in terms of prediction errors in the testing set. Important metals identified in relation to GGT include cadmium (urine), dimethylarsonic acid, monomethylarsonic acid, cobalt, and barium. All ERSs showed significant associations with systolic and diastolic blood pressure and hypertension. For hypertension, a one-SD increase in each ERS from AENET-I, BART, and Super Learner was associated with odds ratios of 1.26 (95% CI, 1.15, 1.38), 1.17 (1.09, 1.25), and 1.30 (1.20, 1.40), respectively. The ERSs showed non-significant positive associations with mortality outcomes. ERS is a useful tool for characterizing cumulative risk from pollutant mixtures while accounting for statistical challenges such as high degrees of correlation and pollutant-pollutant interactions. An ERS constructed for an intermediate marker like GGT is predictive of related disease endpoints.
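A rough sketch of how an ERS with pairwise interactions can be assembled: expand the exposures into main effects plus two-way interaction terms, fit a penalized regression, and use the fitted linear predictor as the score. Note the substitution: plain ElasticNetCV stands in for the paper's adaptive elastic net (AENET-I), and the simulated metal exposures and GGT outcome are placeholders.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
metals = rng.lognormal(size=(2000, 6))                  # stand-ins for urinary/blood metals
log_ggt = (0.3 * np.log(metals[:, 0])                   # simulated outcome with one main effect
           + 0.2 * np.log(metals[:, 1]) * np.log(metals[:, 2])  # and one pairwise interaction
           + rng.normal(scale=0.5, size=2000))

model = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5, random_state=1),
)
model.fit(np.log(metals), log_ggt)
ers = model.predict(np.log(metals))                     # the fitted linear predictor is the ERS
print("ERS mean / SD:", round(float(ers.mean()), 3), round(float(ers.std()), 3))
```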
Parhami, Parisa; Pourhashemi, Seyed Jalal; Ghandehari, Mehdi; Mighani, Ghasem; Chiniforush, Nasim
2014-01-01
Introduction: The aim of this study was to evaluate and compare the in vitro effect of the Erbium-Doped Yttrium Aluminum Garnet (Er:YAG) laser at different radiation distances and of high-speed rotary treatment on the shear bond strength of flowable composite to the enamel of human permanent posterior teeth. Methods: Freshly extracted human molar teeth with no caries or other surface defects were used in this study (n=45). The teeth were randomly divided into 3 groups. Group 1: treated with non-contact Er:YAG laser and etched with Er:YAG laser; Group 2: treated with contact Er:YAG laser and etched with Er:YAG laser; Group 3 (control): treated with a diamond fissure bur and etched with 37% phosphoric acid. The adhesive was then applied to the surfaces of the teeth and polymerized using a curing light appliance. Resin cylinders were fabricated from flowable composite. Shear bond strength was tested at a crosshead speed of 0.5 mm/min. Results: The shear bond strength (SBS) in the 3 treatment groups was not the same (P<0.05). The group in which enamel surfaces were treated with a diamond fissure bur and etched with acid (control group) had the highest mean shear bond strength (19.92±4.76), and the group in which the enamel surfaces were treated with contact Er:YAG laser and etched with Er:YAG laser had the lowest mean shear bond strength (10.89±2.89). The Mann-Whitney test with adjusted P-value detected significant differences in shear bond strength between the control group and the other 2 groups (P < 0.05). Conclusion: Both contact and non-contact Er:YAG laser treatment reduced the shear bond strength of flowable resin composite to enamel in comparison with conventional treatment with a high-speed rotary instrument. Different Er:YAG laser irradiation distances did not influence the shear bond strength of flowable composite to enamel. PMID:25653813
Mirhashemi, Amir Hossein; Chiniforush, Nasim; Sharifi, Nastaran; Hosseini, Amir Mehdi
2018-05-01
Several techniques have been proposed to obtain a durable bond, and the efficacy of these techniques is assessed by measuring parameters such as bond strength. Laser has provided a bond strength as high as that of acid etching in vitro and has simpler use with shorter clinical time compared to acid etching. This study aimed to compare the efficacy of Er:YAG and Er,Cr:YSGG lasers for etching and bonding of composite to orthodontic brackets. No previous study has evaluated the effect of these particular types of laser. A total of 70 composite blocks were randomly divided into five groups (n = 14): group 1, etching with phosphoric acid for 20 s; group 2, Er:YAG laser irradiation with 2 W power for 10 s; group 3, Er:YAG laser with 3 W power for 10 s; group 4, Er,Cr:YSGG laser with 2 W power for 10 s; group 5, Er,Cr:YSGG laser with 3 W power for 10 s. Metal brackets were then bonded to composites, and after 5000 thermal cycles, they were subjected to shear bond strength test in a universal testing machine after 24 h of water storage. One sample of each group was evaluated under a scanning electron microscope (SEM) to assess changes in composite surface after etching. The adhesive remnant index (ARI) was calculated under a stereomicroscope. Data were statistically analyzed. The mean and standard deviation of shear bond strength were 18.65 ± 3.36, 19.68 ± 5.34, 21.31 ± 4.03, 17.38 ± 6.94, and 16.45 ± 4.26 MPa in groups 1-5, respectively. The ARI scores showed that the bond failure mode in all groups was mainly mixed. The groups were not significantly different in terms of shear bond strength. Er:YAG and Er,Cr:YSGG lasers with the mentioned parameters yield optimal shear bond strength and can be used as an alternative to acid etching for bracket bond to composite.
NASA Astrophysics Data System (ADS)
Simkin, M. V.; Roychowdhury, V. P.
2011-05-01
Scientists often re-invent things that were long known. Here we review these activities as related to the mechanism of producing power law distributions, originally proposed in 1922 by Yule to explain experimental data on the sizes of biological genera, collected by Willis. We also review the history of re-invention of closely related branching processes, random graphs and coagulation models.
A Quantitative Methodology for Vetting Dark Network Intelligence Sources for Social Network Analysis
2012-06-01
first algorithm by Erdős and Rényi (Erdős & Rényi, 1959). This earliest algorithm suffers from the fact that its degree distribution is not scale...Fundamental Media Understanding. Norderstedt: atpress. Erdős, P., & Rényi, A. (1959). On random graphs. Publicationes Mathematicae, 6, 290-297. Erdős, P
ERIC Educational Resources Information Center
Koenig, Elizabeth A.; Eckert, Tanya L.; Hier, Bridget O.
2016-01-01
Although performance feedback interventions successfully lead to improvements in students' performance, research suggests that the combination of feedback and goal setting leads to greater performance than either component alone and that graphing performance in relation to a goal can lead to improvements in academic performance. The goal of the…
Optimum target sizes for a sequential sawing process
H. Dean Claxton
1972-01-01
A method for solving a class of problems in random sequential processes is presented. Sawing cedar pencil blocks is used to illustrate the method. Equations are developed for the function representing loss from improper sizing of blocks. A weighted over-all distribution for sawing and drying operations is developed and graphed. Loss minimizing changes in the control...
NASA Astrophysics Data System (ADS)
Lacasa, Lucas
2014-09-01
Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
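Of the family of visibility algorithms discussed above, the horizontal variant is the simplest to write down, and it is the case where the result for uncorrelated random processes is known in closed form. The brute-force sketch below is illustrative Python, not the author's code; the series length and the use of uniform noise are arbitrary choices.

```python
import random

def horizontal_visibility_degrees(series):
    """Horizontal visibility algorithm: points i and j are linked when every value
    strictly between them is lower than both endpoints. Returns each node's degree.
    Brute-force O(n^2); adequate for a short illustrative series."""
    n = len(series)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                degree[i] += 1
                degree[j] += 1
    return degree

random.seed(0)
noise = [random.random() for _ in range(1000)]   # uncorrelated (i.i.d.) process
deg = horizontal_visibility_degrees(noise)
print("mean degree:", sum(deg) / len(deg))       # approaches 4 for long i.i.d. series
```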
Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R
2012-01-01
In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory to analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched Controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors, an observation that confirms our hypothesis suggesting widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
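The resilience analysis mentioned above (random failure versus targeted attack) can be mimicked with networkx on any graph. This is an illustrative companion sketch, not part of GAT; the synthetic Barabási–Albert graph, the removal fractions, and the use of global efficiency as the summary metric are assumptions.

```python
import random
import networkx as nx

def efficiency_under_removal(G, order, steps=(0.0, 0.1, 0.2, 0.3)):
    """Remove the first `frac` fraction of nodes in the given order and report
    the global efficiency of the remaining graph at each step."""
    out = []
    for frac in steps:
        H = G.copy()
        H.remove_nodes_from(order[: int(frac * G.number_of_nodes())])
        out.append(round(nx.global_efficiency(H), 3))
    return out

G = nx.barabasi_albert_graph(300, 3, seed=0)
random_order = list(G.nodes())
random.Random(0).shuffle(random_order)                      # random failure
targeted_order = sorted(G.nodes(), key=G.degree, reverse=True)  # attack hubs first
print("random failure :", efficiency_under_removal(G, random_order))
print("targeted attack:", efficiency_under_removal(G, targeted_order))
```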
Friston, Karl J.; Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E.
2011-01-01
This paper is about inferring or discovering the functional architecture of distributed systems using Dynamic Causal Modelling (DCM). We describe a scheme that recovers the (dynamic) Bayesian dependency graph (connections in a network) using observed network activity. This network discovery uses Bayesian model selection to identify the sparsity structure (absence of edges or connections) in a graph that best explains observed time-series. The implicit adjacency matrix specifies the form of the network (e.g., cyclic or acyclic) and its graph-theoretical attributes (e.g., degree distribution). The scheme is illustrated using functional magnetic resonance imaging (fMRI) time series to discover functional brain networks. Crucially, it can be applied to experimentally evoked responses (activation studies) or endogenous activity in task-free (resting state) fMRI studies. Unlike conventional approaches to network discovery, DCM permits the analysis of directed and cyclic graphs. Furthermore, it eschews (implausible) Markovian assumptions about the serial independence of random fluctuations. The scheme furnishes a network description of distributed activity in the brain that is optimal in the sense of having the greatest conditional probability, relative to other networks. The networks are characterised in terms of their connectivity or adjacency matrices and conditional distributions over the directed (and reciprocal) effective connectivity between connected nodes or regions. We envisage that this approach will provide a useful complement to current analyses of functional connectivity for both activation and resting-state studies. PMID:21182971