Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
Protein and gene model inference based on statistical modeling in k-partite graphs.
Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter
2010-07-06
One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.
Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie
2007-11-01
Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by spectrophotometry combined with least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0 - 20.0, 1.0 - 10.0 and 1.0 - 20.0 microg ml(-1), with detection limits of 0.6, 0.5 and 0.7 microg ml(-1) for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the linear ranges of the calibration graphs. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and LS-SVM were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal were 0.6926, 0.3755 and 0.4322 with PLS, and 0.0421, 0.0318 and 0.0457 with LS-SVM, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach that represents a reaction as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy is suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
Overlap corrections for emissivity calculations of H2O-CO2-CO-N2 mixtures
NASA Astrophysics Data System (ADS)
Alberti, Michael; Weber, Roman; Mancini, Marco
2018-01-01
Calculations of total gas emissivities of gas mixtures containing several radiatively active species require corrections for band overlapping. In this paper, we generate such overlap correction charts for H2O-CO2-N2, H2O-CO-N2, and CO2-CO-N2 mixtures. These charts are applicable in the 0.1-40 bar total pressure range and in the 500 K-2500 K temperature range. For H2O-CO2-N2 mixtures, differences between our charts and Hottel's graphs as well as models of Leckner and Modak are highlighted and analyzed.
Lee, Hansang; Hong, Helen; Kim, Junmo
2014-12-01
We propose a graph-cut-based segmentation method for the anterior cruciate ligament (ACL) in knee MRI with a novel shape prior and label refinement. As the initial seeds for graph cuts, candidates for the ACL and the background are extracted roughly from knee MRI by means of adaptive thresholding with Gaussian mixture model fitting. The extracted ACL candidate is segmented iteratively by graph cuts with patient-specific shape constraints. Two shape constraints, termed fence and neighbor costs, are suggested so that the graph cuts prevent leakage into adjacent regions of similar intensity. The segmented ACL label is refined by means of superpixel classification, which propagates the segmented label into missing inhomogeneous regions inside the ACL. In the experiments, the proposed method segmented the ACL with a Dice similarity coefficient of 66.47±7.97%, an average surface distance of 2.247±0.869, and a root mean squared error of 3.538±1.633, improving on the Boykov model by 14.8%, 40.3%, and 37.6%, respectively.
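The seed-extraction step described above, adaptive thresholding via Gaussian mixture model fitting, can be sketched on 1-D intensity data. This is an illustrative toy EM fit on synthetic intensities (all constants are made up), not the authors' implementation:

```python
import math
import random

def fit_gmm_1d(xs, iters=50):
    """Two-component 1-D Gaussian mixture fitted by plain EM."""
    mu = [min(xs), max(xs)]                      # init means at the extremes
    var = [(max(xs) - min(xs)) ** 2 / 4.0] * 2   # broad initial variances
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each intensity
        resp = []
        for x in xs:
            ps = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                  / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(ps) or 1e-300
            resp.append([p / s for p in ps])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return w, mu, var

# synthetic intensities: a dark background class and a brighter ACL-like class
rng = random.Random(0)
intensities = ([rng.gauss(0.2, 0.05) for _ in range(300)]
               + [rng.gauss(0.7, 0.05) for _ in range(100)])
w, mu, var = fit_gmm_1d(intensities)
threshold = sum(mu) / 2   # adaptive seed threshold between the two fitted means
```

Voxels above the threshold would serve as foreground seeds for the subsequent graph cut; the real method operates on 3-D MRI intensities rather than a 1-D toy sample.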
Empirical Reference Distributions for Networks of Different Size
Smith, Anna; Calder, Catherine A.; Browning, Christopher R.
2016-01-01
Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
Decision net, directed graph, and neural net processing of imaging spectrometer data
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki; Barnard, Etienne
1989-01-01
A decision-net solution involving a novel hierarchical classifier and a set of multiple directed graphs, as well as a neural-net solution, are respectively presented for large-class problem and mixture problem treatments of imaging spectrometer data. The clustering method for hierarchical classifier design, when used with multiple directed graphs, yields an efficient decision net. New directed-graph rules for reducing local maxima as well as the number of perturbations required, and the new starting-node rules for extending the reachability and reducing the search time of the graphs, are noted to yield superior results, as indicated by an illustrative 500-class imaging spectrometer problem.
A manifold learning approach to target detection in high-resolution hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well documented. With manifold-learning-based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph-theory-based approach to anomaly detection. This led to a focus on target detection, and to the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE).
We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
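The LLE step this work builds on solves, for each point, a small constrained least-squares problem for its reconstruction weights. A minimal sketch of that per-point weight computation, with hypothetical data and a standard regularization term (not the author's adaptive variant), is:

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights for one point in locally linear embedding:
    minimize ||x - sum_i w_i * neighbors[i]||^2 subject to sum(w) = 1."""
    Z = neighbors - x                 # (k, d) neighbor offsets
    C = Z @ Z.T                       # (k, k) local Gram matrix
    C = C + reg * np.trace(C) * np.eye(len(neighbors))   # regularize when k > d
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()                # enforce the sum-to-one constraint

rng = np.random.default_rng(0)
x = rng.normal(size=5)                       # hypothetical pixel spectrum, d = 5
nbrs = x + 0.1 * rng.normal(size=(4, 5))     # its k = 4 nearest neighbors
w = lle_weights(x, nbrs)
recon = w @ nbrs                             # locally linear reconstruction of x
```

In full LLE these weights, computed for every pixel, define the sparse matrix whose bottom eigenvectors give the low-dimensional manifold coordinates.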
The physical model for research of behavior of grouting mixtures
NASA Astrophysics Data System (ADS)
Hajovsky, Radovan; Pies, Martin; Lossmann, Jaroslav
2016-06-01
The paper describes a physical model designed to verify the behavior of grouting mixtures applied below the underground water level. The physical model was set up to determine the propagation of a grouting mixture in a given environment. The extension of grouting in this environment is determined by measuring humidity and temperature with combined sensors located in preinstalled special measurement probes around the grouting needle. Humidity was measured by a combined capacitive sensor DTH-1010, and temperature was gathered by an NTC thermistor. The humidity sensors recorded the time at which the grouting mixture reached each sensor location, while the NTC thermistors recorded temperature changes over time from the start of injection. This made it possible to develop a 3D map showing the distribution of the grouting mixture through the environment. The measurement was carried out by a purpose-designed primary measurement module capable of connecting 4 humidity and temperature sensors. This module also converts the physical signals into unified analogue signals, which are brought to the analogue input terminals of a WinPAC-8441 programmable automation controller (PAC). This controller handles the measurement itself, as well as archiving and visualization of all data. A detailed description of the complete measurement system and its evaluation, in the form of 3D animations and graphs, is given in the full paper.
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and its parameters are learned by iterating the Expectation Maximization algorithm. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, showing that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
Inference of population splits and mixtures from genome-wide allele frequency data.
Pickrell, Joseph K; Pritchard, Jonathan K
2012-01-01
Many aspects of the historical relationships between populations in a species are reflected in genetic data. Inferring these relationships from genetic data, however, remains a challenging task. In this paper, we present a statistical model for inferring the patterns of population splits and mixtures in multiple populations. In our model, the sampled populations in a species are related to their common ancestor through a graph of ancestral populations. Using genome-wide allele frequency data and a Gaussian approximation to genetic drift, we infer the structure of this graph. We applied this method to a set of 55 human populations and a set of 82 dog breeds and wild canids. In both species, we show that a simple bifurcating tree does not fully describe the data; in contrast, we infer many migration events. While some of the migration events that we find have been detected previously, many have not. For example, in the human data, we infer that Cambodians trace approximately 16% of their ancestry to a population ancestral to other extant East Asian populations. In the dog data, we infer that both the boxer and basenji trace a considerable fraction of their ancestry (9% and 25%, respectively) to wolves subsequent to domestication and that East Asian toy breeds (the Shih Tzu and the Pekingese) result from admixture between modern toy breeds and "ancient" Asian breeds. Software implementing the model described here, called TreeMix, is available at http://treemix.googlecode.com.
Navigability of Random Geometric Graphs in the Universe and Other Spacetimes.
Cunningham, William; Zuev, Konstantin; Krioukov, Dmitri
2017-08-18
Random geometric graphs in hyperbolic spaces explain many common structural and dynamical properties of real networks, yet they fail to predict the correct values of the exponents of power-law degree distributions observed in real networks. In that respect, random geometric graphs in asymptotically de Sitter spacetimes, such as the Lorentzian spacetime of our accelerating universe, are more attractive as their predictions are more consistent with observations in real networks. Yet another important property of hyperbolic graphs is their navigability, and it remains unclear if de Sitter graphs are as navigable as hyperbolic ones. Here we study the navigability of random geometric graphs in three Lorentzian manifolds corresponding to universes filled only with dark energy (de Sitter spacetime), only with matter, and with a mixture of dark energy and matter. We find these graphs are navigable only in the manifolds with dark energy. This result implies that, in terms of navigability, random geometric graphs in asymptotically de Sitter spacetimes are as good as random hyperbolic graphs. It also establishes a connection between the presence of dark energy and navigability of the discretized causal structure of spacetime, which provides a basis for a different approach to the dark energy problem in cosmology.
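Navigability here means the success of greedy geodesic routing: each hop forwards the message to the neighbor closest to the destination, failing at a local minimum. A toy version on a 2-D Euclidean random geometric graph illustrates the mechanism (the parameters are illustrative, and the paper works in Lorentzian manifolds rather than the flat unit square used below):

```python
import math
import random

rng = random.Random(42)
n, radius = 200, 0.15   # illustrative density; mean degree is roughly n * pi * r^2

pos = [(rng.random(), rng.random()) for _ in range(n)]

def d(i, j):
    return math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])

# connect every pair of nodes within the connection radius
adj = {i: [j for j in range(n) if j != i and d(i, j) <= radius] for i in range(n)}

def greedy_route(src, dst, max_hops=200):
    """Forward to the neighbor closest to the destination; fail on a dead end."""
    cur = src
    for _ in range(max_hops):
        if cur == dst:
            return True
        nxt = min(adj[cur], key=lambda j: d(j, dst), default=None)
        if nxt is None or d(nxt, dst) >= d(cur, dst):
            return False              # local minimum: no neighbor is closer
        cur = nxt
    return False

pairs = [(rng.randrange(n), rng.randrange(n)) for _ in range(50)]
success_rate = sum(greedy_route(s, t) for s, t in pairs) / len(pairs)
```

The fraction of source-destination pairs that greedy routing connects is the navigability measure; the paper's finding is that this fraction stays high only in manifolds containing dark energy.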
ERIC Educational Resources Information Center
Smith, Robert L.; Popham, Ronald E.
1983-01-01
Presents an experiment in thermometric titration used in an analytic chemistry-chemical instrumentation course, consisting of two titrations, one a mixture of calcium and magnesium, the other of calcium, magnesium, and barium ions. Provides equipment and solutions list/specifications, graphs, and discussion of results. (JM)
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
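The core mechanism, calibrated Laplace noise added to graph-model parameters before regeneration, can be sketched on the simplest such parameter set, a degree histogram (a 1K-style summary rather than the paper's 2K degree correlations). The sensitivity constant below is an illustrative global-sensitivity bound, not the paper's smooth-sensitivity calibration:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_degree_histogram(degrees, epsilon, rng):
    hist = {}
    for deg in degrees:
        hist[deg] = hist.get(deg, 0) + 1
    # adding/removing one edge changes two node degrees, touching at most
    # 4 histogram cells, so 4 is a (loose) global sensitivity bound
    scale = 4.0 / epsilon
    return {deg: max(0.0, c + laplace_noise(scale, rng)) for deg, c in hist.items()}

rng = random.Random(7)
degrees = [rng.randint(1, 10) for _ in range(500)]   # hypothetical degree sequence
noisy = private_degree_histogram(degrees, epsilon=1.0, rng=rng)
```

A graph generator would then be driven by the noisy histogram instead of the true one, so the released graphs satisfy edge differential privacy with respect to the original network.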
The Hydroxyl Radical Reaction Rate Constant and Products of Cyclohexanol
2007-10-01
Analysis: Samples from kinetic studies were quantitatively monitored using a Hewlett-Packard (HP) 5890 gas chromatograph (GC) with a flame ionization ... excluded from the reaction mixture and the COL concentration was approximately doubled (4.9-9 ppm). Product Study Analysis: Reactant mixtures and standards ... from product identification experiments were sampled by exposing a 100% polydimethylsiloxane solid phase microextraction (SPME) fiber in the
Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng
2015-12-01
Graph mining has been a popular research area because of its numerous application scenarios. Many kinds of unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that existing methods cannot adequately discover the topics hidden in graph-structured data, which could benefit both unsupervised and supervised learning on graphs. Although topic models have proved very successful in discovering latent topics, standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue; it uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms latent Dirichlet allocation on classification when the unveiled topics of the two models are used to represent graphs.
Rheology behaviour of modified silicone-dammar as a natural resin coating
NASA Astrophysics Data System (ADS)
Zakaria, Rosnah; Ahmad, Azizah Hanom
2015-08-01
Modified silicone-dammar (SD) was prepared with various weight percentages, from 5 to 45 wt%, of added dammar. The n-values (viscosity indices) of silicone with 5 and 10 wt% dammar were 1.6 and 1.3, while 15, 20, 25 and 30 wt% of added dammar gave 0.7, 0.3, 0.2 and 0.1, respectively. On the other hand, 35, 40 and 45 wt% of dammar gave a fixed viscosity index of 0.03. This n-value characterizes the dispersion quality of the paint mixture and indicates that the modified silicone-dammar followed the Bingham model. The rheological behaviour of the SD mixture was analysed by plotting ln shear stress against shear rate. Analysis of the graph showed a Bingham plastic model with regression R2 equal to 0.99. The linear viscoelastic response of the SD samples increased in parallel with increasing dammar content, indicating that the suspension of dammar in silicone resin could flow steadily with time, giving pseudoplastic behaviour.
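One common way such an n-value (flow-behaviour index) is extracted from rheometry data is linear regression on log-transformed shear data. The sketch below fits the power-law (Ostwald-de Waele) index to synthetic shear-thinning data; the constants are hypothetical, and a true Bingham fit would instead include a yield-stress intercept:

```python
import math
import random

def fit_power_law(shear_rates, shear_stresses):
    """Least-squares fit of ln(stress) = ln(k) + n * ln(rate); returns (k, n)."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(s) for s in shear_stresses]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - n * mx)
    return k, n

# synthetic shear-thinning measurements (k and n are made-up constants)
rng = random.Random(0)
rates = [0.1 * 2 ** i for i in range(10)]
true_k, true_n = 2.0, 0.3
stresses = [true_k * g ** true_n * (1 + 0.01 * rng.uniform(-1, 1)) for g in rates]
k_fit, n_fit = fit_power_law(rates, stresses)
```

An n-value below 1, as in the higher-dammar samples above, corresponds to shear-thinning (pseudoplastic) flow; n = 1 would be Newtonian.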
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-03-01
Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good disease map. Libya was selected for this work, to examine its geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models for estimating the relative risk of lung cancer were applied to population censuses of the study area for the period 2006 to 2011: the Standardized Morbidity Ratio (SMR), the most popular statistic in disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollié (BYM) model; and the Mixture model. As an initial step, this study provides a review of all the proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) measures, which are standard in statistical modelling for comparing fitted models, were used to compare and present the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. The results show that the Mixture model is the most robust and provides better relative risk estimates across a range of models.
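The SMR mentioned above is simply observed over expected cases per district, which breaks down when a district records zero cases; Bayesian alternatives such as the Poisson-gamma model shrink those unstable estimates toward a prior mean. A minimal sketch with hypothetical district counts (the prior parameters a, b are illustrative defaults, not fitted values):

```python
def smr(observed, expected):
    """Standardized Morbidity Ratio; undefined (None) when nothing is expected."""
    return None if expected == 0 else observed / expected

def poisson_gamma(observed, expected, a=1.0, b=1.0):
    """Posterior-mean relative risk under a Gamma(a, b) prior: shrinks the
    degenerate estimates that SMR gives for districts with zero counts."""
    return (observed + a) / (expected + b)

# hypothetical (observed, expected) lung cancer counts per district
districts = {"A": (12, 10.0), "B": (0, 4.0), "C": (3, 0.0)}
smr_risk = {d: smr(o, e) for d, (o, e) in districts.items()}
pg_risk = {d: poisson_gamma(o, e) for d, (o, e) in districts.items()}
```

District B illustrates the problem the abstract describes: its SMR is exactly 0, while the Poisson-gamma posterior mean gives a small positive relative risk.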
Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs
2014-06-01
comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and studying the graphs using standard graph-theoretical measurements such as vertex and edge count and average vertex degree.
Information visualisation based on graph models
NASA Astrophysics Data System (ADS)
Kasyanov, V. N.; Kasyanova, E. V.
2013-05-01
Information visualisation is a key component of support tools for many applications in science and engineering. A graph is an abstract structure that is widely used to model information for its visualisation. In this paper, we consider a practical and general graph formalism called hierarchical graphs and present the Higres and Visual Graph systems, aimed at supporting information visualisation on the basis of hierarchical graph models.
Exact solution for the time evolution of network rewiring models
NASA Astrophysics Data System (ADS)
Evans, T. S.; Plato, A. D. K.
2007-05-01
We consider the rewiring of a bipartite graph using a mixture of random and preferential attachment. The full mean-field equations for the degree distribution and its generating function are given. The exact solution of these equations for all finite parameter values at any time is found in terms of standard functions. It is demonstrated that these solutions are an excellent fit to numerical simulations of the model. We discuss the relationship between our model and several others in the literature, including examples of urn, backgammon, and balls-in-boxes models, the Watts and Strogatz rewiring problem, and some models of zero range processes. Our model is also equivalent to those used in various applications including cultural transmission, family name and gene frequencies, glasses, and wealth distributions. Finally some Voter models and an example of a minority game also show features described by our model.
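The rewiring move described in this abstract can be sketched in a few lines. The following is a minimal, illustrative simulation, not the paper's mean-field analysis: artifact degrees stand in for one side of the bipartite graph, and each step moves one edge end, reattaching it uniformly at random with probability `p_random` and preferentially by current degree otherwise (all names and parameters are assumptions for illustration).

```python
import random

def rewire(degrees, steps, p_random, rng):
    """Toy rewiring of a bipartite individual-artifact graph.

    degrees[a] = number of individuals currently attached to artifact a.
    Each step detaches one edge end and reattaches it either uniformly
    at random (prob p_random) or preferentially by current degree.
    """
    for _ in range(steps):
        # Picking an edge uniformly == picking its artifact prop. to degree.
        src = rng.choices(range(len(degrees)), weights=degrees)[0]
        degrees[src] -= 1
        if rng.random() < p_random or sum(degrees) == 0:
            dst = rng.randrange(len(degrees))                    # random attachment
        else:
            dst = rng.choices(range(len(degrees)), weights=degrees)[0]  # preferential
        degrees[dst] += 1
    return degrees
```

The total number of edges is conserved, so long runs of this move explore the stationary degree distribution that the paper solves for exactly.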
Critical Behavior of the Annealed Ising Model on Random Regular Graphs
NASA Astrophysics Data System (ADS)
Can, Van Hao
2017-11-01
In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors have defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs, including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, where n is the number of vertices of the random regular graph.
An internet graph model based on trade-off optimization
NASA Astrophysics Data System (ADS)
Alvarez-Hamelin, J. I.; Schabanel, N.
2004-03-01
This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavy-tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
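The trade-off idea underlying this family of models can be made concrete with a short sketch of the original FKP tree growth (the paper above generalizes this to general graphs; the function below is only the tree version, with illustrative names): each new node arrives at a random point and attaches to the existing node minimizing a weighted sum of geometric distance and hop distance to the root.

```python
import random, math

def fkp_tree(n, alpha, rng):
    """Grow a trade-off tree: node i sits at a random point in the unit
    square and attaches to the existing node j minimizing
    alpha * dist(i, j) + hops(j), where hops(j) is j's hop count to the root."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    parent = [None]          # node 0 is the root
    hops = [0]
    for i in range(1, n):
        best = min(range(i),
                   key=lambda j: alpha * math.dist(pts[i], pts[j]) + hops[j])
        parent.append(best)
        hops.append(hops[best] + 1)
    return parent
```

Large `alpha` favors geometrically close parents (yielding grid-like trees); small `alpha` favors centrally located parents (yielding star-like, heavy-tailed trees).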
NASA Astrophysics Data System (ADS)
Christian, Wolfgang; Belloni, Mario
2013-04-01
We have recently developed a Graphs and Tracks model based on an earlier program by David Trowbridge, as shown in Fig. 1. Our model can show position, velocity, acceleration, and energy graphs and can be used for motion-to-graphs exercises. Users set the heights of the track segments, and the model displays the motion of the ball on the track together with position, velocity, and acceleration graphs. This ready-to-run model is available in the ComPADRE OSP Collection at www.compadre.org/osp/items/detail.cfm?ID=12023.
Hegarty, Peter; Lemieux, Anthony F; McQueen, Grant
2010-03-01
Graphs seem to connote facts more than words or tables do. Consequently, they seem unlikely places to spot implicit sexism at work. Yet, in 6 studies (N = 741), women and men constructed (Study 1) and recalled (Study 2) gender difference graphs with men's data first, and graphed powerful groups (Study 3) and individuals (Study 4) ahead of weaker ones. Participants who interpreted graph order as evidence of author "bias" inferred that the author graphed his or her own gender group first (Study 5). Women's, but not men's, preferences to graph men first were mitigated when participants graphed a difference between themselves and an opposite-sex friend prior to graphing gender differences (Study 6). Graph production and comprehension are affected by beliefs and suppositions about the groups represented in graphs to a greater degree than cognitive models of graph comprehension or realist models of scientific thinking have yet acknowledged.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
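Two of the three sampling categories named above are easy to sketch. The following is an illustrative implementation (not from the cited study): node sampling keeps a random subset of nodes plus the induced edges, while edge sampling keeps a random subset of edges plus their endpoints.

```python
import random

def node_sample(nodes, edges, rate, rng):
    """Node sampling: keep a random fraction of nodes plus induced edges."""
    kept = set(rng.sample(list(nodes), int(len(nodes) * rate)))
    return kept, [(u, v) for (u, v) in edges if u in kept and v in kept]

def edge_sample(edges, rate, rng):
    """Edge sampling: keep a random fraction of edges plus their endpoints."""
    kept_edges = rng.sample(list(edges), int(len(edges) * rate))
    kept_nodes = {u for e in kept_edges for u in e}
    return kept_nodes, kept_edges
```

Traversal-based sampling (e.g. random walks or forest fire) differs in that it explores connected regions, which is why the three categories preserve different statistical properties.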
System analysis through bond graph modeling
NASA Astrophysics Data System (ADS)
McBride, Robert Thomas
2005-07-01
Modeling and simulation form an integral role in the engineering design process. An accurate mathematical description of a system provides the design engineer the flexibility to perform trade studies quickly and accurately to expedite the design process. Most often, the mathematical model of the system contains components of different engineering disciplines. A modeling methodology that can handle these types of systems might be used in an indirect fashion to extract added information from the model. This research examines the ability of a modeling methodology to provide added insight into system analysis and design. The modeling methodology used is bond graph modeling. An investigation into the creation of a bond graph model using the Lagrangian of the system is provided. Upon creation of the bond graph, system analysis is performed. To aid in the system analysis, an object-oriented approach to bond graph modeling is introduced. A framework is provided to simulate the bond graph directly. Through object-oriented simulation of a bond graph, the information contained within the bond graph can be exploited to create a measurement of system efficiency. A definition of system efficiency is given. This measurement of efficiency is used in the design of different controllers of varying architectures. Optimal control of a missile autopilot is discussed within the framework of the calculated system efficiency.
Local dependence in random graph models: characterization, properties and statistical inference
Schweinberger, Michael; Handcock, Mark S.
2015-01-01
Summary: Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142
NASA Astrophysics Data System (ADS)
Benedetto, J.; Cloninger, A.; Czaja, W.; Doster, T.; Kochersberger, K.; Manning, B.; McCullough, T.; McLane, M.
2014-05-01
Successful performance of a radiological search mission depends on the effective utilization of a mixture of signals. Examples of modalities include, e.g., EO imagery and gamma radiation data, or radiation data collected during multiple events. In addition, elevation data or spatial proximity can be used to enhance the performance of acquisition systems. State-of-the-art techniques in the processing and exploitation of complex information manifolds rely on diffusion operators. Our approach involves machine learning techniques based on the analysis of joint data-dependent graphs and their associated diffusion kernels. The significant eigenvectors of the derived fused graph Laplace and Schroedinger operators then form the new representation, which provides integrated features from the heterogeneous input data. The families of data-dependent Laplace and Schroedinger operators on joint data graphs are integrated by means of appropriately designed fusion metrics. These fused representations are used for target and anomaly detection.
Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang
2013-01-01
Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with those of seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, the STICKY and SF-GD models have better performance in characterizing the structural human brain network.
NASA Astrophysics Data System (ADS)
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
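The dictionary-based correction idea can be illustrated with a deliberately simplified sketch (an assumption for illustration, not the paper's method: the real system matches erroneous subgraphs up to isomorphism and runs a heuristic search over sequences of edits, whereas this toy matches exact edge sets in a single pass).

```python
def correct_graph(edges, dictionary):
    """One heuristic correction pass over a roof topology graph.

    dictionary is a list of (bad, good) pairs, where bad is a frozenset of
    edges representing a known erroneous sub-graph and good is its corrected
    replacement -- the graph analogue of a spelling-dictionary fix.
    """
    edges = set(edges)
    for bad, good in dictionary:
        if bad <= edges:                      # erroneous pattern present
            edges = (edges - bad) | set(good)
    return edges
```

For example, an entry can merge two spurious adjacencies into the single roof-face adjacency they should have been.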
Using graph approach for managing connectivity in integrative landscape modelling
NASA Astrophysics Data System (ADS)
Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger
2013-04-01
In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (modification of connections or elements). This graph approach allows better genericity in landscape representation, management of complex connections, and easier development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. Graph representation can be managed i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format or ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (the ogr vector, geos topological vector and gdal raster libraries).
The OpenFLUID-landr library has been developed i) to be usable without GIS expert skills (common GIS formats can be read and simplified spatial management is provided), ii) to make it easy to develop adapted rules of landscape discretization and graph creation that follow spatialized model requirements, and iii) to allow model developers to manage dynamic and complex spatial topology. Graph management in OpenFLUID is demonstrated with i) examples of hydrological modelling on complex farmed landscapes and ii) the new implementation of the Geo-MHYDAS tool, based on the OpenFLUID-landr library, which discretizes a landscape and creates the graph structure required by the MHYDAS model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph C.; McLendon, William Clarence,
2013-01-01
Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
Convergence of the Graph Allen-Cahn Scheme
NASA Astrophysics Data System (ADS)
Luo, Xiyang; Bertozzi, Andrea L.
2017-05-01
The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
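The iteration analyzed above is easy to sketch in its simplest, fully explicit form (an illustrative toy, not the semi-implicit scheme of the cited algorithm): gradient descent on the graph Ginzburg-Landau energy, combining diffusion by the graph Laplacian L = D - A with the double-well potential W(u) = (u² - 1)²/4.

```python
def graph_allen_cahn(adj, u, eps, dt, steps):
    """Explicit graph Allen-Cahn iteration:
        u <- u - dt * (eps * L u + W'(u) / eps),
    with graph Laplacian L = D - A and W'(u) = u^3 - u."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    for _ in range(steps):
        Lu = [deg[i] * u[i] - sum(adj[i][j] * u[j] for j in range(n))
              for i in range(n)]
        u = [u[i] - dt * (eps * Lu[i] + (u[i] ** 3 - u[i]) / eps)
             for i in range(n)]
    return u
```

The double-well term drives each node toward the pure phases ±1 while the Laplacian term penalizes disagreement across edges, so the converged sign pattern approximates a graph cut.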
Serang, Oliver; Noble, William Stafford
2012-01-01
The problem of identifying the proteins in a complex mixture using tandem mass spectrometry can be framed as an inference problem on a graph that connects peptides to proteins. Several existing protein identification methods make use of statistical inference methods for graphical models, including expectation maximization, Markov chain Monte Carlo, and full marginalization coupled with approximation heuristics. We show that, for this problem, the majority of the cost of inference usually comes from a few highly connected subgraphs. Furthermore, we evaluate three different statistical inference methods using a common graphical model, and we demonstrate that junction tree inference substantially improves rates of convergence compared to existing methods. The python code used for this paper is available at http://noble.gs.washington.edu/proj/fido. PMID:22331862
Hierarchical graphs for rule-based modeling of biochemical systems
2011-01-01
Background: In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results: For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm.
Conclusions: Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for specifying rule-based models, such as the BioNetGen language (BNGL). Thus, the proposed use of hierarchical graphs should promote clarity and better understanding of rule-based models. PMID:21288338
Molecular graph convolutions: moving beyond fingerprints.
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Graph wavelet alignment kernels for drug virtual screening.
Smalter, Aaron; Huan, Jun; Lushington, Gerald
2009-06-01
In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing local graph topology. We design a novel graph kernel function that uses the topology features to build predictive models for chemicals via a Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding, those of the existing state-of-the-art chemical classification approaches. In addition, our results also show that the use of wavelet functions significantly decreases the computational costs for graph kernel computation, with a more than tenfold speedup.
Mining and Indexing Graph Databases
ERIC Educational Resources Information Center
Yuan, Dayu
2013-01-01
Graphs are widely used to model structures and relationships of objects in various scientific and commercial fields. Chemical molecules, proteins, malware system-call dependencies and three-dimensional mechanical parts are all modeled as graphs. In this dissertation, we propose to mine and index those graph data to enable fast and scalable search.…
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
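The model itself (as opposed to the fast sampler, which is the paper's contribution) can be written down directly. The following is a naive O(n²) sketch with illustrative conventions: vertex i has type x_i = i/n, and each edge {i, j} appears independently with probability min(κ(x_i, x_j)/n, 1).

```python
import random

def random_kernel_graph(n, kappa, rng):
    """Naive quadratic-time sampler for an inhomogeneous random graph:
    edge {i, j} appears with probability min(kappa(i/n, j/n) / n, 1).
    The cited paper's point is a much faster, near-linear sampler."""
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(kappa(i / n, j / n) / n, 1.0):
                edges.append((i, j))
    return edges
```

Dividing the kernel by n is what keeps the graph sparse: the expected number of edges grows linearly in n rather than quadratically.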
Bipartite graphs as models of population structures in evolutionary multiplayer games.
Peña, Jorge; Rochat, Yannick
2012-01-01
By combining evolutionary game theory and graph theory, "games on graphs" study the evolutionary dynamics of frequency-dependent selection in population structures modeled as geographical or social networks. Networks are usually represented by means of unipartite graphs, and social interactions by two-person games such as the famous prisoner's dilemma. Unipartite graphs have also been used for modeling interactions going beyond pairwise interactions. In this paper, we argue that bipartite graphs are a better alternative to unipartite graphs for describing population structures in evolutionary multiplayer games. To illustrate this point, we make use of bipartite graphs to investigate, by means of computer simulations, the evolution of cooperation under the conventional and the distributed N-person prisoner's dilemma. We show that several implicit assumptions arising from the standard approach based on unipartite graphs (such as the definition of replacement neighborhoods, the intertwining of individual and group diversity, and the large overlap of interaction neighborhoods) can have a large impact on the resulting evolutionary dynamics. Our work provides a clear example of the importance of construction procedures in games on graphs, of the suitability of bigraphs and hypergraphs for computational modeling, and of the importance of concepts from social network analysis such as centrality, centralization and bipartite clustering for the understanding of dynamical processes occurring on networked population structures.
On Edge Exchangeable Random Graphs
NASA Astrophysics Data System (ADS)
Janson, Svante
2017-06-01
We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).
Statistically significant relational data mining :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Dynamic graph of an oxy-fuel combustion system using autocatalytic set model
NASA Astrophysics Data System (ADS)
Harish, Noor Ainy; Bakar, Sumarni Abu
2017-08-01
Evaporation is one of the main processes, besides combustion, in an oxy-combustion boiler system. An Autocatalytic Set (ACS) model has been successfully applied to develop a graphical representation of the chemical reactions that occur in the evaporation process in the system. Seventeen variables identified in the process are represented as nodes and the catalytic relationships as edges in the graph. In this paper, the graph dynamics of the ACS are further investigated. Using the Dynamic Autocatalytic Set Graph Algorithm (DAGA), the adjacency matrix of each graph and its relation to the Perron-Frobenius Theorem are investigated. The connection of the resulting dynamic graph to Type-1 fuzzy graphs is then established.
The graph neural network model.
Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele
2009-01-01
Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
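The core mechanism of such models, iterated message passing over node states, can be sketched as follows. This is a deliberately minimal scalar-state toy with hand-picked weights, not the original GNN's learned transition function; with contractive weights the iteration mimics the fixed-point state computation the model is built on.

```python
def gnn_layer(adj, h, w_self, w_nbr):
    """One synchronous message-passing step of a toy scalar-state GNN:
    each node combines its own state with the sum of its neighbours',
    then applies a ReLU nonlinearity.  adj is an adjacency list."""
    return [max(0.0, w_self * h[i] + w_nbr * sum(h[j] for j in adj[i]))
            for i in range(len(h))]

def gnn_states(adj, h, w_self, w_nbr, layers):
    """Iterate the layer; with contractive weights the states approach a
    fixed point, analogous to the state relaxation in the GNN model."""
    for _ in range(layers):
        h = gnn_layer(adj, h, w_self, w_nbr)
    return h
```

A learned model would replace the two scalar weights with trained matrices and add a readout network mapping the converged states to the output space ℝ^m.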
Graph theoretical model of a sensorimotor connectome in zebrafish.
Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan
2012-01-01
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
Equity trees and graphs via information theory
NASA Astrophysics Data System (ADS)
Harré, M.; Bossomaier, T.
2010-01-01
We investigate the similarities and differences between two measures of the relationship between equities traded in financial markets. Our measures are the correlation coefficients and the mutual information. In the context of financial markets correlation coefficients are well established whereas mutual information has not previously been as well studied despite its theoretically appealing properties. We show that asset trees which are derived from either the correlation coefficients or the mutual information have a mixture of both similarities and differences at the individual equity level and at the macroscopic level. We then extend our consideration from trees to graphs using the "genus 0" condition recently introduced in order to study the networks of equities.
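A standard way to derive an asset tree from correlation coefficients is to map correlations to Mantegna's distance metric and take the minimum spanning tree. The sketch below follows that common recipe under an assumed 4-equity correlation matrix; it is not the paper's specific pipeline, and a mutual-information tree would substitute an information-based distance for the correlation one.

```python
import numpy as np

def corr_to_dist(rho):
    """Mantegna's metric: maps correlation in [-1, 1] to a distance in [0, 2]."""
    return np.sqrt(2.0 * (1.0 - rho))

def mst_edges(D):
    """Prim's algorithm on a dense distance matrix; returns tree edges."""
    n = len(D)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: D[e[0], e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges

# hypothetical correlation matrix for 4 equities
rho = np.array([[1.0, 0.8, 0.2, 0.1],
                [0.8, 1.0, 0.3, 0.2],
                [0.2, 0.3, 1.0, 0.7],
                [0.1, 0.2, 0.7, 1.0]])
tree = mst_edges(corr_to_dist(rho))
print(tree)  # [(0, 1), (1, 2), (2, 3)]
```

The "genus 0" extension mentioned above generalises this tree to a planar graph that retains more edges than the MST.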
Refractometry for quality control of anesthetic drug mixtures.
Stabenow, Jennifer M; Maske, Mindy L; Vogler, George A
2006-07-01
Injectable anesthetic drugs used in rodents are often mixed and further diluted to increase the convenience and accuracy of dosing. We evaluated clinical refractometry as a simple and rapid method of quality control and mixing error detection of rodent anesthetic or analgesic mixtures. Dilutions of ketamine, xylazine, acepromazine, and buprenorphine were prepared with reagent-grade water to produce at least 4 concentration levels. The refraction of each concentration then was measured with a clinical refractometer and plotted against the percentage of stock concentration. The resulting graphs were linear and could be used to determine the concentration of single-drug dilutions or to predict the refraction of drug mixtures. We conclude that refractometry can be used to assess the concentration of dilutions of single drugs and can verify the mixing accuracy of drug combinations when the components of the mixture are known and fall within the detection range of the instrument.
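The linear refraction-versus-concentration graphs described above amount to a one-dimensional calibration line that can be inverted to check a dilution. The numbers below are made up for illustration; they are not the study's measured values.

```python
import numpy as np

# hypothetical calibration data: refractometer reading at four dilutions
# of a stock drug solution (percent of stock concentration)
percent_stock = np.array([25.0, 50.0, 75.0, 100.0])
refraction    = np.array([ 1.5,  3.0,  4.5,   6.0])  # assumed linear response

# least-squares line through the calibration points
slope, intercept = np.polyfit(percent_stock, refraction, 1)

def percent_from_reading(r):
    """Invert the calibration line to estimate concentration from a reading."""
    return (r - intercept) / slope

# an unknown sample reading of 3.75 implies ~62.5% of stock concentration
print(round(percent_from_reading(3.75), 1))  # 62.5
```

In practice a mixing error shows up as a reading that falls off the line predicted from the component drugs' individual calibrations.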
Multi-Level Anomaly Detection on Time-Varying Graph Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A; Collins, John P; Ferragut, Erik M
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
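The core aggregation idea (coarse-level anomaly models built from finer-level probabilities) can be sketched as summing node-level surprise scores within each subgraph. This is a simplified illustration of the principle, not the BTER generalization itself; the probabilities and group names are invented.

```python
import math

def surprise(p):
    """Anomaly score of an observation assigned probability p by the model."""
    return -math.log(p)

def aggregate(node_probs, groups):
    """Build a coarse score per subgraph by summing finer (node-level)
    log-probabilities, mirroring the hierarchical aggregation idea."""
    return {g: sum(surprise(node_probs[n]) for n in members)
            for g, members in groups.items()}

# hypothetical node probabilities under a fitted graph model
node_probs = {"a": 0.5, "b": 0.4, "c": 0.01, "d": 0.6}
groups = {"G1": ["a", "b"], "G2": ["c", "d"]}

scores = aggregate(node_probs, groups)
anomalous = max(scores, key=scores.get)
print(anomalous)  # G2, since it contains the very unlikely node "c"
```

The multi-scale detector then repeats this roll-up from subgraphs to the whole graph, so an analyst can drill down from a flagged graph to the offending subgraph or node.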
Process and representation in graphical displays
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne
1993-01-01
Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph?; and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use?; and what are the specific processing skills required?
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Markerless video analysis for movement quantification in pediatric epilepsy monitoring.
Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling
2011-01-01
This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
GraQL: A Query Language for High-Performance Attributed Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Castellana, Vito G.; Morari, Alessandro
Graph databases have gained increasing interest in the last few years due to the emergence of data sources which are not easily analyzable in traditional relational models or for which a graph data model is the natural representation. In order to understand the design and implementation choices for an attributed graph database backend and query language, we have started to design our infrastructure for attributed graph databases. In this paper, we describe the design considerations of our in-memory attributed graph database system with a particular focus on the data definition and query language components.
The investigation of social networks based on multi-component random graphs
NASA Astrophysics Data System (ADS)
Zadorozhnyi, V. N.; Yudin, E. B.
2018-01-01
The methods of non-homogeneous random graph calibration are developed for social network simulation. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with the nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, can help developers (owners) of networks to predict their development correctly and to choose effective strategies for controlling network projects.
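A graph grown under the nonlinear preferential attachment rule mentioned above can be sketched in a few lines: each arriving node attaches to an existing node with probability proportional to degree raised to a power alpha. This is a generic illustration of the rule, not the calibration method of the paper; the parameter values are arbitrary.

```python
import random

def pref_attach_graph(n, m=1, alpha=1.0, seed=42):
    """Grow a graph where a new node attaches to an existing node with
    probability proportional to degree**alpha (alpha = 1 recovers linear
    Barabasi-Albert attachment; alpha != 1 is the nonlinear rule)."""
    random.seed(seed)
    edges = [(0, 1)]
    deg = {0: 1, 1: 1}
    for new in range(2, n):
        weights = [deg[v] ** alpha for v in range(new)]
        targets = random.choices(range(new), weights=weights, k=m)
        for t in set(targets):
            edges.append((new, t))
            deg[t] += 1
        deg[new] = len(set(targets))
    return edges, deg

edges, deg = pref_attach_graph(50, alpha=1.5)
print(len(deg), len(edges))  # 50 nodes, 49 edges (a tree, since m = 1)
```

Calibration would then tune alpha (and a mixing weight with an Erdős-Rényi component) so that the simulated degree distributions match those measured on the real network.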
GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration
Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng
2015-01-01
The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
Durand, Patrick; Labarre, Laurent; Meil, Alain; Divo, Jean-Louis; Vandenbrouck, Yves; Viari, Alain; Wojcik, Jérôme
2006-01-17
A large variety of biological data can be represented by graphs. These graphs can be constructed from heterogeneous data coming from genomic and post-genomic technologies, but there is still a need for tools for exploring and analysing such graphs. This paper describes GenoLink, a software platform for the graphical querying and exploration of graphs. GenoLink provides a generic framework for representing and querying data graphs. This framework provides a graph data structure; a graph query engine, which allows retrieval of sub-graphs from the entire data graph; and several graphical interfaces to express such queries and to further explore their results. A query consists of a graph pattern with constraints attached to the vertices and edges. A query result is the set of all sub-graphs of the entire data graph that are isomorphic to the pattern and satisfy the constraints. The graph data structure does not rely upon any particular data model but can dynamically accommodate any user-supplied data model. However, for genomic and post-genomic applications, we provide a default data model and several parsers for the most popular data sources. GenoLink does not require any programming skill, since all operations on graphs and the analysis of the results can be carried out graphically through several dedicated graphical interfaces. GenoLink is a generic and interactive tool allowing biologists to graphically explore various sources of information. GenoLink is distributed either as a standalone application or as a component of the Genostar/Iogma platform. Both distributions are free for academic research and teaching purposes and can be requested at academy@genostar.com. A commercial licence can be obtained by for-profit companies at info@genostar.com. See also http://www.genostar.org.
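The query semantics described above (all sub-graphs isomorphic to a pattern whose vertex constraints hold) can be sketched as a brute-force enumeration on a tiny labelled graph. This toy matcher only illustrates the semantics; GenoLink's engine, data model, and labels are not shown, and the example graph is invented.

```python
from itertools import permutations

def match_pattern(graph_edges, labels, pattern_edges, constraints):
    """Enumerate injective vertex assignments under which every pattern
    edge maps to a data-graph edge and every vertex constraint holds."""
    nodes = list(labels)
    k = 1 + max(max(u, v) for u, v in pattern_edges)  # pattern vertex count
    E = {frozenset(e) for e in graph_edges}           # undirected edge set
    hits = []
    for assign in permutations(nodes, k):
        if all(constraints.get(i, lambda l: True)(labels[assign[i]])
               for i in range(k)) and \
           all(frozenset((assign[u], assign[v])) in E for u, v in pattern_edges):
            hits.append(assign)
    return hits

# toy data graph: a gene linked to two proteins, one protein linked to a pathway
labels = {"g1": "gene", "p1": "protein", "p2": "protein", "w1": "pathway"}
graph_edges = [("g1", "p1"), ("g1", "p2"), ("p1", "w1")]

# pattern: gene -- protein -- pathway, with a type constraint on each vertex
pattern_edges = [(0, 1), (1, 2)]
constraints = {0: lambda l: l == "gene",
               1: lambda l: l == "protein",
               2: lambda l: l == "pathway"}

print(match_pattern(graph_edges, labels, pattern_edges, constraints))
# [('g1', 'p1', 'w1')]
```

Real engines use far more efficient sub-graph matching than this exhaustive search, but the result set, every constrained isomorphic embedding, is the same notion.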
ERIC Educational Resources Information Center
Tyner, Bryan C.; Fienup, Daniel M.
2015-01-01
Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance.…
F-RAG: Generating Atomic Coordinates from RNA Graphs by Fragment Assembly.
Jain, Swati; Schlick, Tamar
2017-11-24
Coarse-grained models represent attractive approaches to analyze and simulate ribonucleic acid (RNA) molecules, for example, for structure prediction and design, as they simplify the RNA structure to reduce the conformational search space. Our structure prediction protocol RAGTOP (RNA-As-Graphs Topology Prediction) represents RNA structures as tree graphs and samples graph topologies to produce candidate graphs. However, for more detailed study and analysis, construction of atomic models from the coarse-grained models is required. Here we present our graph-based fragment assembly algorithm (F-RAG) to convert candidate three-dimensional (3D) tree graph models, produced by RAGTOP, into atomic structures. We use our related RAG-3D utilities to partition graphs into subgraphs and search for structurally similar atomic fragments in a data set of RNA 3D structures. The fragments are edited and superimposed using common residues, full atomic models are scored using RAGTOP's knowledge-based potential, and the geometries of top-scoring models are optimized. To evaluate our models, we assess all-atom RMSDs and Interaction Network Fidelity (a measure of residue interactions) with respect to experimentally solved structures and compare our results to other fragment assembly programs. For a set of 50 RNA structures, we obtain atomic models with reasonable geometries and interactions, particularly good for RNAs containing junctions. Additional improvements to our protocol and databases are outlined. These results provide a good foundation for further work on RNA structure prediction and design applications.
The 1/N Expansion of Tensor Models with Two Symmetric Tensors
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2018-06-01
It is well known that tensor models for a tensor with no symmetry admit a 1/N expansion dominated by melonic graphs. This result relies crucially on identifying jackets, which are globally defined ribbon graphs embedded in the tensor graph. In contrast, no result of this kind has so far been established for symmetric tensors, because global jackets do not exist. In this paper we introduce a new approach to the 1/N expansion in tensor models adapted to symmetric tensors. In particular we do not use any global structure like the jackets. We prove that, for any rank D, a tensor model with two symmetric tensors and interactions given by the complete graph K_{D+1} admits a 1/N expansion dominated by melonic graphs.
A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
Large-scale DCMs for resting-state fMRI.
Razi, Adeel; Seghier, Mohamed L; Zhou, Yuan; McColgan, Peter; Zeidman, Peter; Park, Hae-Jeong; Sporns, Olaf; Rees, Geraint; Friston, Karl J
2017-01-01
This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that the use of Bayesian model reduction to discover the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM, with functional connectivity priors, is ideally suited for directed graph-theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1995-01-01
Intelligent systems require software incorporating probabilistic reasoning, and often learning. Networks provide a framework and methodology for creating this kind of software. This paper introduces network models based on chain graphs with deterministic nodes. Chain graphs are defined as a hierarchical combination of Bayesian and Markov networks. To model learning, plates on chain graphs are introduced to model independent samples. The paper concludes by discussing various operations that can be performed on chain graphs with plates, either as a simplification process or to generate learning algorithms.
A sediment graph model based on SCS-CN method
NASA Astrophysics Data System (ADS)
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on a coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the Soil Conservation Service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
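The SCS-CN component used above has a standard closed form: the potential maximum retention is S = 25400/CN - 254 (mm), the initial abstraction is Ia = λS (commonly λ = 0.2), and direct runoff is Q = (P - Ia)² / (P - Ia + S) once rainfall P exceeds Ia. A minimal sketch of that relation, with illustrative inputs rather than the Nagwan watershed data:

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """SCS-CN direct runoff depth (mm) for rainfall P (mm) and curve number CN.
    S is the potential maximum retention; Ia = lam * S is the initial abstraction."""
    S = 25400.0 / CN - 254.0
    Ia = lam * S
    if P <= Ia:
        return 0.0  # all rainfall absorbed before runoff begins
    return (P - Ia) ** 2 / (P - Ia + S)

# example: a 50 mm storm on a watershed with CN = 80
print(round(scs_cn_runoff(50.0, 80), 2))  # 13.8
```

The sediment graph models couple this runoff estimate with the IUSG and a power-law sediment-rating relation to produce the temporal sediment flow rate distribution.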
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.
2017-12-12
smoothG is a collection of parallel C++ classes/functions that algebraically constructs reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations, in which graph Laplacians arise.
Bipartite Graphs as Models of Population Structures in Evolutionary Multiplayer Games
Peña, Jorge; Rochat, Yannick
2012-01-01
By combining evolutionary game theory and graph theory, “games on graphs” study the evolutionary dynamics of frequency-dependent selection in population structures modeled as geographical or social networks. Networks are usually represented by means of unipartite graphs, and social interactions by two-person games such as the famous prisoner’s dilemma. Unipartite graphs have also been used for modeling interactions going beyond pairwise interactions. In this paper, we argue that bipartite graphs are a better alternative to unipartite graphs for describing population structures in evolutionary multiplayer games. To illustrate this point, we make use of bipartite graphs to investigate, by means of computer simulations, the evolution of cooperation under the conventional and the distributed N-person prisoner’s dilemma. We show that several implicit assumptions arising from the standard approach based on unipartite graphs (such as the definition of replacement neighborhoods, the intertwining of individual and group diversity, and the large overlap of interaction neighborhoods) can have a large impact on the resulting evolutionary dynamics. Our work provides a clear example of the importance of construction procedures in games on graphs, of the suitability of bigraphs and hypergraphs for computational modeling, and of the importance of concepts from social network analysis such as centrality, centralization and bipartite clustering for the understanding of dynamical processes occurring on networked population structures. PMID:22970237
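The bipartite representation argued for above keeps players and interaction groups as separate vertex classes, so group membership (and overlap) is explicit. The sketch below computes a player's average N-person prisoner's dilemma payoff over the groups it belongs to; the group structure, payoff parameters, and strategies are invented for illustration, and this is not the paper's simulation code.

```python
# bipartite population structure: players on one side, interaction groups
# on the other; player "p3" belongs to two overlapping groups
groups = {"G1": ["p1", "p2", "p3"], "G2": ["p3", "p4", "p5"]}

def npd_payoff(player, strategy, groups, r=3.0, c=1.0):
    """Average N-person prisoner's dilemma payoff of a player over its
    groups: each group's pot is r*c per cooperator, shared equally among
    the n group members; cooperators additionally pay the cost c."""
    pays = []
    for members in groups.values():
        if player not in members:
            continue
        n = len(members)
        coop = sum(strategy[m] for m in members)  # 1 = cooperate, 0 = defect
        share = r * c * coop / n
        pays.append(share - (c if strategy[player] else 0.0))
    return sum(pays) / len(pays)

strategy = {"p1": 1, "p2": 1, "p3": 0, "p4": 0, "p5": 1}
print(round(npd_payoff("p3", strategy, groups), 3))  # 1.5
```

With a unipartite graph this overlap structure would be implicit in the interaction neighbourhoods, which is exactly the ambiguity the bipartite encoding removes.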
Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui
2014-05-01
To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDCs mixtures in the environment is common. Antiandrogens represent a group of EDCs, which draw increasing attention due to their resultant demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as the binary mixtures of nPrP with each of the other three antiandrogens. All of the four compounds could exhibit antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. 
The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of a binary mixture and the concentration of nPrP fitted a sigmoidal model if the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interactions between chemicals were antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the effective concentration of 80%. In addition, the interactions changed from synergistic to antagonistic as effective concentrations were increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, there was a synergistic interaction in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. The synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with the alternative method of nonlinear isoboles.
The mixture activities of binary antiandrogens had a tendency towards antagonism at high concentrations and synergism at low concentrations.
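The synergy/antagonism bookkeeping behind isobole analysis is often summarised by the Loewe-additivity combination index: for isoeffective doses, CI = d1/D1 + d2/D2, where d_i are the doses in the mixture and D_i the single-agent doses producing the same effect; CI < 1 places the point below the additivity line (synergy), CI > 1 above it (antagonism). This is the generic index, not the paper's fitted linear interaction model, and the doses below are hypothetical.

```python
def combination_index(d1, D1, d2, D2):
    """Loewe-additivity combination index for a binary isoeffective mixture.
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / D1 + d2 / D2

# hypothetical isoeffective doses (arbitrary units): the mixture reaches the
# target effect with 2 + 3 units, while each agent alone needs 10 units
ci = combination_index(d1=2.0, D1=10.0, d2=3.0, D2=10.0)
print(ci)  # 0.5, i.e. below the additivity line (synergy)
```

Concentration-dependent interactions, such as the synergy-to-antagonism switch reported above, correspond to CI crossing 1 as the effect level is varied.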
Applications of graph theory in protein structure identification
2011-01-01
There is growing interest in the identification of proteins on a proteome-wide scale. Among the different kinds of protein structure identification methods, graph-theoretic methods are especially incisive. Owing to their lower cost, higher effectiveness, and many other advantages, they have drawn increasing attention from researchers. Specifically, graph-theoretic methods have been widely used in homology identification, side-chain cluster identification, peptide sequencing, and so on. This paper reviews several methods for solving protein structure identification problems using graph theory. We mainly introduce classical methods and mathematical models, including homology modeling based on clique finding, identification of side-chain clusters in protein structures via the graph spectrum, and de novo peptide sequencing via tandem mass spectrometry using the spectrum graph model. In addition, concluding remarks and future priorities for each method are given. PMID:22165974
Automatic determination of fault effects on aircraft functionality
NASA Technical Reports Server (NTRS)
Feyock, Stefan
1989-01-01
The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. The researchers show how this is done and present a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities.
Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.
Mathematical modeling of the malignancy of cancer using graph evolution.
Gunduz-Demir, Cigdem
2007-10-01
We report a novel computational method based on a graph evolution process to model the malignancy of the brain cancer called glioma. In this work, we analyze the phases that a graph passes through during its evolution and demonstrate a strong relationship between the malignancy of the cancer and the phase of its graph. From photomicrographs of tissues diagnosed as normal, low-grade cancerous, and high-grade cancerous, we construct cell-graphs based on the locations of cells; we probabilistically generate an edge between every pair of cells depending on the Euclidean distance between them. For a cell-graph, we extract connectivity information, including the properties of its connected components, in order to analyze the phase of the cell-graph. Working with brain tissue samples surgically removed from 12 patients, we demonstrate that the cell-graphs generated for different tissue types evolve differently and exhibit different phase properties, which distinguish one tissue type from another.
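The probabilistic cell-graph construction described above can be sketched as follows: an edge appears between two cells with a probability that decays with their Euclidean distance, and the connectivity of the resulting components is then inspected. The decaying form d**(-alpha) and the exponent value are assumptions for illustration; the published method's exact probability function may differ.

```python
import math
import random

def build_cell_graph(cells, alpha=2.0, seed=0):
    """Edge between two cells with probability min(1, d**(-alpha)),
    decaying with Euclidean distance d (alpha is an assumed exponent)."""
    rng = random.Random(seed)
    n = len(cells)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(cells[i], cells[j])
            if d > 0 and rng.random() < min(1.0, d ** (-alpha)):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def connected_components(adj):
    """Depth-first search over the cell-graph's components."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u] - seen)
        comps.append(comp)
    return comps

# Two tight clusters of cells far apart yield two components.
cells = [(0, 0), (0.5, 0), (0, 0.5), (100, 100), (100.5, 100)]
adj = build_cell_graph(cells)
```

Component counts and sizes are examples of the connectivity information the abstract mentions for distinguishing the phase of a cell-graph.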
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
2012-01-01
Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331
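A toy version of the multiple-graph regularization idea described above can be sketched with NumPy: ranking scores solve a regularized linear system under a convex combination of graph Laplacians, and the combination weights are then refreshed from the smoothness of the current scores. The softmax-style weight update and all parameter values are assumptions for illustration, not the paper's exact alternating-minimization rule.

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def multi_graph_rank(y, graphs, lam=1.0, sigma=1.0, iters=10):
    """Alternate between ranking scores f and graph weights mu."""
    Ls = [laplacian(W) for W in graphs]
    n = len(y)
    mu = np.ones(len(Ls)) / len(Ls)
    for _ in range(iters):
        L = sum(m * Lk for m, Lk in zip(mu, Ls))
        f = np.linalg.solve(np.eye(n) + lam * L, y)   # min ||f-y||^2 + lam f'Lf
        smooth = np.array([f @ Lk @ f for Lk in Ls])  # smoothness on each graph
        mu = np.exp(-smooth / sigma)                  # smoother graphs get weight
        mu /= mu.sum()
    return f, mu

# Two candidate neighborhood graphs over four protein domains (toy data).
W1 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
W2 = np.array([[0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]], float)
y = np.array([1.0, 0.0, 0.0, 0.0])   # query relevance vector
f, mu = multi_graph_rank(y, [W1, W2])
```

Learning the weights jointly with the scores is what frees the method from committing to a single graph model and parameter choice up front.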
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights, and significance of vertices. Centrality analysis typically applies a method based on only one property of the graph vertices. In graph theory, centrality is analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, PageRank, status, Katz centrality, and eigenvector centrality. We propose a new multi-parametric centrality method that incorporates several basic properties of a network member. A mathematical model of the multi-parametric centrality method is developed, and its results are compared with those of the established centrality methods. To evaluate the multi-parametric centrality method, a graph model with hundreds of vertices is analyzed. The comparative analysis demonstrates the accuracy of the presented method, which simultaneously accounts for several basic properties of the vertices.
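As an illustration of the multi-parametric idea, the sketch below combines two vertex properties, degree and closeness centrality, into one weighted score. The equal weighting and the choice of just two properties are assumptions, since the abstract does not give the exact combination rule.

```python
from collections import deque

def bfs_dist(adj, s):
    """Breadth-first shortest-path distances from s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def multi_parametric_centrality(adj, weights=(0.5, 0.5)):
    """Weighted mix of normalized degree and closeness centrality
    (two properties and equal weights are an illustrative choice)."""
    n = len(adj)
    scores = {}
    for v in adj:
        degree = len(adj[v]) / (n - 1)
        d = bfs_dist(adj, v)
        closeness = (n - 1) / sum(d[u] for u in d if u != v)
        scores[v] = weights[0] * degree + weights[1] * closeness
    return scores

# A star graph: the hub should dominate under both properties at once.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
scores = multi_parametric_centrality(star)
```

A single-property ranking can be misleading when two vertices tie on that property; mixing several normalized centralities is one simple way to break such ties.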
Graph theory as a proxy for spatially explicit population models in conservation planning.
Minor, Emily S; Urban, Dean L
2007-09-01
Spatially explicit population models (SEPMs) are often considered the best way to predict and manage species distributions in spatially heterogeneous landscapes. However, they are computationally intensive and require extensive knowledge of species' biology and behavior, limiting their application in many cases. An alternative to SEPMs is graph theory, which has minimal data requirements and efficient algorithms. Although only recently introduced to landscape ecology, graph theory is well suited to ecological applications concerned with connectivity or movement. This paper compares the performance of graph theory to a SEPM in selecting important habitat patches for Wood Thrush (Hylocichla mustelina) conservation. We use both models to identify habitat patches that act as population sources and persistent patches and also use graph theory to identify patches that act as stepping stones for dispersal. Correlations of patch rankings were very high between the two models. In addition, graph theory offers the ability to identify patches that are very important to habitat connectivity and thus long-term population persistence across the landscape. We show that graph theory makes very similar predictions in most cases and in other cases offers insight not available from the SEPM, and we conclude that graph theory is a suitable and possibly preferable alternative to SEPMs for species conservation in heterogeneous landscapes.
Metric learning with spectral graph convolutions on brain connectivity networks.
Ktena, Sofia Ira; Parisot, Sarah; Ferrante, Enzo; Rajchl, Martin; Lee, Matthew; Glocker, Ben; Rueckert, Daniel
2018-04-01
Graph representations are often used to model structured data at an individual or population level and have numerous applications in pattern recognition problems. In the field of neuroscience, where such representations are commonly used to model structural or functional connectivity between a set of brain regions, graphs have proven to be of great importance. This is mainly due to the capability of revealing patterns related to brain development and disease, which were previously unknown. Evaluating similarity between these brain connectivity networks in a manner that accounts for the graph structure and is tailored for a particular application is, however, non-trivial. Most existing methods fail to accommodate the graph structure, discarding information that could be beneficial for further classification or regression analyses based on these similarities. We propose to learn a graph similarity metric using a siamese graph convolutional neural network (s-GCN) in a supervised setting. The proposed framework takes into consideration the graph structure for the evaluation of similarity between a pair of graphs, by employing spectral graph convolutions that allow the generalisation of traditional convolutions to irregular graphs and operates in the graph spectral domain. We apply the proposed model on two datasets: the challenging ABIDE database, which comprises functional MRI data of 403 patients with autism spectrum disorder (ASD) and 468 healthy controls aggregated from multiple acquisition sites, and a set of 2500 subjects from UK Biobank. We demonstrate the performance of the method for the tasks of classification between matching and non-matching graphs, as well as individual subject classification and manifold learning, showing that it leads to significantly improved results compared to traditional methods. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Ďuračkoá, Daniela
2014-01-01
The paper describes our experiment with using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and complementary spectral properties as well as the supra-segmental parameters. The dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial parameter settings for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for the tested sentences uttered using three configurations of orthodontic appliances.
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large-scale graph processing is a major research area for Big Data exploration. Vertex-centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach that cause vertex-centric algorithms to under-perform due to a poor compute-to-communication ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large-scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex-centric approach with the flexibility of shared-memory sub-graph computation. We map Connected Components, SSSP, and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real-world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex-centric implementation.
Simple graph models of information spread in finite populations
Voorhees, Burton; Ryder, Bergerud
2015-01-01
We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call 'single-link' graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian; another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components, and these are compared to the partial bipartite graphs. PMID:26064661
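The relationship to the Moran probability mentioned above can be checked numerically: an unweighted cycle is isothermal, so a birth-death Moran process on it has the same fixation probability as the complete graph. The sketch below compares the closed-form complete-graph value with a simulation on a cycle; the trial count and fitness value are arbitrary choices.

```python
import random

def moran_fixation_complete(N, r):
    """Fixation probability of one mutant with fitness r on the complete graph."""
    if r == 1.0:
        return 1.0 / N
    return (1 - 1 / r) / (1 - r ** -N)

def simulate_cycle_fixation(N, r, trials=2000, seed=1):
    """Birth-death Moran process on an undirected cycle (simulation sketch)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        state = [0] * N
        state[rng.randrange(N)] = 1          # one mutant at a random vertex
        while 0 < sum(state) < N:
            fit = [r if s else 1.0 for s in state]
            u = rng.choices(range(N), weights=fit)[0]   # birth ~ fitness
            v = rng.choice([(u - 1) % N, (u + 1) % N])  # replace a neighbor
            state[v] = state[u]
        fixed += state[0]                    # all-ones iff the mutant fixed
    return fixed / trials

rho = moran_fixation_complete(5, 2.0)                     # about 0.516
est = simulate_cycle_fixation(5, 2.0, trials=1500, seed=1)
```

For N = 5 and fitness r = 2 both values sit near (1 - 1/r)/(1 - r^-N) ≈ 0.516, consistent with the isothermal theorem the tuning result in the abstract builds on.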
Automated Modeling and Simulation Using the Bond Graph Method for the Aerospace Industry
NASA Technical Reports Server (NTRS)
Granda, Jose J.; Montgomery, Raymond C.
2003-01-01
Bond graph modeling was originally developed in the late 1950s by the late Prof. Henry M. Paynter of M.I.T. Prof. Paynter was well ahead of his time, as the main advantage of his creation, other than the modeling insight that it provides and the ability to deal effectively with mechatronics, came to fruition only with the recent advent of modern computer technology and the tools derived from it, including symbolic manipulation, MATLAB, SIMULINK, and the Computer Aided Modeling Program (CAMPG). Thus, only recently have these tools been available, allowing one to fully utilize the advantages that the bond graph method has to offer. The purpose of this paper is to help fill the knowledge void concerning the use of bond graphs in the aerospace industry. The paper first presents simple examples to serve as a tutorial on bond graphs for those not familiar with the technique. The reader is given the basic understanding needed to appreciate the applications that follow. After that, several aerospace applications are developed, such as modeling of an arresting system for aircraft carrier landings, suspension models used for landing gears, and multibody dynamics. The paper also presents an update on NASA's progress in modeling the International Space Station (ISS) using bond graph techniques, and an advanced actuation system utilizing shape memory alloys. The latter covers the mechatronics advantages of the bond graph method, in applications that simultaneously involve mechanical, hydraulic, thermal, and electrical subsystem modeling.
An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional
NASA Astrophysics Data System (ADS)
van Gennip, Yves
2018-06-01
We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ -converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
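A minimal graph MBO step, diffusion by implicit Euler followed by thresholding, can be sketched as below. Enforcing the mass constraint by keeping the m largest diffused values at 1 is one simple choice; the paper's scheme is more refined and targets the Ohta-Kawasaki functional with its nonlocal term, which this sketch omits.

```python
import numpy as np

def graph_mbo(W, u0, tau=0.5, steps=10, mass=None):
    """Mass-conserving MBO sketch on a weighted graph:
    implicit-Euler diffusion, then a rank-based threshold that keeps
    exactly `mass` vertices in the phase (an assumed constraint rule)."""
    L = np.diag(W.sum(axis=1)) - W        # combinatorial Laplacian
    n = len(u0)
    A = np.eye(n) + tau * L               # (I + tau L) u_new = u_old
    u = np.array(u0, float)
    m = int(u.sum()) if mass is None else mass
    for _ in range(steps):
        u = np.linalg.solve(A, u)         # diffusion step
        idx = np.argsort(-u)[:m]          # threshold step with mass constraint
        u = np.zeros(n)
        u[idx] = 1.0
    return u

# Two triangles joined by a weak bridge; the phase starts on one triangle.
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[a, b] = W[b, a] = 1.0
W[2, 3] = W[3, 2] = 0.1
u = graph_mbo(W, [1, 1, 1, 0, 0, 0])
```

On this toy graph the scheme leaves the phase on the first triangle, the natural low-energy partition, while conserving its mass of three vertices.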
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.
Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin
2018-01-01
We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
Yan, Bo; Pan, Chongle; Olman, Victor N; Hettich, Robert L; Xu, Ying
2004-01-01
Mass spectrometry is one of the most popular analytical techniques for identification of individual proteins in a protein mixture, one of the basic problems in proteomics. It identifies a protein by identifying its unique mass spectral pattern. While the problem is theoretically solvable, it remains computationally challenging. One of the key challenges comes from the difficulty in distinguishing the N- and C-terminus ions, mostly b- and y-ions respectively. In this paper, we present a graph algorithm for solving the problem of separating b- from y-ions in a set of mass spectra. We represent each spectral peak as a node and consider two types of edges: a type-1 edge connects two peaks possibly of the same ion type, and a type-2 edge connects two peaks possibly of different ion types, predicted based on local information. The ion-separation problem is then formulated and solved as a graph partition problem: partition the graph into three subgraphs, namely b-ions, y-ions, and others, so as to maximize the total weight of type-1 edges while minimizing the total weight of type-2 edges within each subgraph. We have developed a dynamic programming algorithm for rigorously solving this graph partition problem and implemented it as a computer program, PRIME. We have tested PRIME on 18 data sets of highly accurate FT-ICR tandem mass spectra and found that it achieved ~90% accuracy for the separation of b- and y-ions.
Tyner, Bryan C; Fienup, Daniel M
2015-09-01
Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed. © Society for the Experimental Analysis of Behavior.
Collaborative mining and transfer learning for relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Eslami, Mohammed
2015-06-01
Many real-world problems, including the analysis of human knowledge, communication, biological, and cyber networks, deal with data entities for which the essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, with attributes on both objects and relations encoding and differentiating their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot be easily adapted to deal with graph data. This gave rise to the relational data mining field of research, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative, distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss the individual benefits and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments, and learn graph patterns that are both closer to ground truth and provide higher detection rates than a centralized mining algorithm.
Graph Coloring Used to Model Traffic Lights.
ERIC Educational Resources Information Center
Williams, John
1992-01-01
Two scheduling problems, one involving setting up an examination schedule and the other describing traffic light problems, are modeled as colorings of graphs consisting of a set of vertices and edges. The chromatic number, the least number of colors necessary for coloring a graph, is employed in the solutions. (MDH)
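The exam-scheduling interpretation above can be sketched with a greedy coloring, where colors are time slots. Greedy coloring yields an upper bound on the chromatic number rather than the chromatic number itself; the conflict data below is a made-up example.

```python
def greedy_coloring(adj):
    """Greedy coloring: each vertex takes the smallest color unused by its
    already-colored neighbors (largest-degree-first order)."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Exams that share students (edges) must land in different time slots (colors).
conflicts = {"math": {"physics", "chem"}, "physics": {"math"},
             "chem": {"math", "bio"}, "bio": {"chem"}}
slots = greedy_coloring(conflicts)
```

The four exams above need only two slots because their conflict graph is bipartite; the traffic-light problem is the same coloring applied to a graph of mutually conflicting traffic flows.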
2013-01-01
Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
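The Sequential Monte Carlo data-assimilation loop described above can be illustrated with a bootstrap particle filter over a toy room graph. The movement probability, the Gaussian-style count likelihood, and the particle count are all assumptions for illustration; the paper assimilates sensor data into a far richer graph-based agent model.

```python
import math
import random

def particle_filter(room_graph, n_people, obs_seq, obs_room,
                    n_particles=300, seed=0):
    """Bootstrap particle filter: a particle is a room assignment per occupant."""
    rng = random.Random(seed)
    rooms = list(room_graph)
    parts = [[rng.choice(rooms) for _ in range(n_people)]
             for _ in range(n_particles)]
    for obs in obs_seq:
        # Predict: each occupant moves to a neighboring room with prob. 0.3.
        for p in parts:
            for i, r in enumerate(p):
                if rng.random() < 0.3:
                    p[i] = rng.choice(room_graph[r])
        # Weight: likelihood of the observed head count in the sensed room.
        w = [math.exp(-0.5 * (p.count(obs_room) - obs) ** 2) for p in parts]
        # Resample proportionally to the weights.
        parts = [list(rng.choices(parts, weights=w)[0])
                 for _ in range(n_particles)]
    # Posterior mean occupancy of the sensed room.
    return sum(p.count(obs_room) for p in parts) / n_particles

rooms = {0: [1], 1: [0, 2], 2: [1]}   # three rooms along a corridor
est = particle_filter(rooms, n_people=4, obs_seq=[4, 4, 4], obs_room=1)
```

Repeated observations of a full middle room pull the particle cloud toward placing all four occupants there, which is the estimation behavior the framework is built to provide in real time.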
Algebraic approach to small-world network models
NASA Astrophysics Data System (ADS)
Rudolph-Lilith, Michelle; Muller, Lyle E.
2014-01-01
We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
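A numerical companion to the analytic treatment above: the directed Watts-Strogatz adjacency matrix can be generated explicitly, and simple matrix-based measures read off from it. The asymmetry definition below, the fraction of non-reciprocated edges, is one simple choice and not necessarily the index used in the paper.

```python
import random

def ws_digraph_adjacency(n, k, p, seed=0):
    """Directed Watts-Strogatz ring: each node points to its k clockwise
    neighbors; each edge is rewired to a random target with probability p."""
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < p:
                choices = [t for t in range(n) if t != i and not A[i][t]]
                j = rng.choice(choices)
            A[i][j] = 1
    return A

def asymmetry_index(A):
    """Fraction of edges lacking a reciprocal edge (a simple asymmetry measure)."""
    n = len(A)
    edges = sum(A[i][j] for i in range(n) for j in range(n))
    nonrecip = sum(A[i][j] * (1 - A[j][i]) for i in range(n) for j in range(n))
    return nonrecip / edges

A = ws_digraph_adjacency(6, 1, 0.0)   # p = 0: a pure one-way ring
```

With no rewiring, every edge of the ring is one-way, so this asymmetry index is exactly 1; rewiring changes it only by accident of creating reciprocal pairs.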
Semantic graphs and associative memories
NASA Astrophysics Data System (ADS)
Pomi, Andrés; Mizraji, Eduardo
2004-12-01
Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.
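The central claim above, that with standard-basis concept vectors the correlation-matrix memory is exactly the adjacency matrix of the association graph, is easy to verify numerically; the three-concept association list is a made-up example.

```python
import numpy as np

def memory_from_associations(pairs, n):
    """Correlation-matrix memory: sum of outer products (output ⊗ input).
    With standard-basis concept vectors, M equals the adjacency matrix."""
    M = np.zeros((n, n))
    for i, j in pairs:                       # association: concept i -> concept j
        M += np.outer(np.eye(n)[j], np.eye(n)[i])
    return M

pairs = [(0, 1), (1, 2), (0, 2)]             # a tiny semantic graph
M = memory_from_associations(pairs, 3)
recall = M @ np.eye(3)[0]                    # present concept 0
```

Presenting concept 0 retrieves the superposition of everything associated with it, i.e. column 0 of the adjacency matrix, which is the graph-reading of associative recall the abstract describes.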
Adjusting protein graphs based on graph entropy.
Peng, Sheng-Lung; Tsay, Yu-Wei
2014-01-01
Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling, instead of looking for a minimum superimposition for pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified. Therefore, a criterion is needed to determine whether a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that, when applied, graph entropy helps confirm the conformational suitability of a protein graph model. Furthermore, it indirectly contributes to protein structural comparison if the protein graph is solid.
Adjusting protein graphs based on graph entropy
2014-01-01
Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling, instead of looking for a minimum superimposition for pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified. Therefore, a criterion is needed to determine whether a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that, when applied, graph entropy helps confirm the conformational suitability of a protein graph model. Furthermore, it indirectly contributes to protein structural comparison if the protein graph is solid. PMID:25474347
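Graph entropy admits several definitions; the sketch below uses the Shannon entropy of the normalized degree sequence as one simple stand-in, which is an assumption since the paper's exact entropy measure may differ. Regular graphs maximize it, while highly uneven structures such as stars score lower.

```python
import math

def graph_entropy(adj):
    """Shannon entropy of the normalized degree sequence (one common choice;
    the paper's exact graph-entropy definition may differ)."""
    degrees = [len(adj[v]) for v in adj]
    total = sum(degrees)
    probs = [d / total for d in degrees if d > 0]
    return -sum(p * math.log2(p) for p in probs)

cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # regular: max entropy
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}          # uneven: lower entropy
```

A criterion of this kind, comparing a remodeled protein graph's entropy against a threshold, is the sort of suitability test the abstract calls for before using the graph in structural comparison.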
Overlapping community detection based on link graph using distance dynamics
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Jing; Cai, Li-Jun
2018-01-01
The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
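As an illustrative sketch of the first phase described above — transforming the original graph into a link graph so that each node carries multiple distances — the construction below builds the line graph of an undirected edge list in plain Python. The function name `line_graph` and the dictionary representation are choices made here for illustration, not taken from the L-Attractor implementation.

```python
from itertools import combinations

def line_graph(edges):
    """Build the link (line) graph: each original edge becomes a node,
    and two such nodes are adjacent when the edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in edges}
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):          # edges share a vertex
            adj[e1].add(e2)
            adj[e2].add(e1)
    return adj

# A triangle: its line graph is again a triangle.
lg = line_graph([(1, 2), (2, 3), (1, 3)])
```

The distance dynamics then run on `lg`, and each original node is recovered as the set of link-graph communities its incident edges fall into, which is what yields overlap.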
High-Performance Data Analytics Beyond the Relational and Graph Data Models with GEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Minutoli, Marco; Bhatt, Shreyansh
Graphs represent an increasingly popular data model for data analytics, since they can naturally represent relationships and interactions between entities. Relational databases and their pure table-based data model are not well suited to storing and processing sparse data. Consequently, graph databases have gained interest in the last few years, and the Resource Description Framework (RDF) became the standard data model for graph data. Nevertheless, while RDF is well suited to analyzing the relationships between entities, it is not efficient in representing their attributes and properties. In this work we propose the adoption of a new hybrid data model, based on attributed graphs, that aims at overcoming the limitations of the pure relational and graph data models. We present how we have re-designed the GEMS data-analytics framework to fully take advantage of the proposed hybrid data model. To improve analysts' productivity, in addition to a C++ API for application development, we adopt GraQL as the input query language. We validate our approach by implementing a set of queries on net-flow data, and we compare our framework's performance against Neo4j. Experimental results show significant performance improvement over Neo4j, up to several orders of magnitude when increasing the size of the input data.
A new intrusion prevention model using planning knowledge graph
NASA Astrophysics Data System (ADS)
Cai, Zengyu; Feng, Yuan; Liu, Shuru; Gan, Yong
2013-03-01
Intelligent planning is an important research area in artificial intelligence that has been applied to network security. This paper proposes a new intrusion prevention model based on a planning knowledge graph and discusses the system architecture and characteristics of this model. In this model, intrusion prevention is accomplished through plan recognition over the planning knowledge graph, while intrusion response strategies and actions are generated by a hierarchical task network (HTN) planner. The resulting intrusion prevention system gains the advantages of intelligent planning: knowledge sharing, focused response, learning autonomy, and protective ability.
GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil
2015-11-15
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.
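To make the Gather-Apply-Scatter programming model concrete, here is a minimal single-threaded sketch that computes hop distances from a source vertex; the function name `gas_sssp` and the sweep-until-convergence loop are illustrative simplifications of the model, not GraphReduce's GPU implementation.

```python
INF = float("inf")

def gas_sssp(vertices, edges, source):
    """Toy Gather-Apply-Scatter loop: hop distances from `source`.
    Gather: each vertex collects min(neighbor distance + 1);
    Apply: keep the smaller of the old and gathered values;
    Scatter is implicit, since updated state is reread on the next sweep."""
    dist = {v: INF for v in vertices}
    dist[source] = 0
    nbrs = {v: [] for v in vertices}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            gathered = min((dist[u] + 1 for u in nbrs[v]), default=INF)
            if gathered < dist[v]:     # Apply phase
                dist[v] = gathered
                changed = True
    return dist

d = gas_sssp([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], 0)
```

A real out-of-core engine would stream edge partitions through device memory instead of touching the whole graph each sweep, which is the point of GraphReduce's asynchronous GPU streams.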
QSPR modeling: graph connectivity indices versus line graph connectivity indices
Basak; Nikolic; Trinajstic; Amic; Beslo
2000-07-01
Five QSPR models of alkanes were reinvestigated. Properties considered were molecular surface-dependent properties (boiling points and gas chromatographic retention indices) and molecular volume-dependent properties (molar volumes and molar refractions). The vertex- and edge-connectivity indices were used as structural parameters. In each studied case we computed connectivity indices of alkane trees and alkane line graphs and searched for the optimum exponent. Models based on indices with an optimum exponent and on the standard value of the exponent were compared. Thus, for each property we generated six QSPR models (four for alkane trees and two for the corresponding line graphs). In all studied cases, QSPR models based on connectivity indices with optimum exponents have better statistical characteristics than the models based on connectivity indices with the standard value of the exponent. The comparison between models based on vertex- and edge-connectivity indices gave better models based on edge-connectivity indices in two cases (molar volumes and molar refractions) and better models based on vertex-connectivity indices in three cases (boiling points for octanes and nonanes, and gas chromatographic retention indices). Thus, it appears that the edge-connectivity index is more appropriate for modeling structure-molecular volume properties and the vertex-connectivity index for modeling structure-molecular surface properties. The use of line graphs did not improve the predictive power of the connectivity indices. Only in one case (boiling points of nonanes) was a better model obtained with the use of line graphs.
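The vertex-connectivity (Randić) index used as a structural parameter above has a short closed form: a sum over edges of the degree product raised to an exponent, with -1/2 as the standard value. A minimal sketch (the function name and edge-list representation are choices made here):

```python
def connectivity_index(edges, exponent=-0.5):
    """Vertex-connectivity (Randic) index: sum over edges of
    (deg(u) * deg(v)) ** exponent.  exponent=-0.5 is the standard value;
    QSPR models may instead fit an optimum exponent as described above."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum((deg[u] * deg[v]) ** exponent for u, v in edges)

# Hydrogen-suppressed n-butane is the path 1-2-3-4.
chi = connectivity_index([(1, 2), (2, 3), (3, 4)])  # 2/sqrt(2) + 1/2
```

The edge-connectivity index is the same sum computed on the line graph of the alkane tree, so the two variants compared in the study differ only in which graph is passed in.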
NASA Astrophysics Data System (ADS)
Alberti, Michael; Weber, Roman; Mancini, Marco
2017-10-01
The line-by-line procedure developed in the associated paper (Part A) has been used to generate the total emissivity chart for pure CO and CO-N2/air mixtures at 1 bar total pressure, in the 300 to 3000 K temperature and 0.01 to 3000 bar cm pressure path length range. Methods of scaling the emissivity to pressures different from 1 bar, in the range 0.1 to 40 bar, are provided through pressure correction graphs and an Excel interpolator (Supplementary Material). The interpolated emissivities are within a ±2% margin of the line-by-line calculated values. The newly developed emissivity graphs are substantially more accurate than the existing Ulrich (1936) & Hottel (1954) and Abu-Romia & Tien (1966) charts.
Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dam, Wim van; Howard, Mark; Department of Physics, University of California, Santa Barbara, California 93106
2011-07-15
We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiolkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.
Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs
NASA Astrophysics Data System (ADS)
van Dam, Wim; Howard, Mark
2011-07-01
We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiołkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.
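The maximum-clique search that both versions of this abstract reduce to can be sketched with the classic Bron-Kerbosch enumeration; the code below is a generic illustration on a small graph, not the paper's Cayley-graph construction, and the names `max_cliques`/`bk` are ad hoc.

```python
def max_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (no pivoting),
    returning only those of maximum size.  In the paper's mapping, the
    maximum cliques of certain Cayley graphs correspond to BES MUBs."""
    cliques = []
    def bk(r, p, x):
        if not p and not x:
            cliques.append(r)
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    bk(set(), set(adj), set())
    best = max(len(c) for c in cliques)
    return [c for c in cliques if len(c) == best]

# A 4-cycle with chord 0-2: the maximum cliques are the two triangles.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
tops = max_cliques(adj)
```

Maximum clique is NP-hard in general; the algebraic structure of Cayley graphs is what makes the search tractable in the paper's setting.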
Interpreting Unfamiliar Graphs: A Generative, Activity Theoretic Model
ERIC Educational Resources Information Center
Roth, Wolff-Michael; Lee, Yew Jin
2004-01-01
Research on graphing presents its results as if knowing and understanding were something stored in people's minds independent of the situation that they find themselves in. Thus, there are no models that situate interview responses to graphing tasks. How, then, we question, are the interview texts produced? How do respondents begin and end…
Advanced Cyber Attack Modeling Analysis and Visualization
2010-03-01
[Figure 8: TVA attack graphs — remainder of this record is extraction residue of report form fields and reference-list fragments.]
Survey of Approaches to Generate Realistic Synthetic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Powers, Sarah S
A graph is a flexible data structure that can represent relationships between entities. As with other data analysis tasks, the use of realistic graphs is critical to obtaining valid research results. Unfortunately, using actual ("real-world") graphs for research and new algorithm development is difficult due to the presence of sensitive information in the data or due to the scale of the data. As a result, practitioners develop algorithms and systems that employ synthetic graphs instead of real-world graphs. Generating realistic synthetic graphs that provide reliable statistical confidence for algorithmic analysis and system evaluation involves addressing technical hurdles in a broad set of areas. This report surveys the state of the art in approaches to generating realistic graphs derived from graph models fitted to real-world graphs.
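The simplest instance of the fit-then-sample workflow surveyed above is an Erdős-Rényi model whose edge probability matches the observed density; the sketch below is illustrative (function names are ad hoc), and real generators fit much richer models (degree sequences, community structure, etc.).

```python
import random

def fit_er_model(n, edges):
    """Fit the simplest graph model: an Erdos-Renyi G(n, p) whose edge
    probability p equals the observed density of the real graph."""
    return (2 * len(edges)) / (n * (n - 1))

def sample_er(n, p, seed=0):
    """Draw one synthetic graph (edge list) from the fitted model."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

# A 5-cycle has density 0.5; sample a larger synthetic graph at that density.
p = fit_er_model(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
synthetic = sample_er(100, p, seed=42)
```

The synthetic graph preserves density but not, e.g., the heavy-tailed degree distributions of real networks, which is exactly why the survey covers more expressive generators.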
Bakal, Gokhan; Talari, Preetham; Kakani, Elijah V; Kavuluru, Ramakanth
2018-06-01
Identifying new potential treatment options for medical conditions that cause human disease burden is a central task of biomedical research. Since all candidate drugs cannot be tested with animal and clinical trials, in vitro approaches are first attempted to identify promising candidates. Likewise, identifying different causal relations between biomedical entities is also critical to understanding biomedical processes. Generally, natural language processing (NLP) and machine learning are used to predict specific relations between any given pair of entities using the distant supervision approach. Our objective is to build high-accuracy supervised predictive models to predict previously unknown treatment and causative relations between biomedical entities based only on semantic graph pattern features extracted from biomedical knowledge graphs. We used 7000 treats and 2918 causes hand-curated relations from the UMLS Metathesaurus to train and test our models. Our graph pattern features are extracted from simple paths connecting biomedical entities in the SemMedDB graph (based on the well-known SemMedDB database made available by the U.S. National Library of Medicine). Using these graph patterns connecting biomedical entities as features of logistic regression and decision tree models, we computed mean performance measures (precision, recall, F-score) over 100 distinct 80-20% train-test splits of the datasets. For all experiments, we used a positive:negative class imbalance of 1:10 in the test set to model relatively more realistic scenarios. Our models predict treats and causes relations with high F-scores of 99% and 90%, respectively. Logistic regression model coefficients also help us identify highly discriminative patterns that have an intuitive interpretation. Through collaborations with two physician co-authors, we are also able to predict some new plausible relations among the false positives that our models scored highly.
Finally, our decision tree models are able to retrieve over 50% of treatment relations from a recently created external dataset. We employed semantic graph patterns connecting pairs of candidate biomedical entities in a knowledge graph as features to predict treatment/causative relations between them. We provide what we believe is the first evidence in direct prediction of biomedical relations based on graph features. Our work complements lexical pattern based approaches in that the graph patterns can be used as additional features for weakly supervised relation prediction.
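The "simple paths as features" idea can be sketched as follows: enumerate predicate-label sequences along simple paths between a candidate entity pair, and treat each distinct sequence as one binary feature. The mini knowledge graph, its labels, and the function name are hypothetical stand-ins, not SemMedDB content.

```python
def path_patterns(graph, src, dst, max_len=3):
    """Collect predicate-label sequences along simple paths from src to
    dst; each distinct sequence is one binary feature for the relation
    classifier (logistic regression / decision tree in the study)."""
    patterns = set()
    def dfs(node, labels, visited):
        if node == dst and labels:
            patterns.add("-".join(labels))
            return
        if len(labels) == max_len:
            return
        for pred, nxt in graph.get(node, []):
            if nxt not in visited:
                dfs(nxt, labels + [pred], visited | {nxt})
    dfs(src, [], {src})
    return patterns

# Hypothetical mini knowledge graph in the style of SemMedDB triples.
kg = {
    "aspirin": [("INHIBITS", "cox2"), ("TREATS", "pain")],
    "cox2": [("CAUSES", "inflammation")],
    "inflammation": [("CAUSES", "pain")],
}
feats = path_patterns(kg, "aspirin", "pain")
```

A feature vector for the pair (aspirin, pain) would then be the indicator of each pattern drawn from the training vocabulary.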
NASA Astrophysics Data System (ADS)
Loucif, Hemza; Boubetra, Abdelhak; Akrouf, Samir
2016-10-01
This paper aims to describe a new simplistic model dedicated to gauge the online influence of Twitter users based on a mixture of structural and interactional features. The model is an additive mathematical formulation which involves two main parts. The first part serves to measure the influence of the Twitter user on just his neighbourhood covering his followers. However, the second part evaluates the potential influence of the Twitter user beyond the circle of his followers. Particularly, it measures the likelihood that the tweets of the Twitter user will spread further within the social graph through the retweeting process. The model is tested on a data set involving four kinds of real-world egocentric networks. The empirical results reveal that an active ordinary user is more prominent than a non-active celebrity one. A simple comparison is conducted between the proposed model and two existing simplistic approaches. The results show that our model generates the most realistic influence scores due to its dealing with both explicit (structural and interactional) and implicit features.
GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen
2015-09-30
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
Resource utilization model for the algorithm to architecture mapping model
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Patel, Rakesh R.
1993-01-01
The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.
Safaei, Soroush; Blanco, Pablo J; Müller, Lucas O; Hellevik, Leif R; Hunter, Peter J
2018-01-01
We propose a detailed CellML model of the human cerebral circulation that runs faster than real time on a desktop computer and is designed for use in clinical settings when the speed of response is important. A lumped parameter mathematical model, which is based on a one-dimensional formulation of the flow of an incompressible fluid in distensible vessels, is constructed using a bond graph formulation to ensure mass conservation and energy conservation. The model includes arterial vessels with geometric and anatomical data based on the ADAN circulation model. The peripheral beds are represented by lumped parameter compartments. We compare the hemodynamics predicted by the bond graph formulation of the cerebral circulation with that given by a classical one-dimensional Navier-Stokes model working on top of the whole-body ADAN model. Outputs from the bond graph model, including the pressure and flow signatures and blood volumes, are compared with physiological data.
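As a toy illustration of the lumped-parameter idea above — not the actual CellML/bond-graph model — a single peripheral bed can be reduced to a two-element Windkessel compartment, C dP/dt = Q_in - P/R, integrated with forward Euler. All parameter values and names here are arbitrary assumptions for the sketch.

```python
def windkessel(q_in, R=1.0, C=1.5, p0=80.0, dt=0.01):
    """Forward-Euler integration of a two-element Windkessel compartment:
    C * dP/dt = Q_in(t) - P/R.  A toy stand-in for one lumped peripheral
    bed; bond-graph tooling ensures such compartments conserve mass/energy
    when coupled."""
    p = p0
    trace = [p]
    for q in q_in:
        p += dt * (q - p / R) / C
        trace.append(p)
    return trace

# With zero inflow, pressure decays exponentially toward zero.
decay = windkessel([0.0] * 1000)
```

The full model chains many such compartments along arterial segments with geometry from the ADAN model, which is what the bond-graph formulation keeps consistent.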
Bond graph modelling of multibody dynamics and its symbolic scheme
NASA Astrophysics Data System (ADS)
Kawase, Takehiko; Yoshimura, Hiroaki
A bond graph method of modeling multibody dynamics is demonstrated. Specifically, a symbolic generation scheme which fully utilizes the bond graph information is presented. It is also demonstrated that structural understanding and representation in bond graph theory is quite powerful for the modeling of such large scale systems, and that the nonenergic multiport of junction structure, which is a multiport expression of the system structure, plays an important role, as first suggested by Paynter. The principal part of the proposed symbolic scheme, that is, the elimination of excess variables, is done through tearing and interconnection in the sense of Kron using newly defined causal and causal coefficient arrays.
Groupies in multitype random graphs.
Shang, Yilun
2016-01-01
A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.
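The groupie definition above translates directly into code; this small sketch (function name ad hoc, with isolated vertices counted as groupies by convention here) checks it on a star graph, where only the hub qualifies.

```python
def groupie_fraction(adj):
    """Fraction of vertices whose degree is at least the average degree
    of their neighbors (isolated vertices counted as groupies here)."""
    count = 0
    for v, nbrs in adj.items():
        if not nbrs:
            count += 1
            continue
        avg = sum(len(adj[u]) for u in nbrs) / len(nbrs)
        if len(nbrs) >= avg:
            count += 1
    return count / len(adj)

# Star K_{1,3}: only the hub (degree 3 vs. neighbor average 1) is a
# groupie, so the fraction is 1/4 -- far from the 1/2 that the theorem
# gives for large multitype random graphs.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
frac = groupie_fraction(star)
```

Running the same function on large Erdős-Rényi samples is how one would reproduce the paper's numerical illustration of the 1/2 limit.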
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. 
Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
Centrifuge Rotor Models: A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach
NASA Technical Reports Server (NTRS)
Granda, Jose J.; Ramakrishnan, Jayant; Nguyen, Louis H.
2006-01-01
A viewgraph presentation on centrifuge rotor models with a comparison using Euler-Lagrange and bond graph methods is shown. The topics include: 1) Objectives; 2) Modeling Approach Comparisons; 3) Model Structures; and 4) Application.
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring the customer to make any changes to their analytics code, thanks to compatibility with existing graph APIs.
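The core GraphBLAS idiom — graph traversal as matrix-vector products over a Boolean semiring — can be sketched in pure Python (a real implementation would use sparse many-core kernels; the function name and dense list-of-lists matrix are simplifications made here).

```python
def bfs_levels(adj_matrix, source):
    """BFS expressed as repeated Boolean matrix-vector products: each
    step computes y = A^T x over the (OR, AND) semiring, masked by the
    set of already-visited vertices -- the core GraphBLAS idiom."""
    n = len(adj_matrix)
    level = [-1] * n
    frontier = [False] * n
    frontier[source] = True
    level[source] = 0
    step = 0
    while any(frontier):
        step += 1
        nxt = [any(adj_matrix[u][v] and frontier[u] for u in range(n))
               and level[v] == -1 for v in range(n)]
        for v in range(n):
            if nxt[v]:
                level[v] = step
        frontier = nxt
    return level

# Path 0-1-2 plus an isolated vertex 3 (level stays -1).
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
levels = bfs_levels(A, 0)
```

A vertex-centric API layered on top, as the abstract proposes, would expose the same loop as per-vertex "compute" functions while lowering to these semiring products underneath.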
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagberg, Aric; Swart, Pieter; S Chult, Daniel
NetworkX is a Python language package for exploration and analysis of networks and network algorithms. The core package provides data structures for representing many types of networks, or graphs, including simple graphs, directed graphs, and graphs with parallel edges and self loops. The nodes in NetworkX graphs can be any (hashable) Python object and edges can contain arbitrary data; this flexibility makes NetworkX ideal for representing networks found in many different scientific fields. In addition to the basic data structures, many graph algorithms are implemented for calculating network properties and structure measures: shortest paths, betweenness centrality, clustering, degree distribution, and many more. NetworkX can read and write various graph formats for easy exchange with existing data, and generators for many classic graphs and popular graph models, such as the Erdős-Rényi, small-world, and Barabási-Albert models, are included. The ease of use and flexibility of the Python programming language, together with the connection to the SciPy tools, make NetworkX a powerful tool for scientific computations. We discuss some of our recent work studying synchronization of coupled oscillators to demonstrate how NetworkX enables research in the field of computational networks.
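A minimal usage sketch of the capabilities listed above (assuming the `networkx` package is installed; the toy graph and node labels are arbitrary):

```python
import networkx as nx

# Build a small graph and compute the properties named above.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
path = nx.shortest_path(G, "a", "d")        # unique shortest route a-c-d
cc = nx.clustering(G, "c")                  # fraction of closed triangles at c
er = nx.erdos_renyi_graph(20, 0.2, seed=1)  # one of the classic generators
```

Because nodes are arbitrary hashable objects, the same calls work unchanged whether nodes are strings, tuples, or domain objects.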
Corona graphs as a model of small-world networks
NASA Astrophysics Data System (ADS)
Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi
2015-11-01
We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as Kirchhoff index. Furthermore, we study the spectra for the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all the quantities of the recursive corona graphs, which are similar to those observed in real-life networks.
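The basic corona product underlying the recursive construction can be sketched in a few lines; the representation below (copies of H tagged by their anchor vertex) is a choice made for illustration, and the recursive model iterates this product starting from a seed graph.

```python
def corona(g_edges, g_nodes, h_edges, h_nodes):
    """Corona product G o H: take one copy of H per vertex of G and join
    that vertex to every vertex of its copy.  Recursive corona graphs
    iterate this construction, producing small-world characteristics."""
    edges = list(g_edges)
    nodes = list(g_nodes)
    for i, v in enumerate(g_nodes):
        copy = {u: (v, i, u) for u in h_nodes}      # tagged copy of H
        nodes += list(copy.values())
        edges += [(copy[a], copy[b]) for a, b in h_edges]
        edges += [(v, copy[u]) for u in h_nodes]    # join v to its copy
    return nodes, edges

# K3 o K1: attach one pendant vertex to each triangle vertex.
nodes, edges = corona([(0, 1), (1, 2), (0, 2)], [0, 1, 2], [], ["x"])
```

Each iteration multiplies the vertex count by (1 + |H|), which is why order and size admit the closed-form expressions the paper derives.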
Graph modeling systems and methods
Neergaard, Mike
2015-10-13
An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
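One simple reading of "critical point of failure" is an articulation point: a vertex whose removal disconnects the rest of the network. The brute-force sketch below is illustrative of that check only (function names ad hoc), not the patented method.

```python
def critical_vertices(adj):
    """Brute-force critical-point check: a vertex is critical if removing
    it disconnects the remaining graph (an articulation point)."""
    def connected(nodes, skip):
        nodes = [v for v in nodes if v != skip]
        if not nodes:
            return True
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:                      # DFS ignoring the removed vertex
            for u in adj[stack.pop()]:
                if u != skip and u not in seen:
                    seen.add(u)
                    stack.append(u)
        return len(seen) == len(nodes)
    return {v for v in adj if not connected(list(adj), v)}

# Two triangles sharing vertex 2: only vertex 2 is a critical point.
net = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
crit = critical_vertices(net)
```

The patent's non-terminating vertices extend the same evaluation to failures along a path (e.g., a cable segment), not just at nodes.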
Measuring Graph Comprehension, Critique, and Construction in Science
ERIC Educational Resources Information Center
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-01-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed…
Distributed Sensing and Processing: A Graphical Model Approach
2005-11-30
Results show that Ramanujan graph topologies maximize the convergence rate of distributed detection consensus algorithms, improving by over three orders of magnitude on small-world type network designs. Subject terms: Ramanujan graphs, sensor network topology. Ramanujan graphs, for which there are explicit algebraic constructions, have large eigenratios, converging much faster than structured graphs.
A componential model of human interaction with graphs: 1. Linear regression modeling
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert
1994-01-01
Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications of the MA-P model, alternative models, and design implications from the MA-P model.
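The one-parameter version of the model reduces to fitting a single weight in RT = w x (number of processing steps), a least-squares slope through the origin. The data values below are hypothetical, invented only to show the computation.

```python
def fit_one_parameter(steps, times):
    """Least-squares slope through the origin for the one-parameter MA-P
    model: response time = w * (number of processing steps)."""
    w = sum(s * t for s, t in zip(steps, times)) / sum(s * s for s in steps)
    predictions = [w * s for s in steps]
    return w, predictions

# Hypothetical data: step counts from a task analysis, times in ms.
w, pred = fit_one_parameter([2, 3, 5, 6], [400, 610, 990, 1210])
```

The two-parameter version simply splits the step count into arithmetic and nonarithmetic components and fits a weight for each, an ordinary two-column regression.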
Learning a Health Knowledge Graph from Electronic Medical Records.
Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David
2017-07-20
Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
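The noisy-OR gate that produced the best knowledge graph above has a one-line closed form: a symptom is absent only if every active parent disease independently fails to cause it (and the leak does not fire). A minimal sketch, with illustrative parameter values:

```python
def noisy_or(parent_probs, leak=0.01):
    """Noisy-OR gate: P(symptom present | active parent diseases), where
    each active disease independently causes the symptom with its own
    probability and `leak` models causes outside the graph."""
    q = 1.0 - leak
    for p in parent_probs:
        q *= 1.0 - p
    return 1.0 - q

# Two active diseases, each causing the symptom with probability 0.5.
p_sym = noisy_or([0.5, 0.5], leak=0.0)  # 1 - 0.5 * 0.5 = 0.75
```

This factorization keeps the number of learned parameters linear in the number of disease-symptom edges, which is what makes maximum likelihood estimation tractable on 273,174 records.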
Collaborative mining of graph patterns from multiple sources
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Colonna-Romano, John
2016-05-01
Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in the intelligence community, because text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and the availability of accurate data segmentation. In this paper we present a model to learn graph patterns from multiple relational data sources, when each source might have only a fragment (or subgraph) of the knowledge that needs to be discovered, and segmentation of data into training or testing instances is not available. Our model is based on distributed collaborative graph learning, and is effective in situations when the data is kept locally and cannot be moved to a centralized location. Our experiments show that the proposed collaborative learning achieves learning quality better than aggregated centralized graph learning, and has learning time comparable to traditional distributed learning, in which knowledge of data segmentation is needed.
A distributed query execution engine of big attributed graphs.
Batarfi, Omar; Elshawi, Radwa; Fayoumi, Ayman; Barnawi, Ahmed; Sakr, Sherif
2016-01-01
A graph is a popular data model that has become pervasively used for modeling structural relationships between objects. In practice, in many real-world graphs, the graph vertices and edges need to be associated with descriptive attributes. Such graphs are referred to as attributed graphs. G-SPARQL has been proposed as an expressive language, with a centralized execution engine, for querying attributed graphs. G-SPARQL supports various types of graph querying operations, including reachability, pattern matching, and shortest path, where any G-SPARQL query may include value-based predicates on the descriptive information (attributes) of the graph edges/vertices in addition to the structural predicates. In general, a main limitation of centralized systems is that their vertical scalability is always restricted by the physical limits of computer systems. This article describes the design, implementation, and performance evaluation of DG-SPARQL, a distributed, hybrid, and adaptive parallel execution engine for G-SPARQL queries. In this engine, the topology of the graph is distributed over the main memory of the underlying nodes, while the graph data are maintained in a relational store which is replicated on the disk of each of the underlying nodes. DG-SPARQL evaluates parts of the query plan via SQL queries which are pushed to the underlying relational stores, while other parts of the query plan, as necessary, are evaluated via indexless memory-based graph traversal algorithms. Our experimental evaluation shows the efficiency and scalability of DG-SPARQL on querying massive attributed graph datasets, in addition to its ability to outperform Apache Giraph, a popular distributed graph processing system, by orders of magnitude.
Retina verification system based on biometric graph matching.
Lajevardi, Seyed Mehdi; Arakala, Arathi; Davis, Stephen A; Horadam, Kathy J
2013-09-01
This paper presents an automatic retina verification framework based on the biometric graph matching (BGM) algorithm. The retinal vasculature is extracted using a family of matched filters in the frequency domain and morphological operators. Then, retinal templates are defined as formal spatial graphs derived from the retinal vasculature. The BGM algorithm, a noisy graph matching algorithm robust to translation, non-linear distortion, and small rotations, is used to compare retinal templates. The BGM algorithm uses graph topology to define three distance measures between a pair of graphs, two of which are new. A support vector machine (SVM) classifier is used to distinguish between genuine and imposter comparisons. Using single as well as multiple graph measures, the classifier achieves complete separation on a training set of images from the VARIA database (60% of the data), equaling the state of the art for retina verification. Because the available data set is small, kernel density estimation (KDE) of the genuine and imposter score distributions of the training set is used to measure the performance of the BGM algorithm. In the one-dimensional case, the KDE model is validated with the testing set. An EER of 0 on testing shows that the KDE model is a good fit for the empirical distribution. For the multiple graph measures, a novel combination of the SVM boundary and the KDE model is used to obtain a fair comparison with the KDE model for the single measure. A clear benefit in using multiple graph measures over a single measure to distinguish genuine and imposter comparisons is demonstrated by a drop in theoretical error of between 60% and more than two orders of magnitude.
A model of language inflection graphs
NASA Astrophysics Data System (ADS)
Fukś, Henryk; Farzad, Babak; Cao, Yi
2014-01-01
Inflection graphs are highly complex networks representing relationships between inflectional forms of words in human languages. For so-called synthetic languages, such as Latin or Polish, they have a particularly interesting structure due to the abundance of inflectional forms. We construct the simplest form of inflection graph, namely a bipartite graph in which one group of vertices corresponds to dictionary headwords and the other group to inflected forms encountered in a given text. We then study the projection of this graph onto the set of headwords. The projection decomposes into a large number of connected components, called word groups. The distribution of word-group sizes exhibits some remarkable properties, resembling the cluster distribution in lattice percolation near the critical point. We propose a simple model which produces graphs of this type, reproducing the desired component distribution and other topological features.
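The headword projection and its word groups can be sketched directly: two headwords are linked when they share an inflected form, and word groups are the connected components of that projection. The tiny Latin-like data set below is an invented illustration, not the paper's corpus:

```python
from collections import defaultdict

# toy bipartite inflection graph: headword -> inflected forms (hypothetical data)
edges = {
    "amo":  ["amat", "amant"],
    "amor": ["amant"],          # a shared form links the two headwords
    "rex":  ["regis"],
}

# project onto headwords: link headwords that share at least one form
form_to_heads = defaultdict(set)
for head, forms in edges.items():
    for f in forms:
        form_to_heads[f].add(head)

adj = defaultdict(set)
for heads in form_to_heads.values():
    for a in heads:
        for b in heads:
            if a != b:
                adj[a].add(b)

def components(nodes, adj):
    """Connected components of the projection, i.e. the word groups."""
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

groups = components(edges.keys(), adj)
print(sorted(len(c) for c in groups))  # -> [1, 2]
```

The component-size distribution of such projections is the quantity the paper compares to percolation cluster statistics.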
Model-based morphological segmentation and labeling of coronary angiograms.
Haris, K; Efstratiadis, S N; Maglaveras, N; Pappas, C; Gourassas, J; Louridas, G
1999-10-01
A method for extraction and labeling of the coronary arterial tree (CAT) using minimal user supervision in single-view angiograms is proposed. The CAT structural description (skeleton and borders) is produced, along with quantitative information for the artery dimensions and assignment of coded labels, based on a given coronary artery model represented by a graph. The stages of the method are: 1) CAT tracking and detection; 2) artery skeleton and border estimation; 3) feature graph creation; and 4) artery labeling by graph matching. The approximate CAT centerline and borders are extracted by recursive tracking based on circular template analysis. The accurate skeleton and borders of each CAT segment are computed, based on morphological homotopy modification and watershed transform. The approximate centerline and borders are used for constructing the artery segment enclosing area (ASEA), where the defined skeleton and border curves are considered as markers. Using the marked ASEA, an artery gradient image is constructed where all the ASEA pixels (except the skeleton ones) are assigned the gradient magnitude of the original image. The artery gradient image markers are imposed as its unique regional minima by the homotopy modification method, the watershed transform is used for extracting the artery segment borders, and the feature graph is updated. Finally, given the created feature graph and the known model graph, a graph matching algorithm assigns the appropriate labels to the extracted CAT using weighted maximal cliques on the association graph corresponding to the two given graphs. Experimental results using clinical digitized coronary angiograms are presented.
NASA Astrophysics Data System (ADS)
Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting
2017-04-01
Synthetic Aperture Radar (SAR) is important for polar remote sensing since it can provide continuous observations day and night and in all weather conditions. SAR can be used for extracting surface roughness information characterized by the variance of dielectric properties and different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd cruise of the Chinese National Antarctic Research Expedition (CHINARE) set sail into the Antarctic sea-ice zone. An accurate spatial distribution of leads in the sea-ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea-ice categories is described by a Conditional Random Field (CRF) model, and leads characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering the contextual information and the statistical characteristics of sea ice to improve leads detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probability estimated from the statistical distributions. For mixture-distribution parameter estimation, the Method of Logarithmic Cumulants (MoLC) is used to estimate the parameters of each single statistical distribution, and an iterative Expectation-Maximization (EM) algorithm is used to calculate the parameters of the mixture-distribution-based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial leads detection. Post-processing procedures, including an aspect-ratio constraint and spatial smoothing, are used to improve the visual result.
The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a pixel spacing of 40 meters near the Prydz Bay area, East Antarctica. The main contributions are as follows: 1) a mixture-statistical-distribution-based CRF algorithm has been developed for leads detection from Sentinel-1A dual-polarization images; 2) an assessment of the proposed mixture-based CRF method against the single-distribution-based CRF algorithm is presented; 3) preferred parameter sets, including the statistical distributions, the aspect-ratio threshold, and the spatial smoothing window size, are provided. In the future, the proposed algorithm will be developed for operational processing of the Sentinel series data sets, owing to its low computational cost and high accuracy in leads detection.
Temporal Representation in Semantic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levandoski, J J; Abdulla, G M
2007-08-07
A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning that the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work on temporal representations in semantic graphs.
Efficient and Scalable Graph Similarity Joins in MapReduce
Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135
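The filtering-verification framework underlying MGSJoin can be sketched in miniature. This is not the paper's signature scheme: for brevity each "graph" is reduced to its node-label multiset, a crude count-based lower bound stands in for the overlapping-signature filter, and the exact distance function is supplied by the caller:

```python
from collections import Counter

def node_label_lb(labels1, labels2):
    """Cheap lower bound on graph edit distance from node labels alone:
    every unmatched label needs at least one relabel, insert, or delete."""
    c1, c2 = Counter(labels1), Counter(labels2)
    matched = sum((c1 & c2).values())
    return max(sum(c1.values()), sum(c2.values())) - matched

def similarity_join(graphs, tau, exact_ged):
    """Filtering-verification join: the cheap bound prunes pairs, and the
    expensive exact_ged runs only on the surviving candidates."""
    items = sorted(graphs.items())
    out = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (n1, g1), (n2, g2) = items[i], items[j]
            if node_label_lb(g1, g2) > tau:   # filtering phase
                continue
            if exact_ged(g1, g2) <= tau:      # verification phase
                out.append((n1, n2))
    return out

# toy data: with label multisets only, the lower bound is also exact here
graphs = {"g1": ["C", "C", "O"], "g2": ["C", "O", "N"], "g3": ["S", "S", "S"]}
print(similarity_join(graphs, tau=1, exact_ged=node_label_lb))  # -> [('g1', 'g2')]
```

In the MapReduce setting described above, the pairwise loops become map and reduce phases keyed on signatures, which is where the key-value-pair explosion the paper mitigates arises.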
Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.
Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard
2014-01-01
To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate, since graphs representing binding sites at a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed coarse graph model is used, building upon so-called pseudocenters. As a consequence, structural information is lost, since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information. These steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model solely based on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach, but leads to graphs containing considerably more information assigned to the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which leads to a gain of information and allows for much faster but still very accurate comparisons between different structures.
Pogliani, Lionello
2010-01-30
Twelve properties of a highly heterogeneous class of organic solvents have been modeled with a modified graph-theoretical molecular connectivity (MC) method, which allows the core electrons and the hydrogen atoms to be encoded. The graph-theoretical method uses the concepts of simple, general, and complete graphs, where the last type of graph is used to encode the core electrons. The hydrogen atoms have been encoded with the aid of a graph-theoretical perturbation parameter, which contributes to the definition of the valence delta, delta(v), a key parameter in molecular connectivity studies. The model of the twelve properties, obtained with a stepwise search algorithm, is always satisfactory, and it allows checking the influence of the hydrogen content of the solvent molecules on the choice of the type of descriptor. A similar argument holds for the influence of the halogen atoms on the type of core-electron representation. In some cases the molar mass and, to a lesser extent, special "ad hoc" parameters have been used to improve the model. A very good model of the surface tension could be obtained with the aid of five experimental parameters. A mixed model method based on experimental parameters plus molecular connectivity indices, instead, consistently improved the model quality of five properties. Of note is the importance of the boiling point temperatures as descriptors in these last two model methodologies. Copyright 2009 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Nguyen, Louis H.; Ramakrishnan, Jayant; Granda, Jose J.
2006-01-01
The assembly and operation of the International Space Station (ISS) require extensive testing and engineering analysis to verify that the Space Station system of systems would work together without any adverse interactions. Since the dynamic behavior of an entire Space Station cannot be tested on earth, math models of the Space Station structures and mechanical systems have to be built and integrated in computer simulations and analysis tools to analyze and predict what will happen in space. The ISS Centrifuge Rotor (CR) is one of many mechanical systems that need to be modeled and analyzed to verify the ISS integrated system performance on-orbit. This study investigates using Bond Graph modeling techniques as quick and simplified ways to generate models of the ISS Centrifuge Rotor. This paper outlines the steps used to generate simple and more complex models of the CR using Bond Graph Computer Aided Modeling Program with Graphical Input (CAMP-G). Comparisons of the Bond Graph CR models with those derived from Euler-Lagrange equations in MATLAB and those developed using multibody dynamic simulation at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are presented to demonstrate the usefulness of the Bond Graph modeling approach for aeronautics and space applications.
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
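The three components listed above can be made concrete with a minimal two-variable example: local factors are multiplied into a joint distribution, and conditioning on an observation yields a posterior by Bayes' rule. The rain/wet-ground variables and probability tables are invented for illustration, not drawn from the presentation:

```python
# toy probabilistic graph with two binary variables: rain -> wet ground
# factors are local probability tables; their product is the joint distribution
p_rain = {0: 0.8, 1: 0.2}
p_wet_given_rain = {(0, 0): 0.9, (0, 1): 0.1,   # keys are (rain, wet)
                    (1, 0): 0.2, (1, 1): 0.8}

def joint(rain, wet):
    """Product of the local factors = joint P(rain, wet)."""
    return p_rain[rain] * p_wet_given_rain[(rain, wet)]

# assimilate the observation wet = 1 and compute the posterior P(rain | wet = 1)
evidence = {r: joint(r, 1) for r in (0, 1)}
z = sum(evidence.values())                      # normalization constant
posterior = {r: v / z for r, v in evidence.items()}
print(round(posterior[1], 3))  # -> 0.667
```

In a hydrological model the variables would be parameters and states and the factors would encode process equations and observation likelihoods; the graph structure is what lets general-purpose algorithms distribute this computation.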
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeure, I.M.
The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels: the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model). DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. The author illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. The author then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
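The Kesten-McKay law invoked above has a simple closed form for random d-regular graphs. A minimal sketch, which also checks the density's normalization numerically (the degree d = 3 and the discretization are arbitrary illustrative choices):

```python
import math

def kesten_mckay_density(x, d):
    """Kesten-McKay spectral density of a random d-regular graph,
    supported on |x| <= 2*sqrt(d - 1)."""
    if abs(x) >= 2 * math.sqrt(d - 1):
        return 0.0
    return d * math.sqrt(4 * (d - 1) - x * x) / (2 * math.pi * (d * d - x * x))

# the density should integrate to 1 over its support; midpoint-rule check
d, n = 3, 200000
a = 2 * math.sqrt(d - 1)
h = 2 * a / n
total = sum(kesten_mckay_density(-a + (k + 0.5) * h, d) * h for k in range(n))
print(round(total, 3))  # -> 1.0
```

The paper's analytical result is that this same law continues to describe modular and bipartite equitable graphs when the blocks are homogeneous.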
Two classes of bipartite networks: nested biological and social systems.
Burgos, Enrique; Ceva, Horacio; Hernández, Laura; Perazzo, R P J; Devoto, Mariano; Medan, Diego
2008-10-01
Bipartite graphs have received some attention in the study of social networks and of biological mutualistic systems. A generalization of a previous model is presented, that evolves the topology of the graph in order to optimally account for a given contact preference rule between the two guilds of the network. As a result, social and biological graphs are classified as belonging to two clearly different classes. Projected graphs, linking the agents of only one guild, are obtained from the original bipartite graph. The corresponding evolution of its statistical properties is also studied. An example of a biological mutualistic network is analyzed in detail, and it is found that the model provides a very good fitting of all the main statistical features. The model also provides a proper qualitative description of the same features observed in social webs, suggesting the possible reasons underlying the difference in the organization of these two kinds of bipartite networks.
Graph-Theoretic Analysis of Monomethyl Phosphate Clustering in Ionic Solutions.
Han, Kyungreem; Venable, Richard M; Bryant, Anne-Marie; Legacy, Christopher J; Shen, Rong; Li, Hui; Roux, Benoît; Gericke, Arne; Pastor, Richard W
2018-02-01
All-atom molecular dynamics simulations combined with graph-theoretic analysis reveal that clustering of monomethyl phosphate dianion (MMP²⁻) is strongly influenced by the types and combinations of cations in the aqueous solution. Although Ca²⁺ promotes the formation of stable and large MMP²⁻ clusters, K⁺ alone does not. Nonetheless, clusters are larger and their link lifetimes are longer in mixtures of K⁺ and Ca²⁺. This "synergistic" effect depends sensitively on the Lennard-Jones interaction parameters between Ca²⁺ and the phosphorus oxygen and correlates with the hydration of the clusters. The pronounced MMP²⁻ clustering effect of Ca²⁺ in the presence of K⁺ is confirmed by Fourier transform infrared spectroscopy. The characterization of the cation-dependent clustering of MMP²⁻ provides a starting point for understanding cation-dependent clustering of phosphoinositides in cell membranes.
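The graph-theoretic step in such analyses typically links two ions when they fall within a distance cutoff and reports cluster sizes as connected components. A minimal union-find sketch; the coordinates and cutoff below are hypothetical, not simulation data:

```python
import math

def cluster_by_cutoff(coords, cutoff):
    """Group particles into clusters: two particles are linked when their
    distance is <= cutoff; clusters are connected components of that graph."""
    n = len(coords)
    parent = list(range(n))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values())

# hypothetical coordinates (nm): three nearby particles and one far away
print(cluster_by_cutoff([(0, 0, 0), (0.3, 0, 0), (0, 0.3, 0), (5, 5, 5)], 0.5))  # -> [1, 3]
```

Tracking how these component sizes and link lifetimes change with the cation mixture is the kind of statistic the study correlates with its spectroscopy.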
A Wave Chaotic Study of Quantum Graphs with Microwave Networks
NASA Astrophysics Data System (ADS)
Fu, Ziyuan
Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave-chaotic properties. I study quantum graphs with a wave-chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed, in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped due to multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I present further investigations of the Random Coupling Model that are useful for future research.
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldberg, Alan T.; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial-time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
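The third result relies on graph reachability, which is a breadth-first search over a directed state graph. A generic sketch; the state names and edges below are a hypothetical scheduling-state graph, not the paper's formulation:

```python
from collections import deque

def reachable(adj, start):
    """All states reachable from `start` by BFS over a directed graph
    given as an adjacency dict."""
    seen = {start}
    q = deque([start])
    while q:
        v = q.popleft()
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                q.append(w)
    return seen

# hypothetical state graph: nodes could encode (aircraft, arrival-slot) decisions
adj = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"], "s4": []}
print(sorted(reachable(adj, "s0")))  # -> ['s0', 's1', 's2', 's3']
```

Reachability runs in time linear in the number of states and transitions, which is what makes this formulation tractable where the other two problems are NP-hard.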
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Granda, Jose J.
2003-01-01
Conceptually, modeling of flexible, multi-body systems involves a formulation as a set of time-dependent partial differential equations. However, for practical engineering purposes, this modeling is usually done using the method of Finite Elements, which approximates the set of partial differential equations, thus generalizing the approach to all continuous media. This research investigates the links between the Bond Graph method and the classical methods used to develop system models, and advocates the Bond Graph methodology and current bond graph tools as alternate approaches that lead to a quick and precise understanding of a flexible multi-body system under automatic control. For long-endurance, complex spacecraft, the model of the physical system may change frequently because of articulation and mission evolution. So a method of automatic generation and regeneration of system models that does not lead to implicit equations, as the Lagrange equation approach does, is desirable. The bond graph method has been shown to be amenable to automatic generation of equations with appropriate consideration of causality. Indeed, human-interactive software now exists that automatically generates both symbolic and numeric system models and evaluates causality as the user develops the model, e.g., the CAMP-G software package. In this paper the CAMP-G package is used to generate a bond graph model of the International Space Station (ISS) at an early stage in its assembly, Zvezda. The ISS is an ideal example because it is a collection of articulated bodies, many of which are highly flexible. Also, many reaction jets are used to control translation and attitude, and many electric motors are used to articulate appendages, which consist of photovoltaic arrays and composite assemblies. The Zvezda bond graph model is compared to an existing model, which was generated by the NASA Johnson Space Center during the Verification and Analysis Cycle of Zvezda.
[Use of the Six Sigma methodology for the preparation of parenteral nutrition mixtures].
Silgado Bernal, M F; Basto Benítez, I; Ramírez García, G
2014-04-01
To use the tools of the Six Sigma methodology for statistical control in the preparation of parenteral nutrition mixtures at the critical checkpoint of specific density. Between August 2010 and September 2013, specific density analysis was performed on 100% of the samples, and the data were divided into two groups, adults and neonates. The percentage of acceptance, the trend graphs, and the sigma level were determined. A normality analysis was carried out using the Shapiro-Wilk test, and the total percentage of mixtures within the specification limits was calculated. The specific density data between August 2010 and September 2013 pass the normality test (W = 0.94) and show improvement in sigma level over time, reaching 6/6 in adults and 3.8/6 in neonates. 100% of the mixtures comply with the specification limits for adults and neonates, always within the control limits during the process. The improvement plans, together with the Six Sigma methodology, allow control of the process and guarantee agreement between the medical prescription and the content of the mixture. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
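The sigma levels reported above can be computed from a defect proportion via the normal quantile function, using the conventional 1.5σ long-term shift. A minimal sketch; the defect count below is a hypothetical illustration, not the study's data:

```python
from statistics import NormalDist

def sigma_level(defects, total, shift=1.5):
    """Short-term sigma level for a defect proportion, using the
    conventional 1.5-sigma long-term shift: z(yield) + shift."""
    yield_frac = 1.0 - defects / total
    return NormalDist().inv_cdf(yield_frac) + shift

# e.g. 1 out-of-specification mixture in 10,000 prepared
print(round(sigma_level(1, 10000), 2))  # -> 5.22
```

Under this convention, zero observed defects would make the level undefined (infinite), which is why capability indices computed from process variation, as in the study, are preferred when all samples pass.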
NASA Astrophysics Data System (ADS)
Meshalkina, J. L.; Yaroslavtsev, A. M.; Vasenev, I. I.; Andreeva, I. V.; Tihonova, M. V.
2018-01-01
The carbon balance of agroecosystems with potato plants and an oats & vetch mixture on sod-podzolic soils was evaluated using the eddy covariance approach. Absorption of carbon was recorded only during the growing season; maximum values were detected for all crops in July. The number of days during the vegetation period on which carbon was stored was about the same for the potato and oats & vetch fields, accounting for 53-55 days. During this period, the increase in gross primary production (GPP) correlates well with the crop yields. The curve of gross primary productivity is closely linked to the phases of plant development; for potatoes, this graph differs significantly across all phases. The biomass curve of the oats & vetch mixture showed a linear increase. Carbon losses were observed for all the studied agroecosystems: 254 g C m-2 y-1 for fields with the oats & vetch mixture and 307 g C m-2 y-1 for fields with potato plants. Values of about 250-300 g C m-2 per year may be considered estimates of the total carbon uptake for agroecosystems with potato plants and oats & vetch mixture on sod-podzolic soils.
NASA Astrophysics Data System (ADS)
Kurmyshev, Evguenii; Juárez, Héctor A.; González-Silva, Ricardo A.
2011-08-01
Bounded confidence models of opinion dynamics in social networks have been actively studied in recent years, in particular opinion formation and extremism propagation, along with other aspects of social dynamics. In this work, after an analysis of the limitations of the Deffuant-Weisbuch (DW) bounded confidence, relative agreement model, we propose a mixed model that takes into account two psychological types of individuals. Concord agents (C-agents) are friendly people; they interact in such a way that their opinions always get closer. Agents of the other psychological type show partial antagonism in their interaction (PA-agents). Opinion dynamics in heterogeneous social groups, consisting of agents of the two types, was studied on different social networks: Erdős-Rényi random graphs, small-world networks and complete graphs. Limit cases of the mixed model, pure C- and PA-societies, were also studied. We found that group opinion formation is, qualitatively, almost independent of the topology of the networks used in this work. Opinion fragmentation, polarization and consensus are observed in the mixed model at different proportions of PA- and C-agents, depending on the value of the agents' initial opinion tolerance. As for opinion formation and the arising of “dissidents”, the opinion dynamics of the C-agent society was found to be similar to that of the DW model, except for the rate of opinion convergence. Nevertheless, mixed societies showed dynamics and bifurcation patterns notably different from those of the DW model. The influence of biased initial conditions on opinion formation in heterogeneous social groups was also studied against the initial value of opinion uncertainty, varying the proportion of PA- to C-agents. Bifurcation diagrams showed an impressive evolution of collective opinion; in particular, radical changes from left to right consensus or vice versa at an opinion uncertainty value equal to 0.7 in the model with a PA/C population mixture near 50/50.
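The pure C-society limit reduces to the classic Deffuant-Weisbuch pairwise update, in which two agents interact only when their opinions differ by less than the tolerance. A minimal sketch of that baseline (the tolerance and convergence parameters are illustrative, and this omits the relative agreement and PA-agent mechanisms of the paper):

```python
import random

def dw_step(opinions, eps=0.7, mu=0.5):
    """One Deffuant-Weisbuch interaction: a random pair moves closer
    only if their opinions differ by less than the tolerance eps."""
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < eps:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift
    return opinions

random.seed(1)
ops = [random.random() for _ in range(100)]
for _ in range(20000):
    dw_step(ops)
# with eps = 0.7 the group typically collapses to a single consensus cluster
```

Lowering eps below about 0.5 typically yields several disjoint opinion clusters instead of consensus, which is the fragmentation regime discussed above.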
Graphing trillions of triangles.
Burkhardt, Paul
2017-07-01
The increasing size of Big Data is often heralded, but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph, but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model is presented, followed by a description of the experiments with that algorithm that led to the largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed.
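The algebraic flavour of triangle counting can be illustrated with the standard adjacency-matrix identity, where trace(A³) counts each triangle six times (a toy sketch, not the reduced-operation method of the paper):

```python
import numpy as np

def triangle_count(A: np.ndarray) -> int:
    """Count triangles in an undirected simple graph from its adjacency
    matrix: trace(A^3) counts each triangle once per closed 3-walk (6 ways)."""
    return int(np.trace(A @ A @ A) // 6)

# 4-clique: C(4,3) = 4 triangles
A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
print(triangle_count(A))  # 4
```

The quadratic-size intermediate products of A @ A hint at why the listing output, not just the input, dominates the cost at scale.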
GraphCrunch 2: Software tool for network modeling, alignment and clustering.
Kuchaiev, Oleksii; Stevanović, Aleksandar; Hayes, Wayne; Pržulj, Nataša
2011-01-19
Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch, which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarity that are far larger than those found by any other existing tool.
Finally, GraphCrunch 2 implements an algorithm for clustering nodes within a network based solely on their topological similarities. Using GraphCrunch 2, we demonstrate that eukaryotic and viral PPI networks may belong to different graph model families and show that topology-based clustering can reveal important functional similarities between proteins within yeast and human PPI networks. GraphCrunch 2 is a software tool that implements the latest research on biological network analysis. It parallelizes computationally intensive tasks to fully utilize the potential of modern multi-core CPUs. It is open-source and freely available for research use. It runs under the Windows and Linux platforms.
Random graph models of social networks.
Newman, M E J; Watts, D J; Strogatz, S H
2002-02-19
We describe some new exactly solvable models of the structure of social networks, based on random graphs with arbitrary degree distributions. We give models both for simple unipartite networks, such as acquaintance networks, and bipartite networks, such as affiliation networks. We compare the predictions of our models to data for a number of real-world social networks and find that in some cases, the models are in remarkable agreement with the data, whereas in others the agreement is poorer, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
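Random graphs with an arbitrary degree distribution are commonly realised via the configuration model: one "stub" per degree unit, shuffled and paired. A minimal sketch (simplified: self-loops and multi-edges are not removed, and the degree sequence is illustrative):

```python
import random

def configuration_model(degrees, seed=0):
    """Random multigraph with a prescribed degree sequence: create one
    stub per degree unit, shuffle the stubs, and pair them up."""
    if sum(degrees) % 2:
        raise ValueError("degree sequence must have an even sum")
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    rng = random.Random(seed)
    rng.shuffle(stubs)
    # may contain self-loops and multi-edges, as in the raw model
    return list(zip(stubs[::2], stubs[1::2]))

edges = configuration_model([3, 3, 2, 2, 1, 1])
# every node appears in the edge list exactly as many times as its degree
```

Ensemble averages over many such realisations are what get compared against the real-world network data.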
Bond Graph Model of Cerebral Circulation: Toward Clinically Feasible Systemic Blood Flow Simulations
Safaei, Soroush; Blanco, Pablo J.; Müller, Lucas O.; Hellevik, Leif R.; Hunter, Peter J.
2018-01-01
We propose a detailed CellML model of the human cerebral circulation that runs faster than real time on a desktop computer and is designed for use in clinical settings when the speed of response is important. A lumped parameter mathematical model, which is based on a one-dimensional formulation of the flow of an incompressible fluid in distensible vessels, is constructed using a bond graph formulation to ensure mass conservation and energy conservation. The model includes arterial vessels with geometric and anatomical data based on the ADAN circulation model. The peripheral beds are represented by lumped parameter compartments. We compare the hemodynamics predicted by the bond graph formulation of the cerebral circulation with that given by a classical one-dimensional Navier-Stokes model working on top of the whole-body ADAN model. Outputs from the bond graph model, including the pressure and flow signatures and blood volumes, are compared with physiological data. PMID:29551979
Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids.
José, Marco V; Morgado, Eberto R; Guimarães, Romeu Cardoso; Zamudio, Gabriel S; de Farías, Sávio Torres; Bobadilla, Juan R; Sosa, Daniela
2014-08-11
Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries with regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are computed and statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state.
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend toward discovering the intrinsic anatomical differences across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method, using a dynamic graph construction approach, achieves much higher accuracy and robustness when compared with state-of-the-art pair-wise correspondence detection methods as well as a similar method using a static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.
Graph processing platforms at scale: practices and experiences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C
2015-01-01
Graph analysis unveils hidden associations of data in many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, the wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and implementation of algorithms to discover desired knowledge from a target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform shows different strengths, depending on the type of graph operation. While Urika performs best on non-iterative operations like degree distribution, GraphX outperforms the others on iterative operations like connected components and PageRank. In addition, we discuss challenges in optimizing the performance of each platform over large-scale real-world graphs.
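Of the three benchmark operations, PageRank is the representative iterative one; a minimal power-iteration sketch over an adjacency list (the damping factor and toy graph are illustrative, and dangling-node handling is omitted):

```python
def pagerank(adj, d=0.85, iters=100):
    """PageRank by power iteration on an adjacency list {node: [out-neighbours]}."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in adj}
        for v, outs in adj.items():
            share = rank[v] / len(outs) if outs else 0.0
            for w in outs:
                new[w] += d * share
        rank = new
    return rank

pr = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
# a symmetric 3-cycle gives every node the same rank, 1/3
```

The repeated full passes over the edge set are exactly what makes this operation stress iterative platforms differently from one-shot statistics like the degree distribution.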
X-Ray Attenuation and Absorption for Materials of Dosimetric Interest
National Institute of Standards and Technology Data Gateway
SRD 126 X-Ray Attenuation and Absorption for Materials of Dosimetric Interest (Web, free access) Tables and graphs of the photon mass attenuation coefficient and the mass energy-absorption coefficient are presented for all of the elements Z = 1 to 92, and for 48 compounds and mixtures of radiological interest. The tables cover energies of the photon (x-ray, gamma ray, bremsstrahlung) from 1 keV to 20 MeV.
Bond Graph Modeling and Validation of an Energy Regenerative System for Emulsion Pump Tests
Li, Yilei; Zhu, Zhencai; Chen, Guoan
2014-01-01
The test system for emulsion pumps is facing serious challenges due to its huge energy consumption and waste nowadays. To settle this energy issue, a novel energy regenerative system (ERS) for emulsion pump tests is briefly introduced first. Modeling such an ERS spanning multiple energy domains needs a unified and systematic approach, and bond graph modeling is well suited for this task. The bond graph model of this ERS is developed by first considering the separate components before assembling them together, and the state-space equation is derived in the same way. Both numerical simulation and experiments are carried out to validate the bond graph model of this ERS. Moreover, the simulation and experimental results show that this ERS not only satisfies the test requirements, but also could save at least 25% of energy consumption compared to the original test system, demonstrating that it is a promising method of energy regeneration for emulsion pump tests. PMID:24967428
Using graph theory to quantify coarse sediment connectivity in alpine geosystems
NASA Astrophysics Data System (ADS)
Heckmann, Tobias; Thiel, Markus; Schwanghart, Wolfgang; Haas, Florian; Becht, Michael
2010-05-01
Networks are a common object of study in various disciplines. Among others, informatics, sociology, transportation science, economics and ecology frequently deal with objects which are linked with other objects to form a network. Despite this wide thematic range, a coherent formal basis exists to represent, measure and model the relational structure of networks. The mathematical model for networks of all kinds is a graph, which can be analysed using the tools of mathematical graph theory. In a graph model of a generic system, system components are represented by graph nodes, and the linkages between them are formed by graph edges. The latter may represent all kinds of linkages, from matter or energy fluxes to functional relations. To some extent, graph theory has been used in the geosciences and related disciplines; in hydrology and fluvial geomorphology, for example, river networks have been modeled and analysed as graphs. An important issue in hydrology is hydrological connectivity, which determines whether runoff generated on some area reaches the channel network. In ecology, a number of graph-theoretical indices are applicable to describing the influence of habitat distribution and landscape fragmentation on population structure and species mobility. In these examples, the mobility of matter (water, sediment, animals) through a system is an important consequence of system structure, i.e. the location and topology of its components as well as the properties of linkages between them. In geomorphology, sediment connectivity relates to the potential of sediment particles to move through the catchment. As a system property, connectivity depends, for example, on the degree to which hillslopes within a catchment are coupled to the channel system (lateral coupling), and to which channel reaches are coupled to each other (longitudinal coupling).
In the present study, numerical GIS-based models are used to investigate the coupling of geomorphic process units by delineating the process domains of important geomorphic processes in a high-mountain environment (rockfall, slope-type debris flows, slope aquatic and fluvial processes). The results are validated by field mapping; they show that only small parts of a catchment are actually coupled to its outlet with respect to coarse (bedload) sediment. The models not only generate maps of the spatial extent and geomorphic activity of the aforementioned processes, they also output so-called edge lists that can be converted to adjacency matrices and graphs. Graph theory is then employed to explore 'local' (i.e. referring to single nodes or edges) and 'global' (i.e. system-wide, referring to the whole graph) measures that can be used to quantify coarse sediment connectivity. Such a quantification will complement the mainly qualitative appraisal of coupling and connectivity; the effect of connectivity on catchment properties such as specific sediment yield and catchment sensitivity will then be studied on the basis of quantitative measures.
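The notion of lateral coupling, whether a hillslope is connected to the outlet through the channel network, can be phrased as simple reachability on a directed graph built from an edge list. A minimal sketch (the node names and toy cascade are hypothetical, not from the study's GIS models):

```python
from collections import deque

def connected_to_outlet(edges, outlet):
    """Nodes from which the outlet is reachable in a directed
    sediment-cascade graph, found by BFS over reversed edges."""
    rev = {}
    for u, v in edges:
        rev.setdefault(v, []).append(u)
    seen, queue = {outlet}, deque([outlet])
    while queue:
        for u in rev.get(queue.popleft(), []):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return seen

# hypothetical cascade: two hillslopes feed a channel reach that reaches
# the outlet, while a third hillslope drains into a storage sink
links = [("hillslope1", "channel"), ("hillslope2", "channel"),
         ("channel", "outlet"), ("hillslope3", "sink")]
print(sorted(connected_to_outlet(links, "outlet")))
# ['channel', 'hillslope1', 'hillslope2', 'outlet']
```

The fraction of catchment area among the reachable nodes is one of the simplest 'global' connectivity measures such an edge list supports.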
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, Jacob G.
2013-01-11
Partial molar properties are the changes occurring when the fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property for that constituent. Actually plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. This model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH){sub 4}-H{sub 2}O system, a system that simulates Hanford nuclear waste. The partial molar properties of H{sub 2}O, NaOH, and NaAl(OH){sub 4} are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperature of the glass components.
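The core idea, that no-intercept regression coefficients on mole fractions recover intensive partial molar properties for an additive mixture, can be sketched with synthetic data (all values below are made up for illustration, not from the NaOH-NaAl(OH){sub 4}-H{sub 2}O system; note that because fractions sum to one, adding an intercept would make the design singular):

```python
import numpy as np

# hypothetical mixture data: rows are mole fractions of 3 components (sum to 1)
X = np.array([[0.2, 0.3, 0.5],
              [0.4, 0.2, 0.4],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])

# if the property were ideally additive, y = X @ partial_molar
true_pm = np.array([75.3, 86.1, 40.2])
y = X @ true_pm

# no-intercept least squares recovers the partial molar coefficients
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

For real data with non-ideal mixing, the coefficients become composition-dependent, which is where the slope interpretation of the CSLM does the work.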
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
The Genealogy of Samples in Models with Selection
Neuhauser, C.; Krone, S. M.
1997-01-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604
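The neutral baseline against which the ancestral selection graph is compared, Kingman's coalescent, has a very simple simulation: while k lineages remain, the time to the next merger is exponential with rate k(k-1)/2. A minimal sketch of that baseline only (no selection, so no branching structure):

```python
import random

def kingman_times(n, seed=0):
    """Inter-coalescence waiting times for a sample of n genes under the
    neutral Kingman coalescent: Exp(k*(k-1)/2) while k lineages remain."""
    rng = random.Random(seed)
    times = []
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2
        times.append(rng.expovariate(rate))
    return times

t = kingman_times(10)
# n-1 mergers; the expected total tree height is 2*(1 - 1/n)
```

Under selection, each lineage would additionally branch at a rate proportional to the selection strength, producing the coalescing-and-branching graph described above.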
Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines
Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram
2014-01-01
When the amount of labeled data is limited, semi-supervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002
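The manifold-smoothness idea behind graph-based semisupervised learning can be sketched with harmonic label propagation on a tiny graph (a toy baseline illustrating the regularizer, not the prototype-based method proposed above):

```python
import numpy as np

def label_propagation(W, labels, iters=200):
    """Harmonic label propagation on a weighted graph: labelled nodes are
    clamped, unlabelled nodes repeatedly average their neighbours.
    labels: dict node -> value in {-1.0, +1.0}."""
    n = len(W)
    f = np.zeros(n)
    for v, y in labels.items():
        f[v] = y
    D = W.sum(axis=1)
    for _ in range(iters):
        f = (W @ f) / np.where(D > 0, D, 1.0)
        for v, y in labels.items():
            f[v] = y  # clamp the known labels every sweep
    return f

# path graph 0-1-2-3 with the two ends labelled -1 and +1
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
f = label_propagation(W, {0: -1.0, 3: 1.0})
# interior nodes converge to the harmonic values -1/3 and +1/3
```

The O(n²) affinity matrix in this sketch is precisely the cost that the prototype approximation is designed to avoid.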
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
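A minimal sketch of the pipeline: sample a random geometric graph in the unit square, take the adjacency spectrum, and normalise the nearest-neighbour level spacings (the connection radius is illustrative, and a proper analysis would also unfold the spectrum, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 0.2

# random geometric graph: connect points closer than radius r
pts = rng.random((n, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = ((d < r) & (d > 0)).astype(float)

# adjacency spectrum and normalised nearest-neighbour spacings
eig = np.sort(np.linalg.eigvalsh(A))
s = np.diff(eig)
s = s / s.mean()
```

Histogramming s against the Poisson (exp(-s)) and GOE (Wigner surmise) curves is then what locates an ensemble on the transition described above.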
Ivanciuc, O; Ivanciuc, T; Klein, D J; Seitz, W A; Balaban, A T
2001-02-01
Quantitative structure-retention relationships (QSRR) represent statistical models that quantify the connection between the molecular structure and the chromatographic retention indices of organic compounds, allowing the prediction of retention indices of novel, not yet synthesized compounds, solely from their structural descriptors. Using multiple linear regression, QSRR models for the gas chromatographic Kováts retention indices of 129 alkylbenzenes are generated using molecular graph descriptors. The correlational ability of structural descriptors computed from 10 molecular matrices is investigated, showing that the novel reciprocal matrices give numerical indices with improved correlational ability. A QSRR equation with 5 graph descriptors gives the best calibration and prediction results, demonstrating the usefulness of the molecular graph descriptors in modeling chromatographic retention parameters. The sequential orthogonalization of descriptors suggests simpler QSRR models by eliminating redundant structural information.
Detecting labor using graph theory on connectivity matrices of uterine EMG.
Al-Omar, S; Diab, A; Nader, N; Khalil, M; Karlsson, B; Marque, C
2015-08-01
Premature labor is one of the most serious health problems in the developed world. One of the main reasons for this is that no good way exists to distinguish true labor from normal pregnancy contractions. The aim of this paper is to investigate whether the application of graph theory techniques to multi-electrode uterine EMG signals can improve the discrimination between pregnancy contractions and labor. To test our methods, we first applied them to synthetic graphs, where we detected differences in the parameter values and changes in graph structure from pregnancy-like graphs to labor-like graphs. We then applied the same methods to real signals, where the same parameters gave the best differentiation between pregnancy and labor. Major improvements in differentiating between pregnancy and labor were obtained using a low-pass windowing preprocessing step. Results show that real graphs generally became more organized when moving from pregnancy, where the graph showed random characteristics, to labor, where the graph became more small-world like.
Benchmarking Measures of Network Controllability on Canonical Graph Models
NASA Astrophysics Data System (ADS)
Wu-Yan, Elena; Betzel, Richard F.; Tang, Evelyn; Gu, Shi; Pasqualetti, Fabio; Bassett, Danielle S.
2018-03-01
The control of networked dynamical systems opens the possibility for new discoveries and therapies in systems biology and neuroscience. Recent theoretical advances provide candidate mechanisms by which a system can be driven from one pre-specified state to another, and computational approaches provide tools to test those mechanisms in real-world systems. Despite already having been applied to study network systems in biology and neuroscience, the practical performance of these tools and associated measures on simple networks with pre-specified structure has yet to be assessed. Here, we study the behavior of four control metrics (global, average, modal, and boundary controllability) on eight canonical graphs (including Erdős-Rényi, regular, small-world, random geometric, Barabási-Albert preferential attachment, and several modular networks) with different edge weighting schemes (Gaussian, power-law, and two nonparametric distributions from brain networks, as examples of real-world systems). We observe that differences in global controllability across graph models are more salient when edge weight distributions are heavy-tailed as opposed to normal. In contrast, differences in average, modal, and boundary controllability across graph models (as well as across nodes in the graph) are more salient when edge weight distributions are less heavy-tailed. Across graph models and edge weighting schemes, average and modal controllability are negatively correlated with one another across nodes; yet, across graph instances, the relation between average and modal controllability can be positive, negative, or nonsignificant. Collectively, these findings demonstrate that controllability statistics (and their relations) differ across graphs with different topologies and that these differences can be muted or accentuated by differences in the edge weight distributions.
More generally, our numerical studies motivate future analytical efforts to better understand the mathematical underpinnings of the relationship between graph topology and control, as well as efforts to design networks with specific control profiles.
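Average controllability, for instance, is typically computed from the controllability Gramian with single-node input. A finite-horizon discrete-time sketch (the spectral normalisation and horizon are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def average_controllability(A, T=50):
    """Average controllability of each node i: trace of the discrete-time
    controllability Gramian with B = e_i, truncated at horizon T."""
    n = A.shape[0]
    # normalise so the spectral radius is below 1 (a common stabilisation)
    A = A / (1 + np.abs(np.linalg.eigvals(A)).max())
    G = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(T):
        G += Ak.T @ Ak  # diag entry i accumulates ||A^k e_i||^2
        Ak = A @ Ak
    return np.diag(G)

# ring graph of 6 nodes: symmetry makes every node equally controllable
ring = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
ac = average_controllability(ring)
```

On heterogeneous graphs such as preferential-attachment networks, the same computation spreads the node values out, which is where the cross-node correlations discussed above arise.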
Knowledge-based understanding of aerial surveillance video
NASA Astrophysics Data System (ADS)
Cheng, Hui; Butler, Darren
2006-05-01
Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph and the graph is summarized spatially, temporally and semantically using ontology guided sub-graph matching and re-writing. The system exploits domain specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.
2014-01-01
Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher-order understanding. Multidimensional and shape-based observational data of regenerative biology presents a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph-edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of a planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB.
Conclusion The work presented here, including our algorithm for converting cell-based models into graphs for comparison with data stored in an external data repository, has made feasible the automated development, training, and validation of computational models using morphology-based data. This work is part of an ongoing project to automate the search process, which will greatly expand our ability to identify, consider, and test biological mechanisms in the field of regenerative biology. PMID:24917489
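The graph-comparison step described above can be sketched generically. The following is an illustrative brute-force version for tiny undirected graphs, not PlanformDB's actual implementation, which handles far richer morphological graphs: every injective mapping of the smaller node set into the larger one is tried, with unmatched nodes and mismatched edges each costing one edit operation.

```python
from itertools import permutations

def graph_edit_distance(nodes_a, edges_a, nodes_b, edges_b):
    """Brute-force edit distance for tiny undirected graphs.

    Tries every injective mapping of the smaller node set into the
    larger one; unmatched nodes and mismatched edges cost 1 each.
    """
    if len(nodes_a) > len(nodes_b):
        nodes_a, edges_a, nodes_b, edges_b = nodes_b, edges_b, nodes_a, edges_a
    ea = [frozenset(e) for e in edges_a]
    eb = {frozenset(e) for e in edges_b}
    best = float("inf")
    for image in permutations(nodes_b, len(nodes_a)):
        m = dict(zip(nodes_a, image))
        mapped = {frozenset(m[x] for x in e) for e in ea}
        cost = (len(nodes_b) - len(nodes_a)) + len(mapped ^ eb)
        best = min(best, cost)
    return best
```

Exact computation is exponential in graph size, which is why practical systems (including, presumably, the framework above) rely on approximations or heuristics for larger graphs.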
Bond Graph Modeling of Chemiosmotic Biomolecular Energy Transduction.
Gawthrop, Peter J
2017-04-01
Engineering systems modeling and analysis based on the bond graph approach has been applied to biomolecular systems. In this context, the notion of a Faraday-equivalent chemical potential is introduced which allows chemical potential to be expressed in an analogous manner to electrical volts, thus allowing engineering intuition to be applied to biomolecular systems. Redox reactions, and their representation by half-reactions, are key components of biological systems which involve both electrical and chemical domains. A bond graph interpretation of redox reactions is given which combines bond graphs with the Faraday-equivalent chemical potential. This approach is particularly relevant when the biomolecular system implements chemoelectrical transduction - for example chemiosmosis within the key metabolic pathway of mitochondria: oxidative phosphorylation. An alternative way of implementing computational modularity using bond graphs is introduced and used to give a physically based model of the mitochondrial electron transport chain. To illustrate the overall approach, this model is analyzed using the Faraday-equivalent chemical potential approach and engineering intuition is used to guide affinity equalisation: an energy-based analysis of the mitochondrial electron transport chain.
Bim-Gis Integrated Geospatial Information Model Using Semantic Web and Rdf Graphs
NASA Astrophysics Data System (ADS)
Hor, A.-H.; Jadidi, A.; Sohn, G.
2016-06-01
In recent years, 3D virtual indoor/outdoor urban modelling has become a key spatial information framework for many civil and engineering applications such as evacuation planning, emergency and facility management. For accomplishing such sophisticated decision tasks, there is a large demand for building multi-scale and multi-sourced 3D urban models. Currently, Building Information Model (BIM) and Geographical Information Systems (GIS) are broadly used as the modelling sources. However, data sharing and exchanging information between the two modelling domains is still a huge challenge; existing syntactic and semantic approaches do not fully support the exchange of rich semantic and geometric information from BIM into GIS or vice versa. This paper proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graphs. The novelty of the proposed solution comes from the benefits of integrating BIM and GIS technologies into one unified model, the so-called Integrated Geospatial Information Model (IGIM). The proposed approach consists of three main modules: BIM-RDF and GIS-RDF graph construction, integration of the two RDF graphs, and querying of information through the IGIM-RDF graph using SPARQL. The IGIM generates queries from both the BIM and GIS RDF graphs, resulting in a semantically integrated model with entities representing both BIM classes and GIS feature objects with respect to the target-client application. The linkage between BIM-RDF and GIS-RDF is achieved through SPARQL endpoints and defined by a query using a set of datasets and entity classes with complementary properties, relationships and geometries. To validate the proposed approach and its performance, a case study was also tested using the IGIM system design.
The Vertex Version of Weighted Wiener Number for Bicyclic Molecular Structures
Gao, Wei
2015-01-01
Graphs are used to model chemical compounds and drugs. In these graphs, each vertex represents an atom of the molecule, and edges between the corresponding vertices represent covalent bonds between atoms. We call such a graph, which is derived from a chemical compound, a molecular graph. Evidence shows that the vertex-weighted Wiener number, which is defined over this molecular graph, is strongly correlated to both the melting point and boiling point of the compounds. In this paper, we report the extremal vertex-weighted Wiener number of bicyclic molecular graphs in terms of molecular structural analysis and graph transformations. The promising prospects of the application for chemical and pharmacy engineering are illustrated by the theoretical results achieved in this paper. PMID:26640513
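As an illustration of the quantity studied above (a generic sketch, not the paper's extremal analysis), a vertex-weighted Wiener number can be computed by summing w(u)*w(v)*d(u,v) over all vertex pairs, with shortest-path distances obtained by breadth-first search:

```python
from collections import deque

def wiener_number(adj, weights=None):
    """Sum of shortest-path distances over all vertex pairs.

    With `weights`, each pair contributes w(u) * w(v) * d(u, v), one common
    vertex-weighted variant; the paper's exact definition may differ.
    """
    nodes = list(adj)
    w = weights or {v: 1 for v in nodes}
    total = 0
    for i, src in enumerate(nodes):
        # BFS from src gives unweighted shortest-path distances.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes[i + 1:]:
            total += w[src] * w[dst] * dist[dst]
    return total
```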
Time-dependence of graph theory metrics in functional connectivity analysis
Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J.; Haneef, Zulfi; Stern, John M.
2016-01-01
Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. 
PMID:26518632
The many faces of graph dynamics
NASA Astrophysics Data System (ADS)
Pignolet, Yvonne Anne; Roy, Matthieu; Schmid, Stefan; Tredan, Gilles
2017-06-01
The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and reoccurring characteristics of different kinds of networks. However, much less is known today about the network dynamics: indeed, complex networks in reality are not static, but rather dynamically evolve over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a ‘one fits it all’ model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests building upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different or, in the case of dynamic graphs, have changed over time, between two graphs. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null-model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks, in terms of scale and evolution speed.
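The centrality-distance idea can be sketched minimally, here with degree centrality and an L1 difference over a shared, non-anonymous node set (the paper's definition may differ in normalization and in the choice of centrality):

```python
def degree_centrality(adj):
    # Simplest centrality: a node's importance is its degree.
    return {v: len(nbrs) for v, nbrs in adj.items()}

def centrality_distance(adj_a, adj_b, centrality=degree_centrality):
    """L1 distance between per-node centrality values of two graphs
    defined on the same node set; a sketch of the paper's notion."""
    ca, cb = centrality(adj_a), centrality(adj_b)
    return sum(abs(ca[v] - cb[v]) for v in ca)
```

Swapping in a different `centrality` function (closeness, betweenness, etc.) yields a different distance, which is exactly the degree of freedom the abstract uses to characterize graph types.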
Dynamical modeling and analysis of large cellular regulatory networks
NASA Astrophysics Data System (ADS)
Bérenguier, D.; Chaouiya, C.; Monteiro, P. T.; Naldi, A.; Remy, E.; Thieffry, D.; Tichit, L.
2013-06-01
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties, (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
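The asynchronous-update construction described above can be sketched on a toy two-gene toggle switch (hypothetical Boolean rules, far smaller than the T-cell model; the attractors here reduce to fixed points, i.e., states with no outgoing transitions):

```python
from itertools import product

def async_stg(rules):
    """Asynchronous state transition graph of a Boolean network:
    from each state, update one component at a time, yielding one
    transition per component whose value changes."""
    n = len(rules)
    stg = {}
    for state in product((0, 1), repeat=n):
        succs = []
        for i, rule in enumerate(rules):
            new = rule(state)
            if new != state[i]:
                succs.append(state[:i] + (new,) + state[i + 1:])
        stg[state] = succs
    return stg

# Hypothetical two-gene toggle switch: each gene represses the other.
stg = async_stg([lambda s: 1 - s[1], lambda s: 1 - s[0]])
attractors = {s for s, succ in stg.items() if not succ}  # fixed points
```

Even this toy example shows the bistability ((0,1) and (1,0) are both attractors) that logical models are used to detect; the exponential blow-up the abstract addresses comes from the 2^n states of larger networks.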
Graphing trillions of triangles
Burkhardt, Paul
2016-01-01
The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed. PMID:28690426
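Ordered neighbor intersection, the standard sequential idea behind scalable triangle listing, can be sketched as follows (a generic illustration under a fixed vertex ordering, not the paper's algebraic or MapReduce method):

```python
def count_triangles(edges):
    """Count triangles by intersecting neighbor sets along a fixed
    vertex ordering, so each triangle is counted exactly once at
    its lowest-ordered vertex."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    order = {v: i for i, v in enumerate(sorted(adj))}
    count = 0
    for u in adj:
        higher = {v for v in adj[u] if order[v] > order[u]}
        for v in higher:
            # Triangles u-v-w with order[u] < order[v] < order[w].
            count += len(higher & {w for w in adj[v] if order[w] > order[v]})
    return count
```

Ordering vertices by degree instead of label is the usual optimization; the output-size issue the abstract highlights arises because listing (rather than counting) all triangles can dwarf the input graph.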
An In-Depth Analysis of the Chung-Lu Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winlaw, M.; DeSterck, H.; Sanders, G.
2015-10-28
In the classic Erdős–Rényi random graph model [5] each edge is chosen with uniform probability and the degree distribution is binomial, limiting the number of graphs that can be modeled using the Erdős–Rényi framework [10]. The Chung-Lu model [1, 2, 3] is an extension of the Erdős–Rényi model that allows for more general degree distributions. The probability of each edge is no longer uniform and is a function of a user-supplied degree sequence, which by design is the expected degree sequence of the model. This property makes it an easy model to work with theoretically, and since the Chung-Lu model is a special case of a random graph model with a given degree sequence, many of its properties are well known and have been studied extensively [2, 3, 13, 8, 9]. It is also an attractive null model for many real-world networks, particularly those with power-law degree distributions, and it is sometimes used as a benchmark for comparison with other graph generators despite some of its limitations [12, 11]. We know, for example, that the average clustering coefficient is too low relative to most real-world networks. As well, measures of affinity are also too low relative to most real-world networks of interest. However, despite these limitations, or perhaps because of them, the Chung-Lu model provides a basis for comparing new graph models.
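A minimal generator for the Chung-Lu model follows directly from the edge probabilities described above. This O(n^2) loop is for illustration only; practical generators use faster edge-skipping sampling:

```python
import random

def chung_lu(weights, seed=0):
    """Chung-Lu random graph: edge {i, j} appears independently with
    probability min(1, w_i * w_j / sum(w)), so that the expected
    degree sequence tracks the user-supplied weights."""
    rng = random.Random(seed)
    total = sum(weights)
    edges = []
    if total == 0:
        return edges
    n = len(weights)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / total):
                edges.append((i, j))
    return edges
```

With all weights equal the model degenerates to Erdős–Rényi, which is one way to see it as a strict generalization.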
Artificial Neural Networks for Processing Graphs with Application to Image Understanding: A Survey
NASA Astrophysics Data System (ADS)
Bianchini, Monica; Scarselli, Franco
In graphical pattern recognition, each datum is represented as an arrangement of elements that encodes both the properties of each element and the relations among them. Hence, patterns are modelled as labelled graphs where, in general, labels can be attached to both nodes and edges. Artificial neural networks able to process graphs are a powerful tool for addressing a great variety of real-world problems, where the information is naturally organized in entities and relationships among entities; in fact, they have been widely used in computer vision, e.g. in logo recognition, in similarity retrieval, and for object detection. In this chapter, we propose a survey of neural network models able to process structured information, with a particular focus on those architectures tailored to address image understanding applications. Starting from the original recursive model (RNNs), we subsequently present different ways to represent images - by trees, forests of trees, multiresolution trees, directed acyclic graphs with labelled edges, general graphs - and, correspondingly, neural network architectures appropriate to process such structures.
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Regarding the adjacency matrices of n-vertex graphs and the related graph Laplacian, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
Representation of activity in images using geospatial temporal graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph; McLendon, III, William C.; Parekh, Ojas D.
Various technologies pertaining to modeling patterns of activity observed in remote sensing images using geospatial-temporal graphs are described herein. Graphs are constructed by representing objects in remote sensing images as nodes, and connecting nodes with undirected edges representing either distance or adjacency relationships between objects and directed edges representing changes in time. Activity patterns may be discerned from the graphs by coding nodes representing persistent objects like buildings differently from nodes representing ephemeral objects like vehicles, and examining the geospatial-temporal relationships of ephemeral nodes within the graph.
Analysis Tools for Interconnected Boolean Networks With Biological Applications.
Chaves, Madalena; Tournier, Laurent
2018-01-01
Boolean networks with asynchronous updates are a class of logical models particularly well adapted to describe the dynamics of biological networks with uncertain measures. The state space of these models can be described by an asynchronous state transition graph, which represents all the possible exits from every single state, and gives a global image of all the possible trajectories of the system. In addition, the asynchronous state transition graph can be associated with an absorbing Markov chain, further providing a semi-quantitative framework where it becomes possible to compute probabilities for the different trajectories. For large networks, however, such direct analyses become computationally intractable, given the exponential dimension of the graph. Exploiting the general modularity of biological systems, we have introduced the novel concept of the asymptotic graph, computed as an interconnection of several asynchronous transition graphs and recovering all asymptotic behaviors of a large interconnected system from the behavior of its smaller modules. From a modeling point of view, the interconnection of networks is very useful to address for instance the interplay between known biological modules and to test different hypotheses on the nature of their mutual regulatory links. This paper develops two new features of this general methodology: a quantitative dimension is added to the asymptotic graph, through the computation of relative probabilities for each final attractor, and a companion cross-graph is introduced to complement the method from a theoretical point of view.
Three-Dimensional Algebraic Models of the tRNA Code and 12 Graphs for Representing the Amino Acids
José, Marco V.; Morgado, Eberto R.; Guimarães, Romeu Cardoso; Zamudio, Gabriel S.; de Farías, Sávio Torres; Bobadilla, Juan R.; Sosa, Daniela
2014-01-01
Three-dimensional algebraic models, also called Genetic Hotels, are developed to represent the Standard Genetic Code, the Standard tRNA Code (S-tRNA-C), and the Human tRNA code (H-tRNA-C). New algebraic concepts are introduced to be able to describe these models, to wit, the generalization of the 2n-Klein Group and the concept of a subgroup coset with a tail. We found that the H-tRNA-C displayed broken symmetries in regard to the S-tRNA-C, which is highly symmetric. We also show that there are only 12 ways to represent each of the corresponding phenotypic graphs of amino acids. The averages of statistical centrality measures of the 12 graphs for each of the three codes are carried out and they are statistically compared. The phenotypic graphs of the S-tRNA-C display a common triangular prism of amino acids in 10 out of the 12 graphs, whilst the corresponding graphs for the H-tRNA-C display only two triangular prisms. The graphs exhibit disjoint clusters of amino acids when their polar requirement values are used. We contend that the S-tRNA-C is in a frozen-like state, whereas the H-tRNA-C may be in an evolving state. PMID:25370377
NASA Astrophysics Data System (ADS)
Awasarmol, V. V.; Gaikwad, D. K.; Raut, S. D.; Pawar, P. P.
The mass attenuation coefficients (μm) for organic nonlinear optical materials measured at 122-1330 keV photon energies were investigated on the basis of the mixture rule and compared with values obtained from the WinXCOM program. It is observed that there is good agreement between theoretical and experimental values for the samples. All samples were irradiated with six radioactive sources, namely 57Co, 133Ba, 22Na, 137Cs, 54Mn and 60Co, using a transmission arrangement. Effective atomic and electron numbers or electron densities (Zeff and Neff), molar extinction coefficient (ε), mass energy absorption coefficient (μen/ρ) and effective atomic energy absorption cross section (σa,en) were determined experimentally and theoretically using the obtained μm values for the investigated samples, and graphs have been plotted. The graphs show that these quantities decrease for all samples with increasing photon energy.
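The mixture rule invoked above is simply a mass-fraction-weighted sum of elemental coefficients. A sketch with hypothetical numbers (not the paper's measured values) makes the computation explicit:

```python
def mixture_rule(mass_fractions, mu_rho):
    """Mixture rule: the mass attenuation coefficient of a compound is
    the mass-fraction-weighted sum of its elements' coefficients,
    (mu/rho)_compound = sum_i w_i * (mu/rho)_i."""
    assert abs(sum(mass_fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(w * mu_rho[element] for element, w in mass_fractions.items())

# Hypothetical two-element compound with made-up coefficients (cm^2/g):
mu = mixture_rule({"H": 0.5, "O": 0.5}, {"H": 0.2, "O": 0.4})
```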
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
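The latent-feature-embedding-with-BPR idea can be illustrated by a bare-bones sketch (single relation, dot-product scores, hypothetical toy triples; the paper's model is learned per predicate and at a far larger scale). BPR performs stochastic gradient ascent on the log-sigmoid of the score margin between an observed pair and a corrupted one:

```python
import math
import random

def train_bpr(triples, entities, dim=8, epochs=300, lr=0.05, seed=0):
    """BPR sketch: learn entity embeddings so that observed (head, tail)
    pairs score higher (by dot product) than corrupted pairs."""
    rng = random.Random(seed)
    emb = {e: [rng.uniform(-0.1, 0.1) for _ in range(dim)] for e in entities}

    def score(a, b):
        return sum(x * y for x, y in zip(emb[a], emb[b]))

    for _ in range(epochs):
        for h, t in triples:
            # Sample a corrupted tail not observed with this head.
            t_neg = rng.choice([e for e in entities
                                if e != t and (h, e) not in triples])
            x = score(h, t) - score(h, t_neg)
            g = 1.0 / (1.0 + math.exp(x))  # gradient scale: sigma(-x)
            for k in range(dim):
                eh, et, en = emb[h][k], emb[t][k], emb[t_neg][k]
                emb[h][k] += lr * g * (et - en)
                emb[t][k] += lr * g * eh
                emb[t_neg][k] -= lr * g * eh
    return emb, score

emb, score = train_bpr({("a", "b"), ("c", "d")}, ["a", "b", "c", "d"])
```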
Phase transitions in Ising models on directed networks
NASA Astrophysics Data System (ADS)
Lipowski, Adam; Ferreira, António Luis; Lipowska, Dorota; Gontarek, Krzysztof
2015-11-01
We examine Ising models with heat-bath dynamics on directed networks. Our simulations show that Ising models on directed triangular and simple cubic lattices undergo a phase transition that most likely belongs to the Ising universality class. On the directed square lattice the model remains paramagnetic at any positive temperature as already reported in some previous studies. We also examine random directed graphs and show that contrary to undirected ones, percolation of directed bonds does not guarantee ferromagnetic ordering. Only above a certain threshold can a random directed graph support finite-temperature ferromagnetic ordering. Such behavior is found also for out-homogeneous random graphs, but in this case the analysis of magnetic and percolative properties can be done exactly. Directed random graphs also differ from undirected ones with respect to zero-temperature freezing. Only at low connectivity do they remain trapped in a disordered configuration. Above a certain threshold, however, the zero-temperature dynamics quickly drives the model toward a broken symmetry (magnetized) state. Only above this threshold, which is almost twice as large as the percolation threshold, do we expect the Ising model to have a positive critical temperature. With a very good accuracy, the behavior on directed random graphs is reproduced within a certain approximate scheme.
Modeling flow and transport in fracture networks using graphs
NASA Astrophysics Data System (ADS)
Karra, S.; O'Malley, D.; Hyman, J. D.; Viswanathan, H. S.; Srinivasan, G.
2018-03-01
Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. 
The good accuracy and the low computational cost, with run times O(10^4) times lower than the DFN, make the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
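The graph reduction replaces the DFN's expensive PDE solve with a small system on the intersection graph. As a schematic sketch with unit conductances and hypothetical boundary pressures (not the authors' calibrated model), each interior node's pressure satisfies a discrete Laplace equation, solved here by Gauss-Seidel sweeps:

```python
def solve_pressures(adj, fixed, sweeps=2000):
    """Steady flow on a graph: with unit conductance on every edge,
    mass balance makes each free node's pressure the average of its
    neighbors; `fixed` holds boundary (inlet/outlet) pressures."""
    p = {v: fixed.get(v, 0.0) for v in adj}
    for _ in range(sweeps):
        for v in adj:
            if v not in fixed:
                p[v] = sum(p[u] for u in adj[v]) / len(adj[v])
    return p

# Tiny chain of fracture intersections: inlet "a" at pressure 1, outlet "d" at 0.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
pressures = solve_pressures(adj, {"a": 1.0, "d": 0.0})
```

Edge pressure differences then give fluxes, and shortest or fastest paths through the graph stand in for tracer travel, which is where the systematic bias discussed in the abstract enters and is corrected.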
Modeling flow and transport in fracture networks using graphs
Karra, S.; O'Malley, D.; Hyman, J. D.; ...
2018-03-09
Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport with the discrete fracture network (DFN) approach is known to be more accurate than continuum-based methods, because it incorporates the detailed fracture network structure, capturing flow and transport across such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistics between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying numbers of fractures and degrees of heterogeneity. Owing to our recent developments in capabilities to perform high-fidelity DFN simulations on fracture networks with large numbers of fractures, we are in a unique position to perform such a comparison. We show that the graph approach exhibits a consistent bias, with up to an order of magnitude slower breakthrough than the DFN approach. We show that this is due to the graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology for the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions improve significantly and the results are very accurate.
In conclusion, the good accuracy and the low computational cost, with run times roughly O(10^4) times lower than the DFN's, make the graph algorithm an ideal technique to incorporate into uncertainty quantification methods.
On the generation and evolution of internal gravity waves
NASA Technical Reports Server (NTRS)
Lansing, F. S.; Maxworthy, T.
1984-01-01
The tidal generation and evolution of internal gravity waves is investigated experimentally and theoretically using a two-dimensional, two-layer model. Time-dependent flow is created by moving a profile of maximum submerged depth 7.7 cm through a total stroke of 29 cm in water above a freon-kerosene mixture in an 8.6-m-long, 30-cm-deep, 20-cm-wide transparent channel, and the deformation of the fluid interface is recorded photographically. A theoretical model of the interface as a set of discrete vortices is constructed numerically; the rigid structures are represented by a source distribution; governing equations in Lagrangian form are obtained; and two integrodifferential equations relating baroclinic vorticity generation and source-density generation are derived. The experimental and computed results are shown in photographs and graphs, respectively, and are found to be in good agreement at small Froude numbers. The reasons for small discrepancies in the position of the maximum interface displacement at large Froude numbers are examined.
Proximity Networks and Epidemics
NASA Astrophysics Data System (ADS)
Guclu, Hasan; Toroczkai, Zoltán
2007-03-01
We presented the basis of a framework to account for the dynamics of contacts in epidemic processes, through the notion of dynamic proximity graphs. By varying the integration time-parameter T, which is the period of infectivity, one can give a simple account of some of the differences in the observed contact networks for different diseases, such as smallpox or AIDS. Our simplistic model also seems to shed some light on the shape of the degree distribution of the measured people-people contact network from the EPISIM data. We certainly do not claim that the simplistic graph integration model above is a good model for dynamic contact graphs. It only contains the essential ingredients for such processes to produce a qualitative agreement with some observations. We expect that further refinements and extensions to this picture, in particular deriving the link probabilities in the dynamic proximity graph from more realistic contact dynamics, should improve the agreement between models and data.
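The time-integration idea can be illustrated with a minimal sketch (the event list and function names are invented for the example): contacts observed within an infectivity window T are merged into a single static graph, so a larger T yields a denser effective contact network.

```python
from collections import defaultdict

def integrate_contacts(events, T):
    """events: (time, person_a, person_b) triples; returns the adjacency
    sets of the graph of all contacts observed before time T."""
    g = defaultdict(set)
    for t, a, b in events:
        if t < T:
            g[a].add(b)
            g[b].add(a)
    return g

events = [(0, "a", "b"), (1, "b", "c"), (2, "c", "d"), (3, "a", "d")]
g_short = integrate_contacts(events, T=2)   # short infectivity window
g_long = integrate_contacts(events, T=10)   # long window: all contacts
```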
An Interactive Teaching System for Bond Graph Modeling and Simulation in Bioengineering
ERIC Educational Resources Information Center
Roman, Monica; Popescu, Dorin; Selisteanu, Dan
2013-01-01
The objective of the present work was to implement a teaching system useful in modeling and simulation of biotechnological processes. The interactive system is based on applications developed using 20-sim modeling and simulation software environment. A procedure for the simulation of bioprocesses modeled by bond graphs is proposed and simulators…
Supplantation of Mental Operations on Graphs
ERIC Educational Resources Information Center
Vogel, Markus; Girwidz, Raimund; Engel, Joachim
2007-01-01
Research findings show the difficulties younger students have in working with graphs. Higher mental operations are necessary for a skilled interpretation of abstract representations. We suggest connecting a concrete representation of the modeled problem with the related graph. The idea is to illustrate essential mental operations externally. This…
Hierarchical graphs for better annotations of rule-based models of biochemical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Bin; Hlavacek, William
2009-01-01
In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.
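The subcomponent hierarchy can be encoded very simply for annotation purposes; the sketch below (our own toy encoding, not BNGL syntax) records each component's parent and queries the children of Lck.

```python
# Parent map: None marks the top-level molecule. The domain names below
# (SH2, SH3, kinase for Lck) are standard, but the encoding is our own.
molecule = {"components": {"Lck": None, "SH2": "Lck", "SH3": "Lck",
                           "kinase": "Lck"}}

def subcomponents(mol, parent):
    """Children of a component in the annotation hierarchy."""
    return sorted(c for c, p in mol["components"].items() if p == parent)
```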
Traffic Behavior Recognition Using the Pachinko Allocation Model
Huynh-The, Thien; Banos, Oresti; Le, Ba-Vui; Bui, Dinh-Mao; Yoon, Yongik; Lee, Sungyoung
2015-01-01
CCTV-based behavior recognition systems have gained considerable attention in recent years in the transportation surveillance domain for identifying unusual patterns, such as traffic jams, accidents, dangerous driving and other abnormal behaviors. In this paper, a novel approach for traffic behavior modeling is presented for video-based road surveillance. The proposed system combines the pachinko allocation model (PAM) and a support vector machine (SVM) for a hierarchical representation and identification of traffic behavior. A background subtraction technique using Gaussian mixture models (GMMs) and an object tracking mechanism based on Kalman filters are utilized to first construct the object trajectories. Then, the sparse features comprising the locations and directions of the moving objects are modeled by PAM into traffic topics, namely activities and behaviors. As a key innovation, PAM captures not only the correlation among the activities, but also among the behaviors, based on an arbitrary directed acyclic graph (DAG). An SVM classifier is then utilized on top to train and recognize the traffic activity and behavior. The proposed model shows more flexibility and greater expressive power than the commonly used latent Dirichlet allocation (LDA) approach, leading to a higher recognition accuracy in the behavior classification. PMID:26151213
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools to permit them to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Oftentimes a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample-statistics table (e.g., sum of squared errors, sum of absolute differences, etc.), a top-ten-simulations table and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger) and 2D error-surface graphs of the parameter space. IHM is suitable for everything from the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time- and money-saving alternative to spending time producing graphs, conducting analysis that may not be informative, or being forced to purchase expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
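The batch post-processing behind features such as a top-ten-simulations table can be sketched as follows; the run names, data, and helper functions are hypothetical and are not IHM's actual API.

```python
def sum_of_squares(observed, simulated):
    """Sum-of-squared-errors objective between two series."""
    return sum((o - s) ** 2 for o, s in zip(observed, simulated))

def top_n(runs, observed, n=10):
    """runs: dict run_id -> simulated series; returns the n best run ids,
    ranked by the sum-of-squares objective (smaller is better)."""
    return sorted(runs, key=lambda r: sum_of_squares(observed, runs[r]))[:n]

observed = [1.0, 2.0, 3.0]
runs = {"run_a": [1.1, 2.1, 2.9],
        "run_b": [0.0, 0.0, 0.0],
        "run_c": [1.0, 2.0, 3.5]}
best = top_n(runs, observed, n=2)  # the two closest-fitting runs
```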
Dynamics of Nearest-Neighbour Competitions on Graphs
NASA Astrophysics Data System (ADS)
Rador, Tonguç
2017-10-01
Considering a collection of agents representing the vertices of a graph endowed with integer points, we study the asymptotic dynamics of the rate of increase of their points according to a very simple rule: we randomly pick an edge from the graph, which unambiguously defines two agents, and give a point to the agent with the larger point count with probability p and to the laggard with probability q, such that p+q=1. The model we present is the most general version of the nearest-neighbour competition model introduced by Ben-Naim, Vazquez and Redner. We show that the model combines aspects of hyperbolic partial differential equations (such as conservation laws), graph colouring and hyperplane arrangements. We discuss the properties of the model for general graphs but confine in-depth study to d-dimensional tori. We present a detailed study of the ring graph, which includes a chemical-potential approximation for calculating all its statistics that gives rather accurate results. The two-dimensional torus, not studied in as much depth as the ring, is shown to possess critical behaviour in that the asymptotic speeds arrange themselves into two-coloured islands separated by borders of three other colours, and the sizes of the islands obey a power-law distribution. We also show that in the large-d limit the d-dimensional torus exhibits an inverse-sine law for the distribution of asymptotic speeds.
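The competition rule described above is straightforward to simulate. The following sketch, with an invented four-agent ring and parameter values, implements exactly the pick-an-edge, reward-the-leader-with-probability-p update:

```python
import random

def compete(edges, points, p, steps, rng):
    """Run the edge-competition dynamics for a number of steps."""
    for _ in range(steps):
        u, v = rng.choice(edges)          # random edge -> two agents
        lead, lag = (u, v) if points[u] >= points[v] else (v, u)
        winner = lead if rng.random() < p else lag
        points[winner] += 1               # one point per step, p + q = 1
    return points

rng = random.Random(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # ring graph on four agents
points = compete(edges, {i: 0 for i in range(4)}, p=0.8, steps=1000, rng=rng)
```

With p well above 1/2, early leaders tend to pull ahead, which is the mechanism behind the asymptotic speed differences studied in the paper.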
A nonlinear q-voter model with deadlocks on the Watts-Strogatz graph
NASA Astrophysics Data System (ADS)
Sznajd-Weron, Katarzyna; Michal Suszczynski, Karol
2014-07-01
We study the nonlinear q-voter model with deadlocks on a Watts-Strogatz graph. Using Monte Carlo simulations, we obtain the so-called exit probability and exit time. We determine how network properties, such as randomness or the density of links, influence the exit properties of the model.
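For readers wanting to reproduce the substrate, here is a stdlib-only sketch of a Watts-Strogatz small-world graph: start from a ring lattice with k nearest neighbours, then rewire each edge with probability beta. The function and parameter names are ours, not the paper's.

```python
import random

def watts_strogatz(n, k, beta, rng):
    """Ring lattice with k nearest neighbours; rewire each edge w.p. beta."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add(tuple(sorted((i, (i + j) % n))))
    rewired = set()
    for u, v in sorted(edges):
        if rng.random() < beta:
            # pick a new endpoint, avoiding self-loops and duplicate edges
            choices = [w for w in range(n)
                       if w != u and tuple(sorted((u, w))) not in rewired]
            if choices:
                v = rng.choice(choices)
        rewired.add(tuple(sorted((u, v))))
    return rewired

g = watts_strogatz(20, 4, 0.1, random.Random(1))  # 20 nodes, degree ~4
```

Setting beta = 0 recovers the pure ring lattice; beta = 1 approaches a random graph, which is how the "randomness" knob in the abstract is usually controlled.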
Anderson localization for radial tree-like random quantum graphs
NASA Astrophysics Data System (ADS)
Hislop, Peter D.; Post, Olaf
We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.
Connections between the Sznajd model with general confidence rules and graph theory
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2012-10-01
The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of q>2 agreeing agents, instead of a pair, be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
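The strong-connectivity criterion tied to the existence of fixed points can be checked mechanically; a minimal sketch using two depth-first searches (one on the graph, one on its reverse) is shown below with an invented confidence graph.

```python
def reachable(adj, start):
    """Set of vertices reachable from start by a depth-first search."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def strongly_connected(nodes, edges):
    """True iff every vertex reaches every other (forward + reverse DFS)."""
    adj, radj = {}, {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        radj.setdefault(v, []).append(u)
    s = nodes[0]
    return reachable(adj, s) == set(nodes) == reachable(radj, s)

cycle = [(0, 1), (1, 2), (2, 0)]  # strongly connected
chain = [(0, 1), (1, 2)]          # not: 2 cannot reach 0
```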
Computing Information Value from RDF Graph Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Heileman, Gregory
2010-11-08
Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters: the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed: a target graph representing the semantics of the target body of information, and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both the potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information consumer.
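A toy version of the valuation (all names and the equal weighting invented for illustration) treats RDF graphs as sets of triples, with overlap standing in for trust and the fraction of new triples standing in for impact:

```python
def information_value(context, target, alpha=0.5):
    """Score a target graph against a consumer's context graph.

    context, target: sets of (subject, predicate, object) triples.
    alpha weights trust (overlap) against impact (novelty)."""
    context, target = set(context), set(target)
    overlap = len(context & target) / len(target)   # trust proxy
    impact = len(target - context) / len(target)    # change proxy
    return alpha * overlap + (1 - alpha) * impact

context = {("alice", "knows", "bob"), ("bob", "type", "Person")}
target = {("alice", "knows", "bob"), ("alice", "type", "Person")}
v = information_value(context, target)
```

The paper's actual change measure is a graph edit distance over full RDF graphs; the set difference here is only the simplest stand-in for that idea.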
Sekhar, P Nataraj; Amrutha, R Naga; Sangam, Shubhada; Verma, D P S; Kishor, P B Kavi
2007-11-01
Ornithine delta-aminotransferase (OAT) is an important enzyme in the proline biosynthetic pathway and is implicated in salt tolerance in higher plants. OAT transaminates ornithine to pyrroline 5-carboxylate, which is further catalyzed to proline by pyrroline 5-carboxylate reductase. The Vigna aconitifolia OAT cDNA, encoding a polypeptide of 48.1 kDa, was expressed in Escherichia coli and the enzyme was partially characterized following its purification using (NH4)2SO4 precipitation and gel filtration techniques. Optimal activity of the enzyme was observed at a temperature of 25 degrees C and pH 8.0. The enzyme appeared to be a monomer and exhibited high activity at 4 mM ornithine. Proline did not show any apparent effect, but isoleucine, valine and serine inhibited the activity when added into the assay mixture along with ornithine. Omission of pyridoxal 5'-phosphate from the reaction mixture reduced the activity of this enzyme by 60%. To further evaluate these biochemical observations, homology modeling of the OAT was performed based on the crystal structure of the ornithine delta-aminotransferase from humans (PDB code 1OAT) using the software MODELLER6v2. With the aid of molecular mechanics and dynamics methods, the final model was obtained and assessed subsequently by PROCHECK and a VERIFY-3D graph. With this model, a flexible docking study with the substrate and inhibitors was performed, and the results indicated that Gly106 and Lys256 in OAT are the important determinant residues in binding, as they have strong hydrogen-bonding contacts with the substrate and inhibitors. These observations are in conformity with the results obtained from experimental investigations.
Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.
ERIC Educational Resources Information Center
Alston, Richard M.; Chi, Wan Fu
1989-01-01
Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…
Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xuanhua; Luo, Xuan; Liang, Junling
GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of the large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule) about coloring algorithms, as we observed through masses of real-world graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated with a very high degree of parallelism without violating sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (CuSha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC).
On all the tested applications and datasets, Frog is able to significantly outperform existing GPU-based graph processing systems except Gunrock and MapGraph. MapGraph gets better performance than Frog when running BFS on RoadNet-CA. The comparison between Gunrock and Frog is inconclusive: Frog can outperform Gunrock by more than 1.04X when running PageRank and SSSP, while the advantage of Frog is not obvious when running BFS and CC on some datasets, especially RoadNet-CA.
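The 80-20 observation about colorings can be explored with an ordinary greedy coloring; this sketch (toy graph and our own code, not Frog's implementation) counts how many vertices each color class receives:

```python
from collections import Counter

def greedy_color(adj):
    """Assign each vertex the smallest color unused by its neighbours."""
    color = {}
    for u in sorted(adj):
        used = {color[v] for v in adj[u] if v in color}
        c = 0
        while c in used:
            c += 1
        color[u] = c
    return color

# Small undirected graph: a triangle attached to a two-edge path.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
color = greedy_color(adj)
hist = Counter(color.values())  # population of each color class
```

Vertices sharing a color have no edges between them, so each color class can be updated in parallel without conflicts; the skew of `hist` toward the first colors is the property Frog exploits.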
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
A strand graph semantics for DNA-based computation
Petersen, Rasmus L.; Lakin, Matthew R.; Phillips, Andrew
2015-01-01
DNA nanotechnology is a promising approach for engineering computation at the nanoscale, with potential applications in biofabrication and intelligent nanomedicine. DNA strand displacement is a general strategy for implementing a broad range of nanoscale computations, including any computation that can be expressed as a chemical reaction network. Modelling and analysis of DNA strand displacement systems is an important part of the design process, prior to experimental realisation. As experimental techniques improve, it is important for modelling languages to keep pace with the complexity of structures that can be realised experimentally. In this paper we present a process calculus for modelling DNA strand displacement computations involving rich secondary structures, including DNA branches and loops. We prove that our calculus is also sufficiently expressive to model previous work on non-branching structures, and propose a mapping from our calculus to a canonical strand graph representation, in which vertices represent DNA strands, ordered sites represent domains, and edges between sites represent bonds between domains. We define interactions between strands by means of strand graph rewriting, and prove the correspondence between the process calculus and strand graph behaviours. Finally, we propose a mapping from strand graphs to an efficient implementation, which we use to perform modelling and simulation of DNA strand displacement systems with rich secondary structure. PMID:27293306
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
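The grammar-driven growth idea can be caricatured with a one-rule rewrite in the spirit of the L-systems on which XL builds; the rule and symbols here are invented: each apex buds into an internode carrying a fresh apex.

```python
def grow(shoot):
    """Apply the production apex -> internode apex to every node."""
    new = []
    for node in shoot:
        if node == "apex":
            new += ["internode", "apex"]
        else:
            new.append(node)
    return new

shoot = ["apex"]
for _ in range(3):  # three growth cycles
    shoot = grow(shoot)
```

Multiscale graph grammars generalize this by rewriting graphs rather than strings, and by letting rules at one scale (e.g. organ) read or drive state at another (e.g. metabolite).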
The Full Ward-Takahashi Identity for Colored Tensor Models
NASA Astrophysics Data System (ADS)
Pérez-Sánchez, Carlos I.
2018-03-01
Colored tensor models (CTM) are a random geometrical approach to quantum gravity. We scrutinize the structure of the connected correlation functions of general CTM interactions and organize them by boundaries of Feynman graphs. For rank-D interactions including, but not restricted to, all melonic φ^4-vertices (to wit, solely those quartic vertices that can lead to dominant spherical contributions in the large-N expansion), the aforementioned boundary graphs are shown to be precisely all (possibly disconnected) vertex-bipartite regularly edge-D-colored graphs. The concept of CTM-compatible boundary-graph automorphism is introduced and an auxiliary graph calculus is developed. With the aid of these constructs, a certain U(∞)-invariance of the path integral measure is fully exploited in order to derive a strong Ward-Takahashi identity for CTMs with a symmetry-breaking kinetic term. For the rank-3 φ^4-theory, we get the exact integral-like equation for the 2-point function. Similarly, exact equations for higher multipoint functions can be readily obtained departing from this full Ward-Takahashi identity. Our results hold for some group field theories as well. Altogether, our non-perturbative approach trades some graph-theoretical methods for analytical ones. We believe that these tools can be extended to tensorial SYK models.
Huang, Xiaoke; Zhao, Ye; Yang, Jing; Zhang, Chong; Ma, Chao; Ye, Xinyue
2016-01-01
We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which are smaller than the original street-level graph. Graph centralities, including PageRank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.
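The region ranking above rests on standard centrality computations. As a minimal sketch (not the paper's implementation), PageRank can be obtained by power iteration on a small directed graph; the four-node edge list and the damping factor 0.85 are illustrative assumptions, not the paper's street data.

```python
# PageRank by power iteration on a small directed graph.
# The 4-node edge list and the damping factor are illustrative assumptions.
def pagerank(edges, n, d=0.85, iters=100):
    out_deg = [0] * n
    for u, _ in edges:
        out_deg[u] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - d) / n] * n          # teleportation mass
        for u, v in edges:
            nxt[v] += d * rank[u] / out_deg[u]
        rank = nxt
    return rank

edges = [(0, 1), (1, 2), (2, 0), (2, 1), (3, 2)]
ranks = pagerank(edges, 4)
```

The ranks form a probability distribution, and node 2, which receives links from both node 1 and node 3, comes out on top.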
NASA Astrophysics Data System (ADS)
Viana, Ilisio; Orteu, Jean-José; Cornille, Nicolas; Bugarin, Florian
2015-11-01
We focus on quality control of mechanical parts in an aeronautical context using a single pan-tilt-zoom (PTZ) camera and a computer-aided design (CAD) model of the mechanical part. We use the CAD model to create a theoretical image of the element to be checked, which is then matched with the sensed image of the element to be inspected, using a graph theory-based approach. The matching is carried out in two stages. First, the two images are used to create two attributed graphs representing the primitives (ellipses and line segments) in the images. In the second stage, the graphs are matched using a similarity function built from the primitive parameters. The similarity scores of the matching are injected into the edges of a bipartite graph. A best-match-search procedure in the bipartite graph guarantees the uniqueness of the match solution. The method achieves promising performance in tests with synthetic data including missing elements, displaced elements, size changes, and combinations of these cases. The results open good prospects for using the method with realistic data.
Graph Structure in Three National Academic Webs: Power Laws with Anomalies.
ERIC Educational Resources Information Center
Thelwall, Mike; Wilkinson, David
2003-01-01
Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)
GraphStore: A Distributed Graph Storage System for Big Data Networks
ERIC Educational Resources Information Center
Martha, VenkataSwamy
2013-01-01
Networks, such as social networks, are a universal solution for modeling complex problems in real time, especially in the Big Data community. While previous studies have attempted to enhance network processing algorithms, none have paved a path for the development of a persistent storage system. The proposed solution, GraphStore, provides an…
Graph rigidity, cyclic belief propagation, and point pattern matching.
McAuley, Julian J; Caetano, Tibério S; Barbosa, Marconi S
2008-11-01
A recent paper [1] proposed a provably optimal polynomial-time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is globally rigid, implying that exact inference in it provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph that is also globally rigid but has an advantage over the graph proposed in [1]: Its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus, standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that in [1] when there is noise in the point patterns.
SpectralNET – an application for spectral graph analysis and visualization
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-01-01
Background Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from . Source code is available upon request. PMID:16236170
SpectralNET--an application for spectral graph analysis and visualization.
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-10-19
Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.
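The adjacency, Laplacian, and normalized-Laplacian matrices underlying these vertex-level components can be built directly from an edge list. The following is a minimal pure-Python sketch (not SpectralNET code), using a three-node path graph as an illustrative input and assuming no isolated vertices.

```python
import math

# Build adjacency A, Laplacian L = D - A, and normalized Laplacian
# I - D^{-1/2} A D^{-1/2} for a small unweighted graph.
# Assumes every vertex has degree >= 1 (no isolated vertices).
def laplacians(n, edges):
    A = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1.0
    deg = [sum(row) for row in A]
    L = [[(deg[i] if i == j else 0.0) - A[i][j] for j in range(n)]
         for i in range(n)]
    Lnorm = [[(1.0 if i == j else 0.0)
              - A[i][j] / math.sqrt(deg[i] * deg[j])
              for j in range(n)] for i in range(n)]
    return A, L, Lnorm

# Illustrative example: the path graph on 3 nodes.
A, L, Lnorm = laplacians(3, [(0, 1), (1, 2)])
```

Each row of L sums to zero, reflecting that the constant vector is a Laplacian eigenvector with eigenvalue 0, which is the anchor for the spectral embeddings described above.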
Hydrogen combustion in tomorrow's energy technology
NASA Astrophysics Data System (ADS)
Peschka, W.
The fundamental characteristics of hydrogen combustion and the current status of hydrogen energy applications technology are reviewed, with an emphasis on research being pursued at DFVLR. Topics addressed include reaction mechanisms and pollution, steady-combustion devices (catalytic heaters, H2/air combustors, H2/O2 rocket engines, H2-fueled jet engines, and gas and steam turbine processes), unsteady combustion (in internal-combustion engines with internal or external mixture formation), and feasibility studies of hydrogen-powered automobiles. Diagrams, drawings, graphs, and photographs are provided.
NASA Technical Reports Server (NTRS)
Miron, Y.; Perlee, H. E.
1974-01-01
The combustion characteristics of hypergolic propellants are described. A research project was conducted to determine if the reaction control system engine propellants on Apollo spacecraft undergo explosive reaction when subjected to conditions present in the engine at the time of ignition. Combustion characteristics pertinent to the hard-start phenomenon are considered. The thermal stability of frozen mixtures of hydrazine-based fuels with nitrogen tetroxide was analyzed. Results of the tests are presented in the form of tables and graphs.
From brain topography to brain topology: relevance of graph theory to functional neuroscience.
Minati, Ludovico; Varotto, Giulia; D'Incerti, Ludovico; Panzica, Ferruccio; Chan, Dennis
2013-07-10
Although several brain regions show significant specialization, higher functions such as cross-modal information integration, abstract reasoning and conscious awareness are viewed as emerging from interactions across distributed functional networks. Analytical approaches capable of capturing the properties of such networks can therefore enhance our ability to make inferences from functional MRI, electroencephalography and magnetoencephalography data. Graph theory is a branch of mathematics that focuses on the formal modelling of networks and offers a wide range of theoretical tools to quantify specific features of network architecture (topology) that can provide information complementing the anatomical localization of areas responding to given stimuli or tasks (topography). Explicit modelling of the architecture of axonal connections and interactions among areas can furthermore reveal peculiar topological properties that are conserved across diverse biological networks, and highly sensitive to disease states. The field is evolving rapidly, partly fuelled by computational developments that enable the study of connectivity at fine anatomical detail and the simultaneous interactions among multiple regions. Recent publications in this area have shown that graph-based modelling can enhance our ability to draw causal inferences from functional MRI experiments, and support the early detection of disconnection and the modelling of pathology spread in neurodegenerative disease, particularly Alzheimer's disease. Furthermore, neurophysiological studies have shown that network topology has a profound link to epileptogenesis and that connectivity indices derived from graph models aid in modelling the onset and spread of seizures. Graph-based analyses may therefore significantly help understand the bases of a range of neurological conditions. This review is designed to provide an overview of graph-based analyses of brain connectivity and their relevance to disease, aimed principally at general neuroscientists and clinicians.
Graph cuts for curvature based image denoising.
Bae, Egil; Shi, Juan; Tai, Xue-Cheng
2011-05-01
Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
Topological Characterization of Carbon Graphite and Crystal Cubic Carbon Structures.
Gao, Wei; Siddiqui, Muhammad Kamran; Naeem, Muhammad; Rehman, Najma Abdul
2017-09-07
Graph theory is used for the modeling, design, analysis and understanding of chemical structures or chemical networks and their properties. The molecular graph is a graph consisting of atoms called vertices and the chemical bonds between atoms called edges. In this article, we study the chemical graphs of carbon graphite and the crystal structure of cubic carbon. Moreover, we compute and give closed formulas of degree-based additive topological indices, namely the hyper-Zagreb index, first multiple and second multiple Zagreb indices, and first and second Zagreb polynomials.
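For a concrete molecular graph, the degree-based indices named above reduce to simple sums over vertices and edges. Using the standard definitions M1(G) = Σ_v deg(v)², M2(G) = Σ_{uv∈E} deg(u)·deg(v), and HM(G) = Σ_{uv∈E} (deg(u)+deg(v))², a minimal sketch (with a three-vertex path as an illustrative input, not a carbon structure from the paper) is:

```python
# First Zagreb M1, second Zagreb M2, and hyper-Zagreb HM indices
# computed from an edge list; the 3-vertex path is an illustrative example.
def zagreb_indices(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m1 = sum(d * d for d in deg)                          # sum over vertices
    m2 = sum(deg[u] * deg[v] for u, v in edges)           # sum over edges
    hm = sum((deg[u] + deg[v]) ** 2 for u, v in edges)    # hyper-Zagreb
    return m1, m2, hm

m1, m2, hm = zagreb_indices(3, [(0, 1), (1, 2)])  # path graph P3
```

For P3 the degree sequence is (1, 2, 1), giving M1 = 6, M2 = 4, and HM = 18.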
An Xdata Architecture for Federated Graph Models and Multi-tier Asymmetric Computing
2014-01-01
Wikipedia, a scale-free random graph (kron), Akamai trace route data, Bitcoin transaction data, and a Twitter follower network. We present results for...3x (SSSP on a random graph) and nearly 300x (Akamai and Bitcoin) over the CPU performance of a well-known and widely deployed CPU-based graph...provided better throughput for smaller frontiers such as roadmaps or the Bitcoin data set. In our work, we have focused on two-phase kernels, but it
NASA Astrophysics Data System (ADS)
Pelanti, Marica; Shyue, Keh-Ming
2015-05-01
The authors regret that one erroneous plot of the numerical results for a dodecane liquid-vapor shock tube problem was included in Fig. 3, p. 346, of the article [1]. Specifically, the graph of the vapor-liquid temperature difference (Tv - Tl) displayed at the bottom-right corner of Fig. 3 in [1] is not correct due to some wrong settings introduced in the temperature visualization tool. The error pertains solely to simulation data post-processing, and it is not related to the numerical methods and programs employed to run the experiment. We display here in Fig. 1 the correct temperature difference plot, generated from our original results computed for the dodecane shock tube test described in [1]. We believe it is important to point out this correction to avoid any confusion.
Nagoor Gani, A; Latha, S R
2016-01-01
A Hamiltonian cycle in a graph is a cycle that visits each node/vertex exactly once. A graph containing a Hamiltonian cycle is called a Hamiltonian graph. There has been considerable research on counting the Hamiltonian cycles of a Hamiltonian graph. As the number of vertices and edges grows, it becomes very difficult to keep track of all the different ways through which the vertices are connected. Hence, large graphs can be analyzed efficiently with the assistance of a computer system that interprets graphs as matrices, and a well-written algorithm expedites the analysis further. The most convenient way to quickly test whether there is an edge between two vertices is to represent graphs using adjacency matrices. In this paper, a new algorithm is proposed to find a fuzzy Hamiltonian cycle using the adjacency matrix and the degrees of the vertices of a fuzzy graph. A fuzzy graph structure is also modeled to illustrate the proposed algorithm with a selected air network of Indigo airlines.
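For crisp (non-fuzzy) graphs, the adjacency-matrix test for a Hamiltonian cycle can be sketched by brute force over vertex orderings. This is only feasible for small graphs and omits the fuzzy membership degrees central to the paper; the two example graphs are illustrative assumptions.

```python
from itertools import permutations

# Brute-force Hamiltonian-cycle check using an adjacency matrix.
# Practical only for small n; fuzzy edge memberships are not modeled.
def has_hamiltonian_cycle(adj):
    n = len(adj)
    for perm in permutations(range(1, n)):   # fix vertex 0 to skip rotations
        tour = (0,) + perm
        if all(adj[tour[i]][tour[(i + 1) % n]] for i in range(n)):
            return True
    return False

# 5-cycle C5 (Hamiltonian) and star K1,3 (not Hamiltonian).
cycle5 = [[1 if abs(i - j) in (1, 4) else 0 for j in range(5)]
          for i in range(5)]
star = [[1 if 0 in (i, j) and i != j else 0 for j in range(4)]
        for i in range(4)]
```

Fixing vertex 0 as the tour's start removes the n-fold rotational symmetry of each cycle, shrinking the search from n! to (n-1)! orderings.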
Measuring Graph Comprehension, Critique, and Construction in Science
NASA Astrophysics Data System (ADS)
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-08-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed items to measure graph comprehension, critique, and construction and developed scoring rubrics based on the knowledge integration (KI) framework. We administered the items to over 460 middle school students. We found that the items formed a coherent scale and had good reliability using both item response theory and classical test theory. The KI scoring rubric showed that most students had difficulty linking graphs features to science concepts, especially when asked to critique or construct graphs. In addition, students with limited access to computers as well as those who speak a language other than English at home have less integrated understanding than others. These findings point to the need to increase the integration of graphing into science instruction. The results suggest directions for further research leading to comprehensive assessments of graph understanding.
On a programming language for graph algorithms
NASA Technical Reports Server (NTRS)
Rheinboldt, W. C.; Basili, V. R.; Mesztenyi, C. K.
1971-01-01
An algorithmic language, GRAAL, is presented for describing and implementing graph algorithms of the type primarily arising in applications. The language is based on a set algebraic model of graph theory which defines the graph structure in terms of morphisms between certain set algebraic structures over the node set and arc set. GRAAL is modular in the sense that the user specifies which of these mappings are available with any graph. This allows flexibility in the selection of the storage representation for different graph structures. In line with its set theoretic foundation, the language introduces sets as a basic data type and provides for the efficient execution of all set and graph operators. At present, GRAAL is defined as an extension of ALGOL 60 (revised) and its formal description is given as a supplement to the syntactic and semantic definition of ALGOL. Several typical graph algorithms are written in GRAAL to illustrate various features of the language and to show its applicability.
NASA Astrophysics Data System (ADS)
Kase, Sue E.; Vanni, Michelle; Knight, Joanne A.; Su, Yu; Yan, Xifeng
2016-05-01
Within operational environments decisions must be made quickly based on the information available. Identifying an appropriate knowledge base and accurately formulating a search query are critical tasks for decision-making effectiveness in dynamic situations. Graph data management tools for accessing large graph databases are a rapidly emerging research area of potential benefit to the intelligence community. A graph representation provides a natural way of modeling data in a wide variety of domains. Graph structures use nodes, edges, and properties to represent and store data. This research investigates the advantages of information search by graph query initiated by the analyst and interactively refined within the contextual dimensions of the answer space toward a solution. The paper introduces SLQ, a user-friendly graph querying system enabling the visual formulation of schemaless and structureless graph queries. SLQ is demonstrated with an intelligence analyst information search scenario focused on identifying individuals responsible for manufacturing a mosquito-hosted deadly virus. The scenario highlights the interactive construction of graph queries without prior training in complex query languages or graph databases, intuitive navigation through the problem space, and visualization of results in graphical format.
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
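One common formulation of the GTG connection rule joins u and v whenever (w_u + w_v)·r_uv^(-β) ≥ θ. The sketch below generates such a graph with β = 2 and uniform positions and weights; these distributional and parameter choices are illustrative assumptions, not the paper's setup.

```python
import random

# Generate a 2-D geographical threshold graph on the unit square:
# u ~ v iff (w_u + w_v) / r_uv^2 >= theta. Uniform weights and the
# value of theta are illustrative assumptions.
def gtg(n, theta, seed=0):
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    w = [rng.random() for _ in range(n)]
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            r2 = (pos[u][0] - pos[v][0]) ** 2 + (pos[u][1] - pos[v][1]) ** 2
            if r2 > 0 and (w[u] + w[v]) / r2 >= theta:
                edges.add((u, v))
    return pos, w, edges

pos, w, edges = gtg(50, theta=4.0)
```

Raising θ sparsifies the graph; heavier-weight nodes keep long-range edges longer, which is exactly the "richer" structure distinguishing GTGs from plain random geometric graphs.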
Approximate ground states of the random-field Potts model from graph cuts
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay
2018-05-01
While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
Modelling disease outbreaks in realistic urban social networks
NASA Astrophysics Data System (ADS)
Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan
2004-05-01
Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
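To make the contrast with uniform-mixing differential equations concrete, here is a minimal discrete-time SIR simulation on an explicit contact graph. The six-node graph, per-contact infection probability, and fixed infectious period are illustrative assumptions, far from the paper's large-scale urban simulations.

```python
import random

# Discrete-time SIR on a contact graph: each step, every infected node
# infects each susceptible neighbor with probability beta, and recovers
# after a fixed infectious period. All parameters are illustrative.
def sir(neighbors, source, beta=0.3, period=3, seed=1):
    rng = random.Random(seed)
    n = len(neighbors)
    state = ["S"] * n            # S, I, or R
    clock = [0] * n              # steps spent infected
    state[source] = "I"
    while "I" in state:
        newly = []
        for u in range(n):
            if state[u] == "I":
                for v in neighbors[u]:
                    if state[v] == "S" and rng.random() < beta:
                        newly.append(v)
                clock[u] += 1
                if clock[u] >= period:
                    state[u] = "R"
        for v in newly:
            if state[v] == "S":
                state[v] = "I"
    return state

# A 6-node contact graph: two triangles joined by a bridge edge.
g = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
final = sir(g, source=0)
```

Because every infected node recovers after a fixed period, the epidemic always terminates, and who ends up recovered depends on the graph structure rather than on a mass-action mixing rate.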
Flows in a tube structure: Equation on the graph
NASA Astrophysics Data System (ADS)
Panasenko, Grigory; Pileckas, Konstantin
2014-08-01
The steady-state Navier-Stokes equations in thin structures lead to some elliptic second order equation for the macroscopic pressure on a graph. At the nodes of the graph the pressure satisfies Kirchhoff-type junction conditions. In the non-steady case the problem for the macroscopic pressure on the graph becomes nonlocal in time. In the paper we study the existence and uniqueness of a solution to such one-dimensional model on the graph for a pipe-wise network. We also prove the exponential decay of the solution with respect to the time variable in the case when the data decay exponentially with respect to time.
Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs.
Shang, Yilun
2015-01-01
Large-scale time-evolving networks have been generated by many natural and technological applications, posing challenges for computation and modeling. Thus, it is of theoretical and practical significance to probe mathematical tools tailored for evolving networks. In this paper, on top of the dynamic Estrada index, we study the dynamic Laplacian Estrada index and the dynamic normalized Laplacian Estrada index of evolving graphs. Using linear algebra techniques, we establish general upper and lower bounds for these graph-spectrum-based invariants through a couple of intuitive graph-theoretic measures, including the number of vertices or edges. Synthetic random evolving small-world networks are employed to show the relevance of the proposed dynamic Estrada indices. It is found that neither the static snapshot graphs nor the aggregated graph can approximate the evolving graph itself, indicating the fundamental difference between the static and dynamic Estrada indices.
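The static building block here is the Laplacian Estrada index LE(G) = Σ_i e^{μ_i}, where μ_i are the Laplacian eigenvalues, or equivalently LE(G) = tr(e^L). The sketch below evaluates the trace form by a truncated Taylor series of the matrix exponential; the single-edge example is illustrative, and the paper's dynamic indices extend such quantities over time-ordered snapshots.

```python
import math

# Laplacian Estrada index LE(G) = tr(exp(L)) via a truncated Taylor
# series of the matrix exponential; adequate for small graphs.
def laplacian_estrada(n, edges, terms=40):
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    trace, fact = 0.0, 1.0
    for k in range(terms):
        trace += sum(power[i][i] for i in range(n)) / fact   # tr(L^k)/k!
        power = [[sum(power[i][m] * L[m][j] for m in range(n))
                  for j in range(n)] for i in range(n)]
        fact *= (k + 1)
    return trace

# Single edge K2: Laplacian eigenvalues are 0 and 2, so LE = 1 + e^2.
le = laplacian_estrada(2, [(0, 1)])
```

The K2 check works because tr(e^L) sums e^{μ_i} over the spectrum, here e^0 + e^2.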
Zhang, Pin; Liang, Yanmei; Chang, Shengjiang; Fan, Hailun
2013-08-01
Accurate segmentation of renal tissues in abdominal computed tomography (CT) image sequences is an indispensable step for computer-aided diagnosis and pathology detection in clinical applications. In this study, the goal is to develop a radiology tool to extract renal tissues in CT sequences for the management of renal diagnosis and treatments. In this paper, the authors propose a new graph-cuts-based active contours model with an adaptive width of narrow band for kidney extraction in CT image sequences. Based on graph cuts and contextual continuity, the segmentation is carried out slice-by-slice. In the first stage, the middle two adjacent slices in a CT sequence are segmented interactively based on the graph cuts approach. Subsequently, the deformable contour evolves toward the renal boundaries by the proposed model for the kidney extraction of the remaining slices. In this model, the energy function combining boundary with regional information is optimized in the constructed graph and the adaptive search range is determined by contextual continuity and the object size. In addition, in order to reduce the complexity of the min-cut computation, the nodes in the graph only have n-links for fewer edges. A total of 30 CT image sequences with normal and pathological renal tissues are used to evaluate the accuracy and effectiveness of our method. The experimental results reveal that the average dice similarity coefficient of these image sequences is from 92.37% to 95.71% and the corresponding standard deviation for each dataset is from 2.18% to 3.87%. In addition, the average automatic segmentation time for one kidney in each slice is about 0.36 s. Integrating the graph-cuts-based active contours model with contextual continuity, the algorithm takes advantage of energy minimization and the characteristics of image sequences. The proposed method achieves effective results for kidney segmentation in CT sequences.
Graph mining for next generation sequencing: leveraging the assembly graph for biological insights.
Warnke-Sommer, Julia; Ali, Hesham
2016-05-06
The assembly of Next Generation Sequencing (NGS) reads remains a challenging task. This is especially true for the assembly of metagenomics data that originate from environmental samples potentially containing hundreds to thousands of unique species. The principle objective of current assembly tools is to assemble NGS reads into contiguous stretches of sequence called contigs while maximizing for both accuracy and contig length. The end goal of this process is to produce longer contigs with the major focus being on assembly only. Sequence read assembly is an aggregative process, during which read overlap relationship information is lost as reads are merged into longer sequences or contigs. The assembly graph is information rich and capable of capturing the genomic architecture of an input read data set. We have developed a novel hybrid graph in which nodes represent sequence regions at different levels of granularity. This model, utilized in the assembly and analysis pipeline Focus, presents a concise yet feature rich view of a given input data set, allowing for the extraction of biologically relevant graph structures for graph mining purposes. Focus was used to create hybrid graphs to model metagenomics data sets obtained from the gut microbiomes of five individuals with Crohn's disease and eight healthy individuals. Repetitive and mobile genetic elements are found to be associated with hybrid graph structure. Using graph mining techniques, a comparative study of the Crohn's disease and healthy data sets was conducted with focus on antibiotics resistance genes associated with transposase genes. Results demonstrated significant differences in the phylogenetic distribution of categories of antibiotics resistance genes in the healthy and diseased patients. Focus was also evaluated as a pure assembly tool and produced excellent results when compared against the Meta-velvet, Omega, and UD-IDBA assemblers. Mining the hybrid graph can reveal biological phenomena captured by its structure. We demonstrate the advantages of considering assembly graphs as data-mining support in addition to their role as frameworks for assembly.
NASA Astrophysics Data System (ADS)
Holme, Petter; Saramäki, Jari
2012-10-01
A great variety of systems in nature, society and technology-from the web of sexual contacts to the Internet, from the nervous system to power grids-can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via e-mail, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names-temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology-rather, we want to make papers readable across disciplines.
New Graph Models and Algorithms for Detecting Salient Structures from Cluttered Images
2010-02-24
Development of graph models and algorithms to detect boundaries that show certain levels of symmetry, an important geometric property of many...
Volume simplicity constraint in the Engle-Livine-Pereira-Rovelli spin foam model
NASA Astrophysics Data System (ADS)
Bahr, Benjamin; Belov, Vadim
2018-04-01
We propose a quantum version of the quadratic volume simplicity constraint for the Engle-Livine-Pereira-Rovelli spin foam model. It relies on a formula for the volume of a 4-dimensional polyhedron, depending on its bivectors and the knotting class of its boundary graph. While this leads to no further condition for the 4-simplex, the constraint becomes nontrivial for more complicated boundary graphs. We show that, in the semiclassical limit of the hypercuboidal graph, the constraint turns into the geometricity condition observed recently by several authors.
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
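The simulated functional data above are generated by the Kuramoto model of coupled phase oscillators. The following is a minimal, illustrative pure-Python sketch, not the study's actual code; the coupling strength K, the Euler step size, and the toy all-to-all connectivity matrix are assumptions:

```python
import math, random

def kuramoto_step(theta, omega, C, K, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + K * sum_j C[i][j] * sin(theta_j - theta_i)."""
    n = len(theta)
    new = []
    for i in range(n):
        coupling = sum(C[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
        new.append(theta[i] + dt * (omega[i] + K * coupling))
    return new

def order_parameter(theta):
    """Global synchrony r in [0, 1]: r = |mean(exp(i*theta))|."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

random.seed(0)
n = 10
C = [[0 if i == j else 1 for j in range(n)] for i in range(n)]  # toy all-to-all coupling
theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]      # random initial phases
omega = [random.gauss(0, 0.1) for _ in range(n)]                # natural frequencies
for _ in range(2000):
    theta = kuramoto_step(theta, omega, C, K=0.5, dt=0.01)
print(round(order_parameter(theta), 2))
```

With coupling this strong relative to the frequency spread, the oscillators synchronize and the order parameter approaches 1; sweeping K down reveals the dynamic regimes the study compares against empirical networks.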
Evaluating approaches to find exon chains based on long reads.
Kuosmanen, Anna; Norri, Tuukka; Mäkinen, Veli
2018-05-01
Transcript prediction can be modeled as a graph problem where exons are modeled as nodes and reads spanning two or more exons are modeled as exon chains. Pacific Biosciences third-generation sequencing technology produces significantly longer reads than earlier second-generation sequencing technologies, which gives valuable information about longer exon chains in a graph. However, with the high error rates of third-generation sequencing, aligning long reads correctly around the splice sites is a challenging task. Incorrect alignments lead to spurious nodes and arcs in the graph, which in turn lead to incorrect transcript predictions. We survey several approaches to find the exon chains corresponding to long reads in a splicing graph, and experimentally study the performance of these methods using simulated data to allow for sensitivity/precision analysis. Our experiments show that short reads from second-generation sequencing can be used to significantly improve exon chain correctness either by error-correcting the long reads before splicing graph creation, or by using them to create a splicing graph on which the long-read alignments are then projected. We also study the memory and time consumption of various modules, and show that accurate exon chains lead to significantly increased transcript prediction accuracy. The simulated data and in-house scripts used for this article are available at http://www.cs.helsinki.fi/group/gsa/exon-chains/exon-chains-bib.tar.bz2.
Dowding, Dawn; Merrill, Jacqueline A; Onorato, Nicole; Barrón, Yolanda; Rosati, Robert J; Russell, David
2018-02-01
To explore home care nurses' numeracy and graph literacy and their relationship to comprehension of visualized data. A multifactorial experimental design using online survey software. Nurses were recruited from 2 Medicare-certified home health agencies. Numeracy and graph literacy were measured using validated scales. Nurses were randomized to 1 of 4 experimental conditions. Each condition displayed data for 1 of 4 quality indicators, in 1 of 4 different visualized formats (bar graph, line graph, spider graph, table). A mixed linear model measured the impact of numeracy, graph literacy, and display format on data understanding. In all, 195 nurses took part in the study. They were slightly more numerate and graph literate than the general population. Overall, nurses understood information presented in bar graphs most easily (88% correct), followed by tables (81% correct), line graphs (77% correct), and spider graphs (41% correct). Individuals with low numeracy and low graph literacy had poorer comprehension of information displayed across all formats. High graph literacy appeared to enhance comprehension of data regardless of numeracy capabilities. Clinical dashboards are increasingly used to provide information to clinicians in visualized format, under the assumption that visual display reduces cognitive workload. Results of this study suggest that nurses' comprehension of visualized information is influenced by their numeracy, graph literacy, and the display format of the data. Individual differences in numeracy and graph literacy skills need to be taken into account when designing dashboard technology. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve
2017-12-01
In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure, so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M²IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag; (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification; and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of M²IL. Experiments and analyses in many practical applications prove the effectiveness of M²IL.
Computing Strongly Connected Components in the Streaming Model
NASA Astrophysics Data System (ADS)
Laura, Luigi; Santaroni, Federico
In this paper we present the first algorithm to compute the Strongly Connected Components of a graph in the data-stream model (W-Stream), where the graph is represented by a stream of edges and we are allowed to produce intermediate output streams. The algorithm is simple, effective, and can be implemented with few lines of code: it looks at each edge in the stream, and selects the appropriate action with respect to a tree T, representing the graph connectivity seen so far. We analyze the theoretical properties of the algorithm: correctness, memory occupation (O(n log n)), per-item processing time (bounded by the current height of T), and number of passes (bounded by the maximal height of T). We conclude by presenting a brief experimental evaluation of the algorithm against massive synthetic and real graphs that confirms its effectiveness: for graphs with up to 100M nodes and 4G edges, only a few passes are needed, and millions of edges per second are processed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winlaw, Manda; De Sterck, Hans; Sanders, Geoffrey
In very simple terms, a network can be defined as a collection of points joined together by lines. Thus, networks can be used to represent connections between entities in a wide variety of fields including engineering, science, medicine, and sociology. Many large real-world networks share a surprising number of properties, leading to a strong interest in model development research, and techniques for building synthetic networks have been developed that capture these similarities and replicate real-world graphs. Modeling these real-world networks serves two purposes. First, building models that mimic the patterns and properties of real networks helps us understand the implications of these patterns and helps determine which patterns are important. If we develop a generative process to synthesize real networks, we can also examine which growth processes are plausible and which are not. Second, high-quality, large-scale network data is often not available because of economic, legal, technological, or other obstacles [7]. Thus, there are many instances where the systems of interest cannot be represented by a single exemplar network. As one example, consider the field of cybersecurity, where systems require testing across diverse threat scenarios and validation across diverse network structures. In these cases, where there is no single exemplar network, the systems must instead be modeled as a collection of networks in which the variation among them may be just as important as their common features. By developing processes to build synthetic models, so-called graph generators, we can build synthetic networks that capture both the essential features of a system and realistic variability. We can then use such synthetic graphs to perform tasks such as simulations, analysis, and decision making. We can also use synthetic graphs to performance-test graph analysis algorithms, including clustering algorithms and anomaly detection algorithms.
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been used to address these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture the long-range correlations in Bag-of-Words by constructing a network learning framework, in contrast to the lattice used in CRF. Furthermore, we develop a parameter learning algorithm based on gradient descent and the Loopy Sum-Product algorithm for the factor graph model. Experimental results on the Graz-02 dataset show that the recognition performance of our method, in precision and recall, is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
The bilinear-biquadratic model on the complete graph
NASA Astrophysics Data System (ADS)
Jakab, Dávid; Szirmai, Gergely; Zimborás, Zoltán
2018-03-01
We study the spin-1 bilinear-biquadratic model on the complete graph of N sites, i.e. when each spin is interacting with every other spin with the same strength. Because of its complete permutation invariance, this Hamiltonian can be rewritten as the linear combination of the quadratic Casimir operators of \
Graph theory applied to noise and vibration control in statistical energy analysis models.
Guasch, Oriol; Cortés, Lluís
2009-06-01
A fundamental aspect of noise and vibration control in statistical energy analysis (SEA) models consists in first identifying and then reducing the energy flow paths between subsystems. In this work, it is proposed to make use of some results from graph theory to address both issues. On the one hand, linear and path algebras applied to adjacency matrices of SEA graphs are used to determine the existence of paths of any order between subsystems, counting and labeling them, finding extremal paths, and determining the power flow contributions from groups of paths. On the other hand, a strategy is presented that makes use of graph cut algorithms to reduce the energy flow from a source subsystem to a receiver one, modifying as few internal and coupling loss factors as possible.
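The path-existence and path-counting machinery mentioned above rests on a standard fact of linear algebra: the L-th power of an adjacency matrix counts walks of exactly L edges. A minimal sketch (the toy subsystem graph and function names are illustrative, not from the paper):

```python
def mat_mult(A, B):
    """Plain matrix multiplication on nested lists."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def count_walks(adj, length):
    """(A^L)[i][j] counts directed walks of exactly L edges from i to j."""
    result = adj
    for _ in range(length - 1):
        result = mat_mult(result, adj)
    return result

# Toy SEA-style subsystem graph: energy can flow 0 -> 1 -> 2 and directly 0 -> 2
A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
print(count_walks(A, 2)[0][2])  # one two-edge path: 0 -> 1 -> 2
```

Replacing ordinary (+, *) with other semirings (the "path algebras" of the abstract) turns the same matrix iteration into shortest-path or extremal-path computations.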
Numerical simulation of electron scattering by nanotube junctions
NASA Astrophysics Data System (ADS)
Brüning, J.; Grikurov, V. E.
2008-03-01
We demonstrate the possibility of computing the intensity of electronic transport through various junctions of three-dimensional metallic nanotubes. In particular, we observe that a magnetic field can be used to control the switching of electrons in Y-type junctions. Keeping in mind the asymptotic modeling of reliable nanostructures by quantum graphs, we conjecture that the scattering matrix of the graph should be the same as the scattering matrix of its nanosize prototype. The numerical computation of the latter gives a method for determining the "gluing" conditions at a graph. Exploring this conjecture, we show that the Kirchhoff conditions (which are commonly used on graphs) cannot be applied to model reliable junctions. This work is a natural extension of the paper [1], but it is written in a self-consistent manner.
Simulator for heterogeneous dataflow architectures
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
1993-01-01
A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri Net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady-state time-optimized performance. This simulator extends the ATAMM simulation capability from a homogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case with only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data, including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contain the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty to be indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and their neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and for fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification.
Most importantly, the new similarity measurement and the index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
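A toy version of the local-feature hashing idea can be sketched as follows; the feature definition (a node's label plus its sorted neighbor labels) and the overlap-based similarity are simplified illustrations, not G-hash's actual kernel:

```python
from collections import defaultdict

def local_feature(graph, labels, node):
    """A node's local feature: its own label plus the sorted labels of its neighbors."""
    return (labels[node], tuple(sorted(labels[v] for v in graph[node])))

def feature_table(graphs_labels):
    """Hash table mapping each local feature to the set of graphs containing it."""
    table = defaultdict(set)
    for gid, (graph, labels) in graphs_labels.items():
        for node in graph:
            table[local_feature(graph, labels, node)].add(gid)
    return table

def similarity(table, g1, g2, graphs_labels):
    """Toy kernel: fraction of g1's node features also present in g2."""
    graph, labels = graphs_labels[g1]
    feats = [local_feature(graph, labels, n) for n in graph]
    hits = sum(1 for f in feats if g2 in table[f])
    return hits / len(feats)

# Two tiny molecule-like graphs (adjacency lists plus atom labels)
graphs = {
    "g1": ({0: [1], 1: [0, 2], 2: [1]}, {0: "C", 1: "O", 2: "C"}),
    "g2": ({0: [1], 1: [0, 2], 2: [1]}, {0: "C", 1: "O", 2: "C"}),
}
table = feature_table(graphs)
print(similarity(table, "g1", "g2", graphs))  # identical graphs -> 1.0
```

The point of the hash table is that a similarity query touches only the buckets for the query graph's features, rather than comparing against every database graph.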
Directed Laplacians For Fuzzy Autocatalytic Set Of Fuzzy Graph Type-3 Of An Incineration Process
NASA Astrophysics Data System (ADS)
Ahmad, Tahir; Baharun, Sabariah; Bakar, Sumarni Abu
2010-11-01
Fuzzy Autocatalytic Set (FACS) of Fuzzy Graph Type-3 was used in the modeling of a clinical waste incineration process in Malacca. FACS provided more accurate explanations of the incineration process than using a crisp graph. In this paper we explore FACS further. Directed and combinatorial Laplacians of FACS are developed and their basic properties are presented.
Coloring geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Percus, Allon; Muller, Tobias
We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = (ln n / ln ln n)(1 + o(1)). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)², and specify the constant C.
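The GTG construction can be sketched as below. The particular threshold function (w_u + w_v)/d² ≥ θ and the exponential weight distribution are common choices in the GTG literature, assumed here rather than taken from the paper, and the first-fit coloring is a generic heuristic, not the authors' algorithm:

```python
import random

def geographical_threshold_graph(n, theta, seed=0):
    """Sample n nodes with uniform positions in the unit square and exponential
    weights; join u, v when (w_u + w_v) / dist(u, v)**2 >= theta."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    w = [rng.expovariate(1.0) for _ in range(n)]
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            d2 = (pos[u][0] - pos[v][0]) ** 2 + (pos[u][1] - pos[v][1]) ** 2
            if d2 > 0 and (w[u] + w[v]) / d2 >= theta:
                edges.add((u, v))
    return pos, w, edges

def greedy_coloring(n, edges):
    """First-fit coloring: each node takes the smallest color unused by its neighbors."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for u in range(n):
        used = {color[v] for v in adj[u] if v in color}
        c = 0
        while c in used:
            c += 1
        color[u] = c
    return color

pos, w, edges = geographical_threshold_graph(50, theta=200.0)
color = greedy_coloring(50, edges)
# A proper coloring: no edge joins two nodes of the same color
assert all(color[u] != color[v] for u, v in edges)
```

Raising θ sparsifies the graph, which is the regime the chromatic-number asymptotics above describe.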
Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.
Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu
2016-11-01
In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
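A minimal example of the graph-based regularization idea underlying such methods is iterative label propagation on a similarity graph: labeled points are clamped while unlabeled points repeatedly take the similarity-weighted average of their neighbors' scores. This is a generic sketch, not the OGL algorithm:

```python
def propagate_labels(W, labels, iters=100):
    """Iterative label propagation on similarity matrix W.
    `labels` maps node index -> clamped score; other nodes are inferred."""
    n = len(W)
    f = [labels.get(i, 0.0) for i in range(n)]
    for _ in range(iters):
        new = []
        for i in range(n):
            if i in labels:
                new.append(labels[i])  # labeled nodes stay fixed
            else:
                total = sum(W[i][j] for j in range(n) if j != i)
                avg = sum(W[i][j] * f[j] for j in range(n) if j != i)
                new.append(avg / total if total else 0.0)
        f = new
    return f

# Chain of 4 points; the ends carry tags +1 / -1, the middle nodes are untagged
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
scores = propagate_labels(W, {0: 1.0, 3: -1.0})
print([round(s, 2) for s in scores])
```

The inferred scores interpolate smoothly along the graph, which is exactly why the quality of the constructed similarity graph dominates the performance of these methods.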
On Bipartite Graphs Trees and Their Partial Vertex Covers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caskurlu, Bugra; Mkrtchyan, Vahan; Parekh, Ojas D.
2015-03-01
Graphs can be used to model risk management in various systems. In particular, Caskurlu et al. [7] have considered a system with threats, vulnerabilities and assets, which essentially represents a tripartite graph. The goal in this model is to reduce the risk in the system below a predefined risk threshold level. One can either restrict the permissions of the users or encapsulate the system assets. These two strategies correspond to deleting a minimum number of elements corresponding to vulnerabilities and assets, such that the flow between threats and assets is reduced below the predefined threshold level. It can be shown that the main goal in this risk management system can be formulated as a Partial Vertex Cover problem on bipartite graphs. It is well known that the Vertex Cover problem is in P on bipartite graphs; however, the computational complexity of the Partial Vertex Cover problem on bipartite graphs has remained open. In this paper, we establish that the Partial Vertex Cover problem is NP-hard on bipartite graphs, which was also recently independently demonstrated [N. Apollonio and B. Simeone, Discrete Appl. Math., 165 (2014), pp. 37–48; G. Joret and A. Vetta, preprint, arXiv:1211.4853v1 [cs.DS], 2012]. We then identify interesting special cases of bipartite graphs for which the Partial Vertex Cover problem, the closely related Budgeted Maximum Coverage problem, and their weighted extensions can be solved in polynomial time. We also present an 8/9-approximation algorithm for the Budgeted Maximum Coverage problem in the class of bipartite graphs. We show that this matches and resolves the integrality gap of the natural LP relaxation of the problem and improves upon a recent 4/5-approximation.
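For intuition, the Budgeted Maximum Coverage problem mentioned above can be attacked with a simple greedy heuristic: repeatedly pick the vertex covering the most still-uncovered edges until the budget is spent. This sketch is illustrative only, not the paper's 8/9-approximation algorithm:

```python
def greedy_partial_cover(edges, budget):
    """Greedy heuristic for partial vertex cover / budgeted max coverage:
    pick up to `budget` vertices, each maximizing newly covered edges."""
    uncovered = set(edges)
    chosen = []
    for _ in range(budget):
        best, gain = None, 0
        verts = {v for e in uncovered for v in e}
        for v in verts:
            g = sum(1 for e in uncovered if v in e)
            if g > gain:
                best, gain = v, g
        if best is None:
            break  # everything already covered
        chosen.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return chosen, len(edges) - len(uncovered)

# Bipartite example: vertex 'c' covers three edges at once
edges = [("c", "x"), ("c", "y"), ("c", "z"), ("a", "x")]
chosen, covered = greedy_partial_cover(edges, budget=1)
print(chosen, covered)  # ['c'] 3
```

The classic analysis shows this greedy achieves a (1 - 1/e) fraction of the optimum in general; the paper's contribution is a better guarantee on bipartite graphs.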
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and accommodate general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow the incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
Graphing evolutionary pattern and process: a history of techniques in archaeology and paleobiology.
Lyman, R Lee
2009-02-01
Graphs displaying evolutionary patterns are common in paleontology and in United States archaeology. Both disciplines subscribed to a transformational theory of evolution and graphed evolution as a sequence of archetypes in the late nineteenth and early twentieth centuries. U.S. archaeologists in the second decade of the twentieth century, and paleontologists shortly thereafter, developed distinct graphic styles that reflected the Darwinian variational model of evolution. Paleobiologists adopted the view of a species as a set of phenotypically variant individuals and graphed those variations either as central tendencies or as histograms of frequencies of variants. Archaeologists presumed their artifact types reflected cultural norms of prehistoric artisans and the frequency of specimens in each type reflected human choice and type popularity. They graphed cultural evolution as shifts in frequencies of specimens representing each of several artifact types. Confusion of pattern and process is exemplified by a paleobiologist misinterpreting the process illustrated by an archaeological graph, and an archaeologist misinterpreting the process illustrated by a paleobiological graph. Each style of graph displays particular evolutionary patterns and implies particular evolutionary processes. Graphs of a multistratum collection of prehistoric mammal remains and a multistratum collection of artifacts demonstrate that many graph styles can be used for both kinds of collections.
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
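The linear-assignment baseline referred to above can be illustrated with a brute-force solver over a toy node-compatibility matrix; real systems would use the Hungarian algorithm for larger graphs, and the matrix values here are invented for illustration:

```python
from itertools import permutations

def best_assignment(compat):
    """Exhaustive linear assignment: find the node correspondence maximizing
    total node compatibility (fine for tiny graphs; the Hungarian algorithm
    handles the general case in polynomial time)."""
    n = len(compat)
    best_perm, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(compat[i][perm[i]] for i in range(n))
        if score > best_score:
            best_perm, best_score = perm, score
    return best_perm, best_score

# Toy compatibility matrix between the nodes of two 3-node graphs; in the
# learning setting of the paper, these entries would be estimated from
# human-labeled example matches rather than fixed by hand.
compat = [[0.9, 0.1, 0.2],
          [0.2, 0.8, 0.1],
          [0.1, 0.3, 0.7]]
perm, score = best_assignment(compat)
print(perm)  # (0, 1, 2): the diagonal matching scores highest
```

Adding the quadratic edge-compatibility term on top of this objective turns the problem into the NP-hard quadratic assignment problem the abstract discusses.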
Multi-Agent Graph Patrolling and Partitioning
NASA Astrophysics Data System (ADS)
Elor, Y.; Bruckstein, A. M.
2012-12-01
We introduce a novel multi-agent patrolling algorithm inspired by the behavior of gas-filled balloons. Very low capability ant-like agents are considered, with the task of patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, every agent assuming responsibility for patrolling his subgraph. Balanced graph partition is an emergent behavior due to the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear in the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it; however, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex using the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. In the case of weighted graphs, the patrol quality is at least (1/2)(l_min/l_max) of the optimal, where l_max (l_min) is the longest (shortest) edge in the graph.
Analyzing locomotion synthesis with feature-based motion graphs.
Mahmudi, Mentar; Kallmann, Marcelo
2013-05-01
We propose feature-based motion graphs for realistic locomotion synthesis among obstacles. Among several advantages, feature-based motion graphs achieve improved results in search queries, eliminate the need of postprocessing for foot skating removal, and reduce the computational requirements in comparison to traditional motion graphs. Our contributions are threefold. First, we show that choosing transitions based on relevant features significantly reduces graph construction time and leads to improved search performances. Second, we employ a fast channel search method that confines the motion graph search to a free channel with guaranteed clearance among obstacles, achieving faster and improved results that avoid expensive collision checking. Lastly, we present a motion deformation model based on Inverse Kinematics applied over the transitions of a solution branch. Each transition is assigned a continuous deformation range that does not exceed the original transition cost threshold specified by the user for the graph construction. The obtained deformation improves the reachability of the feature-based motion graph and in turn also reduces the time spent during search. The results obtained by the proposed methods are evaluated and quantified, and they demonstrate significant improvements in comparison to traditional motion graph techniques.
Information Dynamics in Networks: Models and Algorithms
2016-09-13
We investigated the appropriateness of existing mathematical models for explaining the structure of retweet cascades on Twitter; we investigated how to detect spam accounts on Facebook and other social networks by graph analytics; and finally we investigated how to design... networks. Related paper: "A Note on Modeling Retweet Cascades on Twitter," Workshop on Algorithms and Models for the Web Graph, 09-DEC-15.
A topo-graph model for indistinct target boundary definition from anatomical images.
Cui, Hui; Wang, Xiuying; Zhou, Jianlong; Gong, Guanzhong; Eberl, Stefan; Yin, Yong; Wang, Lisheng; Feng, Dagan; Fulham, Michael
2018-06-01
It can be challenging to delineate the target object in anatomical imaging when the object boundaries are difficult to discern due to low contrast or overlapping intensity distributions from adjacent tissues. We propose a topo-graph model to address this issue. The first step is to extract a topographic representation that reflects multiple levels of topographic information in an input image. We then define two types of node connections - nesting branches (NBs) and geodesic edges (GEs). NBs connect nodes corresponding to initial topographic regions and GEs link the nodes at a detailed level. The weights for NBs are defined to measure the similarity of regional appearance, and weights for GEs are defined with geodesic and local constraints. NBs contribute to the separation of topographic regions and the GEs assist the delineation of uncertain boundaries. Final segmentation is achieved by calculating the relevance of the unlabeled nodes to the labels by the optimization of a graph-based energy function. We test our model on 47 low-contrast CT studies of patients with non-small cell lung cancer (NSCLC), 10 contrast-enhanced CT liver cases and 50 breast and abdominal ultrasound images. The validation criteria are the Dice similarity coefficient and the Hausdorff distance. Student's t-tests show that our model outperformed the graph models with pixel-only, pixel and regional, neighboring and radial connections (p-values <0.05). Our findings show that the topographic representation and topo-graph model provide improved delineation and separation of objects from adjacent tissues compared to the tested models. Copyright © 2018 Elsevier B.V. All rights reserved.
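The Dice similarity coefficient used for validation above is simple to state: twice the overlap of two pixel sets divided by their total size. A minimal sketch (the example masks are invented):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) on sets of pixels.
    Ranges from 0 (disjoint) to 1 (identical)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

seg = {(0, 0), (0, 1), (1, 0), (1, 1)}   # predicted object pixels
gt = {(0, 1), (1, 0), (1, 1), (2, 1)}    # ground-truth pixels
print(dice(seg, gt))  # 2*3 / (4+4) = 0.75
```

Dice measures volume overlap, while the Hausdorff distance (the other criterion) measures the worst-case boundary error; the two are complementary.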
Using a high-dimensional graph of semantic space to model relationships among words
Jackson, Alice F.; Bolger, Donald J.
2014-01-01
The GOLD model (Graph Of Language Distribution) is a network model constructed from co-occurrence in a large corpus of natural language that may be used to explore what information may be present in a graph-structured model of language, and what information may be extracted through theoretically-driven algorithms as well as standard graph analysis methods. The present study employs GOLD to examine two types of relationship between words: semantic similarity and associative relatedness. Semantic similarity refers to the degree of overlap in meaning between words, while associative relatedness refers to the degree to which two words occur in the same schematic context. It is expected that a graph-structured model of language constructed from co-occurrence should easily capture associative relatedness, because this type of relationship is thought to be present directly in lexical co-occurrence. However, it is hypothesized that semantic similarity may be extracted from the intersection of the sets of first-order connections, because two words that are semantically similar may occupy similar thematic or syntactic roles across contexts and thus would co-occur lexically with the same set of nodes. Two versions of the GOLD model that differed in terms of the co-occurrence window, bigGOLD at the paragraph level and smallGOLD at the adjacent-word level, were directly compared to the performance of a well-established distributional model, Latent Semantic Analysis (LSA). The superior performance of the GOLD models (big and small) suggests that a single acquisition and storage mechanism, namely co-occurrence, can account for associative and conceptual relationships between words and is more psychologically plausible than models using singular value decomposition (SVD). PMID:24860525
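The distinction the abstract draws between first-order (associative) and second-order (semantic) relationships can be sketched with a toy co-occurrence graph. This is a minimal illustration, not the GOLD implementation: the corpus, window size, and the Jaccard overlap used for second-order similarity are all assumptions introduced here.

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=2):
    """Undirected co-occurrence graph: edge weight counts how often two
    words appear within `window` positions of each other."""
    graph = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + 1 + window]:
                if v != w:
                    graph[w][v] += 1
                    graph[v][w] += 1
    return graph

def associative_strength(graph, a, b):
    # First-order relationship: direct co-occurrence count.
    return graph[a][b]

def semantic_similarity(graph, a, b):
    # Second-order relationship: Jaccard overlap of the neighbor sets.
    na, nb = set(graph[a]), set(graph[b])
    if not na or not nb:
        return 0.0
    return len(na & nb) / len(na | nb)

# Tiny hypothetical corpus, purely for illustration.
corpus = [
    "the doctor treated the patient".split(),
    "the nurse treated the patient".split(),
    "the doctor wrote a prescription".split(),
]
g = cooccurrence_graph(corpus)
```

Here "doctor" and "nurse" never co-occur, yet they share first-order neighbors ("the", "treated"), so the intersection-based measure assigns them nonzero similarity, mirroring the hypothesis in the abstract.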
Functional network organization of the human brain
Power, Jonathan D; Cohen, Alexander L; Nelson, Steven M; Wig, Gagan S; Barnes, Kelly Anne; Church, Jessica A; Vogel, Alecia C; Laumann, Timothy O; Miezin, Fran M; Schlaggar, Bradley L; Petersen, Steven E
2011-01-01
Real-world complex systems may be mathematically modeled as graphs, revealing properties of the system. Here we study graphs of functional brain organization in healthy adults using resting-state functional connectivity MRI. We propose two novel brain-wide graphs, one of 264 putative functional areas, the other a modification of voxelwise networks that eliminates potentially artificial short-distance relationships. These graphs contain many subgraphs in good agreement with known functional brain systems. Other subgraphs lack established functional identities; we suggest possible functional characteristics for these subgraphs. Further, graph measures of the areal network indicate that the default mode subgraph shares network properties with sensory and motor subgraphs: it is internally integrated but isolated from other subgraphs, much like a "processing" system. The modified voxelwise graph also reveals spatial motifs in the patterning of systems across the cortex. PMID:22099467
Ivanciuc, Ovidiu
2013-06-01
Chemical and molecular graphs have fundamental applications in chemoinformatics, quantitative structure-property relationships (QSPR), quantitative structure-activity relationships (QSAR), virtual screening of chemical libraries, and computational drug design. Chemoinformatics applications of graphs include chemical structure representation and coding, database search and retrieval, and physicochemical property prediction. QSPR, QSAR and virtual screening are based on the structure-property principle, which states that the physicochemical and biological properties of chemical compounds can be predicted from their chemical structure. Such structure-property correlations are usually developed from topological indices and fingerprints computed from the molecular graph and from molecular descriptors computed from the three-dimensional chemical structure. We present here a selection of the most important graph descriptors and topological indices, including molecular matrices, graph spectra, spectral moments, graph polynomials, and vertex topological indices. These graph descriptors are used to define several topological indices based on molecular connectivity, graph distance, reciprocal distance, distance-degree, distance-valency, spectra, polynomials, and information theory concepts. The molecular descriptors and topological indices can be developed with a more general approach, based on molecular graph operators, which define a family of graph indices related by a common formula. Graph descriptors and topological indices for molecules containing heteroatoms and multiple bonds are computed with weighting schemes based on atomic properties, such as the atomic number, covalent radius, or electronegativity. The correlation in QSPR and QSAR models can be improved by optimizing some parameters in the formula of topological indices, as demonstrated for structural descriptors based on atomic connectivity and graph distance.
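As a concrete instance of a distance-based topological index of the kind surveyed above, the classic Wiener index can be computed from a hydrogen-suppressed molecular graph with a plain breadth-first search. The butane and isobutane skeletons below are standard textbook examples, not anything specific to this article.

```python
from collections import deque

def wiener_index(adjacency):
    """Wiener index: sum of shortest-path distances over all unordered
    vertex pairs of the (hydrogen-suppressed) molecular graph."""
    total = 0
    vertices = list(adjacency)
    for i, source in enumerate(vertices):
        # BFS from `source` on the unweighted graph.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        # Count each unordered pair exactly once.
        total += sum(dist[t] for t in vertices[i + 1:])
    return total

# n-butane carbon skeleton: the path C1-C2-C3-C4.
butane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
# wiener_index(butane) -> 10
```

Branching lowers the index (isobutane gives 9), which is why such indices correlate with shape-dependent physicochemical properties like boiling point.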
Graph-Based Object Class Discovery
NASA Astrophysics Data System (ADS)
Xia, Shengping; Hancock, Edwin R.
We are interested in the problem of discovering the set of object classes present in a database of images using a weakly supervised graph-based framework. Rather than making use of the "Bag-of-Features" (BoF) approach widely used in current work on object recognition, we represent each image by a graph using a group of selected local invariant features. Using local feature matching and iterative Procrustes alignment, we perform graph matching and compute a similarity measure. Borrowing the idea of query expansion, we develop a similarity propagation based graph clustering (SPGC) method. Using this method, class-specific clusters of the graphs can be obtained. Such a cluster can be generally represented by a higher-level graph model whose vertices are the clustered graphs, and whose edge weights are determined by the pairwise similarity measure. Experiments are performed on a dataset in which the number of images increases from 1 to 50K and the number of objects increases from 1 to over 500. Some objects have been discovered with total recall and a precision of 1 in a single cluster.
Unsupervised classification of multivariate geostatistical data: Two algorithms
NASA Astrophysics Data System (ADS)
Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques
2015-12-01
With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, involve a growing number of variables and cover wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on e.g. Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
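The proximity-constrained agglomerative idea can be sketched in a few lines: merge the most similar pair of clusters, but only if they share at least one edge of the spatial proximity graph. This is a simplified illustration under assumptions of my own (one attribute per sample, absolute difference of cluster means as the dissimilarity), not the authors' algorithm.

```python
def spatial_agglomerative(points, values, neighbors, n_clusters):
    """Agglomerative clustering of geostatistical samples. Clusters merge
    by similarity of their mean attribute value, but only if they are
    adjacent in the spatial proximity graph `neighbors`.
    points: sample ids; values: {id: float}; neighbors: {id: set of ids}."""
    clusters = {i: {p} for i, p in enumerate(points)}
    while len(clusters) > n_clusters:
        best = None
        for a in clusters:
            for b in clusters:
                if a >= b:
                    continue
                # Spatial coherence: require at least one proximity edge.
                touching = any(q in neighbors[p]
                               for p in clusters[a] for q in clusters[b])
                if not touching:
                    continue
                ma = sum(values[p] for p in clusters[a]) / len(clusters[a])
                mb = sum(values[p] for p in clusters[b]) / len(clusters[b])
                d = abs(ma - mb)
                if best is None or d < best[0]:
                    best = (d, a, b)
        if best is None:
            break  # no spatially admissible merge remains
        _, a, b = best
        clusters[a] |= clusters.pop(b)
    return list(clusters.values())

# Four samples along a transect; the attribute jumps between samples 1 and 2.
result = spatial_agglomerative(
    [0, 1, 2, 3],
    {0: 1.0, 1: 1.1, 2: 5.0, 3: 5.1},
    {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}},
    n_clusters=2,
)
```

Because merges are restricted to the proximity graph, the two resulting clusters are spatially contiguous, which is exactly the property plain k-means-style clustering does not guarantee.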
Graph-based structural change detection for rotating machinery monitoring
NASA Astrophysics Data System (ADS)
Lu, Guoliang; Liu, Jie; Yan, Peng
2018-01-01
Detection of structural changes is critically important in operational monitoring of a rotating machine. This paper presents a novel framework for this purpose, in which a graph model is adopted to represent and capture statistical dynamics in machine operations. Meanwhile, we develop a numerical method for computing temporal anomalies in the constructed graphs. The martingale-test method is employed for change detection when making decisions on possible structural changes, where excellent performance is demonstrated, outperforming existing methods such as the autoregressive integrated moving average (ARIMA) model. Comprehensive experimental results indicate good potential of the proposed algorithm in various engineering applications. This work is an extension of a recent result (Lu et al., 2017).
Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan
2016-01-01
Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research, which led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and the diverse environments in which they operate. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We describe the design process of the benchmark, which generalizes the workflow for data scientists conducting the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present performance comparisons for nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications of loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) the different sensitivity of each system to query selectivity. We envision that this study will pave the way for (i) data scientists to select suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.
Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.
NASA Astrophysics Data System (ADS)
Stallard, A. P.; Smith, S. R.; Elya, J. L.
2016-12-01
The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS which are modeled into a time tree and r-tree using Graph Aware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within CYPHER (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via CYPHER statements. Neo4j excels at performing relationship and path-based queries, which challenge relational-SQL databases because they require memory intensive joins due to the limitation of their design. Consider a user who wants to find records over several years, but only for specific months. 
If a traditional database only stores timestamps, this type of query would be complex and likely prohibitively slow. Using the time tree model, one can specify a path from the root to the data which restricts resolutions to certain timeframes (e.g., months). This query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage over the SQL database alternative.
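The "specific months across several years" query can be sketched with a toy in-memory analogue of a time tree. This is not Neo4j or CYPHER, just a Python sketch of the idea: records hang off a root -> year -> month hierarchy, so the query walks a few branches instead of scanning or joining on raw timestamps. The record names are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime

class TimeTree:
    """Toy in-memory analogue of a time tree index."""
    def __init__(self):
        # year -> month -> list of records (the tree's leaf buckets)
        self.root = defaultdict(lambda: defaultdict(list))

    def insert(self, iso_date, record):
        t = datetime.fromisoformat(iso_date)
        self.root[t.year][t.month].append(record)

    def query_months(self, months, years=None):
        """All records in the given months, across all (or given) years."""
        out = []
        for y in (years if years is not None else sorted(self.root)):
            bucket = self.root.get(y)
            if bucket:
                for m in months:
                    out.extend(bucket.get(m, []))
        return out

tree = TimeTree()
tree.insert("2015-03-02", "obs-a")
tree.insert("2016-03-10", "obs-b")
tree.insert("2016-07-01", "obs-c")
```

`tree.query_months([3])` touches only the two March buckets, which is the dictionary-lookup counterpart of the path-restricted traversal described above.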
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. 
This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
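A continuous-time quantum walk as discussed above is just unitary evolution under the graph's adjacency matrix, U(t) = exp(-iAt). The following is a minimal sketch for small graphs, using a truncated Taylor series rather than any of the thesis's schemes; the two-vertex example exhibits the perfect state transfer that the "widget" construction relies on.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def ctqw_unitary(adj, t, terms=60):
    """Continuous-time quantum walk operator U(t) = exp(-i*A*t), computed
    by a truncated Taylor series (adequate for small graphs and modest t)."""
    n = len(adj)
    scaled = [[-1j * t * adj[i][j] for j in range(n)] for i in range(n)]
    U = [[1 + 0j if i == j else 0j for j in range(n)] for i in range(n)]
    term = [row[:] for row in U]
    for k in range(1, terms):
        # term_k = term_{k-1} * (-i*A*t) / k, summed into U.
        term = [[x / k for x in row] for row in mat_mul(term, scaled)]
        U = [[U[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return U

# Walk on a single edge (K2): the amplitude on the start vertex evolves as
# cos(t) and on the other vertex as -i*sin(t), so at t = pi/2 the state has
# transferred perfectly to the opposite vertex.
U = ctqw_unitary([[0, 1], [1, 0]], math.pi / 2)
```

Unitarity means the two probabilities always sum to one; measurement only collapses the state at the end, matching the "deterministic until a measurement is made" remark in the abstract.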
An alternative database approach for management of SNOMED CT and improved patient data queries.
Campbell, W Scott; Pedersen, Jay; McClay, James C; Rao, Praveen; Bastola, Dhundy; Campbell, James R
2015-10-01
SNOMED CT is the international lingua franca of terminologies for human health. Based in Description Logics (DL), the terminology enables data queries that incorporate inferences between data elements as well as relationships that are explicitly stated. However, the ontologic and polyhierarchical nature of the SNOMED CT concept model makes it difficult to implement in its entirety within electronic health record systems that largely employ object-oriented or relational database architectures. The result is a reduction of data richness, limitations on query capability and increased systems overhead. The hypothesis of this research was that a graph database (graph DB) architecture using SNOMED CT as the basis for the data model, and subsequently modeling patient data upon the semantic core of SNOMED CT, could exploit the full value of the terminology to enrich and support advanced querying of patient data sets. The hypothesis was tested by instantiating a graph DB with the fully classified SNOMED CT concept model. The graph DB instance was tested for integrity by calculating the transitive closure table for the SNOMED CT hierarchy and comparing the results with transitive closure tables created using current, validated methods. The graph DB was then populated with 461,171 anonymized patient record fragments and over 2.1 million associated SNOMED CT clinical findings. Queries, including concept negation and disjunction, were then run against the graph database and an enterprise Oracle relational database (RDBMS) of the same patient data sets. The graph DB was then populated with laboratory data encoded using LOINC, as well as medication data encoded with RxNorm, and complex queries were performed using LOINC, RxNorm and SNOMED CT to identify uniquely described patient populations. A graph database instance was successfully created for two international releases of SNOMED CT and two US SNOMED CT editions.
Transitive closure tables and descriptive statistics generated using the graph database were identical to those produced using validated methods. Patient queries produced patient counts identical to the Oracle RDBMS in comparable times. Database queries involving defining attributes of SNOMED CT concepts were possible with the graph DB. The same queries could not be directly performed with the Oracle RDBMS representation of the patient data and required the creation and use of external terminology services. Further, queries of undefined depth were successful in identifying unknown relationships between patient cohorts. The results of this study supported the hypothesis that a patient database built upon and around the semantic model of SNOMED CT is possible. The model supported queries that leveraged all aspects of the SNOMED CT logical model to produce clinically relevant query results. Logical disjunction and negation queries were possible using the data model, as well as queries that extended beyond the structural IS_A hierarchy of SNOMED CT to include queries that employed defining attribute-values of SNOMED CT concepts as search parameters. As medical terminologies such as SNOMED CT continue to expand, they will become more complex and model consistency will be more difficult to assure. Simultaneously, consumers of data will increasingly demand improvements to query functionality to accommodate additional granularity of clinical concepts without sacrificing speed. This line of research provides an alternative approach to instantiating and querying patient data represented using advanced computable clinical terminologies. Copyright © 2015 Elsevier Inc. All rights reserved.
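The transitive closure table used above as the integrity check is simply, for every concept, the set of all its IS_A ancestors. A minimal sketch on a hypothetical toy hierarchy (real SNOMED CT concepts are numeric codes, and production implementations compute this at far larger scale):

```python
def transitive_closure(is_a):
    """Ancestor set of every concept in an IS_A hierarchy (assumed acyclic),
    analogous to the closure table used to validate the graph DB instance."""
    closure = {}

    def ancestors(concept):
        if concept not in closure:
            closure[concept] = set()
            for parent in is_a.get(concept, ()):
                closure[concept].add(parent)
                closure[concept] |= ancestors(parent)  # memoized recursion
        return closure[concept]

    for concept in is_a:
        ancestors(concept)
    return closure

# Hypothetical polyhierarchy: a concept with two parents, as in SNOMED CT.
hierarchy = {
    "bacterial pneumonia": ["pneumonia", "bacterial infection"],
    "pneumonia": ["lung disease"],
    "bacterial infection": ["disease"],
    "lung disease": ["disease"],
    "disease": [],
}
closure = transitive_closure(hierarchy)
```

With the closure in hand, subsumption queries ("all patients with any kind of lung disease") reduce to set membership tests, which is what both the graph DB and the validated reference methods must agree on.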
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, Jacob G.
2013-01-11
Partial molar properties are the changes in a property that occur when the amount of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset with a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any point on this curve is the partial molar property for that constituent. Plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH){sub 4}-H{sub 2}O system, a system that simplifies Hanford nuclear waste. The partial molar properties of H{sub 2}O, NaOH, and NaAl(OH){sub 4} are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.
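The CSLM form described above is a no-intercept linear model, y = sum_i b_i * x_i over mole fractions x_i, whose coefficients b_i are interpreted as the intensive partial molar properties. A minimal sketch on synthetic data (the component count, compositions, and numerical values below are invented for illustration, not the paper's heat capacity data): generate property values from known coefficients, then recover them by solving the linear system.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical partial molar heat capacities (J/(mol*K)) for three components.
b_true = [75.3, 87.0, 110.5]
# Mixture compositions (mole fractions summing to 1, rows independent) and
# "measured" property values generated from the CSLM form y = sum_i b_i*x_i.
X = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.2, 0.7]]
y = [sum(bi * xi for bi, xi in zip(b_true, row)) for row in X]
b_fit = solve(X, y)
```

Omitting the intercept is what makes this well-posed: because the mole fractions sum to one, a model with an intercept would have a collinear design, which is exactly the measurement difficulty the abstract describes.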
Mathematics of Web science: structure, dynamics and incentives.
Chayes, Jennifer
2013-03-28
Dr Chayes' talk described how, to a discrete mathematician, 'all the world's a graph, and all the people and domains merely vertices'. A graph is represented as a set of vertices V and a set of edges E, so that, for instance, in the World Wide Web, V is the set of pages and E the directed hyperlinks; in a social network, V is the people and E the set of relationships; and in the autonomous system Internet, V is the set of autonomous systems (such as AOL, Yahoo! and MSN) and E the set of connections. This means that mathematics can be used to study the Web (and other large graphs in the online world) in the following way: first, we can model online networks as large finite graphs; second, we can sample pieces of these graphs; third, we can understand and then control processes on these graphs; and fourth, we can develop algorithms for these graphs and apply them to improve the online experience.
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
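The growth process described above is easy to simulate directly; a union-find structure tracks the component sizes. This is a sketch of the model, not the authors' analysis, and the particular sizes and delta values below are illustrative choices.

```python
import random

def grow_graph(t, delta, rng):
    """Grow the minimal network model: at each time step add one vertex,
    then with probability delta join two distinct uniformly chosen
    vertices. Returns (edge count, size of the largest component)."""
    parent = list(range(t))  # union-find over the final vertex set

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = 0
    for step in range(1, t + 1):  # after this step, vertices 0..step-1 exist
        if step >= 2 and rng.random() < delta:
            a, b = rng.sample(range(step), 2)
            parent[find(a)] = find(b)
            edges += 1
    sizes = {}
    for v in range(t):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return edges, max(sizes.values())

# Below the transition (delta < 1/8) components stay small; well above it a
# giant component emerges.
e_low, giant_low = grow_graph(2000, 0.05, random.Random(1))
e_high, giant_high = grow_graph(2000, 0.9, random.Random(2))
```

The age-degree correlation the abstract identifies falls out of the construction: a vertex added early is eligible for edge endpoints at every subsequent step, so older vertices accumulate higher degree.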
DOE Office of Scientific and Technical Information (OSTI.GOV)
Visweswara Sathanur, Arun; Choudhury, Sutanay; Joslyn, Cliff A.
Property graphs can be used to represent heterogeneous networks with attributed vertices and edges. Given one property graph, simulating another graph of the same or greater size with identical statistical properties with respect to the attributes and connectivity is critical for privacy preservation and benchmarking purposes. In this work we tackle the problem of capturing the statistical dependence of edge connectivity on the vertex labels and using the same distribution to regenerate property graphs of the same or expanded size in a scalable manner. However, accurate simulation becomes a challenge when the attributes do not completely explain the network structure. We propose the Property Graph Model (PGM) approach, which uses an attribute (or label) augmentation strategy to mitigate the problem and preserve the graph connectivity as measured via the degree distribution, vertex label distributions and edge connectivity. Our proposed algorithm is scalable, with a linear complexity in the number of edges in the target graph. We illustrate the efficacy of the PGM approach in regenerating and expanding the datasets by leveraging two distinct illustrations.
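The core dependence being captured, P(edge | vertex label pair), can be sketched as follows. This is a simplified illustration of the fit-then-regenerate idea only; it omits the label augmentation strategy that is the PGM contribution, and the toy labels are invented.

```python
import random
from collections import Counter
from itertools import combinations

def fit_label_pair_probs(labels, edges):
    """Estimate P(edge | unordered label pair) from one property graph.
    labels: {vertex: label}; edges: iterable of (u, v) pairs."""
    counts = Counter(labels.values())
    hits = Counter(tuple(sorted((labels[u], labels[v]))) for u, v in edges)
    probs = {}
    kinds = sorted(counts)
    for i, la in enumerate(kinds):
        for lb in kinds[i:]:
            # Number of possible dyads carrying this label pair.
            dyads = (counts[la] * (counts[la] - 1) // 2 if la == lb
                     else counts[la] * counts[lb])
            if dyads:
                probs[(la, lb)] = hits[(la, lb)] / dyads
    return probs

def regenerate(probs, new_labels, rng):
    """Sample a new (possibly larger) graph: each dyad appears
    independently with its fitted label-pair probability."""
    out = []
    for u, v in combinations(sorted(new_labels), 2):
        key = tuple(sorted((new_labels[u], new_labels[v])))
        if rng.random() < probs.get(key, 0.0):
            out.append((u, v))
    return out

probs = fit_label_pair_probs({0: "person", 1: "person", 2: "device"},
                             [(0, 1), (1, 2)])
```

When the labels do not fully explain the connectivity, this dyad-independent sampling misses structure (e.g. clustering), which is precisely the failure mode the label augmentation in the abstract is designed to mitigate.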
Böhlke, John Karl
2006-01-01
Atmospheric environmental tracers commonly used to date groundwater on timescales of years to decades include CFC-11, CFC-12, CFC-113, SF6, 85Kr, 3H and 3H/3H0, where 3H0 refers to initial tritium (3H + tritiogenic 3He) (Cook and Herczeg, 2000). Interpretation of age from environmental tracer data may be relatively simple for a water sample with a single age, but the interpretation is more complex for a sample that is a mixture of waters of varying ages. A mixture can be a natural result of convergence of flow lines to a discharge area such as a spring or stream, or it can be an artefact of sampling a long-screen well. TRACERMODEL1 contains a worksheet that can be used to determine hypothetical concentrations of atmospheric environmental tracers in water samples with several different age distributions. It is designed to permit plotting of ages and tracer concentrations in a variety of different combinations to facilitate interpretation of measurements. TRACERMODEL1 includes several different types of graphs that are linked to the calculations. The spreadsheet and accompanying graphs can be modified for specific applications. For example, the selection of atmospheric environmental tracers can be changed to reflect analytes of interest, the input tracer data can be modified to reflect local conditions or different timescales, and the analytes of interest can include other types of non-point-source contaminants, such as nitrate (Böhlke, 2002). Previous versions of this workbook have been used to evaluate field data in studies of groundwater residence time and agricultural contamination (Böhlke and Denver, 1995; Focazio et al., 1998; Katz et al., 1999; Katz et al., 2001; Plummer et al., 2001; Böhlke and Krantz, 2003; Lindsey et al., 2003).
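The mixed-age calculation underlying such worksheets is a convolution of the atmospheric input history with an assumed age distribution. A minimal sketch for an exponential age distribution follows; the input curves are entirely hypothetical and stand in for the real atmospheric tracer records TRACERMODEL1 uses.

```python
import math

def mixed_concentration(atm, sample_year, mean_age, max_age=200):
    """Tracer concentration in a mixed groundwater sample whose age
    distribution is exponential with mean `mean_age` years: a discrete
    convolution of the atmospheric input curve `atm` (year -> concentration,
    treated as zero outside the record) with the renormalized age density."""
    num = den = 0.0
    for age in range(max_age):
        w = math.exp(-age / mean_age)          # exponential age density weight
        num += w * atm.get(sample_year - age, 0.0)
        den += w
    return num / den

# Hypothetical input curves, purely for illustration.
flat = {y: 50.0 for y in range(1800, 2021)}            # constant input
rising = {y: max(0.0, y - 1940.0) for y in range(1800, 2021)}  # ramp after 1940
```

For a constant input the mixture concentration equals the input regardless of mean age, while for a rising input a younger mixture (small mean age) weights recent, higher values more heavily; comparing such curves against measured concentrations is how an age distribution is inferred.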
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
Highly Asynchronous VisitOr Queue Graph Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearce, R.
2012-10-01
HAVOQGT is a C++ framework that can be used to create highly parallel graph traversal algorithms. The framework stores the graph and algorithmic data structures in external memory that is typically mapped to high-performance locally attached NAND flash arrays. The framework supports a vertex-centered visitor programming model and has been used to implement breadth-first search, connected components, and single-source shortest path.
NASA Astrophysics Data System (ADS)
Lecoeur, Jérémy; Ferré, Jean-Christophe; Collins, D. Louis; Morrisey, Sean P.; Barillot, Christian
2009-02-01
A new segmentation framework is presented taking advantage of the multimodal image signature of the different brain tissues (healthy and/or pathological). This is achieved by merging three different modalities of gray-level MRI sequences into a single RGB-like MRI, hence creating a unique 3-dimensional signature for each tissue by utilising the complementary information of each MRI sequence. Using the scale-space spectral gradient operator, we can obtain a spatial gradient robust to intensity inhomogeneity. Even though it is based on psycho-visual color theory, it can be very efficiently applied to the RGB colored images. Moreover, it is not influenced by the channel assignment of each MRI. Its optimisation by the graph cuts paradigm provides a powerful and accurate tool to segment either healthy or pathological tissues in a short time (average time about ninety seconds for a brain-tissue classification). As it is a semi-automatic method, we ran experiments to quantify the amount of seeds needed to perform a correct segmentation (Dice similarity score above 0.85). Depending on the different sets of MRI sequences used, this amount of seeds (expressed as a percentage of the number of voxels of the ground truth) is between 6% and 16%. We tested this algorithm on BrainWeb for validation purposes (healthy tissue classification and MS lesion segmentation) and also on clinical data for tumour and MS lesion detection and tissue classification.
Volatility behavior of visibility graph EMD financial time series from Ising interacting system
NASA Astrophysics Data System (ADS)
Zhang, Bo; Wang, Jun; Fang, Wen
2015-08-01
A financial market dynamics model is developed and investigated using a stochastic Ising system, the Ising model being the most popular ferromagnetic model in statistical physics. Applying two graph-based analyses and the multiscale entropy method, we investigate and compare the statistical volatility behavior of return time series and the corresponding IMF series derived from the empirical mode decomposition (EMD) method. Real stock market indices are also studied comparatively against the simulated data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulated series has power-law tails, and that the assortative network exhibits the mixing-pattern property. All these features agree with the real market data; the research confirms that the financial model established on the Ising system is reasonable.
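The visibility-graph mapping used above follows the standard natural visibility criterion: two time points are linked when the straight line between them passes above every intermediate point. A minimal O(n²) sketch on a toy series:

```python
def visibility_graph(series):
    """Natural visibility graph: time points a and b are linked when every
    intermediate point lies strictly below the line joining them."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(series[c] < series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

# Toy series: the peak at index 4 'sees' past the valley at index 3.
series = [3.0, 1.0, 2.0, 0.5, 4.0]
edges = visibility_graph(series)
```

The degree sequence of `edges` is what a power-law tail analysis, as in the abstract, would be computed on; for a real study the series would be returns or IMF components rather than this toy list.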
Graph Theory and the High School Student.
ERIC Educational Resources Information Center
Chartrand, Gary; Wall, Curtiss E.
1980-01-01
Graph theory is presented as a tool to instruct high school mathematics students. A variety of real world problems can be modeled which help students recognize the importance and difficulty of applying mathematics. (MP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coram, Jamie L.; Morrow, James D.; Perkins, David Nikolaus
2015-09-01
This document describes the PANTHER R&D Application, a proof-of-concept user interface application developed under the PANTHER Grand Challenge LDRD. The purpose of the application is to explore interaction models for graph analytics, drive algorithmic improvements from an end-user point of view, and support demonstration of PANTHER technologies to potential customers. The R&D Application implements a graph-centric interaction model that exposes analysts to the algorithms contained within the GeoGraphy graph analytics library. Users define geospatial-temporal semantic graph queries by constructing search templates based on nodes, edges, and the constraints among them. Users then analyze the results of the queries using both geo-spatial and temporal visualizations. Development of this application has made user experience an explicit driver for project and algorithmic level decisions that will affect how analysts one day make use of PANTHER technologies.
NASA Astrophysics Data System (ADS)
Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke
2017-05-01
Graph network models in dementia have become an important computational technique in neuroscience to study fundamental organizational principles of brain structure and function in neurodegenerative diseases such as dementia. The graph connectivity is reflected in the connectome, the complete set of structural and functional connections of the graph network, which is mostly based on simple Pearson correlation links. In contrast to simple Pearson correlation networks, partial correlations (PC) only identify direct correlations, while indirect associations are eliminated. In addition to this, the state-of-the-art techniques in brain research are based on static graph theory, which is unable to capture the dynamic behavior of brain connectivity as it alters with disease evolution. We propose a new research avenue in neuroimaging connectomics based on combining dynamic graph network theory and modeling strategies at different time scales. We present the theoretical framework for area aggregation and time-scale modeling in brain networks as they pertain to disease evolution in dementia. This novel paradigm is extremely powerful, since we can derive both static parameters pertaining to node and area parameters, as well as dynamic parameters, such as the system's eigenvalues. By implementing and analyzing dynamically both disease-driven PC-networks and regular concentration networks, we reveal differences in the structure of these networks that play an important role in the temporal evolution of this disease. The described research is key to advance biomedical research on novel disease prediction trajectories and dementia therapies.
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yakovlev, A. A.; Sorokin, V. S.; Mishustina, S. N.; Proidakova, N. V.; Postupaeva, S. G.
2017-01-01
The article describes a new method for the search design of refrigerating systems, based on a graph model of the physical operating principle grounded in a thermodynamic description of physical processes. The mathematical model of the physical operating principle is substantiated, and the basic abstract theorems concerning the semantic load assigned to the nodes and edges of the graph are presented. The necessity and sufficiency of the physical operating principle for the given model and the considered device class are demonstrated using the example of a vapour-compression refrigerating plant. An example of obtaining a set of engineering solutions for a vapour-compression refrigerating plant is also considered.
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
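The acceptance dynamics being analyzed can be sketched directly by simulation. This is a minimal 2-D random sequential adsorption of equal disks (not the clustered-random-graph analysis of the paper); boundary effects are ignored for simplicity.

```python
import random

def rsa_fraction(n_attempts, radius, seed=0):
    """Random sequential adsorption of equal disks in the unit square:
    each arriving disk is kept only if it overlaps no disk kept so far."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_attempts):
        x, y = rng.random(), rng.random()
        if all((x - u) ** 2 + (y - v) ** 2 >= (2 * radius) ** 2 for u, v in kept):
            kept.append((x, y))
    return len(kept) / n_attempts

frac = rsa_fraction(2000, 0.05)  # fraction of accepted arrivals
```

The quantity `frac` is a finite-sample estimate of the acceptance fraction whose exact characterization the abstract describes as largely intractable.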
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
Brettin, Thomas S.; Cottingham, Robert W.; Griffith, Shelton D.; Quest, Daniel J.
2015-09-08
A system and method of integrating diverse sources of data and data streams is presented. The method can include selecting a scenario based on a topic, creating a multi-relational directed graph based on the scenario, identifying and converting resources in accordance with the scenario and updating the multi-directed graph based on the resources, identifying data feeds in accordance with the scenario and updating the multi-directed graph based on the data feeds, identifying analytical routines in accordance with the scenario and updating the multi-directed graph using the analytical routines and identifying data outputs in accordance with the scenario and defining queries to produce the data outputs from the multi-directed graph.
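The multi-relational directed graph at the core of the claimed method can be sketched as a tiny in-memory triple store; the node and relation names below are hypothetical, not from the patent.

```python
class MultiRelationalGraph:
    """Minimal multi-relational directed graph: edges are (src, relation, dst)
    triples, so the same node pair may be linked under several relations."""
    def __init__(self):
        self.triples = set()

    def add(self, src, relation, dst):
        self.triples.add((src, relation, dst))

    def query(self, src=None, relation=None, dst=None):
        """Pattern match with None as a wildcard, like a tiny triple store."""
        return [
            t for t in self.triples
            if (src is None or t[0] == src)
            and (relation is None or t[1] == relation)
            and (dst is None or t[2] == dst)
        ]

g = MultiRelationalGraph()
g.add("sensor_1", "feeds", "analysis_a")
g.add("dataset_x", "feeds", "analysis_a")
g.add("analysis_a", "produces", "report")
feeds = g.query(relation="feeds")
```

Each "identify and update" step in the claim corresponds to `add` calls driven by a scenario, and the "defined queries to produce the data outputs" correspond to `query` patterns.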
NASA Astrophysics Data System (ADS)
Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy
Recently we developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture stage, it appears to have many desirable properties. Recognition invariance under shift, rotation and noise was checked through medium-scale tests on the GREC symbol reference database. Even though extracting the topology of a shape by mapping the shortest paths connecting all the pixels is powerful, the construction of the graph incurs a high algorithmic cost. In this article we discuss ways to reduce computing time. An alternative solution based on image compression concepts is provided and evaluated: the model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. Experimental results on the GREC2003 database show that the proposed method is characterized by good discriminative power and real robustness to noise, with acceptable computing time.
NASA Astrophysics Data System (ADS)
Buscema, Massimo; Asadi-Zeydabadi, Masoud; Lodwick, Weldon; Breda, Marco
2016-04-01
Significant applications such as the analysis of Alzheimer's disease as differentiated from dementia, data mining of social media, or extracting information about drug cartel structural composition are often modeled as graphs. The structural or topological complexity, or lack of it, in a graph is quite often useful in understanding and, more importantly, resolving the problem. We propose a new index, which we call the H0 function, to measure the structural/topological complexity of a graph. To do this, we introduce the concept of graph pruning and its associated algorithm, which is used in the development of our measure. We illustrate the behavior of our measure, the H0 function, through different examples found in the appendix. These examples indicate that the H0 function captures useful and important characteristics of a graph. Here, we restrict ourselves to undirected graphs.
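The abstract does not fully specify the authors' pruning algorithm, but a common form of graph pruning, iteratively stripping degree-one vertices until only the cyclic 2-core survives, can be sketched as follows (an illustrative assumption, not the paper's exact H0 construction):

```python
def prune(adj):
    """Iteratively strip vertices of degree <= 1; what survives is the
    2-core, the cyclic skeleton that leaf pruning cannot reduce."""
    adj = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in [v for v, ns in adj.items() if len(ns) <= 1]:
            for u in adj.pop(v):
                adj[u].discard(v)
            changed = True
    return adj

# A triangle with a pendant path: the path is pruned away, the cycle stays.
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
core = prune(graph)
```

How much of a graph disappears under such pruning, and how quickly, is one natural way a pruning-based complexity index can distinguish tree-like from richly cyclic structure.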
A Research Graph dataset for connecting research data repositories using RD-Switchboard.
Aryani, Amir; Poblet, Marta; Unsworth, Kathryn; Wang, Jingbo; Evans, Ben; Devaraju, Anusuriya; Hausstein, Brigitte; Klas, Claus-Peter; Zapilko, Benjamin; Kaplun, Samuele
2018-05-29
This paper describes the open access graph dataset that shows the connections between Dryad, CERN, ANDS and other international data repositories to publications and grants across multiple research data infrastructures. The graph dataset was created using the Research Graph data model and the Research Data Switchboard (RD-Switchboard), a collaborative project by the Research Data Alliance DDRI Working Group (DDRI WG) with the aim to discover and connect the related research datasets based on publication co-authorship or jointly funded grants. The graph dataset allows researchers to trace and follow the paths to understanding a body of work. By mapping the links between research datasets and related resources, the graph dataset improves both their discovery and visibility, while avoiding duplicate efforts in data creation. Ultimately, the linked datasets may spur novel ideas, facilitate reproducibility and re-use in new applications, stimulate combinatorial creativity, and foster collaborations across institutions.
Disease management research using event graphs.
Allore, H G; Schruben, L W
2000-08-01
Event Graphs, conditional representations of stochastic relationships between discrete events, simulate disease dynamics. In this paper, we demonstrate how Event Graphs, at an appropriate abstraction level, also extend and organize scientific knowledge about diseases. They can identify promising treatment strategies and directions for further research and provide enough detail for testing combinations of new medicines and interventions. Event Graphs can be enriched to incorporate and validate data and test new theories to reflect an expanding dynamic scientific knowledge base and establish performance criteria for the economic viability of new treatments. To illustrate, an Event Graph is developed for mastitis, a costly dairy cattle disease, for which extensive scientific literature exists. With only a modest amount of imagination, the methodology presented here can be seen to apply modeling to any disease, human, plant, or animal. The Event Graph simulation presented here is currently being used in research and in a new veterinary epidemiology course. Copyright 2000 Academic Press.
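The event-scheduling mechanics behind an Event Graph can be sketched with a future-event list: each executed event conditionally schedules further events along its outgoing arcs. This is a minimal hypothetical illustration (not the mastitis model), with fixed delays and a simple edge condition.

```python
import heapq

def simulate(horizon=50.0):
    """Tiny event-graph simulation: an 'infect' event conditionally
    schedules further 'infect' and 'recover' events on a timeline."""
    clock, fel = 0.0, []            # fel: future event list (time, seq, name)
    seq, counts = 0, {"infect": 0, "recover": 0}

    def schedule(delay, name):
        nonlocal seq
        heapq.heappush(fel, (clock + delay, seq, name))
        seq += 1

    schedule(0.0, "infect")         # initial event
    while fel:
        clock, _, name = heapq.heappop(fel)
        if clock > horizon:
            break
        counts[name] += 1
        if name == "infect" and counts["infect"] < 5:   # edge condition
            schedule(2.0, "infect")                     # transmission arc
            schedule(7.0, "recover")                    # recovery arc
    return counts

counts = simulate()
```

Replacing the fixed delays with stochastic ones and the counter condition with state-dependent logic gives the conditional stochastic structure the paper describes.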
A system for routing arbitrary directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1987-01-01
There are many problems which can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from connecting vertices. A method is given for parallelizing such problems on an SIMD machine model that is bit-serial and uses only nearest neighbor connections for communication. Each vertex of the graph will be assigned to a processor in the machine. Algorithms are given that will be used to implement movement of data along the arcs of the graph. This architecture and algorithms define a system that is relatively simple to build and can do graph processing. All arcs can be traversed in parallel in time O(T), where T is empirically proportional to the diameter of the interconnection network times the average degree of the graph. Modifying or adding a new arc takes the same time as parallel traversal.
Exact and approximate graph matching using random walks.
Gori, Marco; Maggini, Marco; Sarti, Lorenzo
2005-07-01
In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, like for classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, like for connectionist and statistic approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than the top-scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well-suited for dealing with partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation, whose nodes' labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.
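The underlying idea, that random-walk scores are isomorphism invariants usable to prune matching candidates, can be sketched with plain PageRank. This is a simplified illustration, not the authors' MSD algorithm: two isomorphic graphs must have (numerically) identical sorted score vectors, so a clear mismatch certifies non-isomorphism.

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on an undirected graph {node: set(neighbors)}."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in adj}
        for v, neighbors in adj.items():
            share = damping * rank[v] / len(neighbors)
            for u in neighbors:
                nxt[u] += share
        rank = nxt
    return rank

def signature(adj):
    """Sorted score vector: equal (up to arithmetic noise) for isomorphic
    graphs, so a clear mismatch is a cheap non-isomorphism certificate."""
    return sorted(pagerank(adj).values())

path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}       # P4
star = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}       # K1,3
relabeled = {10: {20}, 20: {10, 30}, 30: {20, 40}, 40: {30}}
```

Matching signatures do not prove isomorphism in general; the paper's MSD restriction is precisely about when such spectral node information is discriminative enough.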
Compound analysis via graph kernels incorporating chirality.
Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya
2010-12-01
High accuracy is paramount when predicting biochemical characteristics using Quantitative Structural-Property Relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
Evolutionary Games of Multiplayer Cooperation on Graphs
Arranz, Jordi; Traulsen, Arne
2016-01-01
There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has been usually tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946
ERIC Educational Resources Information Center
Katz, Irvin R.; Xi, Xiaoming; Kim, Hyun-Joo; Cheng, Peter C. H.
2004-01-01
This research applied a cognitive model to identify item features that lead to irrelevant variance on the Test of Spoken English[TM] (TSE[R]). The TSE is an assessment of English oral proficiency and includes an item that elicits a description of a statistical graph. This item type sometimes appears to tap graph-reading skills--an irrelevant…
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.
2012-06-01
Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: the mutual k-nearest neighbor graph, the sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
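Of the graph constructions compared above, the mutual k-nearest neighbor graph is easy to sketch: an edge exists only when each endpoint ranks the other among its k nearest neighbors, which tends to isolate outliers. A minimal version on toy 2-D points (real pixels would live in high-dimensional spectral space):

```python
def mutual_knn_graph(points, k):
    """Mutual k-nearest-neighbor graph: connect i and j only when each is
    among the other's k nearest neighbors (squared Euclidean distance)."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    n = len(points)
    knn = []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist2(points[i], points[j]))
        knn.append(set(order[:k]))
    return {(i, j) for i in range(n) for j in knn[i] if i < j and i in knn[j]}

# Three mutually close 'background' pixels and one spectral outlier.
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0)]
edges = mutual_knn_graph(points, 2)
```

The outlier lists background points among its nearest neighbors, but the relation is not mutual, so it stays disconnected, exactly the behavior that makes this construction attractive for background modeling.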
NASA Astrophysics Data System (ADS)
Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas
2010-03-01
Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, so a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various types of information (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
A graph model for preventing railway accidents based on the maximal information coefficient
NASA Astrophysics Data System (ADS)
Shao, Fubo; Li, Keping
2017-01-01
A number of factors influence railway safety. It is important to identify the main influencing factors and to build the relationship between railway accidents and these factors. The maximal information coefficient (MIC) is a good measure of dependence for two-variable relationships and can capture a wide range of associations. Employing MIC, a graph model is proposed for preventing railway accidents that avoids complex mathematical computation. In the graph, nodes denote influencing factors of railway accidents and edges represent the dependence of the two linked factors. As the required dependence level increases, the graph changes from a globally coupled graph to isolated points. Moreover, the important influencing factors, which are the key quantities to monitor, are identified from among many factors. Then the relationship between railway accidents and the important influencing factors is obtained by employing artificial neural networks. With this relationship, a warning mechanism is built by defining a dangerous zone. If the related factors fall into the dangerous zone during railway operations, the warning level should be raised. The warning mechanism can prevent railway accidents and promote railway safety.
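The thresholded dependence graph described above can be sketched generically. MIC itself is lengthy to implement, so this illustration substitutes the absolute Pearson correlation as the dependence score (an explicit simplification; MIC would additionally capture nonlinear associations). The factor names and data are hypothetical.

```python
def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def dependence_graph(factors, threshold):
    """Edge between two factors when their dependence score reaches the
    threshold; raising it thins the graph toward isolated nodes."""
    names = list(factors)
    return {(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if abs(pearson(factors[a], factors[b])) >= threshold}

factors = {
    "speed":   [1.0, 2.0, 3.0, 4.0, 5.0],
    "braking": [2.1, 4.0, 6.2, 7.9, 10.1],   # strongly tied to speed
    "weather": [0.3, 0.9, 0.2, 0.8, 0.1],    # weakly tied to both
}
dense = dependence_graph(factors, 0.2)
sparse = dependence_graph(factors, 0.95)
```

Sweeping the threshold reproduces the transition the abstract describes, from a globally coupled graph (`dense`) toward isolated points (`sparse`), and the edges surviving high thresholds flag the key factors to monitor.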
Multiscale weighted colored graphs for protein flexibility and rigidity analysis
NASA Astrophysics Data System (ADS)
Bramer, David; Wei, Guo-Wei
2018-02-01
Protein structural fluctuation, measured by Debye-Waller factors or B-factors, is known to correlate to protein flexibility and function. A variety of methods has been developed for protein Debye-Waller factor prediction and related applications to domain separation, docking pose ranking, entropy calculation, hinge detection, stability analysis, etc. Nevertheless, none of the current methodologies are able to deliver an accuracy of 0.7 in terms of the Pearson correlation coefficients averaged over a large set of proteins. In this work, we introduce a paradigm-shifting geometric graph model, multiscale weighted colored graph (MWCG), to provide a new generation of computational algorithms to significantly change the current status of protein structural fluctuation analysis. Our MWCG model divides a protein graph into multiple subgraphs based on interaction types between graph nodes and represents the protein rigidity by generalized centralities of subgraphs. MWCGs not only predict the B-factors of protein residues but also accurately analyze the flexibility of all atoms in a protein. The MWCG model is validated over a number of protein test sets and compared with many standard methods. An extensive numerical study indicates that the proposed MWCG offers an accuracy of over 0.8 and thus provides perhaps the first reliable method for estimating protein flexibility and B-factors. It also simultaneously predicts all-atom flexibility in a molecule.
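The core MWCG idea, splitting the interaction graph into element-type-colored subgraphs and scoring each atom by kernel-weighted connectivity within a subgraph, can be sketched in a few lines. This assumes a Gaussian-type kernel and toy atom data; the published method's kernel choices, scales, and fitting to B-factors are richer than this.

```python
import math

def colored_subgraph_rigidity(atoms, pair, eta=3.0):
    """Per-atom rigidity from one 'colored' subgraph: sum a Gaussian-type
    kernel over atom pairs whose element types match the given pair."""
    mu = {}
    for i, (el_i, xi) in enumerate(atoms):
        total = 0.0
        for j, (el_j, xj) in enumerate(atoms):
            if i != j and {el_i, el_j} == set(pair):
                d = math.dist(xi, xj)
                total += math.exp(-((d / eta) ** 2))
        mu[i] = total
    return mu

# Hypothetical atoms: (element, 3-D position). The distant O gets no C-N weight.
atoms = [("C", (0.0, 0.0, 0.0)), ("N", (1.5, 0.0, 0.0)),
         ("C", (3.0, 0.0, 0.0)), ("O", (9.0, 0.0, 0.0))]
mu_cn = colored_subgraph_rigidity(atoms, ("C", "N"))
```

Repeating this over several element pairs and kernel scales yields the multiscale feature vector per atom; flexibility is then modeled as inversely related to these rigidity scores.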
A Model of Knowledge Based Information Retrieval with Hierarchical Concept Graph.
ERIC Educational Resources Information Center
Kim, Young Whan; Kim, Jin H.
1990-01-01
Proposes a model of knowledge-based information retrieval (KBIR) that is based on a hierarchical concept graph (HCG) which shows relationships between index terms and constitutes a hierarchical thesaurus as a knowledge base. Conceptual distance between a query and an object is discussed and the use of Boolean operators is described. (25…
Evaluation of Teaching the IS-LM Model through a Simulation Program
ERIC Educational Resources Information Center
Pablo-Romero, Maria del Populo; Pozo-Barajas, Rafael; Gomez-Calero, Maria de la Palma
2012-01-01
The IS-LM model is a basic tool used in the teaching of short-term macroeconomics. Teaching is essentially done through the use of graphs. However, the way these graphs are traditionally taught does not allow the learner to easily visualise changes in the curves. The IS-LM simulation program overcomes difficulties encountered in understanding the…
Gonzalo-Lumbreras, R; Izquierdo-Hornillos, R
2000-05-26
An HPLC separation of a complex mixture containing 13 urinary anabolics and corticoids, and boldenone and bolasterone (synthetic anabolics) has been carried out. The applied optimization method involved the use of binary, ternary and quaternary mobile phases containing acetonitrile, methanol or tetrahydrofuran as organic modifiers. The effect of different reversed-phase packings and temperature on the separation was studied. The optimum separation was achieved by using a water-acetonitrile (60:40, v/v) mobile phase in reversed-phase HPLC at 30 degrees C, allowing the separation of all the analytes in about 24 min. Calibration graphs were obtained using bolasterone or methyltestosterone as internal standards. Detection limits were in the range 0.012-0.107 microg ml(-1). The optimized separation was applied to the analysis, after liquid-liquid extraction, of human urine samples spiked with steroids.
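The calibration-graph workflow implied above (fit a line of area ratio versus concentration using an internal standard, invert it for unknowns, and derive a detection limit) can be sketched as follows. The concentrations and peak-area ratios are hypothetical, and the 3.3·σ/slope formula is one common detection-limit convention, not necessarily the one used in the paper.

```python
def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical calibration: analyte/internal-standard peak-area ratio vs conc.
conc = [0.1, 0.5, 1.0, 2.0, 5.0]          # ug/mL
ratio = [0.09, 0.48, 1.02, 1.98, 5.05]    # area(analyte) / area(IS)

slope, intercept = fit_line(conc, ratio)
unknown = (0.75 - intercept) / slope      # invert calibration for a sample
residual_sd = (sum((r - (slope * c + intercept)) ** 2
                   for c, r in zip(conc, ratio)) / (len(conc) - 2)) ** 0.5
lod = 3.3 * residual_sd / slope           # one common detection-limit formula
```

Using the area ratio against the internal standard, rather than the raw analyte area, compensates for injection and extraction variability, which is why bolasterone or methyltestosterone serves as the internal standard in the study.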
Study of metal transfer in CO2 laser+GMAW-P hybrid welding using argon-helium mixtures
NASA Astrophysics Data System (ADS)
Zhang, Wang; Hua, Xueming; Liao, Wei; Li, Fang; Wang, Min
2014-03-01
The metal transfer in CO2 laser+GMAW-P hybrid welding using argon-helium mixtures was investigated, and the effect of the laser on the metal transfer is discussed. A 650 nm laser, in conjunction with the shadowgraph technique, was used to observe the metal transfer process. To analyze the heat input to the droplet and the internal current-line distribution of the droplet, an optical emission spectroscopy system was employed to estimate the plasma temperature and electron number density distributions. The results indicate that the CO2 plasma plume has a significant impact on electrode melting, droplet formation, detachment, impingement onto the workpiece, and weld morphology. Since the current distribution changes direction to flow toward the keyhole, to obtain a metal transfer mode of one droplet per pulse, the welding parameters should be adjusted to a longer pulse time (TP) and a lower voltage.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-01-01
A comparative study of smart spectrophotometric techniques for the simultaneous determination of Omeprazole (OMP), Tinidazole (TIN) and Doxycycline (DOX) without prior separation steps is developed. These techniques consist of several consecutive steps utilizing zero-order, ratio, or derivative spectra. The proposed techniques adopt nine simple methods, namely direct spectrophotometry, dual wavelength, first derivative-zero crossing, amplitude factor, spectrum subtraction, ratio subtraction, derivative ratio-zero crossing, constant center, and the successive derivative ratio method. The calibration graphs are linear over the concentration ranges of 1-20 μg/mL, 5-40 μg/mL and 2-30 μg/mL for OMP, TIN and DOX, respectively. These methods were tested by analyzing synthetic mixtures of the above drugs and successfully applied to a commercial pharmaceutical preparation. The methods were validated according to the ICH guidelines; accuracy, precision, and repeatability were found to be within the acceptable limits.
Modeling heterogeneous processor scheduling for real time systems
NASA Technical Reports Server (NTRS)
Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.
1994-01-01
A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.
Critical space-time networks and geometric phase transitions from frustrated edge antiferromagnetism
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-12-01
Recently I proposed a simple dynamical network model for discrete space-time that self-organizes as a graph with Hausdorff dimension dH=4 . The model has a geometric quantum phase transition with disorder parameter (dH-ds) , where ds is the spectral dimension of the dynamical graph. Self-organization in this network model is based on a competition between a ferromagnetic Ising model for vertices and an antiferromagnetic Ising model for edges. In this paper I solve a toy version of this model defined on a bipartite graph in the mean-field approximation. I show that the geometric phase transition corresponds exactly to the antiferromagnetic transition for edges, the dimensional disorder parameter of the former being mapped to the staggered magnetization order parameter of the latter. The model has a critical point with long-range correlations between edges, where a continuum random geometry can be defined, exactly as in Kazakov's famed 2D random lattice Ising model but now in any number of dimensions.
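The antiferromagnetic mean-field transition referred to above reduces to a self-consistency equation for the staggered magnetization; below is a minimal sketch of the generic mean-field form m = tanh(beta*J*z*m), not the paper's full bipartite model:

```python
import math

def staggered_magnetization(beta_J_z, iters=2000):
    """Mean-field self-consistency m = tanh(beta*J*z*m) for the staggered
    magnetization on a bipartite graph, solved by fixed-point iteration
    from a small positive seed (a generic sketch, not the paper's model)."""
    m = 0.1
    for _ in range(iters):
        m = math.tanh(beta_J_z * m)
    return m

# below the transition (beta*J*z < 1) the iteration collapses to m = 0;
# above it (beta*J*z > 1) a nonzero ordered solution appears
m_disordered = staggered_magnetization(0.8)
m_ordered = staggered_magnetization(1.5)
```

The critical point beta*J*z = 1 is where the nonzero branch bifurcates from m = 0, which is the mean-field analogue of the geometric phase transition discussed above.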
NASA Astrophysics Data System (ADS)
Szyjka, Sebastian P.
The purpose of this study was to determine the extent to which six cognitive and attitudinal variables predicted pre-service elementary teachers' performance on line graphing. Predictors included Illinois teacher education basic skills sub-component scores in reading comprehension and mathematics, logical thinking performance scores, as well as measures of attitudes toward science, mathematics and graphing. This study also determined the strength of the relationship between each prospective predictor variable and the line graphing performance variable, as well as the extent to which measures of attitude towards science, mathematics and graphing mediated relationships between scores on mathematics, reading, logical thinking and line graphing. Ninety-four pre-service elementary education teachers enrolled in two different elementary science methods courses during the spring 2009 semester at Southern Illinois University Carbondale participated in this study. Each subject completed five different instruments designed to assess science, mathematics and graphing attitudes as well as logical thinking and graphing ability. Sixty subjects provided copies of primary basic skills score reports that listed subset scores for both reading comprehension and mathematics. The remaining scores were supplied by a faculty member who had access to a database from which the scores were drawn. Seven subjects, whose scores could not be found, were eliminated from final data analysis. Confirmatory factor analysis (CFA) was conducted in order to establish validity and reliability of the Questionnaire of Attitude Toward Line Graphs in Science (QALGS) instrument. CFA tested the statistical hypothesis that the five main factor structures within the Questionnaire of Attitude Toward Statistical Graphs (QASG) would be maintained in the revised QALGS. Stepwise Regression Analysis with backward elimination was conducted in order to generate a parsimonious and precise predictive model. 
This procedure allowed the researcher to explore the relationships among the affective and cognitive variables that were included in the regression analysis. The results for CFA indicated that the revised QALGS measure was sound in its psychometric properties when tested against the QASG. Reliability statistics indicated that the overall reliability for the 32 items in the QALGS was .90. The learning preferences construct had the lowest reliability (.67), while enjoyment (.89), confidence (.86) and usefulness (.77) constructs had moderate to high reliabilities. The first four measurement models fit the data well as indicated by the appropriate descriptive and statistical indices. However, the fifth measurement model did not fit the data well statistically, and only fit well with two descriptive indices. The results addressing the research question indicated that mathematical and logical thinking ability were significant predictors of line graph performance among the remaining group of variables. These predictors accounted for 41% of the total variability on the line graph performance variable. Partial correlation coefficients indicated that mathematics ability accounted for 20.5% of the variance on the line graphing performance variable when removing the effect of logical thinking. The logical thinking variable accounted for 4.7% of the variance on the line graphing performance variable when removing the effect of mathematics ability.
Results on Vertex Degree and K-Connectivity in Uniform S-Intersection Graphs
2014-01-01
distribution. A uniform s-intersection graph models the topology of a secure wireless sensor network employing the widely used s-composite key predistribution scheme. Our theoretical findings are also confirmed by numerical results.
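A uniform s-intersection graph of the kind described can be sampled directly; the parameter names below are illustrative:

```python
import random

def uniform_s_intersection_graph(n, pool_size, ring_size, s, seed=0):
    """Sample a uniform s-intersection graph: each of n sensors draws a
    key ring of `ring_size` distinct keys from a pool of `pool_size` keys;
    two sensors are adjacent iff their rings share at least s keys."""
    rng = random.Random(seed)
    rings = [frozenset(rng.sample(range(pool_size), ring_size))
             for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if len(rings[i] & rings[j]) >= s}
    return rings, edges

rings, edges = uniform_s_intersection_graph(n=30, pool_size=100,
                                            ring_size=10, s=2)
```

Vertex degrees and connectivity of such samples are exactly the quantities the asymptotic results above describe.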
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
Learning molecular energies using localized graph kernels
NASA Astrophysics Data System (ADS)
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
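The random-walk-kernel similarity between two adjacency matrices can be sketched with the standard geometric random-walk kernel on the direct-product graph; this is a generic formulation, and the exact GRAPE kernel may differ:

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Geometric random-walk kernel between two graphs given as adjacency
    matrices: k = 1^T (I - lam * A1 (x) A2)^(-1) 1, which sums lam^n times
    the number of simultaneous length-n walks in both graphs. Requires
    lam < 1 / (rho(A1) * rho(A2)) for the series to converge."""
    W = np.kron(A1, A2)                      # direct-product graph
    n = W.shape[0]
    x = np.linalg.solve(np.eye(n) - lam * W, np.ones(n))
    return float(np.ones(n) @ x)

# two toy "local environments": a triangle and a 3-node path
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)

k_tt = random_walk_kernel(triangle, triangle)
k_tp = random_walk_kernel(triangle, path)
```

For the triangle compared with itself the sum telescopes exactly (row sums of the product graph are constant at 4), giving k = 9 / (1 - 0.4) = 15, a handy sanity check.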
Bootstrapping Security Policies for Wearable Apps Using Attributed Structural Graphs.
González-Tablas, Ana I; Tapiador, Juan E
2016-05-11
We address the problem of bootstrapping security and privacy policies for newly-deployed apps in wireless body area networks (WBAN) composed of smartphones, sensors and other wearable devices. We introduce a framework to model such a WBAN as an undirected graph whose vertices correspond to devices, apps and app resources, while edges model structural relationships among them. This graph is then augmented with attributes capturing the features of each entity together with user-defined tags. We then adapt available graph-based similarity metrics to find the closest app to a new one to be deployed, with the aim of reusing, and possibly adapting, its security policy. We illustrate our approach through a detailed smartphone ecosystem case study. Our results suggest that the scheme can provide users with a reasonably good policy that is consistent with the user's security preferences implicitly captured by policies already in place.
Bootstrapping Security Policies for Wearable Apps Using Attributed Structural Graphs
González-Tablas, Ana I.; Tapiador, Juan E.
2016-01-01
We address the problem of bootstrapping security and privacy policies for newly-deployed apps in wireless body area networks (WBAN) composed of smartphones, sensors and other wearable devices. We introduce a framework to model such a WBAN as an undirected graph whose vertices correspond to devices, apps and app resources, while edges model structural relationships among them. This graph is then augmented with attributes capturing the features of each entity together with user-defined tags. We then adapt available graph-based similarity metrics to find the closest app to a new one to be deployed, with the aim of reusing, and possibly adapting, its security policy. We illustrate our approach through a detailed smartphone ecosystem case study. Our results suggest that the scheme can provide users with a reasonably good policy that is consistent with the user’s security preferences implicitly captured by policies already in place. PMID:27187385
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived, an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavaré (1996), computational estimates of the expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on the sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus depending on a mutation-selection-recombination balance.
Properties of heuristic search strategies
NASA Technical Reports Server (NTRS)
Vanderbrug, G. J.
1973-01-01
A directed graph is used to model the search space of a state space representation with single-input operators, an AND/OR graph is used for problem reduction representations, and a theorem-proving graph is used for state space representations with multiple-input operators. These three graph models and heuristic strategies for searching them are surveyed. The completeness, admissibility, and optimality properties of search strategies which use the evaluation function f = (1 - omega)g + omega h are presented and interpreted using a representation of the search process in the plane. The use of multiple-output operators to imply dependent successors, and thus obtain a formalism which includes all three types of representations, is discussed.
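A minimal sketch of best-first search under the evaluation function f = (1 - omega)g + omega*h; the grid world and Manhattan heuristic are my own choices for illustration:

```python
import heapq

def weighted_search(grid, start, goal, omega=0.5):
    """Best-first search ordered by f = (1 - omega)*g + omega*h:
    omega = 0 gives uniform-cost search, omega = 0.5 weighs cost-so-far
    and heuristic equally (A*-like), omega = 1 is pure greedy search."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(omega * h(start), 0, start, [start])]
    seen = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen and seen[node] <= g:
            continue                     # already expanded more cheaply
        seen[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0:
                g2 = g + 1
                f2 = (1 - omega) * g2 + omega * h((nr, nc))
                heapq.heappush(frontier, (f2, g2, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = weighted_search(grid, (0, 0), (2, 0))
```

At omega = 0.5 the ordering is the same as classical A* with f = g + h (scaling by 1/2 preserves the ordering), so with an admissible heuristic the returned path is optimal; pushing omega toward 1 trades that guarantee for speed, which is exactly the property space the survey analyzes.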
Application of dynamic uncertain causality graph in spacecraft fault diagnosis: Logic cycle
NASA Astrophysics Data System (ADS)
Yao, Quanying; Zhang, Qin; Liu, Peng; Yang, Ping; Zhu, Ma; Wang, Xiaochen
2017-04-01
Intelligent diagnosis systems are applied to fault diagnosis in spacecraft. The Dynamic Uncertain Causality Graph (DUCG) is a new probabilistic graphical model with many advantages. In the knowledge representation of spacecraft fault diagnosis, feedback among variables is frequently encountered, which may cause directed cyclic graphs (DCGs). Probabilistic graphical models (PGMs) such as Bayesian networks (BNs) have been widely applied in uncertain causality representation and probabilistic reasoning, but BNs do not allow DCGs. In this paper, DUCG is applied to fault diagnosis in spacecraft, introducing an inference algorithm for the DUCG that deals with feedback. DUCG has now been tested on 16 typical faults with 100% diagnostic accuracy.
Cui, Meng; Yang, Shuo; Yu, Tong; Yang, Ce; Gao, Yonghong; Zhu, Haiyan
2013-10-01
To design a model to capture information on the state and trends of knowledge creation, at both an individual and an organizational level, in order to enhance knowledge management. We designed a graph-theoretic knowledge model, the expert knowledge map (EKM), based on literature-based annotation. A case study in the domain of Traditional Chinese Medicine research was used to illustrate the usefulness of the model. The EKM successfully captured various aspects of knowledge and enhanced knowledge management within the case-study organization through the provision of knowledge graphs, expert graphs, and expert-knowledge biography. Our model could help to reveal the hot topics, trends, and products of the research done by an organization. It can potentially be used to facilitate knowledge learning, sharing and decision-making among researchers, academicians, students, and administrators of organizations.
Spatial-temporal causal modeling: a data centric approach to climate change attribution (Invited)
NASA Astrophysics Data System (ADS)
Lozano, A. C.
2010-12-01
Attribution of climate change has been predominantly based on simulations using physical climate models. These approaches rely heavily on the employed models and are thus subject to their shortcomings. Given the physical models’ limitations in describing the complex system of climate, we propose an alternative approach to climate change attribution that is data centric in the sense that it relies on actual measurements of climate variables and human and natural forcing factors. We present a novel class of methods to infer causality from spatial-temporal data, as well as a procedure to incorporate extreme value modeling into our methodology in order to address the attribution of extreme climate events. We develop a collection of causal modeling methods using spatio-temporal data that combine graphical modeling techniques with the notion of Granger causality. “Granger causality” is an operational definition of causality from econometrics, which is based on the premise that if a variable causally affects another, then the past values of the former should be helpful in predicting the future values of the latter. In its basic version, our methodology makes use of the spatial relationship between the various data points, but treats each location as being identically distributed and builds a unique causal graph that is common to all locations. A more flexible framework is then proposed that is less restrictive than having a single causal graph common to all locations, while avoiding the brittleness due to data scarcity that might arise if one were to independently learn a different graph for each location. The solution we propose can be viewed as finding a middle ground by partitioning the locations into subsets that share the same causal structures and pooling the observations from all the time series belonging to the same subset in order to learn more robust causal graphs. More precisely, we make use of relationships between locations (e.g. 
neighboring relationship) by defining a relational graph in which related locations are connected (note that this relational graph, which represents relationships among the different locations, is distinct from the causal graph, which represents causal relationships among the individual variables - e.g. temperature, pressure- within a multivariate time series). We then define a hidden Markov Random Field (hMRF), assigning a hidden state to each node (location), with the state assignment guided by the prior information encoded in the relational graph. Nodes that share the same state in the hMRF model will have the same causal graph. State assignment can thus shed light on unknown relations among locations (e.g. teleconnection). While the model has been described in terms of hard location partitioning to facilitate its exposition, in fact a soft partitioning is maintained throughout learning. This leads to a form of transfer learning, which makes our model applicable even in situations where partitioning the locations might not seem appropriate. We first validate the effectiveness of our methodology on synthetic datasets, and then apply it to actual climate measurement data. The experimental results show that our approach offers a useful alternative to the simulation-based approach for climate modeling and attribution, and has the capability to provide valuable scientific insights from a new perspective.
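The Granger-causality premise above, that past values of a causal variable help predict the future of another, can be checked on synthetic data; the data-generating process and lag count below are illustrative, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic pair: x is pure noise, y is driven by the previous value of x
T = 2000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_variance(target, predictors, lags=2):
    """OLS residual variance when predicting target[t] from `lags` past
    values of each predictor series (a minimal Granger-style comparison)."""
    n = len(target)
    X = np.column_stack([p[lags - k : n - k]
                         for p in predictors for k in range(1, lags + 1)])
    X = np.column_stack([np.ones(n - lags), X])      # intercept column
    yv = target[lags:]
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    resid = yv - X @ beta
    return float(resid @ resid / len(resid))

var_restricted = residual_variance(y, [y])       # y's own past only
var_full = residual_variance(y, [y, x])          # plus x's past

# x "Granger-causes" y: adding x's past sharply shrinks the residual variance
```

The spatio-temporal methods described above generalize exactly this comparison, fitting such lagged models jointly across locations with graph-structured sparsity rather than one pair at a time.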
Conclusiveness of natural languages and recognition of images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojcik, Z.M.
1983-01-01
The conclusiveness is investigated using recognition processes and a one-one correspondence between expressions of a natural language and graphs representing events. The graphs, as conceived in psycholinguistics, are obtained as a result of perception processes. It is possible to generate and process the graphs automatically, using computers, and then to convert the resulting graphs into expressions of a natural language. Correctness and conclusiveness of the graphs and sentences are investigated using the fundamental condition for event representation processes. Some consequences of the conclusiveness are discussed, e.g. the undecidability of arithmetic, human brain asymmetry, and the correctness of statistical calculations and operations research. It is suggested that group theory should be imposed on mathematical models of any real system. A proof of the fundamental condition is also presented. 14 references.
An asynchronous traversal engine for graph-based rich metadata management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Dong; Carns, Philip; Ross, Robert B.
Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format.
We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.
An asynchronous traversal engine for graph-based rich metadata management
Dai, Dong; Carns, Philip; Ross, Robert B.; ...
2016-06-23
Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format.
We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.
Expanding our understanding of students' use of graphs for learning physics
NASA Astrophysics Data System (ADS)
Laverty, James T.
It is generally agreed that the ability to visualize functional dependencies or physical relationships as graphs is an important step in modeling and learning. However, several studies in Physics Education Research (PER) have shown that many students in fact do not master this form of representation and even have misconceptions about the meaning of graphs that impede learning physics concepts. Working with graphs in classroom settings has been shown to improve student abilities with graphs, particularly when the students can interact with them. We introduce a novel problem type in an online homework system, which requires students to construct the graphs themselves in free form, and requires no hand-grading by instructors. A study of pre/post-test data using the Test of Understanding Graphs in Kinematics (TUG-K) over several semesters indicates that students learn significantly more from these graph construction problems than from the usual graph interpretation problems, and that graph interpretation alone may not have any significant effect. The interpretation of graphs, as well as the representation translation between textual, mathematical, and graphical representations of physics scenarios, are frequently listed among the higher order thinking skills we wish to convey in an undergraduate course. But to what degree do we succeed? Do students indeed employ higher order thinking skills when working through graphing exercises? We investigate students working through a variety of graph problems, and, using a think-aloud protocol, aim to reconstruct the cognitive processes that the students go through. We find that to a certain degree, these problems become commoditized and do not trigger the desired higher order thinking processes; simply translating ``textbook-like'' problems into the graphical realm will not achieve any additional educational goals. 
Whether the students have to interpret or construct a graph makes very little difference in the methods used by the students. We will also look at the results of using graph problems in an online learning environment. We will show evidence that construction problems lead to a higher degree of difficulty and degree of discrimination than other graph problems and discuss the influence the course has on these variables.
From Many Records to One Graph: Heterogeneity Conflicts in the Linked Data Restructuring Cycle
ERIC Educational Resources Information Center
Tallerås, Kim
2013-01-01
Introduction: During the last couple of years the library community has developed a number of comprehensive metadata standardization projects inspired by the idea of linked data, such as the BIBFRAME model. Linked data is a set of best practice principles of publishing and exposing data on the Web utilizing a graph based data model powered with…
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
Entropy, complexity, and Markov diagrams for random walk cancer models
NASA Astrophysics Data System (ADS)
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
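The steady-state distribution and entropy of a Markov transition matrix, as used above, can be computed directly; the transition probabilities below are hypothetical, not the autopsy-derived values:

```python
import numpy as np

# toy metastasis-style Markov chain: 0 = primary site, 1-3 = metastatic sites
# (hypothetical transition probabilities, not the paper's data)
P = np.array([[0.2, 0.5, 0.2, 0.1],
              [0.3, 0.4, 0.2, 0.1],
              [0.1, 0.3, 0.5, 0.1],
              [0.2, 0.2, 0.2, 0.4]])

# steady-state distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Shannon entropy of the steady state, the complexity measure discussed above
entropy = float(-(pi * np.log2(pi)).sum())
```

Comparing such entropy values across transition matrices fitted to different primary cancers is what drives the high/mid/low grouping described in the abstract.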
Exotic equilibria of Harary graphs and a new minimum degree lower bound for synchronization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canale, Eduardo A., E-mail: ecanale@pol.una.py; Monzón, Pablo, E-mail: monzon@fing.edu.uy
2015-02-15
This work is concerned with the stability of equilibria in the homogeneous (equal frequencies) Kuramoto model of weakly coupled oscillators. In 2012 [R. Taylor, J. Phys. A: Math. Theor. 45, 1–15 (2012)], a sufficient condition for almost global synchronization was found in terms of the minimum degree-order ratio of the graph. In this work, a new lower bound for this ratio is given. The improvement is achieved by a concrete infinite sequence of regular graphs. In addition, nonstandard unstable equilibria of the graphs studied in Wiley et al. [Chaos 16, 015103 (2006)] are shown to exist, as conjectured in that work.
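The homogeneous Kuramoto model can be simulated to observe almost global synchronization; the sketch below uses a complete graph and Euler integration, and does not reproduce the Harary-graph equilibria studied in the paper:

```python
import numpy as np

def kuramoto_sync(A, theta0, coupling=1.0, dt=0.01, steps=5000):
    """Euler integration of the homogeneous (equal-frequency) Kuramoto
    model d(theta_i)/dt = (K/n) * sum_j A_ij * sin(theta_j - theta_i);
    returns the final order parameter r (r = 1 means full phase locking)."""
    theta = theta0.copy()
    n = len(theta)
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]     # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (coupling / n) * (A * np.sin(diff)).sum(axis=1)
    return float(np.abs(np.exp(1j * theta).mean()))

rng = np.random.default_rng(1)
n = 10
complete = np.ones((n, n)) - np.eye(n)             # all-to-all coupling
r = kuramoto_sync(complete, rng.uniform(-np.pi, np.pi, n))
```

On the complete graph almost every initial condition flows to the fully synchronized state; the interesting question addressed above is how sparse (what minimum degree-order ratio) the graph can be while preserving that almost-global behavior.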
Counting the number of Feynman graphs in QCD
NASA Astrophysics Data System (ADS)
Kaneko, T.
2018-05-01
Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system of perturbative calculations. A method of counting the number of Feynman graphs weighted by their symmetry factors was established based on zero-dimensional field theory, and was used in scalar theories and QED. In this article this method is generalized to more complicated models by direct calculation of generating functions on a computer algebra system. The method is applied to QCD with and without counter terms, where many higher-order contributions are being calculated automatically.
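The zero-dimensional counting idea can be made concrete: Gaussian moments count Wick contractions, and the logarithm of the formal series counts connected graphs weighted by inverse symmetry factors. Below is a minimal sketch for phi^3 vacuum graphs, my own illustration rather than the article's computer-algebra implementation:

```python
from fractions import Fraction
from math import factorial

def double_factorial(m):
    # m!! for odd m; by convention (-1)!! = 1
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

def phi3_vacuum_series(order):
    """Zero-dimensional phi^3 'partition function' as a formal series in
    the coupling g: Z(g) = sum_n g^n / (n! 6^n) * <phi^(3n)>, where the
    Gaussian moment <phi^(3n)> = (3n-1)!! counts Wick contractions.
    The coefficients of W = log Z count *connected* vacuum graphs,
    each weighted by 1/symmetry-factor."""
    Z = [Fraction(0)] * (order + 1)
    for n in range(order + 1):
        if (3 * n) % 2 == 0:                 # odd Gaussian moments vanish
            Z[n] = Fraction(double_factorial(3 * n - 1),
                            factorial(n) * 6 ** n)
    # formal-series logarithm via the recurrence k*Z_k = sum_j j*W_j*Z_(k-j)
    W = [Fraction(0)] * (order + 1)
    for k in range(1, order + 1):
        W[k] = Fraction(k) * Z[k] - sum(Fraction(j) * W[j] * Z[k - j]
                                        for j in range(1, k))
        W[k] = W[k] / k
    return Z, W

Z, W = phi3_vacuum_series(4)
```

The coefficient of g^2 in log Z comes out to 5/24 = 1/12 + 1/8, matching the two connected two-vertex phi^3 vacuum diagrams with symmetry factors 12 and 8, which is exactly the kind of consistency check a graph generator can be tested against.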
Self-similarity analysis of eubacteria genome based on weighted graph.
Qi, Zhao-Hui; Li, Ling; Zhang, Zhi-Meng; Qi, Xiao-Qin
2011-07-07
We introduce a weighted graph model to investigate the self-similarity characteristics of eubacteria genomes. The usual approach to similarity comparison between genomes is to determine the evolutionary distance among different genomes. Few studies focus on the overall statistical characteristics of each gene compared with the other genes in the same genome. In our model, each genome is represented by a weighted graph whose topology describes the similarity relationship among genes in the same genome. Based on weighted graph theory, we extract some quantified statistical variables from the topology and give the distribution of some variables derived from the largest social structure in the topology. The 23 eubacteria recently studied by Sorimachi and Okayasu are markedly classified into two different groups by their double logarithmic point-plots describing the similarity relationship among genes of the largest social structure in the genome. The results show that the proposed model may provide some new insights into the structures and evolution patterns determined from complete genomes. Copyright © 2011 Elsevier Ltd. All rights reserved.
Unwinding the hairball graph: Pruning algorithms for weighted complex networks
NASA Astrophysics Data System (ADS)
Dianati, Navid
2016-01-01
Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
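The marginal likelihood filter lends itself to a compact sketch. The code below is an illustrative approximation rather than the paper's exact derivation (the precise form of the null rate here is an assumption): each integer-weighted edge is scored by a binomial tail probability under a configuration-like null in which each unit of total weight lands on edge (u, v) at a rate proportional to the endpoint strengths.

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed directly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def mlf_pvalues(edges):
    """Marginal likelihood filter sketch.  `edges` maps (u, v) -> integer weight.
    Null model: each of the T weight units independently falls on edge (u, v)
    with probability proportional to s_u * s_v, where s_u is the strength
    (weighted degree) of u.  Smaller p-value = more significant edge."""
    strength = {}
    for (u, v), w in edges.items():
        strength[u] = strength.get(u, 0) + w
        strength[v] = strength.get(v, 0) + w
    T = sum(edges.values())
    pvals = {}
    for (u, v), w in edges.items():
        p_uv = strength[u] * strength[v] / (2 * T * T)  # heuristic null rate (assumption)
        pvals[(u, v)] = binom_sf(w, T, min(p_uv, 1.0))
    return pvals
```

Edges whose weight is improbably large under the null receive small p-values and survive pruning, which is how a sparse, significant backbone is extracted from the "hairball".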
Model validation of simple-graph representations of metabolism
Holme, Petter
2009-01-01
The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
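The two best-performing representations identified above are easy to state concretely. A minimal sketch (function names are ours), where each reaction is a pair of substrate and product tuples:

```python
def substrate_product_graph(reactions):
    """Substrate-product representation: link each substrate to each product."""
    return {(s, p) for subs, prods in reactions for s in subs for p in prods}

def substance_graph(reactions):
    """Substance representation: link every pair of substances co-occurring
    in a reaction, regardless of their role."""
    edges = set()
    for subs, prods in reactions:
        species = sorted(set(subs) | set(prods))
        for i in range(len(species)):
            for j in range(i + 1, len(species)):
                edges.add((species[i], species[j]))
    return edges
```

For the single reaction A + B -> C, the substrate-product graph contains the directed pairs (A, C) and (B, C), while the substance graph links all three species pairwise.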
Graph configuration model based evaluation of the education-occupation match
Gadar, Laszlo; Abonyi, Janos
2018-01-01
To study education-occupation matchings we developed a bipartite network model of the education-to-work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlighted the hierarchical and clustered structure of the career paths based on the multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers in fine-tuning the fragmented program structure of higher education. PMID:29509783
Graphical Language for Data Processing
NASA Technical Reports Server (NTRS)
Alphonso, Keith
2011-01-01
A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
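The interpreted execution model described above can be sketched in a few lines (a toy illustration, not the actual .NET implementation): each node holds a processing function, its input wires point at upstream nodes, and execution traverses the internal representation, caching shared sub-results so a node feeding several wires runs only once.

```python
class Node:
    def __init__(self, func, inputs=()):
        self.func = func      # callable applied to the resolved input values
        self.inputs = inputs  # upstream Node objects ("virtual wires")

def execute(node, cache=None):
    """Interpreted execution: traverse the graph and evaluate each node
    from its internal representation, memoizing shared sub-results."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    args = [execute(n, cache) for n in node.inputs]
    cache[id(node)] = node.func(*args)
    return cache[id(node)]
```

A nested graph fits the same model: a sub-graph's output node simply becomes one `Node` in the enclosing graph.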
Observation of quantum criticality with ultracold atoms in optical lattices
NASA Astrophysics Data System (ADS)
Zhang, Xibo
As biological problems are becoming more complex and data are growing at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two such fields, comparative genomics and epigenomics, and employs a variety of computational techniques to address them. One fundamental question in the study of chromosome evolution is whether rearrangement breakpoints occur at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon and present analyses that support the more recently proposed fragile breakage model over the conventional random breakage model of chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architecture, comparative genomics, and evolutionary genomics. Previous synteny block reconstruction algorithms could not be scaled to the large number of mammalian genomes being sequenced; nor did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and the evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm, based on the A-Bruijn graph framework, that overcomes these shortcomings. In epigenome sequencing, a sample may contain a mixture of epigenomes, and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes and metagenomic sequencing, share a similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both combinatorial and statistical angles. First, we describe a theoretical framework.
A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach and apply the model to the detection of allele-specific methylation.
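As a rough illustration of the flavor of the combinatorial result (a simplification, not the dissertation's algorithm): if reads are modeled as non-nested intervals on the reference and we assume every overlap is a conflict, the minimum number of assemblies reduces to greedy interval partitioning, and the answer equals the maximum read overlap depth.

```python
def min_assemblies(reads):
    """Greedy interval-partitioning sketch.  `reads` are (start, end) intervals,
    assumed non-nested, and assumed to conflict exactly when they overlap.
    Each read (in start order) joins an assembly whose last read has ended,
    else it opens a new assembly."""
    assemblies = []  # end coordinate of the last read in each assembly
    for start, end in sorted(reads):
        for i, last_end in enumerate(assemblies):
            if last_end <= start:   # this assembly can be extended
                assemblies[i] = end
                break
        else:
            assemblies.append(end)  # no assembly free: open a new one
    return len(assemblies)
```

The real problems are harder precisely because overlapping reads may be consistent (and so shareable), and because nesting or mismatches make the question NP-hard, as the abstract notes.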
Unimodular lattice triangulations as small-world and scale-free random graphs
NASA Astrophysics Data System (ADS)
Krüger, B.; Schmidt, E. M.; Mecke, K.
2015-02-01
Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.
Accelerating semantic graph databases on commodity clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Castellana, Vito G.; Haglin, David J.
We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL-to-C++ compiler, a library of parallel graph methods and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.
A software tool for dataflow graph scheduling
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1994-01-01
A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.
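The partial ordering exposed by a dataflow graph feeds directly into a scheduling heuristic. Below is a minimal list-scheduling sketch under assumed inputs (task list, dependency map, per-task costs, processor count); it illustrates the idea, not the tool's actual algorithm.

```python
from collections import deque

def list_schedule(tasks, deps, cost, workers):
    """List-scheduling sketch for a dataflow (precedence) graph: repeatedly
    start any ready task on the earliest-free processor.  `deps` maps each
    task to the tasks whose data it consumes."""
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succs = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            succs[d].append(t)
    done_at = {}
    free_at = [0.0] * workers
    ready = deque(t for t in tasks if indeg[t] == 0)
    while ready:
        t = ready.popleft()
        w = min(range(workers), key=lambda i: free_at[i])
        # a task starts when its processor is free and its inputs are ready
        start = max([free_at[w]] + [done_at[d] for d in deps.get(t, [])])
        done_at[t] = start + cost[t]
        free_at[w] = done_at[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return done_at
```

For repetitive execution, the completion times returned here would feed the throughput and latency analysis the design process is built around.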
2015-09-21
this framework, MIT LL carried out a one-year proof-of-concept study to determine the capabilities and challenges in the detection of anomalies in ... extremely large graphs [5]. Under this effort, two real datasets were considered, and algorithms for data modeling and anomaly detection were developed ... is required in a well-defined experimental framework for the detection of anomalies in very large graphs. This study is intended to inform future
Evolution of a Modified Binomial Random Graph by Agglomeration
NASA Astrophysics Data System (ADS)
Kang, Mihyun; Pachon, Angelica; Rodríguez, Pablo M.
2018-02-01
In the classical Erdős-Rényi random graph G(n, p) there are n vertices and each of the possible edges is independently present with probability p. The random graph G(n, p) is homogeneous in the sense that all vertices have the same characteristics. On the other hand, numerous real-world networks are inhomogeneous in this respect. Such an inhomogeneity of vertices may influence the connection probability between pairs of vertices. The purpose of this paper is to propose a new inhomogeneous random graph model which is obtained in a constructive way from the Erdős-Rényi random graph G(n, p). Given a configuration of n vertices arranged in N subsets of vertices (we call each subset a super-vertex), we define a random graph with N super-vertices by letting two super-vertices be connected if and only if there is at least one edge between them in G(n, p). Our main result concerns the threshold for connectedness. We also analyze the phase transition for the emergence of the giant component and the degree distribution. Even though our model begins with G(n, p), it assumes the existence of some community structure encoded in the configuration. Furthermore, under certain conditions it exhibits a power law degree distribution. Both properties are important for real-world applications.
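The construction is simple to simulate (a sketch; the function name and fixed seed are ours):

```python
import random

def agglomerated_graph(n, p, partition, seed=0):
    """Build G(n, p), then join two super-vertices whenever at least one
    G(n, p) edge runs between their vertex sets.  `partition` is a list of
    disjoint vertex lists, one per super-vertex."""
    rng = random.Random(seed)
    block = {}
    for b, verts in enumerate(partition):
        for v in verts:
            block[v] = b
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p and block[u] != block[v]:
                edges.add((min(block[u], block[v]), max(block[u], block[v])))
    return edges
```

Because a super-vertex pair connects if any of the many underlying dyads is present, large super-vertices acquire high degree, which is the mechanism behind the inhomogeneity and the power-law regime the paper analyzes.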
ERIC Educational Resources Information Center
Beeken, Paul
2014-01-01
Graphing is an essential skill that forms the foundation of any physical science. Understanding the relationships between measurements ultimately determines which modeling equations are successful in predicting observations. Over the years, science and math teachers have approached teaching this skill with a variety of techniques. For secondary…
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
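The exchange move at the heart of such samplers can be sketched on the simplest possible ERGM, whose only sufficient statistic is the edge count: there each dyad is an independent Bernoulli draw, so the auxiliary network can be simulated exactly (the adaptive sampler exists precisely because this exact draw is unavailable for richer models). All names and tuning values below are illustrative, and a flat prior is assumed.

```python
import math, random

def exchange_sampler(y_edges, n, iters=2000, step=0.5, seed=1):
    """Exchange-algorithm sketch for p(y | t) ∝ exp(t * edges(y)) on n nodes.
    Propose t', draw an auxiliary network exactly at t', and accept with
    probability exp((t' - t) * (s(y) - s(y'))) -- the intractable
    normalizing constants cancel in this ratio."""
    rng = random.Random(seed)
    dyads = n * (n - 1) // 2
    t = 0.0
    samples = []
    for _ in range(iters):
        t_new = t + rng.gauss(0, step)
        p_new = 1 / (1 + math.exp(-t_new))
        aux_edges = sum(rng.random() < p_new for _ in range(dyads))  # exact draw
        log_alpha = (t_new - t) * (y_edges - aux_edges)
        if math.log(rng.random() + 1e-300) < log_alpha:
            t = t_new
        samples.append(t)
    return samples
```

For models with dyad dependence the exact auxiliary draw must be replaced by an approximate one, which is the gap the adaptive exchange sampler's parallel auxiliary chain and importance sampling are designed to close.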
Bizhani, Golnoosh; Grassberger, Peter; Paczuski, Maya
2011-12-01
We study the statistical behavior under random sequential renormalization (RSR) of several network models including Erdös-Rényi (ER) graphs, scale-free networks, and an annealed model related to ER graphs. In RSR the network is locally coarse grained by choosing at each renormalization step a node at random and joining it to all its neighbors. Compared to previous (quasi-)parallel renormalization methods [Song et al., Nature (London) 433, 392 (2005)], RSR allows a more fine-grained analysis of the renormalization group (RG) flow and unravels new features that were not discussed in the previous analyses. In particular, we find that all networks exhibit a second-order transition in their RG flow. This phase transition is associated with the emergence of a giant hub and can be viewed as a new variant of percolation, called agglomerative percolation. We claim that this transition exists also in previous graph renormalization schemes and explains some of the scaling behavior seen there. For critical trees it happens as N/N(0) → 0 in the limit of large systems (where N(0) is the initial size of the graph and N its size at a given RSR step). In contrast, it happens at finite N/N(0) in sparse ER graphs and in the annealed model, while it happens for N/N(0) → 1 on scale-free networks. Critical exponents seem to depend on the type of the graph but not on the average degree and obey usual scaling relations for percolation phenomena. For the annealed model they agree with the exponents obtained from a mean-field theory. At late times, the networks exhibit a starlike structure in agreement with the results of Radicchi et al. [Phys. Rev. Lett. 101, 148701 (2008)]. While degree distributions are of main interest when regarding the scheme as network renormalization, mass distributions (which are more relevant when considering "supernodes" as clusters) are much easier to study using the fast Newman-Ziff algorithm for percolation, allowing us to obtain very high statistics.
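A single RSR step is easy to state in code (an illustrative sketch; the supernode bookkeeping conventions are ours):

```python
import random

def rsr_step(adj, rng):
    """One random sequential renormalization step: pick a node uniformly at
    random and merge it with all of its neighbors into one supernode.
    `adj` maps node -> set of neighbors and is modified in place."""
    v = rng.choice(sorted(adj))
    cluster = {v} | adj[v]
    # the supernode keeps v's label and inherits all external edges
    external = set().union(*(adj[u] for u in cluster)) - cluster
    for u in cluster - {v}:
        del adj[u]
    adj[v] = external
    for w in external:
        adj[w] = (adj[w] - cluster) | {v}
    return adj
```

Repeated application drives any connected graph down to a single supernode; the quantities of interest along the way are the degree and supernode-mass distributions as N/N(0) decreases.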
Meyer-Bäse, Anke; Roberts, Rodney G.; Illan, Ignacio A.; Meyer-Bäse, Uwe; Lobbes, Marc; Stadlbauer, Andreas; Pinker-Domenig, Katja
2017-01-01
Neuroimaging in combination with graph theory has been successful in analyzing the functional connectome. However almost all analyses are performed based on static graph theory. The derived quantitative graph measures can only describe a snapshot of the disease at a given time. Neurodegenerative disease evolution is poorly understood and treatment strategies are consequently only of limited efficiency. Fusing modern dynamic graph network theory techniques and modeling strategies at different time scales with pinning observability of complex brain networks will lay the foundation for a transformational paradigm in neurodegenerative diseases research regarding disease evolution at the patient level, treatment response evaluation and revealing some central mechanism in a network that drives alterations in these diseases. We model and analyze brain networks as two-time scale sparse dynamic graph networks with hubs (clusters) representing the fast sub-system and the interconnections between hubs the slow sub-system. Alterations in brain function as seen in dementia can be dynamically modeled by determining the clusters in which disturbance inputs have entered and the impact they have on the large-scale dementia dynamic system. Observing a small fraction of specific nodes in dementia networks such that the others can be recovered is accomplished by the novel concept of pinning observability. In addition, how to control this complex network seems to be crucial in understanding the progressive abnormal neural circuits in many neurodegenerative diseases. Detecting the controlling regions in the networks, which serve as key nodes to control the aberrant dynamics of the networks to a desired state and thus influence the progressive abnormal behavior, will have a huge impact in understanding and developing therapeutic solutions and also will provide useful information about the trajectory of the disease.
In this paper, we present the theoretical framework and derive the necessary conditions for (1) area aggregation and time-scale modeling in brain networks and for (2) pinning observability of nodes in dynamic graph networks. Simulation examples are given to illustrate the theoretical concepts. PMID:29051730
Prototype Vector Machine for Large Scale Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
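The low-rank role of the prototypes is the standard Nystrom idea, which can be sketched with NumPy (an illustration of the approximation principle, not the PVM itself):

```python
import numpy as np

def nystrom(K, idx):
    """Prototype/Nystrom sketch: approximate a full kernel matrix K from its
    columns at the prototype indices `idx`: K ≈ C W⁺ Cᵀ, where C = K[:, idx]
    and W = K[idx, idx]."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```

When K has rank r and the prototype columns span its range the reconstruction is exact; in general the error shrinks as prototypes are added, which is why prototype selection is framed as balancing approximation quality against model size.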
Reactome graph database: Efficient access to complex pathway data
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Homer; Ashok Varikuti; Xinming Ou
Various tools exist to analyze enterprise network systems and to produce attack graphs detailing how attackers might penetrate into the system. These attack graphs, however, are often complex and difficult to comprehend fully, and a human user may find it problematic to reach appropriate configuration decisions. This paper presents methodologies that can 1) automatically identify portions of an attack graph that do not help a user to understand the core security problems and so can be trimmed, and 2) automatically group similar attack steps as virtual nodes in a model of the network topology, to immediately increase the understandability of the data. We believe both methods are important steps toward improving visualization of attack graphs to make them more useful in configuration management for large enterprise networks. We implemented our methods using one of the existing attack-graph toolkits. Initial experimentation shows that the proposed approaches can 1) significantly reduce the complexity of attack graphs by trimming a large portion of the graph that is not needed for a user to understand the security problem, and 2) significantly increase the accessibility and understandability of the data presented in the attack graph by clearly showing, within a generated visualization of the network topology, the number and type of potential attacks to which each host is exposed.
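The grouping step can be sketched as merging nodes that share identical attack context (a simplified stand-in for the paper's method; real attack-graph nodes also carry labels that would enter the signature):

```python
def group_similar_nodes(adj):
    """Group attack-graph nodes with identical predecessor and successor
    sets into one virtual node, shrinking the graph without changing which
    attacks reach which hosts.  `adj` maps node -> set of successors."""
    preds = {v: set() for v in adj}
    for u, outs in adj.items():
        for v in outs:
            preds[v].add(u)
    signature = {}
    for v in adj:
        key = (frozenset(adj[v] - {v}), frozenset(preds[v] - {v}))
        signature.setdefault(key, []).append(v)
    return [sorted(group) for group in signature.values()]
```

Two attack steps that are enabled by the same preconditions and enable the same consequences collapse into one virtual node, which is exactly the redundancy that makes raw attack graphs hard to read.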
Supervoxels for graph cuts-based deformable image registration using guided image filtering
NASA Astrophysics Data System (ADS)
Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.
2017-11-01
We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state of the art methods in continuous and discrete image registration, achieving target registration error of 1.16 mm on average per landmark.
Quantum gravity as an information network: self-organization of a 4D universe
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-10-01
I propose a quantum gravity model in which the fundamental degrees of freedom are information bits for both discrete space-time points and the links connecting them. The Hamiltonian is a very simple network model consisting of a ferromagnetic Ising model for space-time vertices and an antiferromagnetic Ising model for the links. As a result of the frustration between these two terms, the ground state self-organizes as a new type of low-clustering graph with finite Hausdorff dimension 4. The spectral dimension is lower than the Hausdorff dimension: it coincides with the Hausdorff dimension 4 at a first quantum phase transition corresponding to an IR fixed point, while at a second quantum phase transition, which describes small scales, space-time dissolves into disordered information bits. The large-scale dimension 4 of the universe is related to the upper critical dimension 4 of the Ising model. At finite temperatures the universe graph emerges without a big bang and without singularities from a ferromagnetic phase transition in which space-time itself forms out of a hot soup of information bits. When the temperature is lowered, the universe graph unfolds and expands by lowering its connectivity, a mechanism I have called topological expansion. The model admits topological black hole excitations corresponding to graphs containing holes with no space-time inside and with "Schwarzschild-like" horizons with a lower spectral dimension.
Entropy of spatial network ensembles
NASA Astrophysics Data System (ADS)
Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis
2018-04-01
We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
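Since pairwise edges in a soft random geometric graph form independently given the node positions, the conditional entropy of the ensemble decomposes into a sum of binary entropies, one per node pair. A minimal sketch of that decomposition; the Gaussian-decay connection function and the toy point set below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def conditional_graph_entropy(points, conn_fn):
    """Entropy (in bits) of the edge ensemble given fixed node positions:
    each pair links independently with probability p = conn_fn(distance),
    so the total is a sum of per-pair binary entropies h(p)."""
    def h(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0             # deterministic edges carry no entropy
        return float(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p))
    n = len(points)
    return sum(
        h(conn_fn(np.linalg.norm(points[i] - points[j])))
        for i in range(n) for j in range(i + 1, n)
    )

# illustrative soft connection function (Gaussian decay with distance)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
H = conditional_graph_entropy(pts, lambda d: np.exp(-d ** 2))
```

With p = 1/2 for every pair, each pair contributes exactly one bit, which matches the intuition that the maximally entropic ensemble is the one with the least predictable edges.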
Teaching Physics with Basketball
NASA Astrophysics Data System (ADS)
Chanpichai, N.; Wattanakasiwich, P.
2010-07-01
Recently, technology and computers have taken on important roles in learning and teaching, including in physics. Advances in technology can help us better relate the physics taught in the classroom to the real world. In this study, we developed a module on teaching projectile motion through shooting a basketball. Students learned about the physics of projectile motion, and then they took videos of their classmates shooting a basketball using a high-speed camera. They then analyzed the videos using Tracker, a video analysis and modeling tool. While working with Tracker, students learned about the relationships between the three kinematics graphs. Moreover, they learned about real projectile motion (with air resistance) through modeling tools. Students' abilities to interpret kinematics graphs were investigated before and after the instruction by using the Test of Understanding Graphs in Kinematics (TUG-K). The maximum normalized gain or
Graph theory findings in the pathophysiology of temporal lobe epilepsy
Chiang, Sharon; Haneef, Zulfi
2014-01-01
Temporal lobe epilepsy (TLE) is the most common form of adult epilepsy. Accumulating evidence has shown that TLE is a disorder of abnormal epileptogenic networks, rather than focal sources. Graph theory allows for a network-based representation of TLE brain networks, and has potential to illuminate characteristics of brain topology conducive to TLE pathophysiology, including seizure initiation and spread. We review basic concepts which we believe will prove helpful in interpreting results rapidly emerging from graph theory research in TLE. In addition, we summarize the current state of graph theory findings in TLE as they pertain to its pathophysiology. Several common findings have emerged from the many modalities that have been used to study TLE using graph theory, including structural MRI, diffusion tensor imaging, surface EEG, intracranial EEG, magnetoencephalography, functional MRI, cell cultures, simulated models, and mouse models: increased regularity of the interictal network configuration, altered local segregation and global integration of the TLE network, and network reorganization of temporal lobe and limbic structures. As different modalities provide different views of the same phenomenon, future studies integrating data from multiple modalities are needed to clarify findings and contribute to the formation of a coherent theory on the pathophysiology of TLE. PMID:24831083
AGM: A DSL for mobile cloud computing based on directed graph
NASA Astrophysics Data System (ADS)
Tanković, Nikola; Grbac, Tihana Galinac
2016-06-01
This paper summarizes a novel approach for consuming a domain specific language (DSL) by transforming it to a directed graph representation persisted by a graph database. Using such a specialized database enables advanced navigation through the stored model, exposing only relevant subsets of metadata to the different involved services and components. We applied this approach in a mobile cloud computing system and used it to model several mobile applications in the retail, supply chain management and merchandising domains. These applications are distributed in a Software-as-a-Service (SaaS) fashion and used by thousands of customers in Croatia. We report on lessons learned and propose further research on this topic.
Nelson, Carl A; Miller, David J; Oleynikov, Dmitry
2008-01-01
As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
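The approach above can be sketched as follows: treat each tool as a graph vertex, weight each edge by the observed tool-change frequency, and pick the cyclic queue order that keeps frequently swapped tools adjacent. The brute-force solver and the toy tool names and swap counts below are illustrative assumptions (the paper does not specify its TSP solver), workable only for small tool sets:

```python
from itertools import permutations

def best_tool_order(tools, change_freq):
    """Brute-force search over cyclic tool orders, maximizing the total
    change frequency between adjacent queue slots (small tool sets only;
    fixing the first tool removes rotational symmetry)."""
    first, rest = tools[0], tools[1:]
    best_order, best_score = None, float("-inf")
    for perm in permutations(rest):
        order = (first,) + perm
        score = sum(
            change_freq.get(frozenset({order[i], order[(i + 1) % len(order)]}), 0)
            for i in range(len(order))
        )
        if score > best_score:
            best_order, best_score = order, score
    return list(best_order), best_score

# hypothetical tool names and swap counts, for illustration only
freq = {frozenset({"grasper", "scissors"}): 9,
        frozenset({"scissors", "cautery"}): 7,
        frozenset({"grasper", "cautery"}): 1,
        frozenset({"grasper", "stapler"}): 4,
        frozenset({"scissors", "stapler"}): 1,
        frozenset({"cautery", "stapler"}): 2}
order, score = best_tool_order(["grasper", "scissors", "cautery", "stapler"], freq)
```

For realistically sized tool sets one would swap the factorial search for a proper TSP heuristic; the graph formulation stays the same.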
ERIC Educational Resources Information Center
Pennington, Robert; Koehler, Mallory
2017-01-01
There is limited research on teaching narrative writing skills to students with moderate to severe intellectual disability. In the current study, we used a multiple probe across participants single case design to evaluate the effects of an intervention package comprised of modeling, story templates, and self-graphing, on the inclusion of story…
Graphs to estimate an individualized risk of breast cancer.
Benichou, J; Gail, M H; Mulvihill, J J
1996-01-01
Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a woman's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and their proper use.
Bounded-Degree Approximations of Stochastic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Architecture Aware Partitioning Algorithms
2006-01-19
follows: Given a graph G = (V, E), where V is the set of vertices, n = |V| is the number of vertices, and E is the set of edges in the graph, partition the... communication link l(p_i, p_j) is associated with a graph edge weight e*(p_i, p_j) that represents the communication cost per unit of communication between... one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e*(p_i, p_j
Synchronizability of random rectangular graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded into hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
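A minimal numerical sketch of the quantity studied here: sample a random rectangular graph on a unit-area rectangle and compute the Laplacian eigenratio lambda_N / lambda_2, which grows as synchronization gets harder. The node count, connection radius, and unit-area normalization are assumptions for illustration:

```python
import numpy as np

def rrg_eigenratio(n, a, r, seed=0):
    """Sample a random rectangular graph: n nodes uniform on the
    unit-area rectangle [0, a] x [0, 1/a], connected when closer than r.
    Returns the Laplacian eigenratio lambda_N / lambda_2; larger values
    mean the network is harder to synchronize."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2)) * np.array([a, 1.0 / a])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = (d <= r) & ~np.eye(n, dtype=bool)
    lap = np.diag(adj.sum(1)) - adj.astype(float)
    eig = np.sort(np.linalg.eigvalsh(lap))
    if eig[1] < 1e-9:              # disconnected graph: eigenratio undefined
        return float("inf")
    return float(eig[-1] / eig[1])

# elongating the rectangle (larger a) tends to raise the eigenratio
square = rrg_eigenratio(200, 1.0, 0.2)
elongated = rrg_eigenratio(200, 2.0, 0.2)
```

Averaging the ratio over many seeds per aspect ratio would reproduce the qualitative trend proven in the paper.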
Analysis of quantum error correction with symmetric hypergraph states
NASA Astrophysics Data System (ADS)
Wagner, T.; Kampermann, H.; Bruß, D.
2018-03-01
Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.
GraphMeta: Managing HPC Rich Metadata in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Dong; Chen, Yong; Carns, Philip
High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This 'rich' metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
Bounds for percolation thresholds on directed and undirected graphs
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen; Pryadko, Leonid
2015-03-01
Percolation theory is an efficient approach to problems with strong disorder, e.g., in quantum or classical transport, composite materials, and diluted magnets. Recently, the growing role of big data in scientific and industrial applications has led to a renewed interest in graph theory as a tool for describing complex connections in various kinds of networks: social, biological, technological, etc. In particular, percolation on graphs has been used to describe internet stability, spread of contagious diseases and computer viruses; related models describe market crashes and viral spread in social networks. We consider site-dependent percolation on directed and undirected graphs, and present several exact bounds for the location of the percolation transition in terms of the eigenvalues of matrices associated with graphs, including the adjacency matrix and the Hashimoto matrix used to enumerate non-backtracking walks. These bounds correspond to a mean field approximation and become asymptotically exact for graphs with no short cycles. We illustrate this convergence numerically by simulating percolation on several families of graphs with different cycle lengths. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.
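The spectral bounds mentioned above can be illustrated on a small graph: a mean-field estimate places the percolation threshold at no less than the inverse spectral radius of the adjacency matrix, and the Hashimoto (non-backtracking) matrix yields a generally sharper bound. A sketch, assuming the standard construction of the Hashimoto matrix on directed edges (the dense double loop is for clarity, not efficiency):

```python
import numpy as np

def percolation_bounds(adj):
    """Spectral lower bounds on the percolation threshold:
    p_c >= 1/rho(A) from the adjacency matrix A, and the generally
    sharper p_c >= 1/rho(B) from the Hashimoto (non-backtracking)
    matrix B defined on directed edges."""
    n = adj.shape[0]
    edges = [(u, v) for u in range(n) for v in range(n) if adj[u, v]]
    m = len(edges)
    B = np.zeros((m, m))
    for i, (u, v) in enumerate(edges):
        for j, (v2, w) in enumerate(edges):
            if v2 == v and w != u:   # walk continues at v without backtracking
                B[i, j] = 1.0
    rho_a = max(abs(np.linalg.eigvals(adj)))
    rho_b = max(abs(np.linalg.eigvals(B)))
    return 1.0 / rho_a, 1.0 / rho_b

# K4 is 3-regular: rho(A) = 3 and rho(B) = 2, so the bounds are 1/3 and 1/2
K4 = np.ones((4, 4)) - np.eye(4)
adj_bound, hashimoto_bound = percolation_bounds(K4)
```

On a d-regular graph rho(B) = d - 1, which is why the non-backtracking bound is exact on trees and asymptotically exact on graphs with no short cycles.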
The Replicator Equation on Graphs
Ohtsuki, Hisashi; Nowak, Martin A.
2008-01-01
We study evolutionary games on graphs. Each player is represented by a vertex of the graph. The edges denote who meets whom. A player can use any one of n strategies. Players obtain a payoff from interaction with all their immediate neighbors. We consider three different update rules, called ‘birth-death’, ‘death-birth’ and ‘imitation’. A fourth update rule, ‘pairwise comparison’, is shown to be equivalent to birth-death updating in our model. We use pair-approximation to describe the evolutionary game dynamics on regular graphs of degree k. In the limit of weak selection, we can derive a differential equation which describes how the average frequency of each strategy on the graph changes over time. Remarkably, this equation is a replicator equation with a transformed payoff matrix. Therefore, moving a game from a well-mixed population (the complete graph) onto a regular graph simply results in a transformation of the payoff matrix. The new payoff matrix is the sum of the original payoff matrix plus another matrix, which describes the local competition of strategies. We discuss the application of our theory to four particular examples, the Prisoner’s Dilemma, the Snow-Drift game, a coordination game and the Rock-Scissors-Paper game. PMID:16860343
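The resulting dynamics is an ordinary replicator equation with the transformed payoff matrix A' = A + B; the local-competition matrix B depends on the update rule and the degree k and is taken here as a given input (see the paper for its exact form per rule). A minimal Euler-integration sketch with an illustrative 2-strategy payoff matrix:

```python
import numpy as np

def replicator_step(x, A_prime, dt=0.01):
    """One Euler step of the replicator equation
    dx_i/dt = x_i * ((A'x)_i - x.T A' x), with A' = A + B the
    transformed payoff matrix from the graph structure."""
    f = A_prime @ x            # fitness of each strategy
    phi = x @ f                # population-average fitness
    x = x + dt * x * (f - phi)
    return x / x.sum()         # renormalize against numerical drift

# illustrative 2-strategy game in which strategy 2 strictly dominates
A_prime = np.array([[3.0, 0.0], [5.0, 1.0]])
x = np.array([0.9, 0.1])
for _ in range(2000):
    x = replicator_step(x, A_prime)
```

Because strategy 2 strictly dominates in this toy matrix, the frequency vector converges to (0, 1) regardless of the (interior) initial condition.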
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
Automated visualization of rule-based models
Tapia, Jose-Juan; Faeder, James R.
2017-01-01
Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816
Topology polymorphism graph for lung tumor segmentation in PET-CT images.
Cui, Hui; Wang, Xiuying; Zhou, Jianlong; Eberl, Stefan; Yin, Yong; Feng, Dagan; Fulham, Michael
2015-06-21
Accurate lung tumor segmentation is problematic when the tumor boundary or edge, which reflects the advancing edge of the tumor, is difficult to discern on chest CT or PET. We propose a 'topo-poly' graph model to improve identification of the tumor extent. Our model incorporates an intensity graph and a topology graph. The intensity graph provides the joint PET-CT foreground similarity to differentiate the tumor from surrounding tissues. The topology graph is defined on the basis of contour tree to reflect the inclusion and exclusion relationship of regions. By taking into account different topology relations, the edges in our model exhibit topological polymorphism. These polymorphic edges in turn affect the energy cost when crossing different topology regions under a random walk framework, and hence contribute to appropriate tumor delineation. We validated our method on 40 patients with non-small cell lung cancer where the tumors were manually delineated by a clinical expert. The studies were separated into an 'isolated' group (n = 20) where the lung tumor was located in the lung parenchyma and away from associated structures / tissues in the thorax and a 'complex' group (n = 20) where the tumor abutted / involved a variety of adjacent structures and had heterogeneous FDG uptake. The methods were validated using Dice's similarity coefficient (DSC) to measure the spatial volume overlap and Hausdorff distance (HD) to compare shape similarity calculated as the maximum surface distance between the segmentation results and the manual delineations. Our method achieved an average DSC of 0.881 ± 0.046 and HD of 5.311 ± 3.022 mm for the isolated cases and DSC of 0.870 ± 0.038 and HD of 9.370 ± 3.169 mm for the complex cases. Student's t-test showed that our model outperformed the other methods (p-values <0.05).
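The two validation metrics used above are straightforward to compute from binary masks and surface point sets. A minimal sketch with toy 2D inputs (the paper of course evaluates 3D volumes):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient of two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets, e.g. the
    surface voxels of an automatic and a manual delineation."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

DSC measures volume overlap while HD penalizes the single worst surface disagreement, which is why the two are reported together.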
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
2011-01-01
Background Biology is rapidly becoming a data intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allow both automated integration and data-driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web, make it possible to deal with complicated heterogeneous data in new and interesting ways. Results This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created a RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions We provided an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and resulting multi-relational directed graph define what is possible and then set the scope of the user requirements.
Approaches like SDDM will be critical to the future of data intensive, data-driven science because they automate the process of converting massive data streams into usable knowledge. PMID:22165854
Signalling Network Construction for Modelling Plant Defence Response
Miljkovic, Dragana; Stare, Tjaša; Mozetič, Igor; Podpečan, Vid; Petek, Marko; Witek, Kamil; Dermastia, Marina; Lavrač, Nada; Gruden, Kristina
2012-01-01
Plant defence signalling response against various pathogens, including viruses, is a complex phenomenon. In resistant interaction a plant cell perceives the pathogen signal, transduces it within the cell and performs a reprogramming of the cell metabolism leading to the pathogen replication arrest. This work focuses on signalling pathways crucial for the plant defence response, i.e., the salicylic acid, jasmonic acid and ethylene signal transduction pathways, in the Arabidopsis thaliana model plant. The initial signalling network topology was constructed manually by defining the representation formalism, encoding the information from public databases and literature, and composing a pathway diagram. The manually constructed network structure consists of 175 components and 387 reactions. In order to complement the network topology with possibly missing relations, a new approach to automated information extraction from biological literature was developed. This approach, named Bio3graph, allows for automated extraction of biological relations from the literature, resulting in a set of (component1, reaction, component2) triplets and composing a graph structure which can be visualised, compared to the manually constructed topology and examined by the experts. Using a plant defence response vocabulary of components and reaction types, Bio3graph was applied to a set of 9,586 relevant full text articles, resulting in 137 newly detected reactions between the components. Finally, the manually constructed topology and the new reactions were merged to form a network structure consisting of 175 components and 524 reactions. The resulting pathway diagram of plant defence signalling represents a valuable source for further computational modelling and interpretation of omics data. 
The developed Bio3graph approach, implemented as an executable language processing and graph visualisation workflow, is publicly available at http://ropot.ijs.si/bio3graph/ and can be utilised for modelling other biological systems, given that an adequate vocabulary is provided. PMID:23272172
Assessment of tautomer distribution using the condensed reaction graph approach
NASA Astrophysics Data System (ADS)
Gimadiev, T. R.; Madzhidov, T. I.; Nugmanov, R. I.; Baskin, I. I.; Antipin, I. S.; Varnek, A.
2018-03-01
We report the first direct QSPR modeling of equilibrium constants of tautomeric transformations (log K_T) in different solvents and at different temperatures, which does not require intermediate assessment of acidity (basicity) constants for all tautomeric forms. The key step of the modeling consisted in merging two tautomers into a single molecular graph (a "condensed reaction graph"), which enables the computation of molecular descriptors characterizing the entire equilibrium. The support vector regression method was used to build the models. The training set consisted of 785 transformations belonging to 11 types of tautomeric reactions, with equilibrium constants measured in different solvents and at different temperatures. The models obtained perform well both in cross-validation (Q^2 = 0.81, RMSE = 0.7 log K_T units) and on two external test sets. Benchmarking studies demonstrate that our models outperform results obtained with DFT B3LYP/6-311++G(d,p) and the ChemAxon Tautomerizer, applicable only in water at room temperature.
Modeling and optimum time performance for concurrent processing
NASA Technical Reports Server (NTRS)
Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy
1988-01-01
The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.
Localisation in a Growth Model with Interaction
NASA Astrophysics Data System (ADS)
Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.
2018-05-01
This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
Voss, Frank D.; Mastin, Mark C.
2012-01-01
A database was developed to automate model execution and to provide users with Internet access to voluminous data products ranging from summary figures to model output time series. Database-enabled Internet tools were developed to allow users to create interactive graphs of output results based on their analysis needs. For example, users were able to create graphs by selecting time intervals, greenhouse gas emission scenarios, general circulation models, and specific hydrologic variables.
Topological structure of dictionary graphs
NASA Astrophysics Data System (ADS)
Fukś, Henryk; Krzemiński, Mark
2009-09-01
We investigate the topological structure of the subgraphs of dictionary graphs constructed from WordNet and Moby thesaurus data. In the process of learning a foreign language, the learner knows only a subset of all words of the language, corresponding to a subgraph of a dictionary graph. When this subgraph grows with time, its topological properties change. We introduce the notion of the pseudocore and argue that the growth of the vocabulary roughly follows decreasing pseudocore numbers: one first learns words with a high pseudocore number, followed by smaller pseudocores. We also propose an alternative strategy for vocabulary growth, involving decreasing core numbers as opposed to pseudocore numbers. We find that as the core or pseudocore grows in size, the clustering coefficient first decreases, then reaches a minimum and starts increasing again. The minimum occurs when the vocabulary reaches a size between 10^3 and 10^4. A simple model exhibiting similar behavior is proposed. The model is based on a generalized geometric random graph. Possible implications for language learning are discussed.
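Core numbers and the clustering coefficient, the two quantities tracked in this study, can be computed directly by iterative min-degree peeling and neighbor-pair counting. A small self-contained sketch (the pseudocore variant defined in the paper is not reproduced here):

```python
import numpy as np

def core_numbers(adj):
    """Core number of each vertex: the largest k such that the vertex
    belongs to the k-core, found by repeatedly peeling a minimum-degree
    vertex and recording the running maximum of removal degrees."""
    n = adj.shape[0]
    deg = adj.sum(1).astype(int)
    alive = np.ones(n, dtype=bool)
    core = np.zeros(n, dtype=int)
    k = 0
    for _ in range(n):
        cand = np.where(alive)[0]
        v = cand[np.argmin(deg[cand])]
        k = max(k, deg[v])
        core[v] = k
        alive[v] = False
        deg[adj[v].astype(bool) & alive] -= 1
    return core

def avg_clustering(adj):
    """Mean local clustering coefficient: the fraction of each vertex's
    neighbor pairs that are themselves connected, averaged over vertices."""
    n = adj.shape[0]
    cs = []
    for v in range(n):
        nb = np.where(adj[v])[0]
        d = len(nb)
        if d < 2:
            cs.append(0.0)
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2   # edges among the neighbors
        cs.append(2.0 * links / (d * (d - 1)))
    return float(np.mean(cs))
```

On a triangle with one pendant vertex, the triangle vertices get core number 2 and the pendant gets 1, matching the peeling intuition.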
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
2017-03-21
We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
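As background for the kernel machinery this abstract describes, a truncated geometric random walk kernel between two small adjacency matrices can be sketched in pure Python. This is the generic product-graph formulation, not the authors' localized GRAPE variant; the decay weight `lam` and truncation depth `steps` are illustrative parameters:

```python
def kron(A, B):
    """Kronecker product of two square matrices given as lists of lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def random_walk_kernel(A1, A2, lam=0.1, steps=20):
    """Truncated geometric random walk kernel:
    sum_k lam^k * 1^T W^k 1, where W is the product-graph adjacency."""
    W = kron(A1, A2)
    N = len(W)
    x = [1.0] * N          # W^0 applied to the all-ones vector
    total = sum(x)         # k = 0 term
    coef = 1.0
    for _ in range(steps):
        x = [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]
        coef *= lam
        total += coef * sum(x)
    return total
```

Counting walks on the product graph is what makes the kernel invariant to how the atoms in each environment are numbered: any permutation of same-species atoms yields the same walk counts.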
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Joslyn, Cliff A.; Chappell, Alan R.
As semantic datasets grow to be very large and divergent, there is a need to identify and exploit their inherent semantic structure for discovery and optimization. Towards that end, we present here a novel methodology to identify the semantic structures inherent in an arbitrary semantic graph dataset. We first present the concept of an extant ontology as a statistical description of the semantic relations present amongst the typed entities modeled in the graph. This serves as a model of the underlying semantic structure to aid in discovery and visualization. We then describe a method of ontological scaling in which the ontology is employed as a hierarchical scaling filter to infer different resolution levels at which the graph structures are to be viewed or analyzed. We illustrate these methods on three large and publicly available semantic datasets containing more than one billion edges each. Keywords: Semantic Web; Visualization; Ontology; Multi-resolution Data Mining.
Exploring the evolution of London's street network in the information space: A dual approach
NASA Astrophysics Data System (ADS)
Masucci, A. Paolo; Stanilov, Kiril; Batty, Michael
2014-01-01
We study the growth of London's street network in its dual representation, as the city has evolved over the past 224 years. The dual representation of a planar graph is a content-based network, where each node is a set of edges of the planar graph and represents a transportation unit in the so-called information space, i.e., the space where information is handled in order to navigate through the city. First, we discuss a novel hybrid technique to extract dual graphs from planar graphs, called the hierarchical intersection continuity negotiation principle. Then we show that the growth of the network can be analytically described by logistic laws and that the topological properties of the network are governed by robust log-normal distributions characterizing the network's connectivity and small-world properties that are consistent over time. Moreover, we find that the double-Pareto-like distributions for the connectivity emerge for major roads and can be modeled via a stochastic content-based network model using simple space-filling principles.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom
2018-07-01
A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum with built-in spectrophotometer software and no specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), has partially overlapping spectra and is formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), has completely overlapping spectra and is formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures was achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method offers significant advantages, such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is suitable for the assay of these drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.
Stability and dynamical properties of material flow systems on random networks
NASA Astrophysics Data System (ADS)
Anand, K.; Galla, T.
2009-04-01
The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
A preliminary study on atrial epicardial mapping signals based on Graph Theory.
Sun, Liqian; Yang, Cuiwei; Zhang, Lin; Chen, Ying; Wu, Zhong; Shao, Jun
2014-07-01
In order to get a better understanding of atrial fibrillation, we introduced a method based on Graph Theory to interpret the relations of different parts of the atria. Atrial electrograms under sinus rhythm and atrial fibrillation were collected from eight living mongrel dogs with a cholinergic AF model. These epicardial signals were acquired from 95 unipolar electrodes attached to the surface of the atria and four pulmonary veins. Then, we analyzed the electrode correlations using Graph Theory. The topology, the connectivity and the parameters of graphs during different rhythms were studied. Our results showed that the connectivity of graphs varied from sinus rhythm to atrial fibrillation and there were parameter gradients in various parts of the atria. The results provide spatial insight into the interaction between different parts of the atria and the method may have its potential for studying atrial fibrillation. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Projected power iteration for network alignment
NASA Astrophysics Data System (ADS)
Onaran, Efe; Villar, Soledad
2017-08-01
The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges are matched. This problem appears in many scientific problems (like the study of protein-protein interactions) and it is very closely related to the quadratic assignment problem, which has graph isomorphism, traveling salesman and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, recent work by Feizi and collaborators introduced EigenAlign, a fast spectral method with convergence guarantees for Erdős-Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We numerically show it improves the recovery rates of EigenAlign and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
Chasin, Rachel; Rumshisky, Anna; Uzuner, Ozlem; Szolovits, Peter
2014-01-01
Objective To evaluate state-of-the-art unsupervised methods on the word sense disambiguation (WSD) task in the clinical domain. In particular, to compare graph-based approaches relying on a clinical knowledge base with bottom-up topic-modeling-based approaches. We investigate several enhancements to the topic-modeling techniques that use domain-specific knowledge sources. Materials and methods The graph-based methods use variations of PageRank and distance-based similarity metrics, operating over the Unified Medical Language System (UMLS). Topic-modeling methods use unlabeled data from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC II) database to derive models for each ambiguous word. We investigate the impact of using different linguistic features for topic models, including UMLS-based and syntactic features. We use a sense-tagged clinical dataset from the Mayo Clinic for evaluation. Results The topic-modeling methods achieve 66.9% accuracy on a subset of the Mayo Clinic's data, while the graph-based methods only reach the 40–50% range, with a most-frequent-sense baseline of 56.5%. Features derived from the UMLS semantic type and concept hierarchies do not produce a gain over bag-of-words features in the topic models, but identifying phrases from UMLS and using syntax does help. Discussion Although topic models outperform graph-based methods, semantic features derived from the UMLS prove too noisy to improve performance beyond bag-of-words. Conclusions Topic modeling for WSD provides superior results in the clinical domain; however, integration of knowledge remains to be effectively exploited. PMID:24441986
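The graph-based WSD methods above rely on variations of PageRank over the UMLS graph. As a generic illustration (not the paper's UMLS-specific personalized variant), plain power-iteration PageRank over a directed graph can be sketched as:

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank.

    adj: dict node -> list of successor nodes (directed graph).
    Returns a dict node -> rank; ranks sum to 1.
    """
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:
                share = damping * rank[v] / len(out)
                for u in out:
                    new[u] += share
            else:
                # Dangling node: redistribute its mass uniformly.
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank
```

In a WSD setting the nodes would be candidate senses and related concepts, and the highest-ranked sense reachable from the context would be chosen.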
Entropy, complexity, and Markov diagrams for random walk cancer models
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-01-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix for each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
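The two information measures named above have short definitional implementations; a minimal sketch for discrete distributions given as probability lists (bits, i.e. base-2 logarithms):

```python
from math import log2

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log2 p_i, skipping zero entries."""
    return -sum(x * log2(x) for x in p if x > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) = sum p_i log2(p_i / q_i).

    Assumes q_i > 0 wherever p_i > 0 (absolute continuity).
    """
    return sum(x * log2(x / y) for x, y in zip(p, q) if x > 0)
```

Here a cancer with metastatic tumors spread evenly over many anatomical sites would have high entropy, while one concentrated at a few sites would have low entropy.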
GeoSciGraph: An Ontological Framework for EarthCube Semantic Infrastructure
NASA Astrophysics Data System (ADS)
Gupta, A.; Schachne, A.; Condit, C.; Valentine, D.; Richard, S.; Zaslavsky, I.
2015-12-01
The CINERGI (Community Inventory of EarthCube Resources for Geosciences Interoperability) project compiles an inventory of a wide variety of earth science resources including documents, catalogs, vocabularies, data models, data services, process models, information repositories, domain-specific ontologies etc. developed by research groups and data practitioners. We have developed a multidisciplinary semantic framework called GeoSciGraph for semantic integration of earth science resources. An integrated ontology is constructed with Basic Formal Ontology (BFO) as its upper ontology and currently ingests multiple component ontologies including the SWEET ontology, GeoSciML's lithology ontology, the Tematres controlled vocabulary server, GeoNames, GCMD vocabularies on equipment, platforms and institutions, a software ontology, the CUAHSI hydrology vocabulary, the environmental ontology (ENVO) and several more. These ontologies are connected through bridging axioms; GeoSciGraph identifies lexically close terms and creates equivalence class or subclass relationships between them after human verification. GeoSciGraph allows a community to create community-specific customizations of the integrated ontology. GeoSciGraph uses Neo4j, a graph database that can hold several billion concepts and relationships. GeoSciGraph provides a number of REST services that can be called by other software modules like the CINERGI information augmentation pipeline. 1) Vocabulary services are used to find exact and approximate terms, term categories (community-provided clusters of terms, e.g., measurement-related terms or environmental-material-related terms), synonyms, term definitions and annotations. 2) Lexical services are used for text parsing to find entities, which can then be included into the ontology by a domain expert.
3) Graph services provide the ability to perform traversal centric operations e.g., finding paths and neighborhoods which can be used to perform ontological operations like computing transitive closure (e.g., finding all subclasses of rocks). 4) Annotation services are used to adorn an arbitrary block of text (e.g., from a NOAA catalog record) with ontology terms. The system has been used to ontologically integrate diverse sources like Science-base, NOAA records, PETDB.
NASA Astrophysics Data System (ADS)
Debnath, Lokenath
2010-09-01
This article is essentially devoted to a brief historical introduction to Euler's formula for polyhedra, topology, theory of graphs and networks, with many examples from the real world. The celebrated Königsberg seven-bridge problem and some of the basic properties of graphs and networks, useful for understanding the macroscopic behaviour of real physical systems, are included. We also mention some important and modern applications of graph theory or network problems, from transportation to telecommunications. Graphs or networks are effectively used as powerful tools in industrial, electrical and civil engineering, and in communication networks in the planning of business and industry. Graph theory and combinatorics can be used to understand the changes that occur in many large and complex scientific, technical and medical systems. With the advent of fast large computers and the ubiquitous Internet consisting of a very large network of computers, large-scale complex optimization problems can be modelled in terms of graphs or networks and then solved by algorithms available in graph theory. Many large and more complex combinatorial problems dealing with the possible arrangements of situations of various kinds, and computing the number and properties of such arrangements, can be formulated in terms of networks. The Knight's tour problem, Hamilton's tour problem, the problem of magic squares, the Euler Graeco-Latin squares problem and their modern developments in the twentieth century are also included.
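Euler's polyhedron formula mentioned above states that V − E + F = 2 for any convex polyhedron, where V, E and F count vertices, edges and faces. A quick check over the five Platonic solids (counts are standard):

```python
def euler_characteristic(V, E, F):
    """Euler characteristic V - E + F; equals 2 for convex polyhedra."""
    return V - E + F

# Known (V, E, F) counts for the five Platonic solids.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
```

The same invariant, applied to connected planar graphs (with the unbounded face counted), underlies many of the network results the article surveys.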
Using Graph Indices for the Analysis and Comparison of Chemical Datasets.
Fourches, Denis; Tropsha, Alexander
2013-10-01
In cheminformatics, compounds are represented as points in multidimensional space of chemical descriptors. When all pairs of points found within certain distance threshold in the original high dimensional chemistry space are connected by distance-labeled edges, the resulting data structure can be defined as Dataset Graph (DG). We show that, similarly to the conventional description of organic molecules, many graph indices can be computed for DGs as well. We demonstrate that chemical datasets can be effectively characterized and compared by computing simple graph indices such as the average vertex degree or Randic connectivity index. This approach is used to characterize and quantify the similarity between different datasets or subsets of the same dataset (e.g., training, test, and external validation sets used in QSAR modeling). The freely available ADDAGRA program has been implemented to build and visualize DGs. The approach proposed and discussed in this report could be further explored and utilized for different cheminformatics applications such as dataset diversification by acquiring external compounds, dataset processing prior to QSAR modeling, or (dis)similarity modeling of multiple datasets studied in chemical genomics applications. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
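The two graph indices named above have direct definitions; a minimal sketch for an undirected Dataset Graph stored as a dict of neighbour sets (edge distance labels are omitted here, since neither index uses them):

```python
def average_degree(adj):
    """Average vertex degree of an undirected graph."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def randic_index(adj):
    """Randic connectivity index: sum over edges {u, v} of
    1 / sqrt(deg(u) * deg(v)), each edge counted once."""
    seen = set()
    total = 0.0
    for u, nbrs in adj.items():
        for v in nbrs:
            edge = frozenset((u, v))
            if edge not in seen:
                seen.add(edge)
                total += (len(adj[u]) * len(adj[v])) ** -0.5
    return total
```

Comparing such scalar indices between the DGs of a training set and an external validation set gives a cheap, descriptor-space-aware similarity check between datasets.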
Heuristic-driven graph wavelet modeling of complex terrain
NASA Astrophysics Data System (ADS)
Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François
2015-03-01
We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested using two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.
NASA Astrophysics Data System (ADS)
Jin, Hao; Xu, Rui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying
2017-10-01
To support China's Mars exploration missions, automated mission planning is required to enhance the security and robustness of deep space probes. Deep space mission planning requires modeling of complex operational constraints and focuses on the temporal state transitions of the involved subsystems. State transitions are ubiquitous in physical systems, but have been elusive for knowledge description. We introduce a modeling approach that copes with these difficulties by taking state transitions into consideration. The key technique we build on is the notion of extended states and state transition graphs. Furthermore, a heuristic based on state transition graphs is proposed to avoid redundant work. Finally, we run comprehensive experiments on selected domains and our techniques demonstrate excellent performance.
Dynamic graph cuts for efficient inference in Markov Random Fields.
Kohli, Pushmeet; Torr, Philip H S
2007-12-01
In this paper we present a fast new fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. Its running time is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. It should be noted that the application of our algorithm is not limited to the above problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
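The dynamic algorithm above reuses a previous max-flow solution when edge weights change; as background, the static st-max-flow problem it speeds up can be sketched with a minimal Edmonds-Karp implementation (BFS augmenting paths on a capacity matrix; this is generic background, not the authors' dynamic algorithm):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow from s to t on a capacity matrix."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            v = queue.popleft()
            for u in range(n):
                if parent[u] == -1 and capacity[v][u] - flow[v][u] > 0:
                    parent[u] = v
                    queue.append(u)
        if parent[t] == -1:          # no augmenting path: flow is maximum
            return total
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck = float('inf')
        u = t
        while u != s:
            v = parent[u]
            bottleneck = min(bottleneck, capacity[v][u] - flow[v][u])
            u = v
        u = t
        while u != s:
            v = parent[u]
            flow[v][u] += bottleneck
            flow[u][v] -= bottleneck  # residual (reverse) capacity
            u = v
        total += bottleneck
```

In the segmentation setting, nodes are pixels plus two terminals, and by max-flow/min-cut duality the resulting minimum cut is the MAP object/background labeling.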
Exact numerical calculation of fixation probability and time on graphs.
Hindersin, Laura; Möller, Marius; Traulsen, Arne; Bauer, Benedikt
2016-12-01
The Moran process on graphs is a popular model to study the dynamics of evolution in a spatially structured population. Exact analytical solutions for the fixation probability and time of a new mutant have been found for only a few classes of graphs so far. Simulations are time-expensive and many realizations are necessary, as the variance of the fixation times is high. We present an algorithm that numerically computes these quantities for arbitrary small graphs by an approach based on the transition matrix. The advantage over simulations is that the calculation has to be executed only once. Building the transition matrix is automated by our algorithm. This enables a fast and interactive study of different graph structures and their effect on fixation probability and time. We provide a fast implementation in C with this note (Hindersin et al., 2016). Our code is very flexible, as it can handle two different update mechanisms (Birth-death or death-Birth), as well as arbitrary directed or undirected graphs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
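The abstract's point is that arbitrary graphs require a numerical transition-matrix computation; one of the few cases with a known closed form is the well-mixed population (complete graph) under Birth-death updating, which can serve as a sanity check for such code. A sketch of that classical formula:

```python
def moran_fixation_probability(r, N):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed Moran population of size N (Birth-death updating).

    Classical result: rho = (1 - 1/r) / (1 - 1/r^N), with the neutral
    limit rho = 1/N when r = 1.
    """
    if r == 1.0:
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)
```

A transition-matrix solver of the kind the authors provide should reproduce these values on the complete graph before being trusted on irregular structures.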
Graph partitions and cluster synchronization in networks of oscillators
Schaub, Michael T.; O’Clery, Neave; Billeh, Yazan N.; Delvenne, Jean-Charles; Lambiotte, Renaud; Barahona, Mauricio
2017-01-01
Synchronization over networks depends strongly on the structure of the coupling between the oscillators. When the coupling presents certain regularities, the dynamics can be coarse-grained into clusters by means of External Equitable Partitions of the network graph and their associated quotient graphs. We exploit this graph-theoretical concept to study the phenomenon of cluster synchronization, in which different groups of nodes converge to distinct behaviors. We derive conditions and properties of networks in which such clustered behavior emerges, and show that the ensuing dynamics is the result of the localization of the eigenvectors of the associated graph Laplacians linked to the existence of invariant subspaces. The framework is applied to both linear and non-linear models, first for the standard case of networks with positive edges, before being generalized to the case of signed networks with both positive and negative interactions. We illustrate our results with examples of both signed and unsigned graphs for consensus dynamics and for partial synchronization of oscillator networks under the master stability function as well as Kuramoto oscillators. PMID:27781454
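The coarse-graining step above maps the network to a quotient graph over the cells of a partition; a minimal sketch of the quotient matrix computation (averaging is an illustrative convention; for an external equitable partition every node in a cell sends the same number of edges to each other cell, so the average is exact):

```python
def quotient_matrix(adj, partition):
    """Quotient of an adjacency matrix under a node partition.

    adj: square matrix (list of lists); partition: list of cells,
    each a list of node indices.  Entry (i, j) is the average number
    of edges a node in cell i sends to cell j.
    """
    k = len(partition)
    Q = [[0.0] * k for _ in range(k)]
    for i, cell_i in enumerate(partition):
        for j, cell_j in enumerate(partition):
            Q[i][j] = sum(adj[u][v] for u in cell_i
                          for v in cell_j) / len(cell_i)
    return Q
```

For the path 0-1-2 with cells {0, 2} and {1}, the partition is equitable and the two endpoint nodes synchronize: they receive identical coupling from the middle node.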
Khakzad, Nima; Landucci, Gabriele; Reniers, Genserik
2017-09-01
In the present study, we have introduced a methodology based on graph theory and multicriteria decision analysis for cost-effective fire protection of chemical plants subject to fire-induced domino effects. By modeling domino effects in chemical plants as a directed graph, the graph centrality measures such as out-closeness and betweenness scores can be used to identify the installations playing a key role in initiating and propagating potential domino effects. It is demonstrated that active fire protection of installations with the highest out-closeness score and passive fire protection of installations with the highest betweenness score are the most effective strategies for reducing the vulnerability of chemical plants to fire-induced domino effects. We have employed a dynamic graph analysis to investigate the impact of both the availability and the degradation of fire protection measures over time on the vulnerability of chemical plants. The results obtained from the graph analysis can further be prioritized using multicriteria decision analysis techniques such as the method of reference point to find the most cost-effective fire protection strategy. © 2016 Society for Risk Analysis.
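The out-closeness score used above to rank installations as domino-effect initiators can be sketched for a directed graph via BFS; this is a generic closeness variant (reachable-count over summed distances), offered as an illustration rather than the authors' exact normalization:

```python
from collections import deque

def out_closeness(adj, source):
    """Out-closeness of `source` in a directed graph
    (adj: dict node -> list of successors): the number of nodes it
    can reach divided by the sum of shortest-path distances to them;
    0 if it reaches nothing."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    reached = len(dist) - 1
    if reached == 0:
        return 0.0
    return reached / sum(dist.values())
```

In the domino-effect graph, edges point from a burning installation to the installations its fire can escalate to, so a high out-closeness node can quickly reach much of the plant and is a priority for active protection.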
The combination of direct and paired link graphs can boost repetitive genome assembly
Shi, Wenyu; Ji, Peifeng
2017-01-01
Currently, most paired link based scaffolding algorithms intrinsically mask the sequences between two linked contigs and bypass their direct link information embedded in the original de Bruijn assembly graph. Such a disadvantage substantially complicates the scaffolding process and leads to the inability to resolve repetitive contig assembly. Here we present a novel algorithm, inGAP-sf, for effectively generating high-quality and continuous scaffolds. inGAP-sf achieves this by using a new strategy based on the combination of direct link and paired link graphs, in which the direct link is used to increase graph connectivity and to decrease graph complexity while the paired link is employed to supervise the traversing process on the direct link graph. Such an advantage greatly facilitates the assembly of short-repeat enriched regions. Moreover, a new comprehensive decision model is developed to eliminate the noise routes accompanying the introduced direct link. Through extensive evaluations on both simulated and real datasets, we demonstrated that inGAP-sf outperforms most genome scaffolding algorithms by generating more accurate and continuous assembly, especially for short repetitive regions. PMID:27924003
Zero, Victoria H.; Barocas, Adi; Jochimsen, Denim M.; Pelletier, Agnès; Giroux-Bougard, Xavier; Trumbo, Daryl R.; Castillo, Jessica A.; Evans Mack, Diane; Linnell, Mark A.; Pigg, Rachel M.; Hoisington-Lopez, Jessica; Spear, Stephen F.; Murphy, Melanie A.; Waits, Lisette P.
2017-01-01
The persistence of small populations is influenced by genetic structure and functional connectivity. We used two network-based approaches to understand the persistence of the northern Idaho ground squirrel (Urocitellus brunneus) and the southern Idaho ground squirrel (U. endemicus), two congeners of conservation concern. These graph theoretic approaches are conventionally applied to social or transportation networks, but here are used to study population persistence and connectivity. Population graph analyses revealed that local extinction rapidly reduced connectivity for the southern species, while connectivity for the northern species could be maintained following local extinction. Results from gravity models complemented those of population graph analyses, and indicated that potential vegetation productivity and topography drove connectivity in the northern species. For the southern species, development (roads) and small-scale topography reduced connectivity, while greater potential vegetation productivity increased connectivity. Taken together, the results of the two network-based methods (population graph analyses and gravity models) suggest the need for increased conservation action for the southern species, and that management efforts have been effective at maintaining habitat quality throughout the current range of the northern species. To prevent further declines, we encourage the continuation of management efforts for the northern species, whereas conservation of the southern species requires active management and additional measures to curtail habitat fragmentation. Our combination of population graph analyses and gravity models can inform conservation strategies of other species exhibiting patchy distributions. PMID:28659969
Renal cortex segmentation using optimal surface search with novel graph construction.
Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie
2011-01-01
In this paper, we propose a novel approach to solve the renal cortex segmentation problem, which has rarely been studied. In this study, the renal cortex segmentation problem is handled as a multiple-surfaces extraction problem, which is solved using the optimal surface search method. We propose a novel graph construction scheme in the optimal surface search to better accommodate multiple surfaces. Different surface sub-graphs are constructed according to their properties, and inter-surface relationships are also modeled in the graph. The proposed method was tested on 17 clinical CT datasets. The true positive volume fraction (TPVF) and false positive volume fraction (FPVF) are 74.10% and 0.08%, respectively. The experimental results demonstrate the effectiveness of the proposed method.
SemaTyP: a knowledge graph based literature mining method for drug discovery.
Sang, Shengtian; Yang, Zhihao; Wang, Lei; Liu, Xiaoxia; Lin, Hongfei; Wang, Jian
2018-05-30
Drug discovery is the process through which potential new medicines are identified. High-throughput screening and computer-aided drug discovery/design are currently the two main drug discovery methods, and both have successfully yielded a series of drugs. However, developing a new drug remains an extremely time-consuming and expensive process. The biomedical literature contains important clues for identifying potential treatments and can support experts in biomedicine on their way towards new discoveries. Here, we propose a biomedical knowledge-graph-based drug discovery method called SemaTyP, which discovers candidate drugs for diseases by mining published biomedical literature. We first construct a biomedical knowledge graph from relations extracted from biomedical abstracts; a logistic regression model is then trained on the semantic types of the paths that known drug therapies follow in the knowledge graph; finally, the learned model is used to discover drug therapies for new diseases. The experimental results show that our method can not only effectively discover new drug therapies for new diseases but can also suggest the potential mechanism of action of the candidate drugs. The proposed knowledge-graph-based literature mining method could thus serve as a complement to current drug discovery methods.
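SemaTyP's features are the semantic types along knowledge-graph paths connecting a drug to a disease. A minimal sketch of such path-signature extraction; the toy graph, entity names, and type labels are hypothetical, and the actual feature encoding and logistic regression step are omitted:

```python
def typed_path_signatures(graph, sem_type, src, dst, max_hops=3):
    """Enumerate semantic-type signatures of simple paths from src to dst:
    each signature is the sequence of (node type, relation) pairs
    traversed, ending with dst's semantic type."""
    signatures = []

    def dfs(node, visited, sig):
        if node == dst:
            signatures.append(tuple(sig) + (sem_type[dst],))
            return
        if len(sig) >= max_hops:
            return
        for relation, nxt in graph.get(node, []):
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, sig + [(sem_type[node], relation)])

    dfs(src, {src}, [])
    return signatures

# Hypothetical toy knowledge graph (names illustrative only).
kg = {
    "aspirin": [("inhibits", "COX2")],
    "COX2": [("associated_with", "inflammation")],
}
types = {"aspirin": "Drug", "COX2": "Gene", "inflammation": "Disease"}

sigs = typed_path_signatures(kg, types, "aspirin", "inflammation")
```

Each signature such as `(("Drug", "inhibits"), ("Gene", "associated_with"), "Disease")` would become one training feature for the classifier.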
Graph theory findings in the pathophysiology of temporal lobe epilepsy.
Chiang, Sharon; Haneef, Zulfi
2014-07-01
Temporal lobe epilepsy (TLE) is the most common form of adult epilepsy. Accumulating evidence has shown that TLE is a disorder of abnormal epileptogenic networks, rather than focal sources. Graph theory allows for a network-based representation of TLE brain networks, and has potential to illuminate characteristics of brain topology conducive to TLE pathophysiology, including seizure initiation and spread. We review basic concepts which we believe will prove helpful in interpreting results rapidly emerging from graph theory research in TLE. In addition, we summarize the current state of graph theory findings in TLE as they pertain to its pathophysiology. Several common findings have emerged from the many modalities used to study TLE with graph theory (structural MRI, diffusion tensor imaging, surface EEG, intracranial EEG, magnetoencephalography, functional MRI, cell cultures, simulated models, and mouse models): increased regularity of the interictal network configuration, altered local segregation and global integration of the TLE network, and network reorganization of temporal lobe and limbic structures. As different modalities provide different views of the same phenomenon, future studies integrating data from multiple modalities are needed to clarify findings and contribute to the formation of a coherent theory on the pathophysiology of TLE. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Quantifying loopy network architectures.
Katifori, Eleni; Magnasco, Marcelo O
2012-01-01
Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
Zhang, Qin
2015-07-01
Probabilistic graphical models (PGMs) such as Bayesian networks (BNs) have been widely applied in uncertain causality representation and probabilistic reasoning. The dynamic uncertain causality graph (DUCG) is a recently presented PGM that can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been presented previously, but only for directed acyclic graphs (DAGs), and the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. By contrast, BNs do not allow DCGs, since otherwise conditional independence would not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG, with or without DCGs, is proved to be a joint probability distribution (JPD) over a set of random variables. An incomplete DUCG, as a part of a complete DUCG, may represent a part of the JPD. Examples are provided to illustrate the methodology.
Modelling Chemical Reasoning to Predict and Invent Reactions.
Segler, Marwin H S; Waller, Mark P
2017-05-02
The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph and because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
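The paper above frames reaction prediction as finding missing links in a knowledge graph, using only the intrinsic local structure of the graph. As a much simpler stand-in for that idea (not the paper's model), here is the classic common-neighbours link-prediction score on a hypothetical co-occurrence graph of molecules; all names are illustrative:

```python
def common_neighbor_score(graph, u, v):
    """Score a candidate missing link u--v by the number of shared
    neighbours, a standard local heuristic for link prediction."""
    return len(graph.get(u, set()) & graph.get(v, set()))

# Hypothetical toy graph: molecules linked when they co-occur in a
# known reaction (names and edges are illustrative only).
g = {
    "benzene": {"toluene", "phenol"},
    "toluene": {"benzene", "benzaldehyde"},
    "phenol": {"benzene", "benzaldehyde"},
    "benzaldehyde": {"toluene", "phenol"},
}

# benzene and benzaldehyde share two neighbours (toluene, phenol), so
# a link between them is a plausible prediction.
score = common_neighbor_score(g, "benzene", "benzaldehyde")
```

Real knowledge-graph link predictors use far richer structure, but the locality of the scoring is the shared principle.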
ERIC Educational Resources Information Center
Moore-Russo, Deborah A.; Cortes-Figueroa, Jose E.; Schuman, Michael J.
2006-01-01
The use of Calculator-Based Laboratory (CBL) technology, the graphing calculator, and the cooling and heating of water to model the behavior of consecutive first-order reactions is presented, where B is the reactant, I is the intermediate, and P is the product for an in-class demonstration. The activity demonstrates the spontaneous and consecutive…
A Scalable Distributed Syntactic, Semantic, and Lexical Language Model
2012-09-01
Here pa(τ) denotes the set of parent states of τ. If the recursive factorization refers to a graph, then we have a Bayesian network (Lauritzen 1996) … Broadly speaking, however, the recursive factorization can refer to a representation more complicated than a graph with a fixed set of nodes and edges … The factored language (FL) model (Bilmes and Kirchhoff 2003) is close to the smoothing technique we propose here; the major difference is that FL
GraDit: graph-based data repair algorithm for multiple data edits rule violations
NASA Astrophysics Data System (ADS)
Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.
2018-03-01
Constraint-based data cleaning captures data violations of a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally, they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations. That algorithm uses an undirected hypergraph to represent rule violations. Nevertheless, it cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of all defined rules. This representation is used to capture the interaction between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with an undirected graph as the violation representation model gave better data quality than the algorithm with an undirected hypergraph as the representation model.
Exactly solved models on planar graphs with vertices in {Z}^3
NASA Astrophysics Data System (ADS)
Kels, Andrew P.
2017-12-01
It is shown how exactly solved edge interaction models on the square lattice may be extended onto more general planar graphs, with edges connecting a subset of next-nearest-neighbour vertices of {Z}^3. This is done by using local deformations of the square lattice that arise through the use of the star-triangle relation. Similar to Baxter's Z-invariance property, these local deformations leave the partition function invariant up to some simple factors coming from the star-triangle relation. The deformations used here extend the usual formulation of Z-invariance by requiring the introduction of oriented rapidity lines, which form directed closed paths in the rapidity graph of the model. The quasi-classical limit is also considered, in which case the deformations imply a classical Z-invariance property, as well as a related local closure relation, for the action functional of a system of classical discrete Laplace equations.
Human connectome module pattern detection using a new multi-graph MinMax cut model.
De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng
2014-01-01
Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph MinMax cut model to detect consistent network modules from the brain connectivity networks of all studied subjects. A new multi-graph MinMax cut model is introduced to solve this challenging computational neuroscience problem, and an efficient optimization algorithm is derived. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, which potentially correspond to specific brain functions shared by all subjects. We validate our method by analyzing weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some existing AI-based fault diagnostic techniques, such as expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique that is well suited for real-time fault diagnosis; structured knowledge representation and acquisition; and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
Linear finite-difference bond graph model of an ionic polymer actuator
NASA Astrophysics Data System (ADS)
Bentefrit, M.; Grondel, S.; Soyer, C.; Fannir, A.; Cattan, E.; Madden, J. D.; Nguyen, T. M. G.; Plesse, C.; Vidal, F.
2017-09-01
With the recent growing interest in soft actuation, many new types of ionic polymers working in air have been developed. Due to the interrelated mechanical, electrical, and chemical properties that greatly influence the characteristics of such actuators, their behavior is complex and difficult to understand, predict, and optimize. In light of this challenge, an original linear multiphysics finite-difference bond graph model was derived to characterize this ionic actuation. The finite-difference scheme was divided into two coupled subparts, each related to a specific physical, electrochemical or mechanical domain, and then converted into a bond graph model, as this language is particularly suited to systems spanning multiple energy domains. Simulations were then conducted, and good agreement with the experimental results was obtained. Furthermore, an analysis of the power efficiency of such actuators as a function of space and time was proposed and allowed their performance to be evaluated.
Ni, Hui; He, Guo-qing; Ruan, Hui; Chen, Qi-he; Chen, Feng
2005-01-01
A derivative ratio spectrophotometric method was used for the simultaneous determination of β-carotene and astaxanthin produced by Phaffia rhodozyma. Absorbances of a series of standard carotenoids in the range of 441 nm to 490 nm demonstrated that their absorption spectra obeyed Beer's law, and that additivity held when the concentrations of β-carotene, astaxanthin, and their mixture were within the ranges of 0 to 5 µg/ml, 0 to 6 µg/ml, and 0 to 6 µg/ml, respectively. When a wavelength interval (Δλ) of 2 nm was selected to calculate the first-derivative ratio spectra, the first-derivative amplitudes at 461 nm and 466 nm were suitable for quantitatively determining β-carotene and astaxanthin, respectively. The effect of the divisor on the derivative ratio spectra could be neglected; any concentration in the range of 1.0 to 4.0 µg/ml used as the divisor is suitable for calculating the derivative ratio spectra of the two carotenoids. Calibration graphs were established for β-carotene within 0-6.0 µg/ml and for astaxanthin within 0-5.0 µg/ml, with corresponding regression equations y=−0.0082x−0.0002 and y=0.0146x−0.0006, respectively. R-squared values in excess of 0.999 indicated the good linearity of the calibration graphs. Sample recovery rates were satisfactory (>99%), with relative standard deviations (RSD) of less than 5%. The method was successfully applied to the simultaneous determination of β-carotene and astaxanthin in laboratory-prepared mixtures and in the extract from a Phaffia rhodozyma culture. PMID:15909336
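A calibration graph such as those reported above is a straight-line fit of first-derivative amplitude against concentration. A minimal ordinary least-squares sketch; the data points are synthetic, generated without noise from the reported astaxanthin regression equation, so the fit simply recovers its coefficients:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Synthetic calibration points from the reported astaxanthin equation
# y = 0.0146x - 0.0006 (concentrations in ug/ml, no noise added).
concentrations = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
amplitudes = [0.0146 * c - 0.0006 for c in concentrations]

slope, intercept = fit_line(concentrations, amplitudes)
```

With real measurements the residuals would be nonzero, and the R-squared values quoted in the abstract quantify how close to linear the observed calibration points are.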
Local Difference Measures between Complex Networks for Dynamical System Model Evaluation
Lange, Stefan; Donges, Jonathan F.; Volkholz, Jan; Kurths, Jürgen
2015-01-01
A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [1] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, and such based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. 
Generalizations to directed as well as edge- and node-weighted graphs are discussed. PMID:25856374
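The measures defined in the paper above are tailored to climate networks, but the idea of a spatially explicit, per-node comparison of two graphs can be illustrated with a much simpler statistic: the Jaccard distance between each node's neighbourhoods in the two graphs (this is an illustration of the concept, not one of the paper's three measures; the toy networks are hypothetical):

```python
def local_differences(adj_a, adj_b):
    """Per-node difference between two graphs on a shared node set:
    the Jaccard distance between the node's neighbourhoods
    (0.0 = identical neighbourhoods, 1.0 = disjoint)."""
    nodes = set(adj_a) | set(adj_b)
    diff = {}
    for n in nodes:
        na, nb = set(adj_a.get(n, ())), set(adj_b.get(n, ()))
        union = na | nb
        diff[n] = len(na ^ nb) / len(union) if union else 0.0
    return diff

# Toy "observed" vs. "modelled" networks (hypothetical links between
# grid cells labelled 1-3).
observed = {1: {2, 3}, 2: {1}, 3: {1}}
modelled = {1: {2}, 2: {1, 3}, 3: {2}}

d = local_differences(observed, modelled)  # {1: 0.5, 2: 0.5, 3: 1.0}
```

Mapping such per-node scores back onto the grid cells is what makes the evaluation spatially explicit.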
Guidelines for a graph-theoretic implementation of structural equation modeling
Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William
2012-01-01
Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. 
The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.
Isomorphisms between Petri nets and dataflow graphs
NASA Technical Reports Server (NTRS)
Kavi, Krishna M.; Buckles, Billy P.; Bhat, U. Narayan
1987-01-01
Dataflow graphs are a generalized model of computation. Uninterpreted dataflow graphs with nondeterminism resolved via probabilities are shown to be isomorphic to a class of Petri nets known as free choice nets. Petri net analysis methods are readily available in the literature and this result makes those methods accessible to dataflow research. Nevertheless, combinatorial explosion can render Petri net analysis inoperative. Using a previously known technique for decomposing free choice nets into smaller components, it is demonstrated that, in principle, it is possible to determine aspects of the overall behavior from the particular behavior of components.
Emergence of a spectral gap in a class of random matrices associated with split graphs
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Zia, R. K. P.
2018-01-01
Motivated by the intriguing behavior displayed in a dynamic network that models a population of extreme introverts and extroverts (XIE), we consider the spectral properties of ensembles of random split graph adjacency matrices. We discover that, in general, a gap emerges in the bulk spectrum between -1 and 0 that contains a single eigenvalue. An analytic expression for the bulk distribution is derived and verified with numerical analysis. We also examine their relation to chiral ensembles, which are associated with bipartite graphs.
Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.
Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T
2013-01-01
Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.
Saund, Eric
2013-10-01
Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice—a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching, instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.
Classification of Domain Movements in Proteins Using Dynamic Contact Graphs
Taylor, Daniel; Cawley, Gavin; Hayward, Steven
2013-01-01
A new method for the classification of domain movements in proteins is described and applied to 1822 pairs of structures from the Protein Data Bank that represent a domain movement in two-domain proteins. The method is based on changes in contacts between residues from the two domains in moving from one conformation to the other. We argue that there are five types of elemental contact changes and that these relate to five model domain movements called: “free”, “open-closed”, “anchored”, “sliding-twist”, and “see-saw.” A directed graph is introduced called the “Dynamic Contact Graph” which represents the contact changes in a domain movement. In many cases a graph, or part of a graph, provides a clear visual metaphor for the movement it represents and is a motif that can be easily recognised. The Dynamic Contact Graphs are often comprised of disconnected subgraphs indicating independent regions which may play different roles in the domain movement. The Dynamic Contact Graph for each domain movement is decomposed into elemental Dynamic Contact Graphs, those that represent elemental contact changes, allowing us to count the number of instances of each type of elemental contact change in the domain movement. This naturally leads to sixteen classes into which the 1822 domain movements are classified. PMID:24260562
Information Retrieval and Graph Analysis Approaches for Book Recommendation.
Benkoussas, Chahinez; Bellot, Patrice
2015-01-01
A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user's query. We used different theoretical retrieval models: probabilistic as InL2 (Divergence from Randomness model) and language model and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to related document network comprised of social links. We called Directed Graph of Documents (DGD) a network constructed with documents and social information provided from each one of them. Specifically, this work tackles the problem of book recommendation in the context of INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrate that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments. PMID:26504899
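Applying PageRank to a Directed Graph of Documents and interpolating it with a retrieval score, as described above, can be sketched in a few lines of stdlib Python. The document graph, retrieval scores, and interpolation weight below are all hypothetical:

```python
def pagerank(graph, damping=0.85, iterations=60):
    """Power-iteration PageRank on a directed graph given as
    {node: [successors]}; dangling-node mass is spread uniformly."""
    nodes = list(graph)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = graph[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: distribute its mass to all nodes
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Hypothetical Directed Graph of Documents: books linked by social data.
dgd = {"b1": ["b2"], "b2": ["b3"], "b3": ["b2"], "b4": ["b2"]}
pr = pagerank(dgd)

# Rerank: interpolate a (hypothetical) retrieval score with PageRank.
retrieval = {"b1": 0.9, "b2": 0.2, "b3": 0.3, "b4": 0.8}
alpha = 0.5
combined = {doc: alpha * retrieval[doc] + (1 - alpha) * pr[doc]
            for doc in dgd}
```

Here `b2`, which receives the social links, is boosted relative to its pure retrieval score; tuning `alpha` trades off graph evidence against query relevance.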
Model predictive control of P-time event graphs
NASA Astrophysics Data System (ADS)
Hamri, H.; Kara, R.; Amari, S.
2016-12-01
This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by using the dater evolution model written in the standard algebra. Then, for the control law, we use finite-horizon model predictive control, and for the closed-loop control we use infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains which ensure the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a convex linear program subject to a linear matrix inequality. Finally, the proposed methodology is applied to a transportation system.
Protein Inference from the Integration of Tandem MS Data and Interactome Networks.
Zhong, Jiancheng; Wang, Jianxing; Ding, Xiaojun; Zhang, Zhen; Li, Min; Wu, Fang-Xiang; Pan, Yi
2017-01-01
Since proteins are digested into a mixture of peptides in the preprocessing step of tandem mass spectrometry (MS), it is difficult to determine which specific protein a shared peptide belongs to. In recent studies, besides tandem MS data and peptide identification information, other information is exploited to infer proteins. Different from methods which first use only tandem MS data to infer proteins and then use network information to refine them, this study proposes a protein inference method named TMSIN, which uses interactome networks directly. As two interacting proteins should co-exist, it is reasonable to assume that if one of the interacting proteins is confidently inferred in a sample, its interacting partners should have a high probability in the same sample, too. Therefore, we can use the neighborhood information of a protein in an interactome network to adjust the probability that a shared peptide belongs to that protein. In TMSIN, a multi-weighted graph is constructed by incorporating the bipartite graph with interactome network information, where the bipartite graph is built from the peptide identification information. Based on multi-weighted graphs, TMSIN adopts an iterative workflow to infer proteins. At each iterative step, the probability that a shared peptide belongs to a specific protein is calculated using Bayes' law, based on the neighbor-support scores of each protein mapped by the shared peptides. We carried out experiments on yeast and human data to evaluate the performance of TMSIN in terms of ROC, q-value, and accuracy. The experimental results show that the AUC scores yielded by TMSIN are 0.742 and 0.874 on the yeast and human datasets, respectively, and that TMSIN yields the maximum number of true positives when the q-value is less than or equal to 0.05. The overlap analysis shows that TMSIN is an effective complementary approach for protein inference.
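The neighborhood-based reweighting idea can be illustrated as follows. This is a minimal sketch, assuming a single shared peptide, priors from the bipartite graph and precomputed neighbor-support scores; TMSIN's actual iterative workflow over the multi-weighted graph is more involved:

```python
def assign_shared_peptide(priors, neighbor_support):
    """Bayes-style reweighting of a shared peptide's assignment:
    P(protein | peptide) is proportional to prior(protein) * support(protein),
    where support summarises how strongly the protein's interactome
    neighbours were identified in the same sample."""
    weights = {p: priors[p] * neighbor_support.get(p, 1.0) for p in priors}
    total = sum(weights.values())
    if total == 0:
        return {p: 1.0 / len(priors) for p in priors}  # fall back to uniform
    return {p: w / total for p, w in weights.items()}
```

With equal priors, a protein whose neighbours carry twice the support receives twice the posterior probability for the shared peptide.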
MadDM: Computation of dark matter relic abundance
NASA Astrophysics Data System (ADS)
Backović, Mihailo; Kong, Kyoungchul; McCaskey, Mathew
2017-12-01
MadDM computes dark matter relic abundance and dark matter-nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin-independent/spin-dependent dark matter-nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.
Fault management for data systems
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann
1993-01-01
Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions so as to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and that a complex convex problem must be solved. In this paper, we present a novel method that eliminates the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞- and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are then developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering and motion segmentation, and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least-squares regression, sparse subspace clustering and locally linear representation.
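A minimal illustration of intrasubspace projection dominance under the l2 norm: representing a sample over the remaining samples with ridge regression, the coefficient on the same-subspace sample dominates the one on the other subspace. The two-sample "dictionary" is a made-up toy, not the paper's experimental setup:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A given as a list of rows)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge_coeffs(x, dictionary, lam=0.1):
    """c = (D^T D + lam*I)^{-1} D^T x, the columns of D being the other samples."""
    m, k = len(dictionary), len(x)
    G = [[sum(dictionary[i][t] * dictionary[j][t] for t in range(k))
          + (lam if i == j else 0.0) for j in range(m)] for i in range(m)]
    rhs = [sum(dictionary[i][t] * x[t] for t in range(k)) for i in range(m)]
    return solve(G, rhs)
```

For x = [1, 0] with one dictionary sample in the same subspace ([1, 0]) and one in an orthogonal subspace ([0, 1]), the first coefficient is 1/(1+λ) while the second vanishes, which is the dominance property the paper exploits to build the similarity graph.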
Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph
2014-07-01
[Excerpt fragments:] …distribution of the random walk. This process can also be applied to other models, incomplete graphs, or to multiple dimensions. An advantage of this… …since any multiple of an eigenvector remains an eigenvector. Without any loss, let b_k = 1. Now we can ascertain the explicit solution for b_j when k < j… …this bound is valid for all initial probability distributions. However, without detailed information about the eigenvectors, we cannot extract more.
Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph
NASA Astrophysics Data System (ADS)
Xue, Xiaofeng
2017-11-01
In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. In the SIR model, each vertex is in one of three states: 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex is never infected again. We assume that at t = 0 there are no removed vertices and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions H_S(ψ_t), H_V(ψ_t) for t ≥ 0 and show that for any t ≥ 0, H_S(ψ_t) is the limit proportion of susceptible vertices and H_V(ψ_t) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
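The construction of G(n, p) and the weighted infection rule can be sketched as a simple discrete-time simulation. The Euler-style time stepping, the exponential firing probabilities and the uniform weights used in the test are simplifying assumptions; the paper analyses the continuous-time model:

```python
import math
import random

def er_graph(n, p, rng):
    """Erdős-Rényi G(n, p): start from the complete graph on n vertices and
    keep each edge independently with probability p (delete with 1 - p)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def sir_step(adj, state, weight, beta, mu, dt, rng):
    """One discrete step of weighted SIR dynamics: a susceptible vertex v is
    infected at rate beta * w_v * w_u summed over infective neighbours u,
    and an infective vertex becomes removed at constant rate mu."""
    new = dict(state)
    for v, s in state.items():
        if s == 'S':
            rate = beta * sum(weight[v] * weight[u]
                              for u in adj[v] if state[u] == 'I')
            if rng.random() < 1 - math.exp(-rate * dt):
                new[v] = 'I'
        elif s == 'I' and rng.random() < 1 - math.exp(-mu * dt):
            new[v] = 'R'
    return new
```

Iterating `sir_step` and recording the susceptible fraction gives the empirical curve whose n → ∞ limit is the function H_S(ψ_t) of the theorem.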
Prediction of Nucleotide Binding Peptides Using Star Graph Topological Indices.
Liu, Yong; Munteanu, Cristian R; Fernández Blanco, Enrique; Tan, Zhiliang; Santos Del Riego, Antonino; Pazos, Alejandro
2015-11-01
The nucleotide binding proteins are involved in many important cellular processes, such as the transmission of genetic information or energy transfer and storage. Therefore, the screening of new peptides for this biological function is an important research topic. The current study proposes a mixed methodology to obtain the first classification model able to predict new nucleotide binding peptides using only the amino acid sequence. The methodology uses a Star graph molecular descriptor of the peptide sequences and Machine Learning techniques to select the best classifier. The best model is a Random Forest classifier based on two features of the embedded and non-embedded graphs. The performance of the model is excellent, considering similar models in the field, with an Area Under the Receiver Operating Characteristic Curve (AUROC) value of 0.938 and a true positive rate (TPR) of 0.886 (test subset). The prediction of new nucleotide binding peptides with this model could be useful for drug target studies in drug development. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Cagande, Jeffrey Lloyd L.; Jugar, Richard R.
2018-01-01
Reversing the traditional classroom activities, in the flipped classroom model students view lectures at home and perform activities during class period inside the classroom. This study investigated the effect of a flipped classroom implementation on college physics students' motivation and understanding of kinematics graphs. A Solomon four-group…
ERIC Educational Resources Information Center
Prieto, L. P.; Sharma, K.; Kidzinski, L.; Rodríguez-Triana, M. J.; Dillenbourg, P.
2018-01-01
The pedagogical modelling of everyday classroom practice is an interesting kind of evidence, both for educational research and teachers' own professional development. This paper explores the usage of wearable sensors and machine learning techniques to automatically extract orchestration graphs (teaching activities and their social plane over time)…
Mining and Modeling Real-World Networks: Patterns, Anomalies, and Tools
ERIC Educational Resources Information Center
Akoglu, Leman
2012-01-01
Large real-world graph (a.k.a network, relational) data are omnipresent, in online media, businesses, science, and the government. Analysis of these massive graphs is crucial, in order to extract descriptive and predictive knowledge with many commercial, medical, and environmental applications. In addition to its general structure, knowing what…
Delay-time distribution in the scattering of time-narrow wave packets (II)—quantum graphs
NASA Astrophysics Data System (ADS)
Smilansky, Uzy; Schanz, Holger
2018-02-01
We apply the framework developed in the preceding paper in this series (Smilansky 2017 J. Phys. A: Math. Theor. 50 215301) to compute the time-delay distribution in the scattering of ultra short radio frequency pulses on complex networks of transmission lines which are modeled by metric (quantum) graphs. We consider wave packets which are centered at high wave number and comprise many energy levels. In the limit of pulses of very short duration we compute upper and lower bounds to the actual time-delay distribution of the radiation emerging from the network using a simplified problem where time is replaced by the discrete count of vertex-scattering events. The classical limit of the time-delay distribution is also discussed and we show that for finite networks it decays exponentially, with a decay constant which depends on the graph connectivity and the distribution of its edge lengths. We illustrate and apply our theory to a simple model graph where an algebraic decay of the quantum time-delay distribution is established.
Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking
NASA Astrophysics Data System (ADS)
He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.
2018-04-01
The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate an a priori value for the man-made objects using symmetric and contrast features. A graph model is established to represent the spatial relationships among pre-segmented superpixels, which serve as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of adjacent nodes. Manifold ranking effectively explores the relationships among all the nodes in the feature space as well as the initial query assignment; thus, it is applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with a high recognition rate and a low omission rate.
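Manifold ranking, as used above, is typically the iteration f ← αSf + (1 − α)y over a symmetrically normalized affinity matrix S, with y marking the query nodes. A stdlib sketch; the chain-graph affinities and the α value in the test are illustrative, not the paper's superpixel graph:

```python
def manifold_ranking(W, seeds, alpha=0.9, iters=300):
    """Iterate f <- alpha * S f + (1 - alpha) * y, where
    S = D^{-1/2} W D^{-1/2} and y is the indicator of the seed (query) nodes.
    Returns the ranking scores f."""
    n = len(W)
    d = [sum(row) for row in W]  # node degrees
    S = [[W[i][j] / ((d[i] * d[j]) ** 0.5) if d[i] and d[j] else 0.0
          for j in range(n)] for i in range(n)]
    y = [1.0 if i in seeds else 0.0 for i in range(n)]
    f = y[:]
    for _ in range(iters):
        f = [alpha * sum(S[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
             for i in range(n)]
    return f
```

Nodes closer to the query in the graph receive higher scores, which is what turns the a priori estimate into a ranking map over superpixels.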
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Colonna-Romano, John; Eslami, Mohammed
2017-05-01
The United States increasingly relies on cyber-physical systems to conduct military and commercial operations. Attacks on these systems have increased dramatically around the globe. The attackers constantly change their methods, making state-of-the-art commercial and military intrusion detection systems ineffective. In this paper, we present a model to identify the functional behavior of network devices from netflow traces. Our model includes two innovations. First, we define novel features for a host IP using detection of application graph patterns in the IP's host graph constructed from 5-min aggregated packet flows. Second, we present the first application, to the best of our knowledge, of Graph Semi-Supervised Learning (GSSL) to the space of IP behavior classification. Using a cyber-attack dataset collected from NetFlow packet traces, we show that GSSL trained with only 20% of the data achieves higher attack detection rates than Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers trained with 80% of the data points. We also show how to improve detection quality by filtering out web browsing data, and conclude with a discussion of future research directions.
Iterated reaction graphs: simulating complex Maillard reaction pathways.
Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W
2001-01-01
This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
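A minimal stdlib sketch of the iterated soup/reaction-base loop the abstract describes; the molecule names, the single illustrative rule, and the treatment of rate kinetics as per-firing probabilities are assumptions for illustration, not the authors' Maillard reaction base:

```python
import random

def iterate_reactions(soup, rules, steps, rng):
    """soup: multiset of molecules (name -> count).
    rules: list of (reactants, products, probability) tuples.
    Each step, sample one applicable rule and fire it with its probability,
    consuming reactants from and feeding products back to the soup.
    Returns the reaction-graph arcs (reactants -> products) that fired."""
    arcs = []
    for _ in range(steps):
        applicable = [r for r in rules
                      if all(soup.get(m, 0) > 0 for m in r[0])]
        if not applicable:
            break  # no rule can fire against the current soup
        reactants, products, prob = rng.choice(applicable)
        if rng.random() < prob:
            for m in reactants:
                soup[m] -= 1
            for m in products:
                soup[m] = soup.get(m, 0) + 1
            arcs.append((tuple(reactants), tuple(products)))
    return arcs
```

The accumulated arcs are exactly the reaction graph (molecules as nodes, fired reactions as arcs) whose terminal products would be compared with GC/MS-identified volatiles.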
A GRAPH PARTITIONING APPROACH TO PREDICTING PATTERNS IN LATERAL INHIBITION SYSTEMS
RUFINO FERREIRA, ANA S.; ARCAK, MURAT
2017-01-01
We analyze spatial patterns on networks of cells where adjacent cells inhibit each other through contact signaling. We represent the network as a graph where each vertex represents the dynamics of identical individual cells and where graph edges represent cell-to-cell signaling. To predict steady-state patterns we find equitable partitions of the graph vertices and assign them into disjoint classes. We then use results from monotone systems theory to prove the existence of patterns that are structured in such a way that all the cells in the same class have the same final fate. To study the stability properties of these patterns, we rely on the graph partition to perform a block decomposition of the system. Then, to guarantee stability, we provide a small-gain type criterion that depends on the input-output properties of each cell in the reduced system. Finally, we discuss pattern formation in stochastic models. With the help of a modal decomposition we show that noise can enhance the parameter region where patterning occurs. PMID:29225552
Hydrogen recombiner catalyst test supporting data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Britton, M.D.
1995-01-19
This is a data package supporting the Hydrogen Recombiner Catalyst Performance and Carbon Monoxide Sorption Capacity Test Report, WHC-SD-WM-TRP-211, Rev 0. This report contains 10 appendices which consist of the following: Mass spectrometer analysis reports: HRC samples 93-001 through 93-157; Gas spectrometry analysis reports: HRC samples 93-141 through 93-658; Mass spectrometer procedure PNL-MA-299 ALO-284; Alternate analytical method for ammonia and water vapor; Sample log sheets; Job Safety analysis; Certificate of mixture analysis for feed gases; Flow controller calibration check; Westinghouse Standards Laboratory report on Bois flow calibrator; and Sorption capacity test data, tables, and graphs.
Mathematical biodescriptors of proteomics maps: background and applications.
Basak, Subhash C; Gute, Brian D
2008-05-01
This article reviews recent developments in the formulation and application of biodescriptors to characterize proteomics maps. Such biodescriptors can be derived by applying techniques from discrete mathematics (graph theory, linear algebra and information theory). This review focuses on the development of biodescriptors for proteomics maps derived from 2D gel electrophoresis. Preliminary results demonstrated that such descriptors have a reasonable ability to differentiate between proteomics patterns that result from exposure to closely related individual chemicals and complex mixtures, such as the jet fuel JP-8. Further research is required to evaluate the utility of these proteomics-based biodescriptors for drug discovery and predictive toxicology.
Feedback topology and XOR-dynamics in Boolean networks with varying input structure
NASA Astrophysics Data System (ADS)
Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.
2009-08-01
We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with a XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ, this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
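The parallel XOR updating scheme is easy to state in code: each input-receiving node becomes the sum mod 2 of its inputs, while nodes without inputs keep their value. A sketch with attractor-period detection; the two-node feedback loop in the example is a toy, not the paper's fixed-in-degree ensemble:

```python
def xor_step(state, inputs):
    """Parallel XOR update: node i becomes the XOR (sum mod 2) of its inputs;
    a node with no inputs (the 1 - γ non-input-receiving fraction) keeps its
    value. `inputs[i]` lists the indices of the nodes feeding node i."""
    return tuple(state[i] if not inputs[i]
                 else sum(state[j] for j in inputs[i]) % 2
                 for i in range(len(state)))

def attractor_period(state, inputs, max_steps=1024):
    """Iterate until a state repeats and return the attractor's period
    (the dynamics is deterministic, so every orbit is eventually periodic)."""
    seen = {state: 0}
    for t in range(1, max_steps + 1):
        state = xor_step(state, inputs)
        if state in seen:
            return t - seen[state]
        seen[state] = t
    return None
```

A disjoint loop in which each node copies its single input, as near the transition point, yields a period set by the loop length, consistent with the topological estimates of the maximum period.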
Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen
2017-01-01
An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a two-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
Overview and extensions of a system for routing directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1988-01-01
Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication, and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, researchers present an extension to the above method which allows for enhanced performance by allowing some broadcasting capabilities.
Fast and asymptotic computation of the fixation probability for Moran processes on graphs.
Alcalde Cuesta, F; González Sequeiros, P; Lozano Rojo, Á
2015-03-01
Evolutionary dynamics has been classically studied for homogeneous populations, but now there is a growing interest in the non-homogeneous case. One of the most important models was proposed in Lieberman et al. (2005), adapting the process described in Moran (1958) to a weighted directed graph. The Markov chain associated with the graph can be modified by erasing all non-trivial loops in its state space, obtaining the so-called Embedded Markov chain (EMC). The fixation probability remains unchanged, but the expected time to absorption (fixation or extinction) is reduced. In this paper, we use this idea to compute asymptotically the average fixation probability for complete bipartite graphs K(n,m). To this end, we first review some recent results on evolutionary dynamics on graphs, trying to clarify some points. We also revisit the 'Star Theorem' proved in Lieberman et al. (2005) for the star graphs K(1,m). Theoretically, EMC techniques allow fast computation of the fixation probability, but in practice this is not always true. Thus, in the last part of the paper, we compare this algorithm with the standard Monte Carlo method for some kinds of complex networks. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
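The standard Monte Carlo baseline the paper compares against can be sketched directly: repeat birth-death Moran steps on the graph until the mutant lineage fixates or goes extinct. The fitness-proportional selection and uniform replacement below follow the Lieberman et al. setup in broad strokes; the EMC speed-up itself is not implemented here:

```python
import random

def fixation_probability(adj, r, trials, rng):
    """Monte Carlo estimate of the mutant fixation probability for the Moran
    process on a graph: each step, a node is picked with probability
    proportional to fitness (mutants have fitness r, residents 1) and its
    offspring replaces a uniformly random neighbour."""
    n = len(adj)
    fixed = 0
    for _ in range(trials):
        mutants = {rng.randrange(n)}  # one initial mutant at a random node
        while 0 < len(mutants) < n:
            fitness = [r if v in mutants else 1.0 for v in range(n)]
            total = sum(fitness)
            x = rng.random() * total
            acc = 0.0
            for v in range(n):
                acc += fitness[v]
                if x < acc:
                    break  # v reproduces
            u = rng.choice(adj[v])
            if v in mutants:
                mutants.add(u)
            else:
                mutants.discard(u)
        fixed += len(mutants) == n
    return fixed / trials
```

For a neutral mutant (r = 1) on a regular graph such as the complete graph, the isothermal theorem gives fixation probability 1/n, which the estimator should approach.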
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
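The simulated-annealing mapping step can be sketched with a toy cost function combining communication and load imbalance. The cost terms, cooling schedule and parameter values below are illustrative assumptions; the paper's nonanalytical cost function also accounts for processor capability, network topology and congestion:

```python
import math
import random

def anneal_mapping(tasks, edges, n_procs, rng, steps=1500, t0=1.0, cooling=0.995):
    """Map task-graph vertices onto processors, minimising a toy cost:
    1 per task-graph edge that crosses processors, plus a load-imbalance
    penalty (max load minus average load)."""
    def cost(m):
        comm = sum(1 for a, b in edges if m[a] != m[b])
        loads = [sum(1 for t in tasks if m[t] == p) for p in range(n_procs)]
        return comm + (max(loads) - len(tasks) / n_procs)

    mapping = {t: rng.randrange(n_procs) for t in tasks}
    best, best_cost, temp = dict(mapping), cost(mapping), t0
    for _ in range(steps):
        t = rng.choice(tasks)
        old = mapping[t]
        mapping[t] = rng.randrange(n_procs)  # propose moving one task
        delta = cost(mapping) - cost({**mapping, t: old})
        if delta > 0 and rng.random() >= math.exp(-delta / temp):
            mapping[t] = old  # reject the uphill move (Metropolis criterion)
        elif cost(mapping) < best_cost:
            best, best_cost = dict(mapping), cost(mapping)
        temp *= cooling
    return best, best_cost
```

On a four-task graph with two tightly coupled pairs and two processors, the annealer should settle near the obvious pair-per-processor mapping.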
Optimizing spread dynamics on graphs by message passing
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.
2013-09-01
Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
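A minimal example of the kind of irreversible dynamics described above, here a deterministic linear-threshold model; the specific activation rule and thresholds are an illustrative choice, not the paper's message-passing algorithm:

```python
def cascade(adj, thresholds, seeds):
    """Deterministic linear-threshold cascade: an inactive node activates once
    the fraction of its active neighbours reaches its threshold. Activation
    is irreversible (active nodes stay active). Returns the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and adj[v]:
                frac = sum(1 for u in adj[v] if u in active) / len(adj[v])
                if frac >= thresholds[v]:
                    active.add(v)
                    changed = True
    return active
```

The spread optimization problem is then: choose the seed set (under a budget) to maximise the size of the set this function returns, the objective the message-passing approach attacks without assuming submodularity.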
Giusti, Chad; Ghrist, Robert; Bassett, Danielle S
2016-08-01
The language of graph theory, or network science, has proven to be an exceptional tool for addressing myriad problems in neuroscience. Yet, the use of networks is predicated on a critical simplifying assumption: that the quintessential unit of interest in a brain is a dyad - two nodes (neurons or brain regions) connected by an edge. While rarely mentioned, this fundamental assumption inherently limits the types of neural structure and function that graphs can be used to model. Here, we describe a generalization of graphs that overcomes these limitations, thereby offering a broad range of new possibilities in terms of modeling and measuring neural phenomena. Specifically, we explore the use of simplicial complexes: a structure developed in the field of mathematics known as algebraic topology, of increasing applicability to real data due to a rapidly growing computational toolset. We review the underlying mathematical formalism as well as the budding literature applying simplicial complexes to neural data, from electrophysiological recordings in animal models to hemodynamic fluctuations in humans. Based on the exceptional flexibility of the tools and recent ground-breaking insights into neural function, we posit that this framework has the potential to eclipse graph theory in unraveling the fundamental mysteries of cognition.
Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias
2018-06-01
It has not been well documented that MEG/EEG functional connectivity graphs estimated with zero-lag-free interaction metrics are severely confounded by a multitude of spurious interactions (SI), i.e., the false-positive "ghosts" of true interactions [1], [2]. These SI are caused by multivariate linear mixing between sources, and thus they pose a severe challenge to the validity of connectivity analysis. Owing to the complex nature of signal mixing and the SI problem, there is a need to demonstrate intuitively how the SI arise and how they can be attenuated using a novel approach that we term hyperedge bundling. Here we provide a dataset and software with which readers can perform simulations in order to better understand the theory and the solution to SI. We include the supplementary material of [1] that is not directly relevant to hyperedge bundling per se but reflects important properties of the MEG source model and the functional connectivity graphs. For example, the gyri of the dorsolateral cortices are the most accurately modeled areas, while the sulci of the inferior temporal and frontal cortices and the insula have the least modeling accuracy. Importantly, we found that the interaction estimates are heavily biased by the modeling accuracy between regions, which means the estimates cannot be straightforwardly interpreted as coupling between brain regions. This raises a red flag: the conventional method of thresholding graphs by estimate values is rather suboptimal, because the measured topology of the graph reflects the geometric properties of the source model instead of the cortical interactions under investigation.
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Lotfy, Hayam Mahmoud; Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Shehata, Mostafa Abd El-Atty
2018-09-05
Several spectrophotometric techniques were recently developed for the determination of binary mixtures of clotrimazole (CLT) and dexamethasone acetate (DA) without any separation procedure. The methods are based on generating ratio spectra of the mixture and then applying simple mathematical manipulations. The zero-order absorption spectra of both drugs could be obtained by the constant center (CC) method. The concentrations of both CLT and DA could be obtained by the constant value via amplitude difference (CV-AD) method, which depends on the ratio spectra; by the ratio difference (RD) method, in which the difference between the amplitudes at two wavelengths (ΔP) on the ratio spectra eliminates the contribution of the interfering substance and yields the concentration of the other drug; and by the derivative ratio (DD1) method, in which the derivative of the ratio spectra determines the drug of interest without any interference from the other. The concentration of DA could also be measured after graphical manipulation using the novel advanced concentration value (ACV) method. Calibration graphs were linear in the ranges of 75-550 μg/mL for CLT and 2-20 μg/mL for DA. The methods were successfully applied to the simultaneous determination of the two drugs in synthetic mixtures and in their combined dosage form, Mycuten-D cream. The results obtained were compared statistically to each other and to the official methods. Copyright © 2018 Elsevier B.V. All rights reserved.
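As a hedged illustration of the ratio difference (RD) idea described above, the following sketch uses made-up spectra (arrays `A` and `B` are hypothetical absorptivity values, not measured CLT/DA data): dividing the mixture spectrum by the interferent's spectrum turns the interferent's contribution into a constant, which an amplitude difference at two wavelengths cancels.

```python
# Hypothetical absorptivity curves at 4 wavelengths (not real drug spectra).
A = [0.9, 0.7, 0.4, 0.2]   # spectrum of the drug of interest
B = [0.5, 0.5, 0.6, 0.8]   # spectrum of the interfering drug (the divisor)
cA, cB = 3.0, 7.0          # "unknown" concentrations, arbitrary units

mixture = [cA * a + cB * b for a, b in zip(A, B)]
ratio = [m / b for m, b in zip(mixture, B)]   # ratio spectrum = cA*(A/B) + cB

# cB contributes only a constant to the ratio spectrum, so the amplitude
# difference between two wavelengths eliminates it entirely:
dP = ratio[0] - ratio[3]
cA_est = dP / (A[0] / B[0] - A[3] / B[3])   # recovers cA exactly here
```

With noisy measured spectra the recovery is of course approximate; the point of the sketch is only the cancellation of the interferent's constant term.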
Graph-based segmentation for RGB-D data using 3-D geometry enhanced superpixels.
Yang, Jingyu; Gan, Ziqiao; Li, Kun; Hou, Chunping
2015-05-01
With the advances of depth sensing technologies, color image plus depth information (referred to as RGB-D data hereafter) is increasingly popular for comprehensive description of 3-D scenes. This paper proposes a two-stage segmentation method for RGB-D data: 1) oversegmentation by 3-D geometry enhanced superpixels and 2) graph-based merging with label cost from superpixels. In the oversegmentation stage, 3-D geometrical information is reconstructed from the depth map. Then, a K-means-like clustering method is applied to the RGB-D data for oversegmentation using an 8-D distance metric constructed from both color and 3-D geometrical information. In the merging stage, treating each superpixel as a node, a graph-based model is set up to relabel the superpixels into semantically coherent segments. In the graph-based model, RGB-D proximity, texture similarity, and boundary continuity are incorporated into the smoothness term to exploit the correlations of neighboring superpixels. To obtain a compact labeling, the label term is designed to penalize labels linking to similar superpixels that likely belong to the same object. Both the proposed 3-D geometry enhanced superpixel clustering method and the graph-based merging method from superpixels are evaluated by qualitative and quantitative results. By the fusion of color and depth information, the proposed method achieves superior segmentation performance over several state-of-the-art algorithms.
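A rough sketch of the K-means-like oversegmentation step, under stated assumptions: the paper uses an 8-D color-plus-geometry metric, while this toy uses a simplified 6-D one (RGB plus 3-D position) with hypothetical weights `w_color` and `w_geom`; the deterministic initialization is also our simplification.

```python
def kmeans_rgbd(points, k, w_color=1.0, w_geom=2.0, iters=20):
    """K-means-like clustering on joint color + 3-D position features.
    points: list of (r, g, b, x, y, z) tuples. The weighted distance
    loosely mimics a joint color/geometry metric (weights are made up)."""
    step = max(1, len(points) // k)
    centers = [points[i * step] for i in range(k)]  # simple deterministic init

    def dist2(p, c):
        d_color = sum((p[i] - c[i]) ** 2 for i in range(3))
        d_geom = sum((p[i] - c[i]) ** 2 for i in range(3, 6))
        return w_color * d_color + w_geom * d_geom

    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist2(p, centers[j]))
            clusters[nearest].append(p)
        centers = [tuple(sum(q[i] for q in cl) / len(cl) for i in range(6))
                   if cl else centers[j] for j, cl in enumerate(clusters)]
    return centers, clusters

# Two synthetic "surfaces" with identical color, separated only in depth z:
pts = ([(1.0, 0.0, 0.0, 0.1 * i, 0.0, 0.0) for i in range(5)] +
       [(1.0, 0.0, 0.0, 0.1 * i, 0.0, 5.0) for i in range(5)])
centers, clusters = kmeans_rgbd(pts, 2)
```

Because the geometry term dominates when colors agree, the two depth layers end up in separate clusters, which is the intuition behind geometry-enhanced superpixels.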
Akama, Hiroyuki; Miyake, Maki; Jung, Jaeyoung; Murphy, Brian
2015-01-01
In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves on the conventional Jaccard and/or Simpson indices, and reconciles both the geodesic information (random walk) and co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients by applying linguistic graph information to neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains
NASA Astrophysics Data System (ADS)
Gan, Tingyue; Cameron, Maria
2017-06-01
Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.
Identification of lethal reactions in the Escherichia coli metabolic network: Graph theory approach
NASA Astrophysics Data System (ADS)
Ghim, C.-M.; Goh, K.-I.; Kahng, B.; Kim, D.
2004-03-01
As a first step toward holistic modeling of cells, we analyze the biochemical reactions occurring in the genome-scale metabolism of Escherichia coli. To this end, we construct a directed bipartite graph by assigning a metabolite or reaction to each node. We apply various measures of centrality, a well-known concept in graph theory, and their modifications to the metabolic network, finding that there exist lethal reactions involved in the central metabolism. Such lethal reactions and the associated enzymes are identified in silico under diverse environments and compared with earlier results obtained from flux balance analysis.
Revealing Long-Range Interconnected Hubs in Human Chromatin Interaction Data Using Graph Theory
NASA Astrophysics Data System (ADS)
Boulos, R. E.; Arneodo, A.; Jensen, P.; Audit, B.
2013-09-01
We use graph theory to analyze chromatin interaction (Hi-C) data in the human genome. We show that a key functional feature of the genome—“master” replication origins—corresponds to DNA loci of maximal network centrality. These loci form a set of interconnected hubs both within chromosomes and between different chromosomes. Our results open the way to a fruitful use of graph theory concepts to decipher DNA structural organization in relation to genome functions such as replication and transcription. This quantitative information should prove useful to discriminate between possible polymer models of nuclear organization.
One-dimensional swarm algorithm packaging
NASA Astrophysics Data System (ADS)
Lebedev, Boris K.; Lebedev, Oleg B.; Lebedeva, Ekaterina O.
2018-05-01
The paper considers an algorithm for solving the one-dimensional packing problem based on an adaptive behavior model of an ant colony. The key role in the development of the ant algorithm is played by the choice of representation (interpretation) of the solution. The structure of the solution search graph, the procedure for finding solutions on the graph, and the methods of pheromone deposition and evaporation are described. Unlike the canonical ant-algorithm paradigm, an ant on the solution search graph generates sets of elements distributed across blocks. Experimental studies were conducted on an IBM PC; compared with existing algorithms, the results are improved.
Antiferromagnetic Potts Model on the Erdős-Rényi Random Graph
NASA Astrophysics Data System (ADS)
Contucci, Pierluigi; Dommers, Sander; Giardinà, Cristian; Starr, Shannon
2013-10-01
We study the antiferromagnetic Potts model on the Poissonian Erdős-Rényi random graph. By identifying a suitable interpolation structure and an extended variational principle, together with a positive temperature second-moment analysis we prove the existence of a phase transition at a positive critical temperature. Upper and lower bounds on the temperature critical value are obtained from the stability analysis of the replica symmetric solution (recovered in the framework of Derrida-Ruelle probability cascades) and from an entropy positivity argument.
Study of ATES thermal behavior using a steady flow model
NASA Astrophysics Data System (ADS)
Doughty, C.; Hellstroem, G.; Tsang, C. F.; Claesson, J.
1981-01-01
The thermal behavior of a single well aquifer thermal energy storage system in which buoyancy flow is neglected is studied. A dimensionless formulation of the energy transport equations for the aquifer system is presented, and the key dimensionless parameters are discussed. A simple numerical model is used to generate graphs showing the thermal behavior of the system as a function of these parameters. Some comparisons with field experiments are given to illustrate the use of the dimensionless groups and graphs.
Lifted worm algorithm for the Ising model
NASA Astrophysics Data System (ADS)
Elçi, Eren Metin; Grimm, Jens; Ding, Lijie; Nasrawi, Abrahim; Garoni, Timothy M.; Deng, Youjin
2018-04-01
We design an irreversible worm algorithm for the zero-field ferromagnetic Ising model by using the lifting technique. We study the dynamic critical behavior of an energylike observable on both the complete graph and toroidal grids, and compare our findings with reversible algorithms such as the Prokof'ev-Svistunov worm algorithm. Our results show that the lifted worm algorithm improves the dynamic exponent of the energylike observable on the complete graph and leads to a significant constant improvement on toroidal grids.
Mendes, Maria Carolina S; Paulino, Daiane Sm; Brambilla, Sandra R; Camargo, Juliana A; Persinoti, Gabriela F; Carvalheira, José Barreto C
2018-05-14
To investigate the effect of probiotic supplementation during the development of an experimental model of colitis-associated colon cancer (CAC). C57BL/6 mice received an intraperitoneal injection of azoxymethane (10 mg/kg), followed by three cycles of dextran sodium sulphate diluted in water (5% w/v). The probiotic group received a daily mixture of Lactobacillus acidophilus, Lactobacillus rhamnosus and Bifidobacterium bifidum. Microbiota composition was assessed by 16S rRNA Illumina HiSeq sequencing. Colon samples were collected for histological analysis. Tumour cytokine expression was assessed by real-time PCR (polymerase chain reaction), and serum cytokines by Multiplex assay. All tests were two-sided. The level of significance was set at P < 0.05. Graphs were generated and statistical analysis performed using the software GraphPad Prism 5.0. The project was approved by the institutional review board committee. At day 60 after azoxymethane injection, the mean number of tumours in the probiotic group was 40% lower than that in the control group, and the probiotic group exhibited tumours of smaller size (< 2 mm) (P < 0.05). There was no difference in richness and diversity between groups. However, there was a significant difference in beta diversity in the multidimensional scaling analysis. The abundance of the genera Lactobacillus, Bifidobacterium, Allobaculum, Clostridium XI and Clostridium XVIII increased in the probiotic group (P < 0.05). The microbial change was accompanied by reduced colitis, demonstrated by a 46% reduction in the colon inflammatory index; reduced expression of the serum chemokines RANTES and Eotaxin; decreased p-IKK and TNF-α and increased IL-10 expression in the colon. Our results suggest a potential chemopreventive effect of probiotic on CAC. Probiotic supplementation changes microbiota structure and regulates the inflammatory response, reducing colitis and preventing CAC.
Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro
2015-01-01
Encoding molecular information into molecular descriptors is the first step in in silico chemoinformatics methods for drug design. Machine learning methods offer a comprehensive way to find prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or the physical-chemical properties of an atom or group of atoms, to molecular activity (Quantitative Structure-Activity Relationship, QSAR). Due to the complexity of proteins, the prediction of their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions, and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools, and drug SMILES formulas for two servers. The molecular graph descriptor-based machine learning models could be useful tools for in silico screening of new peptides/proteins as future drug targets for specific treatments.
Collinearity and Causal Diagrams: A Lesson on the Importance of Model Specification.
Schisterman, Enrique F; Perkins, Neil J; Mumford, Sunni L; Ahrens, Katherine A; Mitchell, Emily M
2017-01-01
Correlated data are ubiquitous in epidemiologic research, particularly in nutritional and environmental epidemiology where mixtures of factors are often studied. Our objectives are to demonstrate how highly correlated data arise in epidemiologic research and provide guidance, using a directed acyclic graph approach, on how to proceed analytically when faced with highly correlated data. We identified three fundamental structural scenarios in which high correlation between a given variable and the exposure can arise: intermediates, confounders, and colliders. For each of these scenarios, we evaluated the consequences of increasing correlation between the given variable and the exposure on the bias and variance for the total effect of the exposure on the outcome using unadjusted and adjusted models. We derived closed-form solutions for continuous outcomes using linear regression and empirically present our findings for binary outcomes using logistic regression. For models properly specified, total effect estimates remained unbiased even when there was almost perfect correlation between the exposure and a given intermediate, confounder, or collider. In general, as the correlation increased, the variance of the parameter estimate for the exposure in the adjusted models increased, while in the unadjusted models, the variance increased to a lesser extent or decreased. Our findings highlight the importance of considering the causal framework under study when specifying regression models. Strategies that do not take into consideration the causal structure may lead to biased effect estimation for the original question of interest, even under high correlation.
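The unbiasedness claim for a properly specified model can be checked numerically. The following is an illustrative simulation (our construction, not the authors' code): a confounder C almost perfectly correlated with exposure X, with the adjusted OLS model still recovering the true exposure effect, albeit with inflated variance.

```python
import random

def ols(rows, y):
    """Ordinary least squares via the normal equations, solved by
    Gaussian elimination with partial pivoting. rows: feature rows
    (including an intercept column); returns the coefficient list."""
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda m: abs(A[m][i]))  # pivot row
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for m in range(i + 1, k):
            f = A[m][i] / A[i][i]
            for c in range(i, k):
                A[m][c] -= f * A[i][c]
            b[m] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

rng = random.Random(42)
n, true_bx, true_bc = 20000, 2.0, 1.0
rows, y = [], []
for _ in range(n):
    c = rng.gauss(0, 1)                    # confounder: C -> X and C -> Y
    x = 0.98 * c + 0.2 * rng.gauss(0, 1)   # exposure almost collinear with C
    rows.append([1.0, x, c])
    y.append(true_bx * x + true_bc * c + rng.gauss(0, 1))

beta = ols(rows, y)  # adjusted model: [intercept, X effect, C effect]
```

Despite corr(X, C) near 1, `beta[1]` stays close to the true effect of 2.0; repeating the simulation shows the cost is a larger standard error, exactly the variance-bias tradeoff the abstract describes.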
Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model
NASA Technical Reports Server (NTRS)
Segui, John S.; Jennings, Esther H.; Clare, Loren P.
2013-01-01
Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
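The earliest-arrival-time metric with Dijkstra's algorithm can be sketched as follows. This is a simplified toy, not the JPL implementation: contacts are hypothetical (sender, receiver, start, end) windows, transmission delay is ignored, and a node simply waits for the next usable window.

```python
import heapq

def earliest_arrival(contacts, src, dst, t0=0.0):
    """Dijkstra-style search over a contact plan using earliest arrival
    time, a monotonically increasing metric, as the routing cost.
    contacts: list of (u, v, start, end) windows in which u can send to v.
    Returns the earliest arrival time at dst, or None if unreachable."""
    best = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for a, b, start, end in contacts:
            if a != u or end <= t:
                continue             # contact closes before we arrive at u
            arr = max(t, start)      # wait for the window to open if needed
            if arr < best.get(b, float("inf")):
                best[b] = arr
                heapq.heappush(pq, (arr, b))
    return None

# Toy contact plan: waiting at B for the 20-30 window beats the late
# direct A-C contact at t=50.
plan = [("A", "B", 0, 10), ("B", "C", 20, 30), ("A", "C", 50, 60)]
t = earliest_arrival(plan, "A", "C")
```

Because earliest arrival time never decreases along a path, the standard Dijkstra invariant holds, which is the property the abstract credits for ECGR's correctness and low computational cost.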
ERIC Educational Resources Information Center
Smith, David Arthur
2010-01-01
Much recent work in natural language processing treats linguistic analysis as an inference problem over graphs. This development opens up useful connections between machine learning, graph theory, and linguistics. The first part of this dissertation formulates syntactic dependency parsing as a dynamic Markov random field with the novel…
2010-11-30
Erdős-Rényi-Gilbert random graph [Erdős and Rényi, 1959; Gilbert, 1959], the Watts-Strogatz "small world" framework [Watts and Strogatz, 1998], and the… (2003). Evolution of Networks. Oxford University Press, USA. Erdős, P. and Rényi, A. (1959). On Random Graphs. Publicationes Mathematicae, 6, 290-297.
Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets
2017-07-01
…principled methodology for two-sample graph testing; designed a provably almost-surely perfect vertex clustering algorithm for block model graphs; proved… 3.7 Semi-Supervised Clustering Methodology; 3.8 Robust Hypothesis Testing… …dimensional Euclidean space allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed for…
On Vieta's Formulas and the Determination of a Set of Positive Integers by Their Sum and Product
ERIC Educational Resources Information Center
Valahas, Theodoros; Boukas, Andreas
2011-01-01
In Years 9 and 10 of secondary schooling students are typically introduced to quadratic expressions and functions and related modelling, algebra, and graphing. This includes work on the expansion and factorisation of quadratic expressions (typically with integer values of coefficients), graphing quadratic functions, finding the roots of quadratic…
Dynamic airspace configuration algorithms for next generation air transportation system
NASA Astrophysics Data System (ADS)
Wei, Jian
The National Airspace System (NAS) is under great pressure to safely and efficiently handle the record-high air traffic volume nowadays, and will face even greater challenge to keep pace with the steady increase of future air travel demand, since the air travel demand is projected to increase to two to three times the current level by 2025. The inefficiency of traffic flow management initiatives causes severe airspace congestion and frequent flight delays, which cost billions of economic losses every year. To address the increasingly severe airspace congestion and delays, the Next Generation Air Transportation System (NextGen) is proposed to transform the current static and rigid radar based system to a dynamic and flexible satellite based system. New operational concepts such as Dynamic Airspace Configuration (DAC) have been under development to allow more flexibility required to mitigate the demand-capacity imbalances in order to increase the throughput of the entire NAS. In this dissertation, we address the DAC problem in the en route and terminal airspace under the framework of NextGen. We develop a series of algorithms to facilitate the implementation of innovative concepts relevant with DAC in both the en route and terminal airspace. We also develop a performance evaluation framework for comprehensive benefit analyses on different aspects of future sector design algorithms. First, we complete a graph based sectorization algorithm for DAC in the en route airspace, which models the underlying air route network with a weighted graph, converts the sectorization problem into the graph partition problem, partitions the weighted graph with an iterative spectral bipartition method, and constructs the sectors from the partitioned graph. The algorithm uses a graph model to accurately capture the complex traffic patterns of the real flights, and generates sectors with high efficiency while evenly distributing the workload among the generated sectors. 
We further improve the robustness and efficiency of the graph-based DAC algorithm by incorporating the Multilevel Graph Partitioning (MGP) method into the graph model, and develop an MGP-based sectorization algorithm for DAC in the en route airspace. In a comprehensive benefit analysis, the performance of the proposed algorithms is tested in numerical simulations with Enhanced Traffic Management System (ETMS) data. Simulation results demonstrate that the algorithmically generated sectorizations outperform the current sectorizations in different sectors for different time periods. Secondly, based on our experience with DAC in the en route airspace, we further study the sectorization problem for DAC in the terminal airspace. The differences between the en route and terminal airspace are identified, and their influence on terminal sectorization is analyzed. After adjusting the graph model to better capture the unique characteristics of the terminal airspace and the requirements of terminal sectorization, we develop a graph-based geometric sectorization algorithm for DAC in the terminal airspace. Moreover, the graph-based model is combined with the region-based sector design method to better handle the complicated geometric and operational constraints in the terminal sectorization problem. In the benefit analysis, we identify the contributing factors to terminal controller workload, define evaluation metrics, and develop a benefit analysis framework for terminal sectorization evaluation. With the evaluation framework developed, we demonstrate improvements over the current sectorizations with real traffic data collected from several major international airports in the U.S., and conduct a detailed analysis of the potential benefits of dynamic reconfiguration in the terminal airspace.
Finally, in addition to the research on the macroscopic behavior of a large number of aircraft, we also study the dynamical behavior of individual aircraft from the perspective of traffic flow management. We formulate the mode-confusion problem as a hybrid estimation problem, and develop a state estimation algorithm for linear hybrid systems with continuous-state-dependent transitions based on sparse observations. We also develop an estimated-time-of-arrival prediction algorithm based on the state-dependent-transition hybrid estimation algorithm, whose performance is demonstrated with simulations of the landing procedure following the Continuous Descent Approach (CDA) profile.
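The spectral bipartition step described in the first part of the dissertation can be illustrated as below. This is a generic sketch assuming numpy is available; the weight matrix `W` is a toy stand-in for a traffic-weighted route network, not ETMS data.

```python
import numpy as np

def spectral_bipartition(W):
    """Split a weighted undirected graph in two by the sign of the
    Fiedler vector: the eigenvector of the graph Laplacian belonging
    to the second-smallest eigenvalue. W: symmetric weight matrix."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian L = D - W
    vals, vecs = np.linalg.eigh(L)      # eigenvalues sorted ascending
    fiedler = vecs[:, 1]
    return np.where(fiedler >= 0)[0], np.where(fiedler < 0)[0]

# Two dense 3-node groups joined by one weak link (a toy route network).
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 5.0
W[2, 3] = W[3, 2] = 0.1                 # weak inter-group traffic
part_a, part_b = spectral_bipartition(W)
```

Applied recursively (iterative spectral bipartition), this cuts along the lightest traffic links, which is why the resulting sectors balance workload while keeping heavily coupled routes together.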
Venous tree separation in the liver: graph partitioning using a non-ising model.
O'Donnell, Thomas; Kaftan, Jens N; Schuh, Andreas; Tietjen, Christian; Soza, Grzegorz; Aach, Til
2011-01-01
Entangled tree-like vascular systems are commonly found in the body (e.g., in the peripheries and lungs). Separation of these systems in medical images may be formulated as a graph partitioning problem given an imperfect segmentation and specification of the tree roots. In this work, we show that the ubiquitous Ising-model approaches (e.g., Graph Cuts, Random Walker) are not appropriate for tackling this problem and propose a novel method based on recursive minimal paths for doing so. To motivate our method, we focus on the intertwined portal and hepatic venous systems in the liver. Separation of these systems is critical for liver intervention planning, in particular when resection is involved. We apply our method to 34 clinical datasets, each containing well over a hundred vessel branches, demonstrating its effectiveness.
Community structure and scale-free collections of Erdős-Rényi graphs.
Seshadhri, C; Kolda, Tamara G; Pinar, Ali
2012-05-01
Community structure plays a significant role in the analysis of social networks and similar graphs, yet this structure is little understood and not well captured by most models. We formally define a community to be a subgraph that is internally highly connected and has no deeper substructure. We use tools of combinatorics to show that any such community must contain a dense Erdős-Rényi (ER) subgraph. Based on mathematical arguments, we hypothesize that any graph with a heavy-tailed degree distribution and community structure must contain a scale-free collection of dense ER subgraphs. These theoretical observations corroborate well with empirical evidence. From this, we propose the Block Two-Level Erdős-Rényi (BTER) model, and demonstrate that it accurately captures the observable properties of many real-world social networks.
Families of Graph Algorithms: SSSP Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew
2017-08-28
Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra's algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another by the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then we derive the existing and new algorithms by methodically exploring semantic and spatial ordering of work. Our experimental results show that the newly derived algorithms perform better than the existing distributed-memory parallel algorithms, especially at higher scales.
The Time Window Vehicle Routing Problem Considering Closed Route
NASA Astrophysics Data System (ADS)
Irsa Syahputri, Nenna; Mawengkang, Herman
2017-12-01
The Vehicle Routing Problem (VRP) determines the optimal set of routes used by a fleet of vehicles to serve a given set of customers on a predefined graph; the objective is to minimize the total travel cost (related to travel times or distances) and operational cost (related to the number of vehicles used). In this paper we study a variant of the predefined graph: given a weighted graph G, vertices a and b, and a set X of closed paths in G, find an a-b path P of minimum total travel cost such that no path in X is a subpath of P. Path P is allowed to repeat vertices and edges. We use an integer programming model to describe the problem, and propose a feasible neighbourhood approach to solve it.
Wedge sampling for computing clustering coefficients and triangle counts on large graphs
Seshadhri, C.; Pinar, Ali; Kolda, Tamara G.
2014-05-08
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Despite the importance of these triadic measures, algorithms to compute them can be extremely expensive. We discuss the method of wedge sampling. This versatile technique allows for the fast and accurate approximation of various types of clustering coefficients and triangle counts. Furthermore, these techniques are extensible to counting directed triangles in digraphs. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state of the art, while providing nearly the accuracy of full enumeration.
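A minimal sketch of wedge sampling for the global clustering coefficient, under the usual formulation (our illustration, not the authors' code): sample a wedge center proportionally to its wedge count, pick two random neighbors, and record whether the wedge is closed by an edge.

```python
import random

def wedge_sampling_cc(adj, samples=10000, seed=7):
    """Estimate the global clustering coefficient (the fraction of
    closed wedges) by uniform wedge sampling. adj: node -> neighbor set."""
    rng = random.Random(seed)
    centers = [v for v in adj if len(adj[v]) >= 2]
    # A node of degree d is the center of d*(d-1)/2 wedges.
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in centers]
    closed = 0
    for _ in range(samples):
        v = rng.choices(centers, weights=weights)[0]
        a, b = rng.sample(sorted(adj[v]), 2)  # a random wedge a - v - b
        closed += b in adj[a]                 # closed iff a and b are adjacent
    return closed / samples

# In the complete graph K5 every wedge is closed, so the estimate is exact.
k5 = {i: {j for j in range(5) if j != i} for i in range(5)}
cc_k5 = wedge_sampling_cc(k5)

# In a star graph no wedge is closed.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
cc_star = wedge_sampling_cc(star)
```

The cost depends only on the number of sampled wedges, not on the triangle count, which is the source of the speedup over full enumeration.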
Distributed MPC based consensus for single-integrator multi-agent systems.
Cheng, Zhaomeng; Fan, Ming-Can; Zhang, Hai-Tao
2015-09-01
This paper addresses model predictive control schemes for consensus in multi-agent systems (MASs) with discrete-time single-integrator dynamics under switching directed interaction graphs. The control horizon is extended to be greater than one, which endows the closed-loop system with an extra degree of freedom. We derive sufficient conditions on the sampling period and the interaction graph to achieve consensus by using the property of infinite products of stochastic matrices. Consensus can be achieved asymptotically if the sampling period is selected such that the interaction graphs among agents jointly have a directed spanning tree. Significantly, if the interaction graph always has a spanning tree, one can select an arbitrarily large sampling period to guarantee consensus. Finally, several simulations are conducted to illustrate the effectiveness of the theoretical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
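The underlying single-integrator consensus update under switching graphs can be sketched as follows (a toy illustration with a hypothetical pair of alternating directed graphs whose union contains a directed spanning tree; the gain `eps` plays the role of the sampling period, and the MPC layer is omitted).

```python
def consensus_step(x, edges, eps=0.2):
    """One discrete-time single-integrator consensus update:
    x_i <- x_i + eps * sum over j in N_i of (x_j - x_i),
    where edge (i, j) means agent i receives agent j's state."""
    nxt = list(x)
    for i, j in edges:
        nxt[i] += eps * (x[j] - x[i])
    return nxt

# Two alternating directed interaction graphs; their union contains a
# directed spanning tree, so the agents still reach agreement.
g1 = [(1, 0), (2, 1)]
g2 = [(0, 2), (2, 1)]
x = [0.0, 5.0, 10.0]
for k in range(400):
    x = consensus_step(x, g1 if k % 2 == 0 else g2)
spread = max(x) - min(x)
```

Each step is multiplication by a row-stochastic matrix, so states stay within the initial range and the spread contracts geometrically, mirroring the infinite-products-of-stochastic-matrices argument in the abstract.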
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neil, Joshua Charles; Fisk, Michael Edward; Brugh, Alexander William
A system, apparatus, computer-readable medium, and computer-implemented method are provided for detecting anomalous behavior in a network. Historical parameters of the network are determined in order to determine normal activity levels. A plurality of paths in the network are enumerated as part of a graph representing the network, where each computing system in the network may be a node in the graph and the sequence of connections between two computing systems may be a directed edge in the graph. A statistical model is applied to the plurality of paths in the graph on a sliding window basis to detect anomalous behavior. Data collected by a Unified Host Collection Agent ("UHCA") may also be used to detect anomalous behavior.
Zhang, Qin; Yao, Quanying
2018-05-01
The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnoses of large, complex industrial systems, and to disease diagnoses. This paper extends the DUCG to model more complex cases than could previously be modeled, e.g., the case in which statistical data are in different groups with or without overlap, and some domain knowledge and actions (new variables with uncertain causalities) are introduced. In other words, this paper proposes to use -mode, -mode, and -mode of the DUCG to model such complex cases and then transform them into either the standard -mode or the standard -mode. In the former situation, if no directed cyclic graph is involved, the transformed result is simply a Bayesian network (BN), and existing inference methods for BNs can be applied. In the latter situation, an inference method based on the DUCG is proposed. Examples are provided to illustrate the methodology.
Trajectory planning for a fleet of UAVs (Planification de trajectoires pour une flotte d'UAVs)
NASA Astrophysics Data System (ADS)
Ait El Cadi, Abdessamad
In this thesis we address the problem of coordinating and controlling a fleet of Unmanned Aerial Vehicles (UAVs) during a surveillance mission in a dynamic context. The problem is vast and relates to several scientific domains. We have studied three important parts of this problem: • modeling the ground with all its constraints; • computing a shortest non-holonomic continuous path in a risky environment in the presence of obstacles; • planning a surveillance mission for a fleet of UAVs in a real context. While investigating the scientific literature related to these topics, we detected deficiencies in the modeling of the ground and in the computation of the shortest continuous path, two critical aspects for the planning of a mission. After the literature review, we therefore proposed answers to these two aspects and applied our developments to the planning of a mission of a fleet of UAVs in a risky environment in the presence of obstacles. Obstacles can be natural, such as mountains, or any non-flyable zone. We first modeled the ground as a directed graph. However, instead of using a classic mesh, we opted for an intelligent modeling that reduces the computing time on the graph without losing accuracy. The proposed model is based on the concept of a visibility graph, and it also takes into account the obstacles, the danger areas and the non-holonomy constraint of the UAVs (the kinematic constraint of the aircraft that imposes a maximum steering angle). The graph is then cleaned to keep only the minimum information needed for the calculation of trajectories. The generation of this graph may require a lot of computation time, but it is done only once before the planning and does not affect the performance of trajectory calculations. We have also developed another, simpler graph that does not take into account the non-holonomy constraint. The advantage of this second graph is that it reduces the computation time.
However, it requires the use of a correction procedure to make the resulting trajectory non-holonomic. This correction is possible within the context of our missions, but not for all types of autonomous vehicles. Once the directed graph is generated, we propose a procedure for calculating the shortest continuous non-holonomic path in a risky environment in the presence of obstacles. The directed graph already incorporates all the constraints, which makes it possible to model the problem as a shortest path problem with a resource constraint (the resource here is the amount of permitted risk). The results are very satisfactory, since the resulting routes are non-holonomic paths that meet all constraints. Moreover, the computing time is very short. For cases based on the simpler graph, we have created a procedure for correcting the trajectory to make it non-holonomic. All calculations of non-holonomy are based on Dubins curves (1957). We finally applied our results to the planning of a mission of a fleet of UAVs in a risky environment in the presence of obstacles. For this purpose, we developed a directed multi-graph where, for each pair of targets (the departure and return points of the mission included), we calculate a series of shortest trajectories with different limits of risk, from the risk-free path to the riskiest path. We then use a Tabu Search with two tabu lists. Using these procedures, we have been able to produce routes for a fleet of UAVs that minimize the cost of the mission while respecting the risk limit and avoiding obstacles. Tests are conducted on examples created on the basis of descriptions given by the Canadian Defence, and also on some instances of the CVRP (Capacitated Vehicle Routing Problem): those described by Christofides and Eilon and those described by Christofides, Mingozzi and Toth.
The results are very satisfactory, since all trajectories are non-holonomic and the improvement of the objective over a simple constructive method reaches between 10% and 43% in some cases. We even obtained an improvement of 69%, but on a poor solution generated by a greedy algorithm. (Abstract shortened by UMI.)
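The shortest-path-with-a-resource-constraint formulation described above can be sketched as a label-setting search over (node, accumulated risk) states; the graph, costs and budgets below are made up for illustration, and real solvers add dominance pruning between risk levels.

```python
import heapq

def min_cost_with_risk_budget(edges, n, src, dst, budget):
    """Shortest path subject to a resource constraint: minimise path length
    while the accumulated risk stays within a budget.

    edges: (u, v, length, risk) directed edges on nodes 0..n-1.
    Returns the minimal feasible length, or None if no path fits the budget.
    """
    adj = [[] for _ in range(n)]
    for u, v, length, risk in edges:
        adj[u].append((v, length, risk))
    best = {}                        # (node, accumulated risk) -> best cost
    heap = [(0.0, 0.0, src)]         # (cost, risk, node), ordered by cost
    while heap:
        cost, risk, u = heapq.heappop(heap)
        if u == dst:                 # first pop of the destination is optimal
            return cost
        if best.get((u, risk), float("inf")) < cost:
            continue
        for v, length, r in adj[u]:
            nr = risk + r
            if nr > budget:
                continue             # would exceed the permitted risk
            nc = cost + length
            if nc < best.get((v, nr), float("inf")):
                best[(v, nr)] = nc
                heapq.heappush(heap, (nc, nr, v))
    return None
```

Sweeping the budget from zero upward yields exactly the family of trajectories the thesis computes per target pair, from the risk-free path to the riskiest (cheapest) one.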
Using Rich Social Media Information for Music Recommendation via Hypergraph Model
NASA Astrophysics Data System (ADS)
Tan, Shulong; Bu, Jiajun; Chen, Chun; He, Xiaofei
There are various kinds of social media information, including different types of objects and relations among these objects, in music social communities such as Last.fm and Pandora. This information is valuable for music recommendation. However, there are two main challenges in exploiting this rich social media information: (a) there are many different types of objects and relations in music social communities, which makes it difficult to develop a unified framework that takes all objects and relations into account; (b) in these communities, some relations are much more sophisticated than pairwise relations, and thus cannot be simply modeled by a graph. We propose a novel music recommendation algorithm that uses both multiple kinds of social media information and music acoustic-based content. Instead of a graph, we use a hypergraph to model the various objects and relations, and consider music recommendation as a ranking problem on this hypergraph. While an edge of an ordinary graph connects only two objects, a hyperedge represents a set of objects. In this way, hypergraphs can be naturally used to model high-order relations.
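A generic way to rank objects on such a hypergraph is a random walk with restart over the vertex-hyperedge incidence structure; the sketch below illustrates this idea and is not the paper's exact ranking scheme.

```python
import numpy as np

def hypergraph_rank(H, query, alpha=0.85, n_iter=100):
    """Rank vertices of a hypergraph by a random walk with restart.

    H: (n_vertices, n_hyperedges) incidence matrix; H[v, e] = 1 when object v
    belongs to hyperedge e (a hyperedge may join any number of objects, e.g.
    a user, a song and a tag). query: restart weights over vertices.
    Assumes every vertex lies in at least one hyperedge.
    """
    dv = H.sum(axis=1)                    # vertex degrees
    de = H.sum(axis=0)                    # hyperedge sizes
    # One walk step: vertex -> random incident hyperedge -> random member.
    P = (H / dv[:, None]) @ (H.T / de[:, None])
    r = query / query.sum()
    for _ in range(n_iter):
        r = alpha * (P.T @ r) + (1 - alpha) * query / query.sum()
    return r
```

Because the transition matrix P is row-stochastic, the ranking vector stays a probability distribution, and mass only flows between objects that share at least one hyperedge.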
BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.
Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter
2013-02-01
Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data, without the necessity to assume a noise model. These methods have previously been combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise-induced disturbance in the diffusion data accumulates additively due to the incremental progression of streamline tractography algorithms. Graph-based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is incorporated in the present work into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path-finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph-based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold.
By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of streamline tractography algorithms or the assumption of a noise distribution. Moreover, the BootGraph can be applied to common DTI data sets without further modifications and shows a high repeatability. Thus, it is very well suited for longitudinal studies and meta-studies based on DTI. Copyright © 2012 Elsevier Inc. All rights reserved.
Friction behaviour of aluminium composites mixed with carbon fibers with different orientations
NASA Astrophysics Data System (ADS)
Caliman, R.
2016-08-01
The primary goal of this study was to identify a mixture of materials with enhanced friction and wear behaviour. Composite materials may be differentiated from alloys, which can contain two or more components but are formed naturally through processes such as casting. The load applied to the specimen during the tests plays a very important role in the friction coefficient and the wear rate. Sintered composites are gaining importance because the reinforcement serves to reduce the coefficient of thermal expansion and increase the strength and modulus. The friction tests were carried out at room temperature, in dry conditions, on a pin-on-disc machine. The exponentially decreasing regions of the friction-coefficient curves are attributed to the formation of a lubricant transfer film and the initial polishing of the sample surfaces. The influence of the orientation of the carbon fibers on the friction properties of the sintered polymer composites can be studied through mechanical wear tests, microscopy, and phenomenological models.
Stroganov, Oleg V; Novikov, Fedor N; Zeifman, Alexey A; Stroylov, Viktor S; Chilov, Ghermes G
2011-09-01
A new graph-theoretical approach called thermodynamic sampling of amino acid residues (TSAR) has been elaborated to explicitly account for the protein side chain flexibility in modeling conformation-dependent protein properties. In TSAR, a protein is viewed as a graph whose nodes correspond to structurally independent groups and whose edges connect the interacting groups. Each node has its set of states describing conformation and ionization of the group, and each edge is assigned an array of pairwise interaction potentials between the adjacent groups. By treating the obtained graph as a belief network (a well-established mathematical abstraction), the partition function of each node is found. In the current work we used TSAR to calculate partition functions of the ionized forms of protein residues. A simplified version of a semi-empirical molecular mechanical scoring function, borrowed from our Lead Finder docking software, was used for energy calculations. The accuracy of the resulting model was validated on a set of 486 experimentally determined pK(a) values of protein residues. The average correlation coefficient (R) between calculated and experimental pK(a) values was 0.80, ranging from 0.95 (for Tyr) to 0.61 (for Lys). It appeared that the hydrogen bond interactions and the exhaustiveness of side chain sampling made the most significant contribution to the accuracy of pK(a) calculations. Copyright © 2011 Wiley-Liss, Inc.
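For a toy interaction graph, the per-node partition functions and state marginals that TSAR computes by belief propagation can be checked by brute-force enumeration; the energies and temperature below are illustrative, and the exponential cost of enumeration is exactly what belief-network inference avoids.

```python
import itertools
import math

def node_marginals(states, node_energy, pair_energy, edges, kT=0.593):
    """Brute-force state marginals for a small interaction graph.

    states[i]: number of states of group i; node_energy[i][s]: energy of
    group i in state s; pair_energy[(i, j)][si][sj]: interaction energy on
    edge (i, j). kT defaults to kcal/mol at 298 K. Enumeration is exponential
    in the number of groups, so this only serves as a tiny correctness check.
    """
    Z = 0.0
    marg = [[0.0] * k for k in states]
    for conf in itertools.product(*(range(k) for k in states)):
        E = sum(node_energy[i][conf[i]] for i in range(len(states)))
        E += sum(pair_energy[(i, j)][conf[i]][conf[j]] for (i, j) in edges)
        w = math.exp(-E / kT)                 # Boltzmann weight
        Z += w
        for i, s in enumerate(conf):
            marg[i][s] += w
    return [[m / Z for m in row] for row in marg]
```

Interpreting the two states of a residue as protonated/deprotonated, such marginals are what a pK(a) prediction ultimately reads off.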
Prediction of energy expenditure and physical activity in preschoolers.
Butte, Nancy F; Wong, William W; Lee, Jong Soo; Adolph, Anne L; Puyau, Maurice R; Zakeri, Issa F
2014-06-01
Accurate, nonintrusive, and feasible methods are needed to predict energy expenditure (EE) and physical activity (PA) levels in preschoolers. Herein, we validated cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on accelerometry and heart rate (HR) for the prediction of EE using room calorimetry and doubly labeled water (DLW) and established accelerometry cut points for PA levels. Fifty preschoolers, mean ± SD age of 4.5 ± 0.8 yr, participated in room calorimetry for minute-by-minute measurements of EE, accelerometer counts (AC) (Actiheart and ActiGraph GT3X+), and HR (Actiheart). One hundred five free-living children, aged 4.6 ± 0.9 yr, completed the 7-d DLW procedure while wearing the devices. AC cut points for PA levels were established using smoothing splines and receiver operating characteristic curves. On the basis of calorimetry, mean percent errors for EE were -2.9% ± 10.8% and -1.1% ± 7.4% for CSTS models and -1.9% ± 9.6% and 1.3% ± 8.1% for MARS models using the Actiheart and ActiGraph+HR devices, respectively. On the basis of DLW, mean percent errors were -0.5% ± 9.7% and 4.1% ± 8.5% for CSTS models and 3.2% ± 10.1% and 7.5% ± 10.0% for MARS models using the Actiheart and ActiGraph+HR devices, respectively. Applying activity EE thresholds, final accelerometer cut points were determined: 41, 449, and 1297 cpm for Actiheart x-axis; 820, 3908, and 6112 cpm for ActiGraph vector magnitude; and 240, 2120, and 4450 cpm for ActiGraph x-axis for sedentary/light, light/moderate, and moderate/vigorous PA (MVPA), respectively. On the basis of confusion matrices, correctly classified rates were 81%-83% for sedentary PA, 58%-64% for light PA, and 62%-73% for MVPA. The lack of bias and acceptable limits of agreement affirms the validity of the CSTS and MARS models for the prediction of EE in preschool-aged children. 
Accelerometer cut points are satisfactory for the classification of sedentary, light, and moderate/vigorous levels of PA in preschoolers.
Model-based multiple patterning layout decomposition
NASA Astrophysics Data System (ADS)
Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.
2015-10-01
As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy the deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm is based on simplified assumptions about the optical simulation model, and therefore its usage on real layouts is limited. Recently, AMSL [2] also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resources and may lead to sub-optimal solutions. That approach also potentially generates too many stitches. 
In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution can achieve a practical mask design which significantly improves the lithography quality on the wafer compared to the rule based decomposition.
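The rule-based baseline described above, coloring the conflict graph G with k masks, can be sketched as a small backtracking search; practical decomposers use far more sophisticated solvers and stitch insertion, so this is only a minimal illustration.

```python
def decompose_layout(n, conflicts, k):
    """Rule-based k-colouring used as the DPL/TPL baseline: features 0..n-1,
    conflicts = pairs of features closer than d_min; assign each feature one
    of k masks so that no conflicting pair shares a mask.
    Backtracking search; returns a mask list or None if infeasible."""
    adj = [set() for _ in range(n)]
    for u, v in conflicts:
        adj[u].add(v)
        adj[v].add(u)
    colour = [None] * n

    def assign(i):
        if i == n:
            return True
        for c in range(k):
            # A mask is legal if no already-coloured conflict neighbour uses it.
            if all(colour[j] != c for j in adj[i]):
                colour[i] = c
                if assign(i + 1):
                    return True
        colour[i] = None
        return False

    return colour if assign(0) else None
```

A triangle of mutual conflicts is the classic case that forces TPL: it is 3-colourable but not 2-colourable, mirroring why DPL fails on dense layouts.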
Stavrakas, Vassilis; Melas, Ioannis N; Sakellaropoulos, Theodore; Alexopoulos, Leonidas G
2015-01-01
Modeling of signal transduction pathways is instrumental for understanding cell function. Such modeling has been pursued in order to accurately represent the signaling events inside the cell's biochemical microenvironment in a way that is meaningful to biologists. In this article, we propose a method to interrogate such pathways in order to produce cell-specific signaling models. We integrate available prior knowledge of protein connectivity, in the form of a Prior Knowledge Network (PKN), with phosphoproteomic data to construct predictive models of the protein connectivity of the interrogated cell type. Several computational methodologies focusing on logic modeling of pathways using optimization formulations or machine learning algorithms have been published on this front over the past few years. Here, we introduce a light and fast approach that uses a breadth-first traversal of the graph to identify the shortest pathways and score proteins in the PKN, fitting the dependencies extracted from the experimental design. The pathways are then combined through a heuristic formulation to produce a final topology that handles inconsistencies between the PKN and the experimental scenarios. Our results show that the algorithm we developed is efficient and accurate for the construction of medium- and large-scale signaling networks. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGF/TNFA stimulation against synthetic experimental data. To avoid the possibility of erroneous predictions, we performed a cross-validation analysis. Finally, we validate that the introduced approach generates predictive topologies, comparable to the ILP formulation. Overall, an efficient approach based on graph theory is presented herein to interrogate protein-protein interaction networks and to provide meaningful biological insights.
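The breadth-first extraction of shortest pathways from a PKN can be sketched as follows; the protein names in the example are illustrative, and the real method additionally scores proteins and reconciles the paths with the experimental scenarios.

```python
from collections import deque

def shortest_pkn_paths(pkn, source, targets):
    """Breadth-first traversal of a Prior Knowledge Network: for a stimulated
    protein, recover one shortest directed path to each measured target.

    pkn: dict mapping a node to its downstream neighbours.
    Returns {target: [source, ..., target]} for every reachable target.
    """
    parent = {source: None}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in pkn.get(u, ()):
            if v not in parent:          # first visit = fewest hops from source
                parent[v] = u
                q.append(v)
    paths = {}
    for tgt in targets:
        if tgt in parent:
            path, node = [], tgt
            while node is not None:      # walk the parent pointers back
                path.append(node)
                node = parent[node]
            paths[tgt] = path[::-1]
    return paths
```

The per-target paths returned here are the raw material that the heuristic combination step would then merge into a single cell-specific topology.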
Acceleration of Binding Site Comparisons by Graph Partitioning.
Krotzky, Timo; Klebe, Gerhard
2015-08-01
The comparison of protein binding sites is a prominent task in computational chemistry and has been studied in many different ways. For the automatic detection and comparison of putative binding cavities, the Cavbase system has been developed, which uses a coarse-grained set of pseudocenters to represent the physicochemical properties of a binding site and employs a graph-based procedure to calculate similarities between two binding sites. However, the comparison of two graphs is computationally quite demanding, which makes large-scale studies such as the rapid screening of entire databases hardly feasible. In a recent work, we proposed the method Local Cliques (LC) for the efficient comparison of Cavbase binding sites. It employs a clique heuristic to detect the maximum common subgraph of two binding sites and an extended graph model to additionally compare the shape of individual surface patches. In this study, we present an alternative to further accelerate the LC method by partitioning the binding-site graphs into disjoint components prior to their comparisons. The pseudocenter sets are split with regard to their assigned physicochemical type, which leads to seven graphs that are much smaller than the original one. Applying this approach to the same test scenarios as in the former comprehensive study results in a significant speed-up without sacrificing accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
TreeNetViz: revealing patterns of networks over tree structures.
Gou, Liang; Zhang, Xiaolong Luke
2011-12-01
Network data often contain important attributes from various dimensions such as social affiliations and areas of expertise in a social network. If such attributes exhibit a tree structure, visualizing a compound graph consisting of tree and network structures becomes complicated. How to visually reveal patterns of a network over a tree has not been fully studied. In this paper, we propose a compound graph model, TreeNet, to support visualization and analysis of a network at multiple levels of aggregation over a tree. We also present a visualization design, TreeNetViz, to offer multiscale and cross-scale exploration and interaction with a TreeNet graph. TreeNetViz uses a Radial, Space-Filling (RSF) visualization to represent the tree structure, a circle layout with novel optimization to show aggregated networks derived from TreeNet, and an edge bundling technique to reduce visual complexity. Our circular layout algorithm reduces both total edge crossings and edge length while also considering hierarchical structure constraints and edge weight in a TreeNet graph. Our experiments illustrate that the algorithm can reduce visual cluttering in TreeNet graphs. Our case study also shows that TreeNetViz has the potential to support the analysis of a compound graph by revealing multiscale and cross-scale network patterns. © 2011 IEEE
Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong
2011-01-01
Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem in which target objects of arbitrary shape mutually interact with terrain-like surfaces, a situation that widely exists in the medical imaging field. The approach incorporates context information used during simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in low-order polynomial time. The performance of the method was evaluated in robust delineation of lung tumors in megavoltage cone-beam CT images in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 +/- 0.10) was improved to 0.84 +/- 0.05 when employing our new method for pulmonary tumor segmentation.
Locating sources within a dense sensor array using graph clustering
NASA Astrophysics Data System (ADS)
Gerstoft, P.; Riahi, N.
2017-12-01
We develop a model-free technique to identify weak sources within dense sensor arrays using graph clustering. No knowledge about the propagation medium is needed except that signal strengths decay to insignificant levels within a scale that is shorter than the aperture. We then reinterpret the spatial coherence matrix of a wave field as a matrix whose support is a connectivity matrix of a graph with sensors as vertices. In a dense network, well-separated sources induce clusters in this graph. The geographic spread of these clusters can serve to localize the sources. The support of the covariance matrix is estimated from limited-time data using a hypothesis test with a robust phase-only coherence test statistic combined with a physical distance criterion. The latter criterion ensures graph sparsity and thus prevents clusters from forming by chance. We verify the approach and quantify its reliability on a simulated dataset. The method is then applied to data from a dense 5200-element geophone array that blanketed the city of Long Beach, CA. The analysis exposes a helicopter traversing the array and oil production facilities.
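The core pipeline, thresholding a coherence matrix jointly with a physical distance criterion and reading off graph clusters, can be sketched as follows; the fixed thresholds and the plain connected-components step are simplifications of the paper's hypothesis test.

```python
import numpy as np

def source_clusters(C, positions, coh_thresh=0.6, dist_thresh=1.0):
    """Sensor graph from a coherence matrix: connect sensors that are both
    highly coherent and physically close (the sparsity criterion above), then
    return connected components as candidate source clusters.

    C: (n, n) coherence magnitudes; positions: (n, d) sensor coordinates.
    Thresholds are illustrative stand-ins for the paper's hypothesis test.
    """
    n = C.shape[0]
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    adj = (np.abs(C) >= coh_thresh) & (dist <= dist_thresh)
    np.fill_diagonal(adj, False)
    seen, clusters = set(), []
    for s in range(n):                     # connected components by DFS
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in np.nonzero(adj[u])[0]:
                if int(v) not in seen:
                    seen.add(int(v))
                    stack.append(int(v))
        clusters.append(sorted(comp))
    return clusters
```

The distance criterion is what keeps spurious long-range coherences (such as the cross-pair in the test below) from merging clusters that belong to distinct sources.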
Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.
Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A
2016-08-25
There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
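For binary site states and Ising-type pairwise terms, the reduction from energy minimization to minimum cut can be made concrete; the sketch below uses Edmonds-Karp max flow and handles only this restricted (submodular) energy form, not the paper's full construction.

```python
from collections import deque

def min_energy_states(unary, coupling):
    """Minimise E(x) = sum_i unary[i][x_i] + sum_(i,j) coupling[(i,j)]*[x_i != x_j]
    over binary states x_i in {0, 1} via a single max-flow / min-cut computation.

    unary: list of (energy at state 0, energy at state 1) per site;
    coupling: {(i, j): penalty for disagreeing states}, penalties >= 0.
    Returns (minimum energy, optimal state list).
    """
    n = len(unary)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, (e0, e1) in enumerate(unary):
        cap[s][i] += e1            # paid when site i ends on the sink side (x_i = 1)
        cap[i][t] += e0            # paid when site i ends on the source side (x_i = 0)
    for (i, j), lam in coupling.items():
        cap[i][j] += lam           # paid when x_i and x_j disagree
        cap[j][i] += lam
    flow = 0.0
    while True:                    # Edmonds-Karp: shortest augmenting paths by BFS
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n + 2):
                if v not in parent and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    reach, q = {s}, deque([s])     # source side of the min cut -> state 0
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if v not in reach and cap[u][v] > 1e-12:
                reach.add(v)
                q.append(v)
    return flow, [0 if i in reach else 1 for i in range(n)]
```

The min-cut value equals the minimum free energy and the cut itself encodes the optimal states, exactly the correspondence the abstract describes.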
Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation
Barker, Andrew T.; Lee, Chak S.; Vassilevski, Panayot S.
2017-10-26
Here, we consider coarsening procedures for graph Laplacian problems written in a mixed saddle-point form. In that form, in addition to the original (vertex) degrees of freedom (dofs), we also have edge degrees of freedom. We extend previously developed aggregation-based coarsening procedures applied to both sets of dofs to now allow more than one coarse vertex dof per aggregate. Those dofs are selected as certain eigenvectors of local graph Laplacians associated with each aggregate. Additionally, we coarsen the edge dofs by using traces of the discrete gradients of the already constructed coarse vertex dofs. These traces are defined on the interface edges that connect any two adjacent aggregates. The overall procedure is a modification of the spectral upscaling procedure developed previously for the mixed finite element discretization of diffusion-type PDEs, which has the important property of maintaining inf-sup stability on coarse levels and having provable approximation properties. We consider applications to partitioning a general graph and to a finite volume discretization interpreted as a graph Laplacian, developing consistent and accurate coarse-scale models of a fine-scale problem.
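The simplest version of aggregation-based coarsening, one piecewise-constant coarse dof per aggregate, can be written in a few lines; the spectral enrichment with extra local eigenvectors and the edge-dof coarsening described above are omitted from this sketch.

```python
import numpy as np

def coarsen_laplacian(L, aggregates):
    """Aggregation-based coarsening of a graph Laplacian: build a piecewise-
    constant prolongation P (one column per aggregate) and form the Galerkin
    coarse operator Lc = P^T L P.

    L: (n, n) graph Laplacian; aggregates: lists of vertex indices
    partitioning 0..n-1.
    """
    n = L.shape[0]
    P = np.zeros((n, len(aggregates)))
    for a, idx in enumerate(aggregates):
        P[idx, a] = 1.0                # constant vector on the aggregate
    return P.T @ L @ P, P
```

Since the constant vector lies in the range of P, the coarse operator inherits the Laplacian zero-row-sum property; what it cannot capture without spectral enrichment is fine-scale variation inside an aggregate, which is where the extra local eigenvectors come in.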