Sample records for graph transformation rules

  1. The Replicator Equation on Graphs

    PubMed Central

    Ohtsuki, Hisashi; Nowak, Martin A.

    2008-01-01

    We study evolutionary games on graphs. Each player is represented by a vertex of the graph. The edges denote who meets whom. A player can use any one of n strategies. Players obtain a payoff from interaction with all their immediate neighbors. We consider three different update rules, called ‘birth-death’, ‘death-birth’ and ‘imitation’. A fourth update rule, ‘pairwise comparison’, is shown to be equivalent to birth-death updating in our model. We use pair-approximation to describe the evolutionary game dynamics on regular graphs of degree k. In the limit of weak selection, we can derive a differential equation which describes how the average frequency of each strategy on the graph changes over time. Remarkably, this equation is a replicator equation with a transformed payoff matrix. Therefore, moving a game from a well-mixed population (the complete graph) onto a regular graph simply results in a transformation of the payoff matrix. The new payoff matrix is the sum of the original payoff matrix plus another matrix, which describes the local competition of strategies. We discuss the application of our theory to four particular examples, the Prisoner’s Dilemma, the Snow-Drift game, a coordination game and the Rock-Scissors-Paper game. PMID:16860343
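
    The transformed dynamics can be written compactly. As a sketch, the correction matrix B below is the form usually quoted for death-birth updating on a regular graph of degree k > 2; birth-death and imitation updating yield different B:

```latex
\dot{x}_i = x_i\left[\sum_{j=1}^{n} x_j\,(a_{ij} + b_{ij}) - \phi\right],
\qquad
\phi = \sum_{i,j} x_i x_j\,(a_{ij} + b_{ij}),
\qquad
b_{ij} = \frac{a_{ii} + a_{ij} - a_{ji} - a_{jj}}{k-2}.
```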

  2. Incorporating Technology and Cooperative Learning to Teach Function Transformations

    ERIC Educational Resources Information Center

    Boz, Burçak; Erbilgin, Evrim

    2015-01-01

    When teaching transformations of functions, teachers typically have students vary the coefficients of equations and examine the resulting changes in the graph. This approach, however, may lead students to memorise rules related to transformations. Students need opportunities to think deeply about transformations beyond superficial observations…

  3. Transforming the Way We Teach Function Transformations

    ERIC Educational Resources Information Center

    Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.

    2010-01-01

    In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input…

  4. Transformations of Mathematical and Stimulus Functions

    PubMed Central

    Ninness, Chris; Barnes-Holmes, Dermot; Rumph, Robin; McCuller, Glen; Ford, Angela M; Payne, Robert; Ninness, Sharon K; Smith, Ronald J; Ward, Todd A; Elliott, Marc P

    2006-01-01

    Following a pretest, 8 participants who were unfamiliar with algebraic and trigonometric functions received a brief presentation on the rectangular coordinate system. Next, they participated in a computer-interactive matching-to-sample procedure that trained formula-to-formula and formula-to-graph relations. Then, they were exposed to 40 novel formula-to-graph tests and 10 novel graph-to-formula tests. Seven of the 8 participants showed substantial improvement in identifying formula-to-graph relations; however, in the test of novel graph-to-formula relations, participants tended to select equations in their factored form. Next, we manipulated contextual cues in the form of rules regarding mathematical preferences. First, we informed participants that standard forms of equations were preferred over factored forms. In a subsequent test of 10 additional novel graph-to-formula relations, participants shifted their selections to favor equations in their standard form. This preference reversed during 10 more tests when financial reward was made contingent on correct identification of formulas in factored form. Formula preferences and transformation of novel mathematical and stimulus functions are discussed. PMID:17020211

  5. A Brief Historical Introduction to Matrices and Their Applications

    ERIC Educational Resources Information Center

    Debnath, L.

    2014-01-01

    This paper deals with the ancient origin of matrices, and the system of linear equations. Included are algebraic properties of matrices, determinants, linear transformations, and Cramer's Rule for solving the system of algebraic equations. Special attention is given to some special matrices, including matrices in graph theory and electrical…

  6. A new network representation of the metabolism to detect chemical transformation modules.

    PubMed

    Sorokina, Maria; Medigue, Claudine; Vallenet, David

    2015-11-14

    Metabolism is generally modeled by directed networks where nodes represent reactions and/or metabolites. In order to explore metabolic pathway conservation and divergence among organisms, previous studies were based on graph alignment to find similar pathways. A few years ago, the concept of chemical transformation modules, also called reaction modules, was introduced; these correspond to sequences of chemical transformations that are conserved in metabolism. We propose here a novel graph representation of the metabolic network in which reactions sharing the same chemical transformation type are grouped into Reaction Molecular Signatures (RMS). RMS were automatically computed for all reactions and encode changes in atoms and bonds. A reaction network containing all available metabolic knowledge was then reduced by aggregating reaction nodes and edges to obtain an RMS network. Paths in this network were explored, and a substantial number of conserved chemical transformation modules was detected. Furthermore, this graph-based formalism allows us to define several path scores reflecting different biological meanings of conservation. These scores are significantly higher for paths corresponding to known metabolic pathways and were used jointly to build association rules that should predict metabolic pathway types such as biosynthesis or degradation. This representation of metabolism as an RMS network offers new insights to capture relevant metabolic contexts. Furthermore, along with genomic context methods, it should improve the detection of gene clusters corresponding to new metabolic pathways.
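
    The aggregation step described above amounts to collapsing reaction nodes that share a chemical transformation type into a single node. A minimal sketch with networkx follows; the signature strings and reaction names are invented placeholders, not the authors' actual RMS encoding:

```python
import networkx as nx

# Toy reaction network: nodes are reactions, each labeled with a
# (hypothetical) Reaction Molecular Signature describing bond/atom changes.
G = nx.DiGraph()
G.add_nodes_from([
    ("r1", {"rms": "C-O+H"}),   # placeholder signature strings
    ("r2", {"rms": "C-O+H"}),
    ("r3", {"rms": "C=C-H2"}),
])
G.add_edges_from([("r1", "r3"), ("r2", "r3")])

# Aggregate: reactions sharing the same signature become one RMS node.
rms_graph = nx.DiGraph()
for u, v in G.edges():
    ru, rv = G.nodes[u]["rms"], G.nodes[v]["rms"]
    if ru != rv:
        rms_graph.add_edge(ru, rv)

print(list(rms_graph.edges()))   # [('C-O+H', 'C=C-H2')]
```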

  7. A fast algorithm for vertex-frequency representations of signals on graphs

    PubMed Central

    Jestrović, Iva; Coyle, James L.; Sejdić, Ervin

    2016-01-01

    The windowed Fourier transform (short-time Fourier transform) and the S-transform are widely used signal processing tools for extracting frequency information from non-stationary signals. Previously, the windowed Fourier transform had been adapted for signals on graphs and shown to be very useful for extracting vertex-frequency information from graphs. However, high computational complexity makes these algorithms impractical. We sought to develop a fast windowed graph Fourier transform and a fast graph S-transform requiring significantly shorter computation time. The proposed schemes have been tested with synthetic test graph signals and real graph signals derived from electroencephalography recordings made during swallowing. The results showed that the proposed schemes provide significantly lower computation time in comparison with the standard windowed graph Fourier transform and the standard graph S-transform. Also, the results showed that noise has no effect on the results of either the fast windowed graph Fourier transform or the fast graph S-transform. Finally, we showed that graphs can be reconstructed from the vertex-frequency representations obtained with the proposed algorithms. PMID:28479645
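
    Both transforms build on the graph Fourier transform, which uses the Laplacian eigenvectors as Fourier modes. A baseline numpy sketch of that definition (not the fast algorithms of the paper):

```python
import numpy as np

# Adjacency matrix of a small undirected graph (a 4-cycle).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

# Laplacian eigenvectors play the role of Fourier modes on the graph.
eigvals, U = np.linalg.eigh(L)

f = np.array([1.0, 2.0, 0.5, -1.0])     # a signal, one value per vertex
f_hat = U.T @ f                          # graph Fourier transform
f_rec = U @ f_hat                        # inverse transform
assert np.allclose(f, f_rec)
```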

  8. Hierarchical graphs for rule-based modeling of biochemical systems

    PubMed Central

    2011-01-01

    Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for specifying rule-based models, such as the BioNetGen language (BNGL). Thus, the proposed use of hierarchical graphs should promote clarity and better understanding of rule-based models. PMID:21288338
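
    A toy illustration of the two rule classes described above (edge addition for binding, attribute change for internal state), using networkx with invented component names rather than actual BNGL:

```python
import networkx as nx

# Two molecules, each a graph of components: 'color' marks the molecule
# type, 'state' is the internal state of a component.
mixture = nx.Graph()
mixture.add_node("A.site1", color="A", state="unphosphorylated")
mixture.add_node("B.site1", color="B", state="unphosphorylated")

def bind_rule(g, u, v):
    """Edge-addition rule: an association reaction forming a bond u-v."""
    if not g.has_edge(u, v):
        g.add_edge(u, v, kind="bond")

def modify_rule(g, u, new_state):
    """Attribute-change rule: a reaction changing a component's state."""
    g.nodes[u]["state"] = new_state

bind_rule(mixture, "A.site1", "B.site1")
modify_rule(mixture, "A.site1", "phosphorylated")
print(list(mixture.edges(data=True)), dict(mixture.nodes(data=True)))
```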

  9. Modelling Chemical Reasoning to Predict and Invent Reactions.

    PubMed

    Segler, Marwin H S; Waller, Mark P

    2017-05-02

    The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by considering only the intrinsic local structure of the graph; because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery.

  10. Kirchhoff's rule for quantum wires

    NASA Astrophysics Data System (ADS)

    Kostrykin, V.; Schrader, R.

    1999-01-01

    We formulate and discuss one-particle quantum scattering theory on an arbitrary finite graph with n open ends, where we define the Hamiltonian to be (minus) the Laplace operator with general boundary conditions at the vertices. This results in a scattering theory with n channels. The corresponding on-shell S-matrix formed by the reflection and transmission amplitudes for incoming plane waves of energy E>0 is given explicitly in terms of the boundary conditions and the lengths of the internal lines. It is shown to be unitary, which may be viewed as the quantum version of Kirchhoff's law. We exhibit covariance and symmetry properties: the S-matrix is symmetric if the boundary conditions are real. Also, there is a duality transformation on the set of boundary conditions and the lengths of the internal lines such that the low-energy behaviour of one theory gives the high-energy behaviour of the transformed theory. Finally, we provide a composition rule by which the on-shell S-matrix of a graph is factorizable in terms of the S-matrices of its subgraphs. All proofs use only known facts from the theory of self-adjoint extensions, standard linear algebra, complex function theory, and elementary arguments from the theory of Hermitian symplectic forms.
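
    For a single vertex with n half-lines and boundary conditions of the form A\psi + B\psi' = 0, the on-shell S-matrix of this construction has a standard closed form (a sketch of the Kostrykin-Schrader expression; internal lines of finite length contribute additional energy-dependent phase factors):

```latex
S(E) = -\,(A + \mathrm{i}\sqrt{E}\,B)^{-1}\,(A - \mathrm{i}\sqrt{E}\,B),
\qquad E > 0.
```

    Unitarity holds whenever the boundary conditions define a self-adjoint Laplacian, i.e. when AB† is Hermitian and the n×2n matrix (A, B) has maximal rank.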

  11. Automatic Generation of Supervisory Control System Software Using Graph Composition

    NASA Astrophysics Data System (ADS)

    Nakata, Hideo; Sano, Tatsuro; Kojima, Taizo; Seo, Kazuo; Uchida, Tomoyuki; Nakamura, Yasuaki

    This paper describes the automatic generation of system descriptions for SCADA (Supervisory Control And Data Acquisition) systems. The proposed method produces various types of data and programs for SCADA systems from equipment definitions using conversion rules. First, the method builds directed graphs, which represent connections between the equipment, from the equipment definitions. System descriptions are then generated using the conversion rules by analyzing these directed graphs and finding the groups of equipment that involve similar operations. The method can organize the conversion rules into multiple levels by composing graphs, which reduces the number of rules, so the developer can define and manage these rules efficiently.

  12. Hierarchical graphs for better annotations of rule-based models of biochemical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Bin; Hlavacek, William

    2009-01-01

    In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.

  13. Generic strategies for chemical space exploration.

    PubMed

    Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F

    2014-01-01

    The chemical universe of molecules reachable from a set of start compounds by iterative application of a finite number of reactions is usually so vast that sophisticated and efficient exploration strategies are required to cope with the combinatorial complexity. A stringent analysis of (bio)chemical reaction networks, as approximations of these complex chemical spaces, forms the foundation for the understanding of functional relations in chemistry and biology. Graphs and graph rewriting are natural models for molecules and reactions. Borrowing the idea of partial evaluation from functional programming, we introduce partial applications of rewrite rules. A framework for the specification of exploration strategies in graph-rewriting systems is presented. Using key examples of complex reaction networks from carbohydrate chemistry, we demonstrate the feasibility of this high-level strategy framework. While designed for chemical applications, the framework can also be used to emulate higher-level transformation models, as illustrated by a small puzzle game.
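
    The exploration problem itself is easy to state in code. A generic sketch of iterative rule application under a breadth-first strategy, with plain string transformations standing in for the paper's graph-rewrite rules:

```python
from collections import deque

# Stand-ins for graph-rewrite rules: each maps a "molecule" to successors.
def rule_add(m):     return [m + "A"]
def rule_double(m):  return [m + m] if len(m) <= 4 else []

RULES = [rule_add, rule_double]

def explore(start_set, max_depth):
    """Breadth-first exploration of the space reachable from start_set."""
    seen = set(start_set)
    frontier = deque((m, 0) for m in start_set)
    while frontier:
        mol, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for rule in RULES:
            for product in rule(mol):
                if product not in seen:
                    seen.add(product)
                    frontier.append((product, depth + 1))
    return seen

print(sorted(explore({"C"}, max_depth=3)))
```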

  14. An image understanding system using attributed symbolic representation and inexact graph-matching

    NASA Astrophysics Data System (ADS)

    Eshera, M. A.; Fu, K.-S.

    1986-09-01

    A powerful image understanding system using a semantic-syntactic representation scheme consisting of attributed relational graphs (ARGs) is proposed for the analysis of the global information content of images. A multilayer graph transducer scheme performs the extraction of ARG representations from images, with ARG nodes representing the global image features, and the relations between features represented by the attributed branches between corresponding nodes. An efficient dynamic programming technique is employed to derive the distance between two ARGs and the inexact matching of their respective components. Noise, distortion and ambiguity in real-world images are handled through modeling in the transducer mapping rules and through the appropriate cost of error-transformation for the inexact matching of the representation. The system is demonstrated for the case of locating objects in a scene composed of complex overlapped objects, and the case of target detection in noisy and distorted synthetic aperture radar images.

  15. GraDit: graph-based data repair algorithm for multiple data edits rule violations

    NASA Astrophysics Data System (ADS)

    Madjida, Wa Ode Zuhayeni; Nugraha, I Gusti Bagus Baskara

    2018-03-01

    Constraint-based data cleaning captures data violations of a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations that uses an undirected hypergraph to represent rule violations. Nevertheless, this algorithm cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of all defined rules; this representation captures the interactions between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with an undirected graph as the violation representation model gave better data quality than the algorithm with an undirected hypergraph as the representation model.

  16. Granular Flow Graph, Adaptive Rule Generation and Tracking.

    PubMed

    Pal, Sankar Kumar; Chakraborty, Debarati Bhunia

    2017-12-01

    A new method of adaptive rule generation in a granular computing framework is described, based on a rough rule base and a granular flow graph, and applied to video tracking. In the process, several new concepts and operations are introduced, and methodologies with superior performance are formulated. The flow graph enables the definition of an intelligent technique for rule-base adaptation, exploiting its ability to map the relevance of attributes and rules in a decision-making system. Two new features, namely, the expected flow graph and the mutual dependency between flow graphs, are defined to make the flow graph applicable to both training and validation tasks. All these techniques operate at the neighborhood granular level. A way of forming spatio-temporal 3-D granules of arbitrary shape and size is introduced. The resulting rough flow-graph-based adaptive granular rule-based system for unsupervised video tracking can handle uncertainty and incompleteness in frames, overcomes the incompleteness of information that arises in the absence of initial manual interaction, and provides superior performance with gains in computation time. The cases of partial overlapping and of detecting unpredictable changes are handled efficiently. It is shown that neighborhood granulation provides a balanced tradeoff between speed and accuracy compared with pixel-level computation. The quantitative indices used for evaluating tracking performance do not require any ground-truth information, as other methods do. The superiority of the algorithm to nonadaptive and other recent methods is demonstrated extensively.

  17. SPARQL Query Re-writing Using Partonomy Based Transformation Rules

    NASA Astrophysics Data System (ADS)

    Jain, Prateek; Yeh, Peter Z.; Verma, Kunal; Henson, Cory A.; Sheth, Amit P.

    Often the information present in a spatial knowledge base is represented at a different level of granularity and abstraction than the query constraints. When querying ontologies containing spatial information, the precise relationships between spatial entities have to be specified in the basic graph pattern of a SPARQL query, which can result in long and complex queries. We present a novel approach that helps users write SPARQL queries over spatial data intuitively, rather than relying on knowledge of the ontology structure. Our framework re-writes queries, using transformation rules that exploit part-whole relations between geographical entities, to address mismatches between query constraints and the knowledge base. Our experiments were performed on completely third-party datasets and queries: the Geonames dataset with questions from the National Geographic Bee serialized into SPARQL, and the British Administrative Geography Ontology with questions from a popular trivia website. These experiments demonstrate high precision in retrieval of results and ease in writing queries.
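
    The core of the rewrite is expanding a direct containment constraint into its part-whole closure, so the user need not know the partonomy depth. A hypothetical Python sketch of that expansion (the partonomy entries are invented for illustration):

```python
# Toy part-whole relation (child -> parent); a real system would read
# these links from the ontology rather than a hard-coded dictionary.
PARTONOMY = {
    "Madison": "Dane County",
    "Dane County": "Wisconsin",
}

def located_in(entity, region):
    """True if entity lies in region through any chain of part-of links."""
    while entity is not None:
        if entity == region:
            return True
        entity = PARTONOMY.get(entity)
    return False

print(located_in("Madison", "Wisconsin"))   # True, via Dane County
```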

  18. Decision net, directed graph, and neural net processing of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki; Barnard, Etienne

    1989-01-01

    A decision-net solution involving a novel hierarchical classifier and a set of multiple directed graphs, as well as a neural-net solution, are presented for the large-class problem and the mixture problem in imaging spectrometer data, respectively. The clustering method for hierarchical classifier design, when used with multiple directed graphs, yields an efficient decision net. New directed-graph rules for reducing local maxima as well as the number of perturbations required, and new starting-node rules for extending the reachability and reducing the search time of the graphs, are noted to yield superior results, as indicated by an illustrative 500-class imaging spectrometer problem.

  19. Automated visualization of rule-based models

    PubMed Central

    Tapia, Jose-Juan; Faeder, James R.

    2017-01-01

    Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816
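
    The atom-rule graph of (ii) is a bipartite network between rules and the structural features ("atoms") they read or write. A minimal networkx sketch with invented rule and atom names:

```python
import networkx as nx

# Bipartite atom-rule graph: one node set for rules, one for "atoms"
# (structural features such as bonds or states); a directed edge means
# the rule produces the atom, or the atom is required context for the rule.
arg = nx.DiGraph()
arg.add_nodes_from(["R1_bind", "R2_phos"], kind="rule")      # hypothetical
arg.add_nodes_from(["bond(A,B)", "state(A~P)"], kind="atom")
arg.add_edge("R1_bind", "bond(A,B)")      # R1 produces the bond
arg.add_edge("bond(A,B)", "R2_phos")      # R2 requires the bond as context
arg.add_edge("R2_phos", "state(A~P)")     # R2 produces the phospho-state

# Regulatory reading: R1 enables R2 through the shared bond atom.
print(list(nx.topological_sort(arg)))
```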

  20. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1989-01-01

    The results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems are given. A translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis were developed. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.

  1. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1988-01-01

    This final report describes the results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems. This was approached by developing a translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.

  2. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties

    NASA Astrophysics Data System (ADS)

    Xie, Tian; Grossman, Jeffrey C.

    2018-04-01

    The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformation of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides a highly accurate prediction of density functional theory calculated properties for eight different properties of crystals with various structure types and compositions after being trained with 10^4 data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.
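
    A simplified numpy sketch of one gated crystal-graph convolution update in the spirit of this framework (feature sizes, random weights, and the toy neighbor list are placeholders, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, d_v, d_e = 4, 8, 4                   # atom- and edge-feature sizes

V = rng.normal(size=(n_atoms, d_v))           # atom feature vectors
E = rng.normal(size=(n_atoms, n_atoms, d_e))  # edge (bond) features
neighbors = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}

d_z = 2 * d_v + d_e                           # concatenated message size
W_f = rng.normal(size=(d_z, d_v))             # gate weights (placeholder)
W_s = rng.normal(size=(d_z, d_v))             # message weights (placeholder)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_step(V):
    """One gated graph-convolution update on atom features."""
    V_new = V.copy()
    for i, nbrs in neighbors.items():
        for j in nbrs:
            z = np.concatenate([V[i], V[j], E[i, j]])
            V_new[i] += sigmoid(z @ W_f) * np.tanh(z @ W_s)  # gate * message
    return V_new

V = conv_step(V)
crystal_vec = V.mean(axis=0)  # pooled representation for property prediction
print(crystal_vec.shape)
```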

  3. A real-time expert system for self-repairing flight control

    NASA Technical Reports Server (NTRS)

    Gaither, S. A.; Agarwal, A. K.; Shah, S. C.; Duke, E. L.

    1989-01-01

    An integrated environment for specifying, prototyping, and implementing a self-repairing flight-control (SRFC) strategy is described. At an interactive workstation, the user can select paradigms such as rule-based expert systems, state-transition diagrams, and signal-flow graphs and hierarchically nest them, assign timing and priority attributes, establish blackboard-type communication, and specify concurrent execution on single or multiple processors. High-fidelity nonlinear simulations of aircraft and SRFC systems can be performed off-line, with the possibility of changing SRFC rules, inference strategies, and other heuristics to correct for control deficiencies. Finally, the off-line-generated SRFC can be transformed into highly optimized application-specific real-time C-language code. An application of this environment to the design of aircraft fault detection, isolation, and accommodation algorithms is presented in detail.

  4. Bifurcation and Fractal of the Coupled Logistic Map

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luo, Chao

    The nature of the fixed points of the coupled logistic map is investigated, and the boundary equation of the first bifurcation of the coupled logistic map in parameter space is derived. Using the quantitative criteria and rules of system chaos, i.e., phase graphs, bifurcation graphs, power spectra, computation of the fractal dimension, and the Lyapunov exponent, the paper reveals the general characteristics of the coupled logistic map as it transforms from regularity to chaos. The following conclusions are shown: (1) chaotic patterns of the coupled logistic map may emerge out of double-periodic bifurcation and Hopf bifurcation, respectively; (2) during the process of double-period bifurcation, the system exhibits self-similarity and scale-transform invariance in both the parameter space and the phase space. From the study of the attraction basin and the Mandelbrot-Julia set of the coupled logistic map, the following conclusions are indicated: (1) the boundary between periodic and quasiperiodic regions is fractal, which indicates the impossibility of predicting the movement of points in the phase plane; (2) the structures of the Mandelbrot-Julia sets are determined by the control parameters, and their boundaries have the fractal characteristic.
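
    A runnable sketch for experimenting with such dynamics, assuming one common symmetric coupling of two logistic maps (the paper's exact coupling form may differ); the largest Lyapunov exponent is estimated by repeatedly renormalizing a perturbed twin trajectory:

```python
import numpy as np

def f(u, mu):
    return mu * u * (1.0 - u)

def step(x, y, mu, eps):
    """One synchronous update of the symmetrically coupled pair."""
    return ((1 - eps) * f(x, mu) + eps * f(y, mu),
            (1 - eps) * f(y, mu) + eps * f(x, mu))

def largest_lyapunov(mu, eps, n=5000, d0=1e-9):
    """Benettin-style estimate: renormalize a perturbed twin each step."""
    x, y = 0.3, 0.6
    x2, y2 = x + d0, y
    total = 0.0
    for _ in range(n):
        x, y = step(x, y, mu, eps)
        x2, y2 = step(x2, y2, mu, eps)
        d = np.hypot(x2 - x, y2 - y)
        total += np.log(d / d0)
        # pull the twin back to distance d0 along the current direction
        x2, y2 = x + d0 * (x2 - x) / d, y + d0 * (y2 - y) / d
    return total / n

print(largest_lyapunov(mu=3.9, eps=0.1))  # positive value suggests chaos
```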

  5. A signal-flow-graph approach to on-line gradient calculation.

    PubMed

    Campolucci, P; Uncini, A; Piazza, F

    2000-08-01

    A large class of nonlinear dynamic adaptive systems such as dynamic recurrent neural networks can be effectively represented by signal flow graphs (SFGs). By this method, complex systems are described as a general connection of many simple components, each of them implementing a simple one-input, one-output transformation, as in an electrical circuit. Even if graph representations are popular in the neural network community, they are often used for qualitative description rather than for rigorous representation and computational purposes. In this article, a method for both on-line and batch backward gradient computation of a system output or cost function with respect to system parameters is derived by the SFG representation theory and its known properties. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by an SFG, in particular any feedforward, time-delay, or recurrent neural network. In this work, we use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained in a straightforward way by the analysis of two SFGs, the original one and its adjoint (obtained from the first by simple transformations), without the complex chain rule expansions of derivatives usually employed. This method can be used for sensitivity analysis and for learning both off-line and on-line. On-line learning is particularly important since it is required by many real applications, such as digital signal processing, system identification and control, channel equalization, and predistortion.
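
    The adjoint idea in miniature: transpose the flow graph and propagate a unit sensitivity backward, collecting every parameter gradient in one pass. A sketch for a two-branch linear SFG, y = w2*(w1*x):

```python
# Forward pass through a tiny signal-flow graph: x -> [w1] -> h -> [w2] -> y.
# The adjoint graph is the same graph with edges reversed; propagating a
# unit sensitivity backward through it yields all gradients at once.
x, w1, w2 = 2.0, 0.5, -3.0
h = w1 * x
y = w2 * h

dy = 1.0              # seed sensitivity at the output node
dh = w2 * dy          # adjoint edge: transpose of the forward branch
dw2 = h * dy          # gradient w.r.t. branch gain w2
dw1 = x * dh          # gradient w.r.t. branch gain w1
print(dw1, dw2)       # -6.0, 1.0  (matches dy/dw1 = w2*x, dy/dw2 = w1*x)
```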

  6. Communication: Analysing kinetic transition networks for rare events.

    PubMed

    Stevenson, Jacob D; Wales, David J

    2014-07-28

    The graph transformation approach is a recently proposed method for computing mean first passage times, rates, and committor probabilities for kinetic transition networks. Here we compare the performance to existing linear algebra methods, focusing on large, sparse networks. We show that graph transformation provides a much more robust framework, succeeding when numerical precision issues cause the other methods to fail completely. These are precisely the situations that correspond to rare event dynamics for which the graph transformation was introduced.

  7. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
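
    The heart of the method is an elimination step that removes one state while rerouting its branching probability and waiting time. A sketch of that step under the standard update formulas, with dictionary-of-dictionaries transition probabilities:

```python
def eliminate(P, tau, x):
    """Remove state x from a discrete-time Markov chain.

    P[a][b] : probability of stepping a -> b
    tau[a]  : mean waiting time spent in state a per visit
    Returns updated (P, tau) on the remaining states, preserving
    mean first-passage times between them.
    """
    scale = 1.0 / (1.0 - P[x].get(x, 0.0))
    states = [s for s in P if s != x]
    newP, newtau = {a: {} for a in states}, {}
    for a in states:
        p_ax = P[a].get(x, 0.0)
        # extra time spent looping through x before finally leaving it
        newtau[a] = tau[a] + p_ax * tau[x] * scale
        for b in states:
            newP[a][b] = P[a].get(b, 0.0) + p_ax * P[x].get(b, 0.0) * scale
    return newP, newtau

P = {"a": {"x": 0.5, "b": 0.5}, "x": {"x": 0.2, "b": 0.8}, "b": {"a": 1.0}}
tau = {"a": 1.0, "x": 2.0, "b": 1.0}
P2, tau2 = eliminate(P, tau, "x")
print(P2["a"]["b"], tau2["a"])   # 1.0 and 2.25 (= 1 + 0.5*2/0.8)
```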

  8. Monetary Policy Rules, Supply Shocks, and the Price-Level Elasticity of Aggregate Demand: A Graphical Examination.

    ERIC Educational Resources Information Center

    Kyer, Ben L.; Maggs, Gary E.

    1995-01-01

    Utilizes two-dimensional price and output graphs to demonstrate the way that the price-level elasticity of aggregate demand affects alternative monetary policy rules designed to cope with random aggregate supply shocks. Includes graphs illustrating price-level, real Gross Domestic Product (GDP), nominal GDP, and nominal money supply targeting.…

  9. Connections between the Sznajd model with general confidence rules and graph theory

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2012-10-01

    The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models, applied this modification to the Sznajd model, and presented some preliminary results. The present work extends that paper. We present results linking many of the properties of the mean-field fixed points to only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
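
    For orientation, a minimal sketch of the plain agreeing-pair Sznajd update on a network (the generalized confidence rules studied in the paper are omitted):

```python
import random
import networkx as nx

def sznajd_step(g, opinions):
    """Pick a random edge; if the pair agrees, it converts its neighbors."""
    i, j = random.choice(list(g.edges()))
    if opinions[i] == opinions[j]:
        for n in set(g.neighbors(i)) | set(g.neighbors(j)):
            opinions[n] = opinions[i]

g = nx.barabasi_albert_graph(200, 3, seed=1)
opinions = {n: random.choice([0, 1]) for n in g.nodes()}
for _ in range(5000):
    sznajd_step(g, opinions)
print(sum(opinions.values()), "of", g.number_of_nodes(), "hold opinion 1")
```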

  10. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady-state time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.

  11. Mathematical formula recognition using graph grammar

    NASA Astrophysics Data System (ADS)

    Lavirotte, Stephane; Pottier, Loic

    1998-04-01

    This paper describes current results of Ofr, a system for extracting and understanding mathematical expressions in documents. Such a tool could be very useful for re-using knowledge in scientific books that are not available in electronic form. We are also studying the use of this system for direct input of formulas with a graphical tablet for computer algebra systems. Existing solutions for mathematical recognition have problems analyzing 2D expressions like vectors and matrices, because they often try to analyze formulas relative to the baseline using extended classical grammars. Many mathematical notations do not respect the rules required for such parsing, which is why these extensions of text-parsing techniques fail. We investigate graph grammars and graph rewriting as a solution for recognizing 2D mathematical notations. Graph grammars provide a powerful formalism to describe structural manipulations of multi-dimensional data. The two main problems to solve are ambiguities between grammar rules and the construction of the graph.

  12. The investigation of social networks based on multi-component random graphs

    NASA Astrophysics Data System (ADS)

    Zadorozhnyi, V. N.; Yudin, E. B.

    2018-01-01

    Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with a nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, would help developers (owners) of networks to predict their development correctly and to choose effective strategies for controlling network projects.
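
    A sketch of graph growth under a nonlinear preferential attachment rule Π(k) ∝ k^α (α = 1 recovers linear preferential attachment; the calibration machinery of the paper is not reproduced here):

```python
import random

def grow(n, m, alpha):
    """Grow a graph; each new vertex attaches to m old ones with prob ~ k**alpha."""
    adj = {0: {1}, 1: {0}}                          # seed: a single edge
    for new in range(2, n):
        weights = {v: len(nbrs) ** alpha for v, nbrs in adj.items()}
        total = sum(weights.values())
        targets = set()
        while len(targets) < min(m, len(adj)):      # sample distinct targets
            r, acc = random.uniform(0, total), 0.0
            for v, w in weights.items():
                acc += w
                if acc >= r:
                    targets.add(v)
                    break
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
    return adj

adj = grow(n=1000, m=2, alpha=1.5)
print("max degree:", max(len(v) for v in adj.values()))
# superlinear alpha concentrates edges on a few hubs
```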

  13. Manipulations of Cartesian Graphs: A First Introduction to Analysis.

    ERIC Educational Resources Information Center

    Lowenthal, Francis; Vandeputte, Christiane

    1989-01-01

    Introduces an introductory module for analysis. Describes stock of basic functions and their graphs as part one and three methods as part two: transformations of simple graphs, the sum of stock functions, and upper and lower bounds. (YP)

  14. Optimizing the Replication of Multi-Quality Web Applications Using ACO and WoLF

    DTIC Science & Technology

    2006-09-14

    Ants traverse the bipartite graph in both directions as they construct solutions; pheromone is used for traversing from one side of the bipartite graph to the other and back. [The remainder of this record consists of table-of-contents fragments: 3.1.3 Transitioning From 〈d, q〉 Pairs to Servers; 3.1.4 Pheromone Update Rule; 3.2 WoLFAntDA; 3.2.6 Pheromone Update Rule; 3.2.7 Policy Updates; 3.3 The Server-Filling …]

  15. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
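
    Spectral graph wavelets filter a vertex signal through a band-pass kernel g(sλ) applied to the Laplacian spectrum. A direct (eigendecomposition-based, non-Chebyshev) numpy sketch:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)              # graph Laplacian spectrum

def g(x):
    """A simple band-pass kernel: vanishes at 0, decays at high frequency."""
    return x * np.exp(1.0 - x)

def sgwt(f, scales):
    """Wavelet coefficients W[s, n] = sum_l g(s*lam_l) f_hat(l) u_l(n)."""
    f_hat = U.T @ f
    return np.array([U @ (g(s * lam) * f_hat) for s in scales])

f = np.array([1.0, 0.0, 0.0, 0.0])      # impulse at vertex 0
W = sgwt(f, scales=[0.5, 1.0, 2.0])
print(W.shape)                           # (3 scales, 4 vertices)
```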

  16. Expert system validation in prolog

    NASA Technical Reports Server (NTRS)

    Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline

    1988-01-01

    An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary to the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations that match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.

  17. Detecting false positives in multielement designs: implications for brief assessments.

    PubMed

    Bartlett, Sara M; Rapp, John T; Henrickson, Marissa L

    2011-11-01

    The authors assessed the extent to which multielement designs produced false positives using continuous duration recording (CDR) and interval recording with 10-s and 1-min interval sizes. Specifically, they created 6,000 graphs with multielement designs that varied in the number of data paths, and the number of data points per data path, using a random number generator. In Experiment 1, the authors visually analyzed the graphs for the occurrence of false positives. Results indicated that graphs depicting only two sessions for each condition (e.g., a control condition plotted with multiple test conditions) produced the highest percentage of false positives for CDR and interval recording with 10-s and 1-min intervals. Conversely, graphs with four or five sessions for each condition produced the lowest percentage of false positives for each method. In Experiment 2, they applied two new rules, which were intended to decrease false positives, to each graph that depicted a false positive in Experiment 1. Results showed that application of new rules decreased false positives to less than 5% for all of the graphs except for those with two data paths and two data points per data path. Implications for brief assessments are discussed.

  18. How accurate are interpretations of curriculum-based measurement progress monitoring data? Visual analysis versus decision rules.

    PubMed

    Van Norman, Ethan R; Christ, Theodore J

    2016-10-01

    Curriculum-based measurement of oral reading (CBM-R) is used to monitor the effects of academic interventions for individual students. Decisions to continue, modify, or terminate these interventions are made by interpreting time-series CBM-R data. Such interpretation is founded upon visual analysis or the application of decision rules. The purpose of this study was to compare the accuracy of visual analysis and decision rules. Visual analysts interpreted 108 CBM-R progress monitoring graphs one of three ways: (a) without graphic aids, (b) with a goal line, or (c) with a goal line and a trend line. Graphs differed along three dimensions, including trend magnitude, variability of observations, and duration of data collection. Automated trend line and data point decision rules were also applied to each graph. Inferential analyses permitted the estimation of the probability of a correct decision (i.e., the student is improving - continue the intervention, or the student is not improving - discontinue the intervention) for each evaluation method as a function of trend magnitude, variability of observations, and duration of data collection. All evaluation methods performed better when students made adequate progress. Visual analysis and decision rules performed similarly when observations were less variable. Results suggest that educators should collect data for more than six weeks, take steps to control measurement error, and visually analyze graphs when data are variable. Implications for practice and research are discussed.

  19. Coverability graphs for a class of synchronously executed unbounded Petri net

    NASA Technical Reports Server (NTRS)

    Stotts, P. David; Pratt, Terrence W.

    1990-01-01

    After detailing a variant of the concurrent-execution rule for firing of maximal subsets, in which the simultaneous firing of conflicting transitions is prohibited, an algorithm is constructed for generating the coverability graph of a net executed under this synchronous firing rule. The omega insertion criteria in the algorithm are shown to be valid for any net on which the algorithm terminates. It is accordingly shown that the set of nets on which the algorithm terminates includes the 'conflict-free' class.
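
    A sketch of one synchronous step under a maximal-subset firing rule of this kind: greedily reserve input tokens so that conflicting transitions are never fired together (a simplification for ordinary place/transition nets, not the paper's exact construction):

```python
def enabled(marking, pre):
    """Transitions whose input places hold enough tokens."""
    return [t for t, need in pre.items()
            if all(marking.get(p, 0) >= n for p, n in need.items())]

def max_subset_step(marking, pre, post):
    """Fire a maximal conflict-free subset of enabled transitions at once."""
    chosen, reserved = [], dict(marking)
    for t in enabled(marking, pre):
        need = pre[t]
        if all(reserved.get(p, 0) >= n for p, n in need.items()):
            for p, n in need.items():        # reserve tokens: no conflicts
                reserved[p] -= n
            chosen.append(t)
    new = dict(reserved)
    for t in chosen:
        for p, n in post[t].items():
            new[p] = new.get(p, 0) + n
    return new, chosen

pre  = {"t1": {"p1": 1}, "t2": {"p1": 1}}    # t1 and t2 conflict on p1
post = {"t1": {"p2": 1}, "t2": {"p3": 1}}
print(max_subset_step({"p1": 1}, pre, post)) # only one of t1/t2 fires
```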

  20. A simple rule for the evolution of cooperation on graphs and social networks.

    PubMed

    Ohtsuki, Hisashi; Hauert, Christoph; Lieberman, Erez; Nowak, Martin A

    2006-05-25

    A fundamental aspect of all biological systems is cooperation. Cooperative interactions are required for many levels of biological organization ranging from single cells to groups of animals. Human society is based to a large extent on mechanisms that promote cooperation. It is well known that in unstructured populations, natural selection favours defectors over cooperators. There is much current interest, however, in studying evolutionary games in structured populations and on graphs. These efforts recognize the fact that who-meets-whom is not random, but determined by spatial relationships or social networks. Here we describe a surprisingly simple rule that is a good approximation for all graphs that we have analysed, including cycles, spatial lattices, random regular graphs, random graphs and scale-free networks: natural selection favours cooperation, if the benefit of the altruistic act, b, divided by the cost, c, exceeds the average number of neighbours, k, which means b/c > k. In this case, cooperation can evolve as a consequence of 'social viscosity' even in the absence of reputation effects or strategic complexity.

  1. Nonschematic drawing recognition: a new approach based on attributed graph grammar with flexible embedding

    NASA Astrophysics Data System (ADS)

    Lee, Kyu J.; Kunii, T. L.; Noma, T.

    1993-01-01

    In this paper, we propose a syntactic pattern recognition method for non-schematic drawings, based on a new attributed graph grammar with flexible embedding. In our graph grammar, the embedding rule permits the nodes of a guest graph to be arbitrarily connected with the nodes of a host graph. The ambiguity caused by this flexible embedding is controlled by evaluating synthesized attributes and checking context sensitivity. To integrate parsing with the synthesized-attribute evaluation and the context-sensitivity check, we also develop a bottom-up parsing algorithm.

  2. Discriminative graph embedding for label propagation.

    PubMed

    Nguyen, Canh Hao; Mamitsuka, Hiroshi

    2011-09-01

    In many applications, the available information is encoded in graph structures. This is a common problem in biological networks, social networks, web communities and document citations. We investigate the problem of classifying nodes' labels on a similarity graph given only a graph structure on the nodes. Conventional machine learning methods usually require data to reside in some Euclidean space or to have a kernel representation. Applying these methods to nodes on graphs requires embedding the graphs into such spaces. By embedding and then learning the nodes on graphs, most methods are either flexible with respect to different learning objectives or efficient enough for large-scale applications, but rarely both. We propose a method to embed a graph into a feature space for a discriminative purpose. Our idea is to include label information in the embedding process, making the space representation tailored to the task. We design embedding objective functions such that the subsequent learning formulations become spectral transforms. We then reformulate these spectral transforms into multiple kernel learning problems. Our method, while being tailored to the discriminative tasks, is efficient and can scale to massive data sets. We show the need for discriminative embedding on some simulations. Applied to biological network problems, our method is shown to outperform baselines.
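
    As a baseline for the embedding step only (not the authors' discriminative objective), a Laplacian-eigenmaps-style spectral embedding of graph nodes:

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
d = A.sum(1)
L = np.diag(d) - A
# Normalized Laplacian; its low eigenvectors give a smooth node embedding.
Ln = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)
lam, U = np.linalg.eigh(Ln)
embedding = U[:, 1:3]          # skip the trivial constant-direction mode
print(embedding)               # 2-D coordinates, one row per node
```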

  3. A Qualitative Approach to Sketch the Graph of a Function.

    ERIC Educational Resources Information Center

    Alson, Pedro

    1992-01-01

    Presents a qualitative and global method of graphing functions that involves transformations of the graph of a known function in the cartesian coordinate system referred to as graphic operators. Explains how the method has been taught to students and some comments about the results obtained. (MDH)

  4. Euclidean commute time distance embedding and its application to spectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Albano, James A.; Messinger, David W.

    2012-06-01

    Spectral image analysis problems often begin with a preprocessing step that applies a transformation to generate an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation, and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed-form solution exists for computing the average commute time distance that avoids running an iterative process and is found by simply performing an eigendecomposition on the graph Laplacian matrix. This paper discusses the particular graph constructed on the spectral data from which the commute time distance is then calculated, introduces some important properties of the graph Laplacian matrix, and presents a subspace projection that approximately preserves the maximal variance of the square-root commute time distance. Finally, RX anomaly detection and Topological Anomaly Detection (TAD) algorithms are applied to the CTD subspace, followed by a discussion of their results.
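
    The closed form uses the Moore-Penrose pseudoinverse of the graph Laplacian. A numpy sketch of the average commute time distance and of the embedding whose Euclidean distances reproduce its square root:

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
Lp = np.linalg.pinv(L)                   # pseudoinverse of the Laplacian
vol = A.sum()                            # graph volume (sum of degrees)

def commute_time(i, j):
    """Average commute time c(i,j) = vol * (Lp_ii + Lp_jj - 2 Lp_ij)."""
    return vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])

# Embedding whose Euclidean distances equal the sqrt of commute times.
lam, U = np.linalg.eigh(Lp)
X = U * np.sqrt(np.maximum(lam, 0.0) * vol)
i, j = 0, 2
print(commute_time(i, j), np.sum((X[i] - X[j]) ** 2))   # should match
```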

  5. Information extraction and knowledge graph construction from geoscience literature

    NASA Astrophysics Data System (ADS)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen

    2018-03-01

    Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there are limited works on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.

  6. Graphs for information security control in software defined networks

    NASA Astrophysics Data System (ADS)

    Grusho, Alexander A.; Abaev, Pavel O.; Shorgin, Sergey Ya.; Timonina, Elena E.

    2017-07-01

    Information security control in software defined networks (SDN) is connected with execution of the security policy rules regulating information access and protection against the distribution of malicious code and harmful influences. The paper offers a representation of a security policy in the form of a hierarchical structure which, given a distribution of resources for the solution of tasks, defines graphs of admissible interactions in a network. These graphs define the commutation tables of switches via the SDN controller.

  7. Combining Human and Machine Intelligence to Derive Agents' Behavioral Rules for Groundwater Irrigation

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Quinn, C.; Cai, X.

    2015-12-01

    One major challenge of agent-based modeling is deriving agents' behavioral rules in the face of behavioral uncertainty and data scarcity. This study proposes a new approach that combines data-driven modeling based on directed information (i.e., machine intelligence) with expert domain knowledge (i.e., human intelligence) to derive the behavioral rules of agents under behavioral uncertainty. A directed information graph algorithm is applied to identify the causal relationships between agents' decisions (i.e., groundwater irrigation depth) and time series of environmental, socio-economic, and institutional factors. A case study is conducted for the High Plains aquifer hydrological observatory (HO) area, U.S. Preliminary results show that four factors, corn price (CP), underlying groundwater level (GWL), monthly mean temperature (T), and precipitation (P), have causal influences on agents' decisions on groundwater irrigation depth (GWID) to various extents. Based on the similarity of the directed information graph for each agent, five clusters of graphs are further identified to represent all the agents' behaviors in the study area, as shown in Figure 1. Using these five representative graphs, agents' monthly optimal groundwater pumping rates are derived through probabilistic inference. Such data-driven relationships and probabilistic quantifications are then coupled with a physically based groundwater model to investigate the interactions between agents' pumping behaviors and the underlying groundwater system in the context of coupled human and natural systems.

  8. Model-based morphological segmentation and labeling of coronary angiograms.

    PubMed

    Haris, K; Efstratiadis, S N; Maglaveras, N; Pappas, C; Gourassas, J; Louridas, G

    1999-10-01

    A method for extraction and labeling of the coronary arterial tree (CAT) using minimal user supervision in single-view angiograms is proposed. The CAT structural description (skeleton and borders) is produced, along with quantitative information for the artery dimensions and assignment of coded labels, based on a given coronary artery model represented by a graph. The stages of the method are: 1) CAT tracking and detection; 2) artery skeleton and border estimation; 3) feature graph creation; and 4) artery labeling by graph matching. The approximate CAT centerline and borders are extracted by recursive tracking based on circular template analysis. The accurate skeleton and borders of each CAT segment are computed, based on morphological homotopy modification and watershed transform. The approximate centerline and borders are used for constructing the artery segment enclosing area (ASEA), where the defined skeleton and border curves are considered as markers. Using the marked ASEA, an artery gradient image is constructed where all the ASEA pixels (except the skeleton ones) are assigned the gradient magnitude of the original image. The artery gradient image markers are imposed as its unique regional minima by the homotopy modification method, the watershed transform is used for extracting the artery segment borders, and the feature graph is updated. Finally, given the created feature graph and the known model graph, a graph matching algorithm assigns the appropriate labels to the extracted CAT using weighted maximal cliques on the association graph corresponding to the two given graphs. Experimental results using clinical digitized coronary angiograms are presented.
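
    The marker-controlled watershed at the heart of stage 2 can be sketched with scikit-image. The example below imposes an inside (skeleton) and an outside (background) marker on a gradient image of a synthetic bright strip; it is a schematic stand-in, not the authors' angiogram pipeline.

    ```python
    # Marker-controlled watershed on a synthetic "artery": gradient image plus
    # imposed skeleton/background markers yield the segment borders.
    import numpy as np
    from scipy import ndimage as ndi  # noqa: F401 (handy for real marker labeling)
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    img = np.zeros((64, 64))
    img[:, 28:36] = 1.0                      # a bright vertical "artery"
    img += np.random.default_rng(0).normal(scale=0.05, size=img.shape)

    gradient = sobel(img)                    # the artery gradient image
    markers = np.zeros_like(img, dtype=int)
    markers[:, 31:33] = 1                    # skeleton marker (inside)
    markers[:, :5] = markers[:, -5:] = 2     # background marker (outside)

    labels = watershed(gradient, markers)    # borders follow gradient ridges
    print((labels == 1).sum(), "artery pixels")
    ```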

  9. Graph Databases for Large-Scale Healthcare Systems: A Framework for Efficient Data Management and Data Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Yubin; Shankar, Mallikarjun; Park, Byung H.

    Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility of users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
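
    The relational-to-graph idea can be sketched compactly: rows become nodes and foreign-key references become edges, so a join-heavy query such as "shared providers" becomes a short traversal. The tables and keys below are illustrative, not the paper's 3EG schema.

    ```python
    # Hedged sketch: rows -> nodes, foreign keys -> edges; joins -> traversals.
    import networkx as nx

    patients = [{"id": "p1"}, {"id": "p2"}]
    claims = [{"id": "c1", "patient": "p1", "provider": "d1"},
              {"id": "c2", "patient": "p2", "provider": "d1"}]

    g = nx.Graph()
    for row in patients:
        g.add_node(row["id"], kind="patient")
    for row in claims:
        g.add_node(row["id"], kind="claim")
        g.add_edge(row["id"], row["patient"])        # FK claim -> patient
        g.add_node(row["provider"], kind="provider")
        g.add_edge(row["id"], row["provider"])       # FK claim -> provider

    # "Shared provider": walk patient -> claim -> provider -> claim -> patient.
    shared = {p for c in g.neighbors("p1")
              for prov in g.neighbors(c) if g.nodes[prov]["kind"] == "provider"
              for c2 in g.neighbors(prov)
              for p in g.neighbors(c2) if g.nodes[p]["kind"] == "patient"}
    print(shared - {"p1"})  # -> {'p2'}
    ```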

  10. Forward-Chaining Versus A Graph Approach As The Inference Engine In Expert Systems

    NASA Astrophysics Data System (ADS)

    Neapolitan, Richard E.

    1986-03-01

    Rule-based expert systems are those in which a certain number of IF-THEN rules are assumed to be true. Based on the verity of some assertions, the rules deduce as many new conclusions as possible. A standard technique used to make these deductions is forward-chaining. In forward-chaining, the program or 'inference engine' cycles through the rules. At each rule, the premises for the rule are checked against the current true assertions. If all the premises are found, the conclusion is added to the list of true assertions. At that point it is necessary to start over at the first rule, since the new conclusion may be a premise in a rule already checked. Therefore, each time a new conclusion is deduced it is necessary to start the rule checking procedure over. This process continues until no new conclusions are added and the end of the list of rules is reached. The above process, although quite costly in CPU cycles because the procedure must repeatedly start over, is necessary if the rules contain 'pattern variables'. An example of such a rule is, 'IF X IS A BACTERIA, THEN X CAN BE TREATED WITH ANTIBIOTICS'. Since the rule can lead to conclusions for many values of X, it is necessary to check each premise in the rule against every true assertion, producing an association list to be used in the checking of the next premise. However, if the rule does not contain variable data, as is the case in many current expert systems, then a rule can lead to only one conclusion. In this case, the rules can be stored in a graph and the true assertions in an assertion list. The assertion list is traversed only once; at each assertion, a premise is triggered in all the rules which have that assertion as a premise. When all premises for a rule trigger, the rule's conclusion is added to the END of the list of assertions. It must be added at the end so that it will eventually be used to make further deductions. In the current paper, the two methods are described in detail, the relative advantages of each are discussed, and a benchmark comparing the CPU cycles consumed by each is included. It is also shown that, in the case of reasoning under uncertainty, it is possible to properly combine the certainties derived from rules arguing for the same conclusion when the graph approach is used.
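
    The graph/assertion-list method described above is easy to sketch for variable-free rules: an index maps each premise to the rules it feeds, the assertion list is traversed exactly once, and a fired rule appends its conclusion to the end of the list. A minimal Python rendering, with invented rules:

    ```python
    # Single-pass inference for variable-free rules, as described above: a rule
    # fires when its premise counter reaches zero; conclusions go to the END of
    # the assertion list so they are processed in turn.
    rules = [
        {"premises": {"fever", "cough"}, "conclusion": "flu_suspected"},
        {"premises": {"flu_suspected"}, "conclusion": "order_test"},
    ]
    premise_index = {}                       # premise -> rules that use it
    for r in rules:
        r["missing"] = len(r["premises"])
        for p in r["premises"]:
            premise_index.setdefault(p, []).append(r)

    assertions = ["fever", "cough"]
    seen = set(assertions)
    i = 0
    while i < len(assertions):               # one traversal; the list grows at the end
        for r in premise_index.get(assertions[i], []):
            r["missing"] -= 1
            if r["missing"] == 0 and r["conclusion"] not in seen:
                assertions.append(r["conclusion"])
                seen.add(r["conclusion"])
        i += 1
    print(assertions)  # ['fever', 'cough', 'flu_suspected', 'order_test']
    ```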

  11. The Vertex Version of Weighted Wiener Number for Bicyclic Molecular Structures

    PubMed Central

    Gao, Wei

    2015-01-01

    Graphs are used to model chemical compounds and drugs. In these graphs, each vertex represents an atom of a molecule, and edges between vertices represent the covalent bonds between atoms. We call such a graph, derived from a chemical compound, a molecular graph. Evidence shows that the vertex-weighted Wiener number, which is defined over this molecular graph, is strongly correlated with both the melting point and the boiling point of the compound. In this paper, we report the extremal vertex-weighted Wiener numbers of bicyclic molecular graphs in terms of molecular structural analysis and graph transformations. The theoretical results achieved in this paper illustrate promising prospects for applications in chemical and pharmaceutical engineering. PMID:26640513
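
    Taking the common definition of the vertex-weighted Wiener number, W_w(G) = sum over vertex pairs {u, v} of w(u)·w(v)·d(u, v), a direct computation on a small bicyclic graph looks as follows; the unit weights are placeholders for, e.g., atomic weights.

    ```python
    # Vertex-weighted Wiener number of a small bicyclic graph (two triangles
    # joined by a bridge; 7 edges, 6 vertices, cyclomatic number 2).
    import itertools
    import networkx as nx

    g = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4)])
    w = {v: 1.0 for v in g}                  # placeholder vertex weights
    dist = dict(nx.all_pairs_shortest_path_length(g))

    W = sum(w[u] * w[v] * dist[u][v]
            for u, v in itertools.combinations(g.nodes, 2))
    print(W)
    ```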

  12. Graphing trillions of triangles.

    PubMed

    Burkhardt, Paul

    2017-07-01

    The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed.
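
    The algebraic view of triangle counting admits a one-line core: with adjacency matrix A, the triangle count is trace(A³)/6, each triangle being counted once per vertex and per orientation. A tiny self-check:

    ```python
    # Count triangles algebraically: trace(A^3) / 6 for an undirected graph.
    import numpy as np

    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    n = 4
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1

    print(int(np.trace(A @ A @ A) / 6))  # -> 1 triangle: (0, 1, 2)
    ```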

  13. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, however, the APG kernel is significantly more accurate than the ASM kernel, achieving better performance on most datasets.
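
    For a flavor of how such kernels work, the sketch below implements a toy shortest-path kernel that compares two labelled graphs by their multisets of (label, label, distance) triples; it is a simple stand-in for the APG and ASM kernels, which additionally handle edge labels and weighting.

    ```python
    # Toy shortest-path graph kernel over labelled dependency-style graphs.
    import itertools
    from collections import Counter
    import networkx as nx

    def sp_features(g):
        dist = dict(nx.all_pairs_shortest_path_length(g))
        feats = Counter()
        for u, v in itertools.permutations(g.nodes, 2):
            if v in dist[u]:
                feats[(g.nodes[u]["label"], g.nodes[v]["label"], dist[u][v])] += 1
        return feats

    def kernel(g1, g2):                     # dot product of feature multisets
        f1, f2 = sp_features(g1), sp_features(g2)
        return sum(f1[k] * f2[k] for k in f1)

    g = nx.Graph()
    g.add_node(0, label="CHEMICAL"); g.add_node(1, label="induces"); g.add_node(2, label="DISEASE")
    g.add_edges_from([(0, 1), (1, 2)])
    print(kernel(g, g))                     # self-similarity of the toy graph
    ```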

  14. Large fluctuations in anti-coordination games on scale-free graphs

    NASA Astrophysics Data System (ADS)

    Sabsovich, Daniel; Mobilia, Mauro; Assaf, Michael

    2017-05-01

    We study the influence of the complex topology of scale-free graphs on the dynamics of anti-coordination games (e.g. snowdrift games). These reference models are characterized by the coexistence (evolutionary stable mixed strategy) of two competing species, say ‘cooperators’ and ‘defectors’, and, in finite systems, by metastability and large-fluctuation-driven fixation. In this work, we use extensive computer simulations and an effective diffusion approximation (in the weak selection limit) to determine under which circumstances, depending on the individual-based update rules, the topology drastically affects the long-time behavior of anti-coordination games. In particular, we compute the variance of the number of cooperators in the metastable state and the mean fixation time when the dynamics is implemented according to the voter model (death-first/birth-second process) and the link dynamics (birth/death or death/birth at random). For the voter update rule, we show that the scale-free topology effectively renormalizes the population size and as a result the statistics of observables depend on the network’s degree distribution. In contrast, such a renormalization does not occur with the link dynamics update rule and we recover the same behavior as on complete graphs.
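
    A stripped-down version of the death-birth dynamics is easy to simulate. The sketch below runs a neutral (game-free) voter-type update on a Barabási-Albert graph until fixation; the full study adds payoffs, selection strength, and far larger networks.

    ```python
    # Neutral voter-type (death-birth) dynamics on a scale-free-like graph:
    # a random node dies and copies the strategy of a random neighbour;
    # iterate until one strategy fixates and report the fixation time.
    import random
    import networkx as nx

    def fixation_time(g, p_coop=0.5, rng=random.Random(1)):
        state = {v: rng.random() < p_coop for v in g}   # True = cooperator
        t = 0
        while 0 < sum(state.values()) < len(state):
            v = rng.choice(list(g))                     # death
            state[v] = state[rng.choice(list(g[v]))]    # birth from a neighbour
            t += 1
        return t

    g = nx.barabasi_albert_graph(100, 2, seed=1)        # scale-free-like topology
    print(fixation_time(g))
    ```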

  15. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
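
    A toy version of the scheduling problem helps fix ideas: the list-scheduling sketch below walks a dataflow DAG in topological order and assigns each ready task to the earliest-free processor. Task names and durations are invented.

    ```python
    # List scheduling on a dataflow DAG: a task becomes ready when its
    # predecessors finish; assign it to the earliest-free processor.
    import networkx as nx

    def list_schedule(dag, duration, n_proc=2):
        free_at = [0.0] * n_proc
        finish = {}
        for task in nx.topological_sort(dag):
            ready = max((finish[p] for p in dag.predecessors(task)), default=0.0)
            p = min(range(n_proc), key=lambda i: free_at[i])
            start = max(ready, free_at[p])
            finish[task] = free_at[p] = start + duration[task]
        return finish

    dag = nx.DiGraph([("load", "fft"), ("load", "filter"),
                      ("fft", "out"), ("filter", "out")])
    print(list_schedule(dag, {"load": 1, "fft": 3, "filter": 2, "out": 1}))
    ```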

  16. Consistent latent position estimation and vertex classification for random dot product graphs.

    PubMed

    Sussman, Daniel L; Tang, Minh; Priebe, Carey E

    2014-01-01

    In this work, we show that using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia.
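
    The pipeline condenses to a few lines: spectrally embed the adjacency matrix, then run k-nearest-neighbors in the latent space. The sketch below does this on a simulated two-block graph; it follows the spirit of the method rather than its exact scaling choices.

    ```python
    # Adjacency spectral embedding + k-NN classification on a toy two-block graph.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    z = rng.integers(0, 2, 200)                      # two latent classes
    P = np.where(np.equal.outer(z, z), 0.5, 0.2)     # block connection probabilities
    A = (rng.random((200, 200)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                   # symmetric, hollow adjacency

    vals, vecs = np.linalg.eigh(A)                   # eigenvalues in ascending order
    X = vecs[:, -2:] * np.sqrt(np.abs(vals[-2:]))    # 2-d latent position estimates

    knn = KNeighborsClassifier(n_neighbors=5).fit(X[:150], z[:150])
    print((knn.predict(X[150:]) == z[150:]).mean())  # near-perfect on this toy graph
    ```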

  17. Graphing trillions of triangles

    PubMed Central

    Burkhardt, Paul

    2016-01-01

    The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed. PMID:28690426

  18. A graph grammar approach to artificial life.

    PubMed

    Kniemeyer, Ole; Buck-Sorlin, Gerhard H; Kurth, Winfried

    2004-01-01

    We present the high-level language of relational growth grammars (RGGs) as a formalism designed for the specification of ALife models. RGGs can be seen as an extension of the well-known parametric Lindenmayer systems and contain rule-based, procedural, and object-oriented features. They are defined as rewriting systems operating on graphs with the edges coming from a set of user-defined relations, whereas the nodes can be associated with objects. We demonstrate their ability to represent genes, regulatory networks of metabolites, and morphologically structured organisms, as well as developmental aspects of these entities, in a common formal framework. Mutation, crossing over, selection, and the dynamics of a network of gene regulation can all be represented with simple graph rewriting rules. This is demonstrated in some detail on the classical example of Dawkins' biomorphs and the ABC model of flower morphogenesis: other applications are briefly sketched. An interactive program was implemented, enabling the execution of the formalism and the visualization of the results.
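
    A caricature of a rewriting rule in this spirit: the sketch below matches nodes labelled "meristem" and rewrites each into an internode that grows a fresh meristem, much like an L-system production. The RGG formalism is far richer (relations, parameters, genotypes); this shows only the graph-rewriting skeleton.

    ```python
    # One graph-rewriting rule, applied generation by generation.
    import networkx as nx

    def apply_rule(g):
        for v, d in list(g.nodes(data=True)):        # snapshot: one "generation"
            if d["label"] == "meristem":             # left-hand side of the rule
                new = f"n{len(g)}"
                g.nodes[v]["label"] = "internode"    # right-hand side: rewrite node
                g.add_node(new, label="meristem")    # ...and grow a fresh meristem
                g.add_edge(v, new, rel="successor")

    g = nx.DiGraph()
    g.add_node("seed", label="meristem")
    for _ in range(3):
        apply_rule(g)
    print(sorted(d["label"] for _, d in g.nodes(data=True)))
    # ['internode', 'internode', 'internode', 'meristem']
    ```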

  19. The Development of a Graphical User Interface Engine for the Convenient Use of the HL7 Version 2.x Interface Engine

    PubMed Central

    Kim, Hwa Sun; Cho, Hune

    2011-01-01

    Objectives The Health Level Seven Interface Engine (HL7 IE), developed by Kyungpook National University, has been employed in health information systems, however users without a background in programming have reported difficulties in using it. Therefore, we developed a graphical user interface (GUI) engine to make the use of the HL7 IE more convenient. Methods The GUI engine was directly connected with the HL7 IE to handle the HL7 version 2.x messages. Furthermore, the information exchange rules (called the mapping data), represented by a conceptual graph in the GUI engine, were transformed into program objects that were made available to the HL7 IE; the mapping data were stored as binary files for reuse. The usefulness of the GUI engine was examined through information exchange tests between an HL7 version 2.x message and a health information database system. Results Users could easily create HL7 version 2.x messages by creating a conceptual graph through the GUI engine without requiring assistance from programmers. In addition, time could be saved when creating new information exchange rules by reusing the stored mapping data. Conclusions The GUI engine was not able to incorporate information types (e.g., extensible markup language, XML) other than the HL7 version 2.x messages and the database, because it was designed exclusively for the HL7 IE protocol. However, in future work, by including additional parsers to manage XML-based information such as Continuity of Care Documents (CCD) and Continuity of Care Records (CCR), we plan to ensure that the GUI engine will be more widely accessible for the health field. PMID:22259723
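
    The kind of field-level mapping the engine manages can be miniaturized as follows: an HL7 v2.x message is split into segments and pipe-delimited fields, and a mapping rule copies chosen fields into database columns. The message and the rule format below are illustrative samples, not the engine's internal representation.

    ```python
    # Minimal HL7 v2.x field mapping: segments split on '\r', fields on '|'.
    msg = ("MSH|^~\\&|HIS|KNUH|LIS|KNUH|202101011200||ADT^A01|1|P|2.5\r"
           "PID|1||12345||Kim^HwaSun")

    segments = {line.split("|")[0]: line.split("|") for line in msg.split("\r")}
    mapping = {"patient_id": ("PID", 3), "patient_name": ("PID", 5)}  # field indices

    row = {col: segments[seg][idx] for col, (seg, idx) in mapping.items()}
    print(row)  # {'patient_id': '12345', 'patient_name': 'Kim^HwaSun'}
    ```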

  20. The Development of a Graphical User Interface Engine for the Convenient Use of the HL7 Version 2.x Interface Engine.

    PubMed

    Kim, Hwa Sun; Cho, Hune; Lee, In Keun

    2011-12-01

    The Health Level Seven Interface Engine (HL7 IE), developed by Kyungpook National University, has been employed in health information systems, however users without a background in programming have reported difficulties in using it. Therefore, we developed a graphical user interface (GUI) engine to make the use of the HL7 IE more convenient. The GUI engine was directly connected with the HL7 IE to handle the HL7 version 2.x messages. Furthermore, the information exchange rules (called the mapping data), represented by a conceptual graph in the GUI engine, were transformed into program objects that were made available to the HL7 IE; the mapping data were stored as binary files for reuse. The usefulness of the GUI engine was examined through information exchange tests between an HL7 version 2.x message and a health information database system. Users could easily create HL7 version 2.x messages by creating a conceptual graph through the GUI engine without requiring assistance from programmers. In addition, time could be saved when creating new information exchange rules by reusing the stored mapping data. The GUI engine was not able to incorporate information types (e.g., extensible markup language, XML) other than the HL7 version 2.x messages and the database, because it was designed exclusively for the HL7 IE protocol. However, in future work, by including additional parsers to manage XML-based information such as Continuity of Care Documents (CCD) and Continuity of Care Records (CCR), we plan to ensure that the GUI engine will be more widely accessible for the health field.

  1. NOUS: A Knowledge Graph Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knowledge graphs represent information as entities and relationships between them. For tasks such as natural language question answering or automated analysis of text, a knowledge graph provides valuable context to establish the specific type of entities being discussed. It allows us to derive better context about newly arriving information and leads to intelligent reasoning capabilities. We address two primary needs: A) Automated construction of knowledge graphs is a technically challenging, expensive process; and B) The ability to synthesize new information by monitoring newly emerging knowledge is a transformational capability that does not exist in state-of-the-art systems.

  2. 49 CFR Appendix I to Part 511 - Final Prehearing Order

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to Rule 21 of the Administration's Rules of Practice for Adjudicative Proceedings, on the day of , 19..., graphs, models, schematic diagrams, and similar objects that will be used in opening statements or... prior to signing. It will control the course of the hearing, and it may not be amended except by consent...

  3. Two classes of bipartite networks: nested biological and social systems.

    PubMed

    Burgos, Enrique; Ceva, Horacio; Hernández, Laura; Perazzo, R P J; Devoto, Mariano; Medan, Diego

    2008-10-01

    Bipartite graphs have received some attention in the study of social networks and of biological mutualistic systems. A generalization of a previous model is presented that evolves the topology of the graph in order to optimally account for a given contact preference rule between the two guilds of the network. As a result, social and biological graphs are classified as belonging to two clearly different classes. Projected graphs, linking the agents of only one guild, are obtained from the original bipartite graph, and the corresponding evolution of their statistical properties is also studied. An example of a biological mutualistic network is analyzed in detail, and it is found that the model provides a very good fit of all the main statistical features. The model also provides a proper qualitative description of the same features observed in social webs, suggesting the possible reasons underlying the difference in the organization of these two kinds of bipartite networks.
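
    The projection step mentioned above is illustrated below with networkx's bipartite helpers: two pollinators become linked in the projected graph whenever they share a plant. Species names are made up.

    ```python
    # Project a plant-pollinator bipartite graph onto the pollinator guild.
    import networkx as nx
    from networkx.algorithms import bipartite

    b = nx.Graph()
    plants = ["p1", "p2"]
    pollinators = ["bee", "fly", "moth"]
    b.add_nodes_from(plants, bipartite=0)
    b.add_nodes_from(pollinators, bipartite=1)
    b.add_edges_from([("p1", "bee"), ("p1", "fly"), ("p2", "fly"), ("p2", "moth")])

    projected = bipartite.weighted_projected_graph(b, pollinators)
    print(projected.edges(data=True))  # bee-fly via p1, fly-moth via p2
    ```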

  4. Reflecting on Graphs: Attributes of Graph Choice and Construction Practices in Biology

    PubMed Central

    Angra, Aakanksha; Gardner, Stephanie M.

    2017-01-01

    Undergraduate biology education reform aims to engage students in scientific practices such as experimental design, experimentation, and data analysis and communication. Graphs are ubiquitous in the biological sciences, and creating effective graphical representations involves quantitative and disciplinary concepts and skills. Past studies document student difficulties with graphing within the contexts of classroom or national assessments without evaluating student reasoning. Operating under the metarepresentational competence framework, we conducted think-aloud interviews to reveal differences in reasoning and graph quality between undergraduate biology students, graduate students, and professors in a pen-and-paper graphing task. All professors planned and thought about data before graph construction. When reflecting on their graphs, professors and graduate students focused on the function of graphs and experimental design, while most undergraduate students relied on intuition and data provided in the task. Most undergraduate students meticulously plotted all data with scaled axes, while professors and some graduate students transformed the data, aligned the graph with the research question, and reflected on statistics and sample size. Differences in reasoning and approaches taken in graph choice and construction corroborate and extend previous findings and provide rich targets for undergraduate and graduate instruction. PMID:28821538

  5. Using Graphing to Reveal the Hidden Transformations in Palindrome (and Other Types of) Licence Plates

    ERIC Educational Resources Information Center

    Nivens, Ryan Andrew

    2016-01-01

    This article provides a range of activities designed to engage students in using an early form of graphing. While the "Australian Curriculum: Mathematics" (2014) highlights understanding, fluency, problem-solving, and reasoning, the National Research Council (2001) describes five strands of mathematical proficiency, with the additional…

  6. A Graph Theory Practice on Transformed Image: A Random Image Steganography

    PubMed Central

    Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan

    2013-01-01

    The modern information age is enriched with advanced network communication expertise but at the same time encounters countless security issues when dealing with secret and/or private information. The storage and transmission of such secret information have become highly essential and have led to a deluge of research in this field. In this paper, an effort has been made to combine a graceful graph with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through the IWT and is followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum peak signal-to-noise ratio (PSNR) of 44 dB for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value, high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attacks through the graph-theoretic random selection of coefficients. PMID:24453857
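
    The quality metric quoted above is standard: PSNR = 10·log10(MAX²/MSE). A quick sketch, flipping a few least-significant bits the way an embedding would:

    ```python
    # PSNR between cover and stego images; the "embedding" is a toy LSB flip.
    import numpy as np

    def psnr(cover, stego, max_val=255.0):
        mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
        return 10 * np.log10(max_val ** 2 / mse)

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (64, 64))
    stego = cover.copy()
    stego[::8, ::8] ^= 1                      # flip a few LSBs, as embedding would
    print(round(psnr(cover, stego), 1), "dB")
    ```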

  7. Evolutionary graph theory: breaking the symmetry between interaction and replacement

    PubMed Central

    Ohtsuki, Hisashi; Pacheco, Jorge M.; Nowak, Martin A.

    2008-01-01

    We study evolutionary dynamics in a population whose structure is given by two graphs: the interaction graph determines who plays with whom in an evolutionary game; the replacement graph specifies the geometry of evolutionary competition and updating. First, we calculate the fixation probabilities of frequency dependent selection between two strategies or phenotypes. We consider three different update mechanisms: birth-death, death-birth and imitation. Then, as a particular example, we explore the evolution of cooperation. Suppose the interaction graph is a regular graph of degree h, the replacement graph is a regular graph of degree g and the overlap between the two graphs is a regular graph of degree l. We show that cooperation is favored by natural selection if b/c > hg/l. Here, b and c denote the benefit and cost of the altruistic act. This result holds for death-birth updating, weak selection and large population size. Note that the optimum population structure for cooperators is given by maximum overlap between the interaction and the replacement graph (g = h = l), which means that the two graphs are identical. We also prove that a modified replicator equation can describe how the expected values of the frequencies of an arbitrary number of strategies change on replacement and interaction graphs: the two graphs induce a transformation of the payoff matrix. PMID:17350049
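
    The b/c rule is simple enough to state as code. A one-line check under the stated conditions (death-birth updating, weak selection, large population):

    ```python
    # Cooperation is favoured when b/c > h*g/l (interaction degree h,
    # replacement degree g, overlap degree l).
    def cooperation_favoured(b, c, h, g, l):
        return b / c > (h * g) / l

    # Identical graphs (g = h = l = k) recover the simple b/c > k rule:
    print(cooperation_favoured(b=6, c=1, h=4, g=4, l=4))   # True:  6 > 4
    print(cooperation_favoured(b=6, c=1, h=4, g=4, l=2))   # False: 6 < 8
    ```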

  8. A Fuzzy Rule-Base Model for Classification of Spirometric FVC Graphs in Chronical Obstructive Pulmonary Diseases

    DTIC Science & Technology

    2001-10-25

    questionnaire was filled out before test which is very important criteria for each subject for investigating are smoking cigarettes, having asthma, chronic...9, 12]. Secondly, observed FVC and FEV1 data plotted are taken under investigation. According to the questionnaire filled out, subjects... questionnaire filled in by each subject on the result of the diagnosing (categorizing FVC graphs) COPD, elimination of erroneous factors affecting

  9. Reference values of MostGraph measures for middle-aged and elderly Japanese individuals who participated in annual health checkups.

    PubMed

    Abe, Yuki; Shibata, Yoko; Igarashi, Akira; Inoue, Sumito; Sato, Kento; Sato, Masamichi; Nemoto, Takako; Kobayashi, Maki; Nishiwaki, Michiko; Kimura, Tomomi; Tokairin, Yoshikane; Kayama, Takamasa; Kubota, Isao

    2016-05-01

    The forced oscillation technique (FOT) can measure respiratory system resistance and reactance under tidal volume respiration. MostGraph is a device that incorporates the FOT and enables the immediate, three-dimensional visualization of resistance and reactance parameters. The aim of this study was to establish MostGraph reference values for middle-aged and elderly Japanese individuals. From 2004 to 2006, 3253 subjects living in Takahata, Yamagata underwent spirometry. Of these, 872 again underwent spirometry in 2011, and 784 (368 men, ages 46-89 years; 416 women, ages 47-90 years) underwent FOT examinations using MostGraph-01. In this study population, 19.0% of the men and 91.5% of the women were life-long never smokers. Abnormal spirometric findings were observed in 30.2% of the men and 14.6% of the women. Although the respiratory system resistance and reactance parameters obtained using MostGraph were not distributed normally, normal distribution was achieved via natural logarithm (R5, R20, Fres, and ALX), square root (R5-R20), or exponential (X5) transformation. Furthermore, the transformed values were converted back to the actual values after determining the values representing one and two standard deviations from the mean. Respiratory system resistance and reactance reference values were determined using MostGraph in middle-aged and elderly Japanese individuals who participated in annual health checkups. Copyright © 2016 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
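
    The reference-interval construction can be sketched generically: transform the skewed measure toward normality, take the mean ± 2 SD on the transformed scale, and map the limits back. The data below are simulated, not the Takahata cohort's.

    ```python
    # Reference interval for a skewed measure via natural-log transformation.
    import numpy as np

    rng = np.random.default_rng(0)
    r5 = rng.lognormal(mean=1.0, sigma=0.4, size=784)    # skewed resistance values

    logged = np.log(r5)                                  # transform to near-normal
    lo = logged.mean() - 2 * logged.std()
    hi = logged.mean() + 2 * logged.std()
    print("reference interval:", round(np.exp(lo), 2), "-", round(np.exp(hi), 2))
    ```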

  10. Carbon Nanotubes' Effect on Mitochondrial Oxygen Flux Dynamics: Polarography Experimental Study and Machine Learning Models using Star Graph Trace Invariants of Raman Spectra.

    PubMed

    González-Durruthy, Michael; Monserrat, Jose M; Rasulev, Bakhtiyor; Casañola-Martín, Gerardo M; Barreiro Sorrivas, José María; Paraíso-Medina, Sergio; Maojo, Víctor; González-Díaz, Humberto; Pazos, Alejandro; Munteanu, Cristian R

    2017-11-11

    This study presents the impact of carbon nanotubes (CNTs) on mitochondrial oxygen mass flux ( J m ) under three experimental conditions. New experimental results and a new methodology are reported for the first time and they are based on CNT Raman spectra star graph transform (spectral moments) and perturbation theory. The experimental measures of J m showed that no tested CNT family can inhibit the oxygen consumption profiles of mitochondria. The best model for the prediction of J m for other CNTs was provided by random forest using eight features, obtaining test R-squared ( R ²) of 0.863 and test root-mean-square error (RMSE) of 0.0461. The results demonstrate the capability of encoding CNT information into spectral moments of the Raman star graphs (SG) transform with a potential applicability as predictive tools in nanotechnology and material risk assessments.
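
    The modelling step reduces to a familiar recipe: a random forest on eight descriptors, scored by test R² and RMSE. The sketch below uses random placeholder features in place of the Raman star-graph spectral moments.

    ```python
    # Random forest regression with eight descriptors, scored by R^2 and RMSE.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 8))                 # 8 descriptors per CNT condition
    y = X[:, 0] * 0.5 - X[:, 3] * 0.2 + rng.normal(scale=0.05, size=300)

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
    pred = model.predict(Xte)
    print("R2:", round(r2_score(yte, pred), 3),
          "RMSE:", round(mean_squared_error(yte, pred) ** 0.5, 4))
    ```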

  11. An Ada inference engine for expert systems

    NASA Technical Reports Server (NTRS)

    Lavallee, David B.

    1986-01-01

    The purpose is to investigate the feasibility of using Ada for rule-based expert systems with real-time performance requirements. This includes exploring the Ada features which give improved performance to expert systems as well as optimizing the tradeoffs or workarounds that the use of Ada may require. A prototype inference engine was built using Ada, and rule firing rates in excess of 500 per second were demonstrated on a single MC68000 processor. The knowledge base uses a directed acyclic graph to represent production lines. The graph allows the use of AND, OR, and NOT logical operators. The inference engine uses a combination of both forward and backward chaining in order to reach goals as quickly as possible. Future efforts will include additional investigation of multiprocessing to improve performance and creating a user interface allowing rule input in an Ada-like syntax. Investigation of multitasking and alternate knowledge base representations will help to analyze some of the performance issues as they relate to larger problems.

  12. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
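
    The clustering transformation can be miniaturized: dataflow node names such as "layer1/conv" encode a scope hierarchy, and collapsing nodes by their leading scope yields the overview graph. Names below are invented.

    ```python
    # Build a clustered overview graph by collapsing nodes to their top scope.
    import networkx as nx

    g = nx.DiGraph([("input", "layer1/conv"), ("layer1/conv", "layer1/relu"),
                    ("layer1/relu", "layer2/conv"), ("layer2/conv", "loss")])

    def cluster_by_scope(g):
        scope = lambda n: n.split("/")[0]           # top-level name scope
        clustered = nx.DiGraph()
        for u, v in g.edges:
            if scope(u) != scope(v):                # drop intra-scope edges
                clustered.add_edge(scope(u), scope(v))
        return clustered

    print(cluster_by_scope(g).edges)  # input->layer1, layer1->layer2, layer2->loss
    ```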

  13. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    PubMed Central

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
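
    The two core tests are straightforward to code. The sketch below flags non-random variation when the longest run beside the median is unusually long or the median is crossed unusually few times, using commonly cited cut-off forms (a log2-based run limit and a binomial crossings limit); the exact rule sets compared in the study differ in their details.

    ```python
    # Run chart signals: unusually long runs or unusually few median crossings.
    import math
    from statistics import median
    from scipy.stats import binom

    def run_chart_signals(y):
        med = median(y)
        signs = [v > med for v in y if v != med]      # drop points on the median
        n = len(signs)
        runs, longest, crossings = 1, 1, 0
        for a, b in zip(signs, signs[1:]):
            if a == b:
                runs += 1
                longest = max(longest, runs)
            else:
                runs, crossings = 1, crossings + 1
        run_limit = round(math.log2(n)) + 3           # longest-run cut-off
        cross_limit = binom.ppf(0.05, n - 1, 0.5)     # fewest expected crossings
        return longest > run_limit or crossings < cross_limit

    # A sustained shift crosses the median only once -> signal.
    print(run_chart_signals([1, 2, 1, 3, 2, 1, 9, 8, 9, 10, 9, 11]))  # True
    ```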

  14. An Automated Method for Identifying Inconsistencies within Diagrammatic Software Requirements Specifications

    NASA Technical Reports Server (NTRS)

    Zhang, Zhong

    1997-01-01

    The development of large-scale, composite software in a geographically distributed environment is an evolutionary process. Often, in such evolving systems, striving for consistency is complicated by many factors, because development participants have various locations, skills, responsibilities, roles, opinions, languages, terminology, and different degrees of abstraction they employ. This naturally leads to many partial specifications or viewpoints. These multiple views on the system being developed usually overlap and, in turn, give rise to the potential for inconsistency. Existing CASE tools do not efficiently manage inconsistencies in a distributed development environment for a large-scale project. Based on the ViewPoints framework, the WHERE (Web-Based Hypertext Environment for Requirements Evolution) toolkit aims to tackle inconsistency management issues within geographically distributed software development projects. Consequently, the WHERE project helps make software more robust and supports the software assurance process. The long-term goal of the WHERE tools is inconsistency analysis and management in requirements specifications. A framework based on graph grammar theory and the TCMJAVA toolkit is proposed to detect inconsistencies among viewpoints. This systematic approach uses three basic operations (UNION, DIFFERENCE, INTERSECTION) to study the static behaviors of graphic and tabular notations. From these operations, subgraph Query, Selection, Merge, and Replacement operations can be derived. The approach uses graph PRODUCTIONS (rewriting rules) to study the dynamic transformations of graphs. We discuss the feasibility of implementing these operations. We also present in this thesis the process of porting the original TCM (Toolkit for Conceptual Modeling) project from C++ to the Java programming language. A scenario based on the NASA International Space Station Specification is discussed to show the applicability of our approach. Finally, conclusions and future work on inconsistency management issues in the WHERE project are summarized.
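
    The three basic operations have direct graph-library analogues. A toy rendering with networkx, where edges present in one viewpoint but not the other flag candidate inconsistencies:

    ```python
    # UNION, INTERSECTION, and DIFFERENCE of two toy requirement viewpoints.
    import networkx as nx

    v1 = nx.DiGraph([("sensor", "controller"), ("controller", "valve")])
    v2 = nx.DiGraph([("sensor", "controller"), ("controller", "pump")])

    union = nx.compose(v1, v2)                          # everything both views say
    intersection = v1.edge_subgraph([e for e in v1.edges if v2.has_edge(*e)])
    difference = [e for e in v1.edges if not v2.has_edge(*e)]

    print(sorted(union.edges))
    print(sorted(intersection.edges))
    print(difference)                                   # only in viewpoint 1
    ```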

  15. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata

    PubMed Central

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-01-01

    In this paper, in order to describe complex network systems, we first propose a general modeling framework that combines a dynamic graph with hybrid automata, which we name Dynamic Graph Hybrid Automata (DGHA). We then apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the mode types and their number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over Beijing's third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing's third ring road. Practical application to a large-scale road network will be implemented via a decentralized modeling approach and distributed observer design in future research. PMID:28353664
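
    The CTM dynamics embedded in the DGHA can be sketched in a few lines: each cell's density is updated by interface flows taken as the minimum of upstream demand and downstream supply. Parameter values below are illustrative.

    ```python
    # One Cell Transmission Model step: flows = min(demand upstream, supply downstream).
    import numpy as np

    def ctm_step(rho, v=60.0, w=20.0, rho_max=120.0, q_max=1800.0,
                 dx=0.5, dt=0.005):
        demand = np.minimum(v * rho, q_max)              # what each cell can send
        supply = np.minimum(w * (rho_max - rho), q_max)  # what each cell can take
        flow = np.minimum(demand[:-1], supply[1:])       # interface flows
        rho = rho.copy()
        rho[:-1] -= dt / dx * flow
        rho[1:] += dt / dx * flow
        return rho

    rho = np.array([30.0, 80.0, 110.0, 40.0])            # veh/km per cell
    for _ in range(10):
        rho = ctm_step(rho)
    print(np.round(rho, 1))
    ```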

  16. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.

    PubMed

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-03-29

    In this paper, in order to describe complex network systems, we first propose a general modeling framework that combines a dynamic graph with hybrid automata, which we name Dynamic Graph Hybrid Automata (DGHA). We then apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the mode types and their number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over Beijing's third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing's third ring road. Practical application to a large-scale road network will be implemented via a decentralized modeling approach and distributed observer design in future research.

  17. Memory and other properties of multiple test procedures generated by entangled graphs.

    PubMed

    Maurer, Willi; Bretz, Frank

    2013-05-10

    Methods for addressing multiplicity in clinical trials have attracted much attention during the past 20 years. They include the investigation of new classes of multiple test procedures, such as fixed sequence, fallback and gatekeeping procedures. More recently, sequentially rejective graphical test procedures have been introduced to construct and visualize complex multiple test strategies. These methods propagate the local significance level of a rejected null hypothesis to not-yet rejected hypotheses. In the graph defining the test procedure, hypotheses together with their local significance levels are represented by weighted vertices and the propagation rule by weighted directed edges. An algorithm provides the rules for updating the local significance levels and the transition weights after rejecting an individual hypothesis. These graphical procedures have no memory in the sense that the origin of the propagated significance level is ignored in subsequent iterations. However, in some clinical trial applications, memory is desirable to reflect the underlying dependence structure of the study objectives. In such cases, it would allow the further propagation of significance levels to be dependent on their origin and thus reflect the grouped parent-descendant structures of the hypotheses. We will give examples of such situations and show how to induce memory and other properties by convex combination of several individual graphs. The resulting entangled graphs provide an intuitive way to represent the underlying relative importance relationships between the hypotheses, are as easy to perform as the original individual graphs, remain sequentially rejective and control the familywise error rate in the strong sense. Copyright © 2012 John Wiley & Sons, Ltd.
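
    The underlying sequentially rejective update is compact. The sketch below applies the standard graphical-approach algorithm: rejecting H_j propagates its level along the weighted edges and rewires the remaining graph. The two-hypothesis weights are an illustrative example, not one of the paper's entangled graphs.

    ```python
    # Graphical-approach update after rejecting hypothesis j:
    #   alpha_k += alpha_j * g_jk
    #   g_kl    = (g_kl + g_kj * g_jl) / (1 - g_kj * g_jk)   for k, l != j
    import numpy as np

    def reject(j, alpha, G):
        n = len(alpha)
        new_alpha = alpha + alpha[j] * G[j]
        new_G = np.zeros_like(G)
        for k in range(n):
            for l in range(n):
                if j in (k, l) or k == l or G[k, j] * G[j, k] >= 1:
                    continue
                new_G[k, l] = (G[k, l] + G[k, j] * G[j, l]) / (1 - G[k, j] * G[j, k])
        new_alpha[j] = 0.0
        return new_alpha, new_G

    alpha = np.array([0.02, 0.005])          # local levels, total 0.025
    G = np.array([[0.0, 1.0],                # H1 passes its level on to H2
                  [1.0, 0.0]])
    print(reject(0, alpha, G))               # H2 now gets the full 0.025
    ```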

  18. Network Reliability: The effect of local network structure on diffusive processes

    PubMed Central

    Youssef, Mina; Khorramzadeh, Yasamin; Eubank, Stephen

    2014-01-01

    This paper re-introduces the network reliability polynomial – introduced by Moore and Shannon in 1956 – for studying the effect of network structure on the spread of diseases. We exhibit a representation of the polynomial that is well-suited for estimation by distributed simulation. We describe a collection of graphs derived from Erdős-Rényi and scale-free-like random graphs in which we have manipulated assortativity-by-degree and the number of triangles. We evaluate the network reliability for all these graphs under a reliability rule that is related to the expected size of a connected component. Through these extensive simulations, we show that for positively or neutrally assortative graphs, swapping edges to increase the number of triangles does not increase the network reliability. Also, positively assortative graphs are more reliable than neutral or disassortative graphs with the same number of edges. Moreover, we show the combined effect of both assortativity-by-degree and the presence of triangles on the critical point and the size of the smallest subgraph that is reliable. PMID:24329321
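
    A Monte Carlo estimate of such a reliability value takes only a few lines: keep each edge with probability x and record how often the chosen reliability rule holds, here that the largest component reaches a target fraction of the vertices.

    ```python
    # Monte Carlo estimate of one point of the network reliability R(x).
    import random
    import networkx as nx

    def reliability(g, x, frac=0.5, trials=2000, rng=random.Random(0)):
        hits = 0
        for _ in range(trials):
            sub = nx.Graph([e for e in g.edges if rng.random() < x])
            sub.add_nodes_from(g)                       # keep isolated vertices
            largest = max(len(c) for c in nx.connected_components(sub))
            hits += largest >= frac * g.number_of_nodes()
        return hits / trials

    g = nx.erdos_renyi_graph(50, 0.1, seed=1)
    print(reliability(g, 0.3), reliability(g, 0.7))
    ```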

  19. Brain network dynamics characterization in epileptic seizures. Joint directed graph and pairwise synchronization measures

    NASA Astrophysics Data System (ADS)

    Rodrigues, A. C.; Machado, B. S.; Florence, G.; Hamad, A. P.; Sakamoto, A. C.; Fujita, A.; Baccalá, L. A.; Amaro, E.; Sameshima, K.

    2014-12-01

    Here we propose and evaluate a new approach to analyse multichannel mesial temporal lobe epilepsy EEG data from eight patients through complex network and synchronization theories. The method employs a Granger causality test to infer the directed connectivity graphs and a wavelet transform based phase synchronization measure whose characteristics allow studying dynamical transitions during epileptic seizures. We present a new combined graph measure that quantifies the level of network hub formation, called network hub out-degree, which closely reflects the level of synchronization observed during the ictus.

  20. 4 CFR 22.9 - Subpoenas [Rule 9].

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... information (including writings, papers, books, accounts, photographs, drawings, graphs, charts, recordings... and the mileage allowed by 28 U.S.C. 1821 or other applicable law; however, where the subpoena is...

  1. Peripartum Cardiomyopathy

    MedlinePlus

    ... heart rate and rhythm, to look for abnormal electric conduction, and to rule out a heart ... Figure: survival in patients with CAD, IDC, and PPCM cardiomyopathy. This bar graph shows the predicted 5- ...

  2. 39 CFR 230.11 - What special definitions apply to these rules?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., calendar and diary entries, graphs, notes, charts, tabulations, data analyses, statistical or information accumulations, records of meetings and conversations, film impressions, magnetic tapes, computer discs, and...

  3. Genome alignment with graph data structures: a comparison

    PubMed Central

    2014-01-01

    Background Recent advances in rapid, low-cost sequencing have opened up the opportunity to study complete genome sequences. The computational approach of multiple genome alignment allows investigation of evolutionarily related genomes in an integrated fashion, providing a basis for downstream analyses such as rearrangement studies and phylogenetic inference. Graphs have proven to be a powerful tool for coping with the complexity of genome-scale sequence alignments. The potential of graphs to intuitively represent all aspects of genome alignments led to the development of graph-based approaches for genome alignment. These approaches construct a graph from a set of local alignments, and derive a genome alignment through identification and removal of graph substructures that indicate errors in the alignment. Results We compare the structures of commonly used graphs in terms of their abilities to represent alignment information. We describe how the graphs can be transformed into each other, and identify and classify graph substructures common to one or more graphs. Based on previous approaches, we compile a list of modifications that remove these substructures. Conclusion We show that crucial pieces of alignment information, associated with inversions and duplications, are not visible in the structure of all graphs. If we neglect vertex or edge labels, the graphs differ in their information content. Still, many ideas are shared among all graph-based approaches. Based on these findings, we outline a conceptual framework for graph-based genome alignment that can assist in the development of future genome alignment tools. PMID:24712884

  4. A Robust False Matching Points Detection Method for Remote Sensing Image Registration

    NASA Astrophysics Data System (ADS)

    Shan, X. J.; Tang, P.

    2015-04-01

    Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points, but the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust, high-accuracy results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated for each matching point and its K nearest matching points. A local transformation model for each matching point is then obtained by using its K nearest matching points, and the error of each matching point is computed using its transformation model. Finally, the L matching points with the largest errors are identified as false matching points and removed. This process iterates until all errors are smaller than the given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiments. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
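
    The KGD loop condenses well into NumPy/SciPy: fit a local affine model from each match's K nearest matches, score each match by its residual under its own local model, and iteratively drop the L worst until all residuals fall below the threshold. The synthetic matches below stand in for real image correspondences.

    ```python
    # Iterative false-match filtering in the spirit of KGD.
    import numpy as np
    from scipy.spatial import cKDTree

    def kgd_filter(src, dst, k=8, drop=5, tol=2.0):
        src, dst = src.copy(), dst.copy()
        while True:
            tree = cKDTree(src)
            _, nn = tree.query(src, k=k + 1)            # each point + K neighbours
            err = np.empty(len(src))
            for i, idx in enumerate(nn):
                A = np.hstack([src[idx], np.ones((len(idx), 1))])
                T, *_ = np.linalg.lstsq(A, dst[idx], rcond=None)  # local affine fit
                err[i] = np.linalg.norm(A[0] @ T - dst[i])        # own residual
            if err.max() <= tol or len(src) <= k + 1:
                return src, dst
            keep = np.argsort(err)[:-drop]              # drop the L worst matches
            src, dst = src[keep], dst[keep]

    rng = np.random.default_rng(0)
    src = rng.uniform(0, 100, (60, 2))
    dst = src * 1.1 + 5                                  # a clean global transform
    dst[:4] += rng.uniform(20, 40, (4, 2))               # inject false matches
    print(len(kgd_filter(src, dst)[0]), "matches kept")
    ```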

  5. Real World Cognitive Multi-Tasking and Problem Solving: A Large Scale Cognitive Architecture Simulation Through High Performance Computing-Project Casie

    DTIC Science & Technology

    2008-03-01

    computational version of the CASIE architecture serves to demonstrate the functionality of our primary theories. However, implementation of several other...following facts. First, based on Theorem 3 and Theorem 5, the objective function is non-increasing under updating rule (6); second, by the criteria for...reassignment in updating rule (7), it is trivial to show that the objective function is non-increasing under updating rule (7). A Unified View to Graph

  6. The signed permutation group on Feynman graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purkart, Julian, E-mail: purkart@physik.hu-berlin.de

    2016-08-15

    The Feynman rules assign to every graph an integral which can be written as a function of a scaling parameter L. Assuming L for the process under consideration is very small, so that contributions to the renormalization group are small, we can expand the integral and only consider the lowest orders in the scaling. The aim of this article is to determine specific combinations of graphs in a scalar quantum field theory that lead to a remarkable simplification of the first non-trivial term in the perturbation series. It will be seen that the result is independent of the renormalization scheme and the scattering angles. To achieve that goal we will utilize the parametric representation of scalar Feynman integrals as well as the Hopf algebraic structure of the Feynman graphs under consideration. Moreover, we will present a formula which reduces the effort of determining the first-order term in the perturbation series for the specific combination of graphs to a minimum.

  7. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
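
    The run-time search can be caricatured as forward chaining over input/output sets: fire any service whose inputs are satisfied until the requested output appears. Service names and parameters below are invented.

    ```python
    # Forward-chaining composition over a toy service dependency graph.
    services = {
        "getPatientID":  ({"citizen_id"}, {"patient_id"}),
        "getLabResults": ({"patient_id"}, {"lab_results"}),
        "getClaims":     ({"patient_id", "insurer"}, {"claims"}),
    }

    def compose(available, goal):
        plan = []
        progress = True
        while goal not in available and progress:
            progress = False
            for name, (inputs, outputs) in services.items():
                if name not in plan and inputs <= available:  # inputs satisfied
                    plan.append(name)
                    available |= outputs
                    progress = True
        return plan if goal in available else None

    print(compose({"citizen_id"}, "lab_results"))
    # -> ['getPatientID', 'getLabResults']
    ```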

  8. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602

  9. Graphing evolutionary pattern and process: a history of techniques in archaeology and paleobiology.

    PubMed

    Lyman, R Lee

    2009-02-01

    Graphs displaying evolutionary patterns are common in paleontology and in United States archaeology. Both disciplines subscribed to a transformational theory of evolution and graphed evolution as a sequence of archetypes in the late nineteenth and early twentieth centuries. U.S. archaeologists in the second decade of the twentieth century, and paleontologists shortly thereafter, developed distinct graphic styles that reflected the Darwinian variational model of evolution. Paleobiologists adopted the view of a species as a set of phenotypically variant individuals and graphed those variations either as central tendencies or as histograms of frequencies of variants. Archaeologists presumed their artifact types reflected cultural norms of prehistoric artisans and the frequency of specimens in each type reflected human choice and type popularity. They graphed cultural evolution as shifts in frequencies of specimens representing each of several artifact types. Confusion of pattern and process is exemplified by a paleobiologist misinterpreting the process illustrated by an archaeological graph, and an archaeologist misinterpreting the process illustrated by a paleobiological graph. Each style of graph displays particular evolutionary patterns and implies particular evolutionary processes. Graphs of a multistratum collection of prehistoric mammal remains and a multistratum collection of artifacts demonstrate that many graph styles can be used for both kinds of collections.

  10. Algebra Aerobics

    ERIC Educational Resources Information Center

    Barnes, Julie; Jaqua, Kathy

    2011-01-01

    A kinesthetic approach to developing ideas of function transformations can get students physically and intellectually involved. This article presents low- or no-cost activities which use kinesthetics to support high school students' mathematical understanding of transformations of function graphs. The important point of these activities is to help…

  11. Intelligent Distributed Systems

    DTIC Science & Technology

    2015-10-23

    periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence... the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem... consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations, which are also known as
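
    The convex-combination idea can be sketched as follows; this is a toy reconstruction under stated assumptions, not the report's algorithm. Each gossip on a ring edge replaces the two values with convex combinations (weight w = 1/2 recovers standard averaging), and the order of gossips within a period is exactly the quantity being optimized in the cited work.

    ```python
    import numpy as np

    def gossip_period(x, edge_order, w=0.5):
        """Run one period of gossiping on a ring of n agents. `edge_order`
        lists the edges (i, i+1 mod n) in the order they gossip; each gossip
        applies a convex combination rule and preserves the sum of values."""
        x = x.copy()
        n = len(x)
        for i in edge_order:
            j = (i + 1) % n
            xi, xj = x[i], x[j]
            x[i] = w * xi + (1 - w) * xj
            x[j] = (1 - w) * xi + w * xj
        return x

    x = np.random.rand(8)
    for _ in range(200):                      # repeat the period
        x = gossip_period(x, edge_order=range(8))
    print(x.round(4))                         # all entries ~ the initial mean
    ```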

  12. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

    Infrared and visible image fusion technique is a popular topic in image analysis because it can integrate complementary information and obtain reliable and accurate description of scenes. Multiscale transform theory as a signal representation method is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on spectral graph wavelet transform (SGWT) and bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of different source images, but also excellently represents the irregular areas of the source images. On the other hand, a novel weighted average method based on bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.

  13. Overlapping community detection based on link graph using distance dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Jing; Cai, Li-Jun

    2018-01-01

    The distance dynamics model was recently proposed to detect the disjoint community structure of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to ensure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
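
    The first phase corresponds to the classical line-graph construction, and the third phase amounts to mapping edge communities back to their endpoints. A minimal sketch with networkx, in which the disjoint community detector on the link graph is stubbed out (the paper's improved distance dynamics model is not reproduced here):

    ```python
    import networkx as nx

    G = nx.karate_club_graph()
    L = nx.line_graph(G)                  # phase 1: nodes of L are edges of G

    # Stand-in for phase 2: any disjoint labeling of the link graph's nodes
    # (here, trivially, its connected components).
    edge_label = {e: i for i, comp in enumerate(nx.connected_components(L))
                  for e in comp}

    # Phase 3 (recovery): a node of G belongs to every community one of its
    # incident edges belongs to, which naturally yields overlapping communities.
    node_comms = {v: {edge_label[e] for e in L.nodes if v in e} for v in G}
    print(node_comms[0])
    ```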

  14. Controlling bi-partite entanglement in multi-qubit systems

    NASA Astrophysics Data System (ADS)

    Plesch, Martin; Novotný, Jaroslav; Dzuráková, Zuzana; Buzek, Vladimír

    2004-02-01

    Bi-partite entanglement in multi-qubit systems cannot be shared freely. The rules of quantum mechanics impose bounds on how multi-qubit systems can be correlated. In this paper, we utilize a concept of entangled graphs with weighted edges in order to analyse pure quantum states of multi-qubit systems. Here qubits are represented by vertices of the graph, while the presence of bi-partite entanglement is represented by an edge between the corresponding vertices. The weight of each edge is defined to be the entanglement between the two qubits connected by the edge, as measured by the concurrence. We prove that each entangled graph with entanglement bounded by a specific value of the concurrence can be represented by a pure multi-qubit state. In addition, we present a logic network with O(N^2) elementary gates that can be used for preparation of the weighted entangled graphs of N qubits.

  15. Overview of Sparse Graph for Multiple Access in Future Mobile Networks

    NASA Astrophysics Data System (ADS)

    Lei, Jing; Li, Baoguo; Li, Erbao; Gong, Zhenghui

    2017-10-01

    Multiple access via sparse graph, such as low density signature (LDS) and sparse code multiple access (SCMA), is a promising technique for future wireless communications. This survey presents an overview of the developments in this burgeoning field, including transmitter structures, extrinsic information transfer (EXIT) chart analysis and comparisons with existing multiple access techniques. Such techniques enable multiple access under overloaded conditions while achieving satisfactory performance. A message-passing algorithm is utilized for multi-user detection at the receiver, and structures of the sparse graph are illustrated in detail. Outlooks and challenges of this technique are also presented.

  16. Dynamic Uncertain Causality Graph for Knowledge Representation and Reasoning: Utilization of Statistical Data and Domain Knowledge in Complex Cases.

    PubMed

    Zhang, Qin; Yao, Quanying

    2018-05-01

    The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnoses of large, complex industrial systems, and to disease diagnosis. This paper extends the DUCG to model more complex cases than could previously be modeled, e.g., cases in which statistical data are in different groups with or without overlap, and in which some domain knowledge and actions (new variables with uncertain causalities) are introduced. In other words, this paper proposes extended modes of the DUCG to model such complex cases and then transform them into one of the two standard modes. In the former situation, if no directed cyclic graph is involved, the transformed result is simply a Bayesian network (BN), and existing inference methods for BNs can be applied. In the latter situation, an inference method based on the DUCG is proposed. Examples are provided to illustrate the methodology.

  17. Assessment of tautomer distribution using the condensed reaction graph approach

    NASA Astrophysics Data System (ADS)

    Gimadiev, T. R.; Madzhidov, T. I.; Nugmanov, R. I.; Baskin, I. I.; Antipin, I. S.; Varnek, A.

    2018-03-01

    We report the first direct QSPR modeling of equilibrium constants of tautomeric transformations (logK_T) in different solvents and at different temperatures, which does not require intermediate assessment of acidity (basicity) constants for all tautomeric forms. The key step of the modeling consists in merging two tautomers into one sole molecular graph (a "condensed reaction graph"), which enables computing molecular descriptors that characterize the entire equilibrium. The support vector regression method was used to build the models. The training set consisted of 785 transformations belonging to 11 types of tautomeric reactions with equilibrium constants measured in different solvents and at different temperatures. The models obtained perform well both in cross-validation (Q² = 0.81, RMSE = 0.7 logK_T units) and on two external test sets. Benchmarking studies demonstrate that our models outperform results obtained with DFT B3LYP/6-311++G(d,p) and with the ChemAxon Tautomerizer, applicable only in water at room temperature.

  18. An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional

    NASA Astrophysics Data System (ADS)

    van Gennip, Yves

    2018-06-01

    We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ-converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
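
    A minimal sketch of one mass-conserving MBO iteration on a graph, under simplifying assumptions: a plain graph Laplacian, a dense matrix exponential for the diffusion, and the long-range Ohta-Kawasaki term omitted for brevity. The "threshold" step keeps the nodes with the largest diffused values so that the phase volume (mass) is preserved.

    ```python
    import numpy as np
    import networkx as nx
    from scipy.linalg import expm

    G = nx.erdos_renyi_graph(40, 0.15, seed=1)
    L = nx.laplacian_matrix(G).toarray().astype(float)

    def mbo_step(u, tau, mass):
        """One iteration: (1) diffuse the phase indicator u for time tau under
        the graph heat equation u' = -Lu; (2) instead of thresholding at 1/2,
        keep the `mass` nodes with the largest diffused values."""
        v = expm(-tau * L) @ u
        u_new = np.zeros_like(u)
        u_new[np.argsort(v)[-mass:]] = 1.0
        return u_new

    u = np.zeros(40)
    u[:12] = 1.0                      # initial phase with mass 12
    for _ in range(20):
        u = mbo_step(u, tau=0.5, mass=12)
    print(int(u.sum()))               # mass is conserved: 12
    ```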

  19. Group learning versus local learning: Which is preferable for public cooperation?

    NASA Astrophysics Data System (ADS)

    Yang, Shi-Han; Song, Qi-Qing

    2018-01-01

    We study the evolution of cooperation in public goods games on various graphs, focusing on the effects brought by different kinds of strategy donors. This highlights a basic feature of a public goods game: there is a remarkable difference between the players one interacts with and the players one imitates. A player can learn from all the groups of which it is a member or from its nearest neighbors only, and the results show that group learning rules promote cooperation better than local learning rules on many networks. Degree heterogeneity may be an effective mechanism for harvesting the cooperation expectation in many cases; however, we find that under group learning rules heterogeneity does not necessarily imply a high frequency of cooperators in a population. Previous work showed that cooperators hardly evolve whenever interaction and replacement do not coincide in evolutionary pairwise dilemmas on graphs, whereas for public goods games we find that breaking this symmetry is conducive to the survival of cooperators.

  20. Feedback topology and XOR-dynamics in Boolean networks with varying input structure

    NASA Astrophysics Data System (ADS)

    Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.

    2009-08-01

    We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with a XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ, this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.

  1. Feedback topology and XOR-dynamics in Boolean networks with varying input structure.

    PubMed

    Ciandrini, L; Maffi, C; Motta, A; Bassetti, B; Cosentino Lagomarsino, M

    2009-08-01

    We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter gamma. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with a XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying gamma, this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
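
    The parallel XOR dynamics are easy to simulate directly. The sketch below (arbitrary toy parameters, not the authors' code) builds a fixed in-degree network in which a fraction gamma of nodes receives K = 2 random inputs and updates by XOR, then iterates until a state repeats, giving the transient length and the attractor period whose maximum the authors estimate from topology.

    ```python
    import random

    N, K, gamma = 30, 2, 0.7
    rng = random.Random(0)

    # Input-receiving nodes get K random inputs; the others keep their value.
    inputs = {i: rng.sample(range(N), K) for i in range(N) if rng.random() < gamma}

    def step(state):
        """Parallel update: each input-receiving node becomes the XOR (sum
        mod 2) of its inputs' previous values."""
        return tuple(sum(state[j] for j in inputs[i]) % 2 if i in inputs
                     else state[i] for i in range(N))

    state = tuple(rng.randint(0, 1) for _ in range(N))
    seen, t = {}, 0
    while state not in seen:          # deterministic dynamics must cycle
        seen[state] = t
        state, t = step(state), t + 1
    print("transient:", seen[state], "period:", t - seen[state])
    ```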

  2. Overview and extensions of a system for routing directed graphs on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1988-01-01

    Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication, and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, researchers present an extension to the above method which allows for enhanced performance by allowing some broadcasting capabilities.

  3. Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction

    NASA Astrophysics Data System (ADS)

    Terlecki, Pawel; Walczak, Krzysztof

    This paper extends the rough set approach for JEP induction based on the notion of a condensed decision table. The original transaction database is transformed to a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and the local approach have been tested and discussed for dense and synthetically generated sparse datasets.
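
    The item-aggregation step can be illustrated with a conflict-graph coloring: items that co-occur in some transaction must not be aggregated into the same attribute, so any proper coloring yields a valid aggregation. A toy sketch with networkx's greedy coloring (the transactions are made up):

    ```python
    import networkx as nx

    transactions = [{"a", "b", "c"}, {"b", "d"}, {"c", "d", "e"}]

    # Build the conflict graph: an edge joins items that co-occur.
    conflicts = nx.Graph()
    for t in transactions:
        conflicts.add_nodes_from(t)
        conflicts.add_edges_from((i, j) for i in t for j in t if i < j)

    # Each color class becomes one aggregated attribute of the condensed
    # (relational) decision table.
    coloring = nx.greedy_color(conflicts, strategy="largest_first")
    print(coloring)    # item -> color; each color class is one aggregated attribute
    ```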

  4. Growth and structure of the World Wide Web: Towards realistic modeling

    NASA Astrophysics Data System (ADS)

    Tadić, Bosiljka

    2002-08-01

    We simulate evolution of the World Wide Web from the dynamic rules incorporating growth, bias attachment, and rewiring. We show that the emergent double-hierarchical structure with distinct distributions of out- and in-links is comparable with the observed empirical data when the control parameter (average graph flexibility β) is kept in the range β=3-4. We then explore the Web graph by simulating (a) Web crawling to determine size and depth of connected components, and (b) a random walker that discovers the structure of connected subgraphs with dominant attractor and promoter nodes. A random walker that adapts its move strategy to mimic local node linking preferences is shown to have a short access time to "important" nodes on the Web graph.

  5. 76 FR 50148 - Notice of Intent to Negotiate Proposed Rule on Energy Efficiency Standards for Distribution...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ... Intent to Negotiate Proposed Rule on Energy Efficiency Standards for Distribution Transformers AGENCY... transformers. The purpose of the subcommittee will be to discuss and, if possible, reach consensus on a proposed rule for the energy efficiency of distribution transformers, as authorized by the Energy Policy...

  6. Alignment of Tractograms As Graph Matching.

    PubMed

    Olivetti, Emanuele; Sharmin, Nusrat; Avesani, Paolo

    2016-01-01

    The white matter pathways of the brain can be reconstructed as 3D polylines, called streamlines, through the analysis of diffusion magnetic resonance imaging (dMRI) data. The whole set of streamlines is called tractogram and represents the structural connectome of the brain. In multiple applications, like group-analysis, segmentation, or atlasing, tractograms of different subjects need to be aligned. Typically, this is done with registration methods, that transform the tractograms in order to increase their similarity. In contrast with transformation-based registration methods, in this work we propose the concept of tractogram correspondence, whose aim is to find which streamline of one tractogram corresponds to which streamline in another tractogram, i.e., a map from one tractogram to another. As a further contribution, we propose to use the relational information of each streamline, i.e., its distances from the other streamlines in its own tractogram, as the building block to define the optimal correspondence. We provide an operational procedure to find the optimal correspondence through a combinatorial optimization problem and we discuss its similarity to the graph matching problem. In this work, we propose to represent tractograms as graphs and we adopt a recent inexact sub-graph matching algorithm to approximate the solution of the tractogram correspondence problem. On tractograms generated from the Human Connectome Project dataset, we report experimental evidence that tractogram correspondence, implemented as graph matching, provides much better alignment than affine registration and comparable if not better results than non-linear registration of volumes.

  7. Scalable Adaptive Architectures for Maritime Operations Center Command and Control

    DTIC Science & Technology

    2011-05-06

    the project to investigate the possibility of using earlier work on the validation and verification of rule bases in addressing the dynamically... support the organization. To address the dynamically changing rules of engagement of a maritime force as it crosses different geographical areas, GMU... dynamic analysis makes use of an Occurrence Graph that corresponds to the dynamics (or execution) of the Petri Net, to capture properties

  8. Theoretical and subjective bit assignments in transform picture

    NASA Technical Reports Server (NTRS)

    Jones, H. W., Jr.

    1977-01-01

    It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
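
    For reference, the well-known theoretical rule referred to here assigns coefficient k the average bit budget plus half the base-2 log of its variance relative to the geometric mean of all coefficient variances. A small sketch with illustrative variances:

    ```python
    import numpy as np

    def bit_assignment(variances, avg_bits):
        """Classical transform-coding bit allocation:
        b_k = avg_bits + 0.5 * log2(var_k / geometric_mean(variances))."""
        v = np.asarray(variances, dtype=float)
        geo_mean = np.exp(np.mean(np.log(v)))
        return avg_bits + 0.5 * np.log2(v / geo_mean)

    b = bit_assignment([16.0, 4.0, 1.0, 0.25], avg_bits=2.0)
    print(b)               # [3.5 2.5 1.5 0.5]
    print(b.mean())        # 2.0 -- the average bit budget is preserved
    ```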

  9. Graph Analytics for Signature Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Johnson, John R.; Halappanavar, Mahantesh

    2013-06-01

    Within large amounts of seemingly unstructured data it can be difficult to find signatures of events. In our work we transform unstructured data into a graph representation. By doing this we expose underlying structure in the data and can take advantage of existing graph analytics capabilities, as well as develop new capabilities. Currently we focus on applications in cybersecurity and communication domains. Within cybersecurity we aim to find signatures for perpetrators using the pass-the-hash attack, and in communications we look for emails or phone calls going up or down a chain of command. In both of these areas, and in many others, the signature we look for is a path with certain temporal properties. In this paper we discuss our methodology for finding these temporal paths within large graphs.
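
    A "path with certain temporal properties" can be made concrete as a path whose edge timestamps strictly increase, enumerable with a simple depth-first search. A sketch on a made-up event graph (not the authors' data or implementation):

    ```python
    import networkx as nx

    # Hypothetical event graph: directed edges carry timestamps.
    G = nx.DiGraph()
    G.add_edges_from([("a", "b", {"t": 1}), ("b", "c", {"t": 3}),
                      ("a", "c", {"t": 2}), ("c", "d", {"t": 4}),
                      ("b", "d", {"t": 2})])

    def temporal_paths(G, src, dst, t_prev=float("-inf"), path=()):
        """Yield all paths from src to dst whose timestamps strictly increase."""
        if src == dst and path:
            yield path
            return
        for _, nxt, data in G.out_edges(src, data=True):
            if data["t"] > t_prev:
                yield from temporal_paths(G, nxt, dst, data["t"],
                                          path + ((src, nxt, data["t"]),))

    for p in temporal_paths(G, "a", "d"):
        print(p)    # three time-respecting paths from a to d
    ```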

  10. Forest fire autonomous decision system based on fuzzy logic

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Lu, Jianhua

    2010-11-01

    The proposed system integrates GPS / pseudolite / IMU and a thermal camera in order to autonomously process the graphs through identification, extraction, and tracking of forest fires or hot spots. The airborne detection platform, the graph-based algorithms, and the signal-processing framework are analyzed in detail; in particular, the rules of the decision function are expressed in terms of fuzzy logic, which is an appropriate method for expressing imprecise knowledge. The membership function and the weights of the rules are fixed through a supervised learning process. The perception system in this paper is based on a network of sensorial stations and central stations. The sensorial stations collect data including infrared and visual images and meteorological information. The central stations exchange data to perform distributed analysis. The experimental results show that the working procedure of the detection system is reasonable and that it can accurately output detection alarms and compute infrared oscillations.

  11. Emergence of cooperation in non-scale-free networks

    NASA Astrophysics Data System (ADS)

    Zhang, Yichao; Aziz-Alaoui, M. A.; Bertelle, Cyrille; Zhou, Shi; Wang, Wenting

    2014-06-01

    Evolutionary game theory is one of the key paradigms behind many scientific disciplines from science to engineering. Previous studies proposed a strategy updating mechanism which successfully demonstrated that the scale-free network can provide a framework for the emergence of cooperation. Instead, individuals in random graphs and small-world networks do not favor cooperation under this updating rule. However, a recent empirical result shows that heterogeneous networks do not promote cooperation when humans play a prisoner's dilemma. In this paper, we propose a strategy updating rule with payoff memory. We observe that the random graphs and small-world networks can provide even better frameworks for cooperation than the scale-free networks in this scenario. Our observations suggest that degree heterogeneity may be neither a sufficient condition nor a necessary condition for widespread cooperation in complex networks. Also, the topological structures alone do not suffice to determine the level of cooperation in complex networks.

  12. Transforming graph states using single-qubit operations.

    PubMed

    Dahlberg, Axel; Wehner, Stephanie

    2018-07-13

    Stabilizer states form an important class of states in quantum information, and are of central importance in quantum error correction. Here, we provide an algorithm for deciding whether one stabilizer (target) state can be obtained from another stabilizer (source) state by single-qubit Clifford operations (LC), single-qubit Pauli measurements (LPM) and classical communication (CC) between sites holding the individual qubits. What is more, we provide a recipe to obtain the sequence of LC+LPM+CC operations which prepare the desired target state from the source state, and show how these operations can be applied in parallel to reach the target state in constant time. Our algorithm has applications in quantum networks and quantum computing, and can also serve as a design tool, for example to find transformations between quantum error correcting codes. We provide a software implementation of our algorithm that makes this tool easier to apply. A key insight leading to our algorithm is to show that the problem is equivalent to one in graph theory, which is to decide whether some graph G' is a vertex-minor of another graph G. The vertex-minor problem is, in general, NP-Complete, but can be solved efficiently on graphs which are not too complex. A measure of the complexity of a graph is the rank-width, which equals the Schmidt-rank width of a subclass of stabilizer states called graph states, and thus intuitively is a measure of entanglement. Here, we show that the vertex-minor problem can be solved in time O(|G|^3), where |G| is the size of the graph G, whenever the rank-width of G and the size of G' are bounded. Our algorithm is based on techniques by Courcelle for solving fixed parameter tractable problems, where here the relevant fixed parameter is the rank-width. The second half of this paper serves as an accessible but far from exhaustive introduction to these concepts, which could be useful for many other problems in quantum information. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).
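
    The elementary graph move behind LC-equivalence is local complementation: toggling every edge among the neighbors of a vertex, which on graph states corresponds to a single-qubit Clifford operation. A minimal sketch of the operation itself (the paper's full vertex-minor algorithm goes far beyond this):

    ```python
    import networkx as nx
    from itertools import combinations

    def local_complement(G, v):
        """Return the graph obtained by complementing the subgraph induced
        by the neighborhood of v (edges among neighbors are toggled)."""
        H = G.copy()
        for a, b in combinations(list(G.neighbors(v)), 2):
            if H.has_edge(a, b):
                H.remove_edge(a, b)
            else:
                H.add_edge(a, b)
        return H

    G = nx.path_graph(3)                        # 0 - 1 - 2
    print(sorted(local_complement(G, 1).edges()))
    # [(0, 1), (0, 2), (1, 2)] -- the path becomes a triangle
    ```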

  13. New insight into the comparative power of quality-control rules that use control observations within a single analytical run.

    PubMed

    Parvin, C A

    1993-03-01

    The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
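
    The closed-form character of these results can be illustrated for a mean rule with two control observations: the probability of rejection under a systematic shift follows directly from the normal distribution. A sketch (the 3 SD limit is an illustrative choice, not the paper's specific rules):

    ```python
    from scipy.stats import norm

    def mean_rule_power(shift_sd, n=2, limit=3.0):
        """P(reject) for a mean rule that flags a run when the mean of n
        control observations falls outside +/- limit/sqrt(n) SDs of target,
        given a systematic error of `shift_sd` SDs per observation."""
        mu = shift_sd * n ** 0.5        # shift of the standardized mean
        return norm.cdf(-limit - mu) + norm.sf(limit - mu)

    for shift in (0.0, 1.0, 2.0, 3.0):
        print(f"systematic error {shift} SD -> P(reject) = "
              f"{mean_rule_power(shift):.3f}")
    ```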

  14. Graph theoretical model of a sensorimotor connectome in zebrafish.

    PubMed

    Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
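
    The small-world comparison can be sketched at toy scale: compare clustering and average path length against a size-matched random graph. Below, a Watts-Strogatz graph merely stands in for the zebrafish model (which has 2,616 neurons and 167,114 stochastic connections), so the numbers are illustrative only:

    ```python
    import networkx as nx

    G = nx.watts_strogatz_graph(300, 14, 0.1, seed=7)     # connectome stand-in
    R = nx.gnp_random_graph(300, nx.density(G), seed=8)   # size-matched random

    print("clustering ratio:",
          nx.average_clustering(G) / nx.average_clustering(R))
    print("path-length ratio:",
          nx.average_shortest_path_length(G) / nx.average_shortest_path_length(R))
    # high clustering with near-random path length indicates a small world
    ```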

  15. Bound Electron States in Skew-symmetric Quantum Wire Intersections

    DTIC Science & Technology

    2014-01-01

    Chapter 1 (literature review): 1.2.3 Kirchhoff's Rule for Quantum Wires; 1.3 Novel numerical methods development. ... regions, though this is not as obvious as it is for bulges. ... One-particle quantum scattering theory on an arbitrary finite graph with n open ends, where we define the Hamiltonian to be (minus) the Laplace operator with general

  16. Scale-free effect of substitution networks

    NASA Astrophysics Data System (ADS)

    Li, Ziyu; Yu, Zhouyu; Xi, Lifeng

    2018-02-01

    In this paper, we construct growing networks in terms of a substitution rule. Roughly speaking, we replace edges of different colors with different initial graphs, and the evolving networks are thus constructed. We then obtain the scale-free effect of our substitution networks.
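
    A minimal sketch of one substitution step under a simplifying assumption: each colored edge is replaced by a colored path (the construction in the paper allows arbitrary initial graphs per color). Iterating the step grows the network:

    ```python
    import networkx as nx

    def substitute(G, rules):
        """Replace every edge by a path whose number of interior nodes is
        given by the edge color's rule; new edges inherit the color so the
        substitution can be iterated."""
        H, fresh = nx.Graph(), 0
        for u, v, data in G.edges(data=True):
            k = rules[data["color"]]
            chain = [u] + [f"n{fresh}_{i}" for i in range(k)] + [v]
            fresh += 1
            for a, b in zip(chain, chain[1:]):
                H.add_edge(a, b, color=data["color"])
        return H

    G = nx.Graph()
    G.add_edge(0, 1, color="red")
    G.add_edge(1, 2, color="blue")
    for _ in range(3):
        G = substitute(G, {"red": 2, "blue": 1})
    print(G.number_of_nodes(), G.number_of_edges())   # growth after 3 steps
    ```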

  17. Consensus pursuit of heterogeneous multi-agent systems under a directed acyclic graph

    NASA Astrophysics Data System (ADS)

    Yan, Jing; Guan, Xin-Ping; Luo, Xiao-Yuan

    2011-04-01

    This paper is concerned with the cooperative target pursuit problem by multiple agents based on a directed acyclic graph. The target appears at a random location and moves only when sensed by the agents, and the agents pursue the target once they detect its existence. Since the abilities of the agents may differ, we consider heterogeneous multi-agent systems. According to the topology of the multi-agent system, a novel consensus-based control law is proposed, where the target and the agents are modeled as a leader and followers, respectively. Based on Mason's rule and signal flow graph analysis, convergence conditions are provided to show that the agents can catch the target in a finite time. Finally, simulation studies are provided to verify the effectiveness of the proposed approach.

  18. Domain configurations in dislocations embedded hexagonal manganite systems: From the view of graph theory

    NASA Astrophysics Data System (ADS)

    Cheng, Shaobo; Zhang, Dong; Deng, Shiqing; Li, Xing; Li, Jun; Tan, Guotai; Zhu, Yimei; Zhu, Jing

    2018-04-01

    Topological defects and their interactions often arouse multiple types of emerging phenomena, from edge states in Skyrmions to disclination pairs in liquid crystals. In hexagonal manganites, partial edge dislocations, a prototype topological defect, are ubiquitous and significantly alter the topologically protected domains and their behaviors. Herein, combining electron microscopy experiment and graph theory analysis, we report a systematic study of the connections and configurations of domains in this dislocation embedded system. Rules for domain arrangement are established. The dividing line between domains, which can be attributed to the strain field of dislocations, is accurately described by a genus model from a higher dimension in graph theory. Our results open a door for the understanding of domain patterns in topologically protected multiferroic systems.

  19. Inferring ontology graph structures using OWL reasoning.

    PubMed

    Rodríguez-García, Miguel Ángel; Hoehndorf, Robert

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph. Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.

  20. Reflecting on Graphs: Attributes of Graph Choice and Construction Practices in Biology.

    PubMed

    Angra, Aakanksha; Gardner, Stephanie M

    2017-01-01

    Undergraduate biology education reform aims to engage students in scientific practices such as experimental design, experimentation, and data analysis and communication. Graphs are ubiquitous in the biological sciences, and creating effective graphical representations involves quantitative and disciplinary concepts and skills. Past studies document student difficulties with graphing within the contexts of classroom or national assessments without evaluating student reasoning. Operating under the metarepresentational competence framework, we conducted think-aloud interviews to reveal differences in reasoning and graph quality between undergraduate biology students, graduate students, and professors in a pen-and-paper graphing task. All professors planned and thought about data before graph construction. When reflecting on their graphs, professors and graduate students focused on the function of graphs and experimental design, while most undergraduate students relied on intuition and data provided in the task. Most undergraduate students meticulously plotted all data with scaled axes, while professors and some graduate students transformed the data, aligned the graph with the research question, and reflected on statistics and sample size. Differences in reasoning and approaches taken in graph choice and construction corroborate and extend previous findings and provide rich targets for undergraduate and graduate instruction. © 2017 A. Angra and S. M. Gardner. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  1. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software, and a software and hardware system in turn depends on the rules, that is, the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems governed by rules of use issued by the relevant agencies. The enumeration model is obtained based on software reliability calculation.

  2. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition times limit the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, images are compromised at high acceleration factors if they are reconstructed individually. We aim to improve the images with jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Besides, the proposed method outperforms single-image reconstruction with a joint sparsity constraint on multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under a graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploring the complementary information provided by multi-contrast MRI.
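
    The ℓ2,1-norm is what couples the contrasts: stacking each transform coefficient across contrasts as one row, its proximal step shrinks whole rows jointly. A sketch of that standard row-wise group soft-thresholding operator (the GBRWT and the full alternating-direction solver are not reproduced here):

    ```python
    import numpy as np

    def prox_l21(X, lam):
        """Proximal operator of lam * ||X||_{2,1}: each row (one coefficient
        across all contrasts) is shrunk toward zero jointly; rows with norm
        below lam are zeroed, enforcing a shared sparsity pattern."""
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        return scale * X

    X = np.array([[3.0, 4.0],      # strong coefficient in both contrasts
                  [0.1, 0.1]])     # weak coefficient
    print(prox_l21(X, lam=1.0))
    # [[2.4 3.2]
    #  [0.  0. ]]
    ```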

  3. Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes

    NASA Astrophysics Data System (ADS)

    Ianni, Giovambattista; Krennwallner, Thomas; Martello, Alessandra; Polleres, Axel

    RDF Schema (RDFS) as a lightweight ontology language is gaining popularity and, consequently, tools for scalable RDFS inference and querying are needed. SPARQL has recently become a W3C standard for querying RDF data, but it mostly provides means for querying simple RDF graphs only, whereas querying with respect to RDFS or other entailment regimes is left outside the current specification. In this paper, we show that SPARQL faces certain unwanted ramifications when querying ontologies in conjunction with RDF datasets that comprise multiple named graphs, and we provide an extension for SPARQL that remedies these effects. Moreover, since RDFS inference has a close relationship with logic rules, we generalize our approach to select a custom ruleset for specifying inferences to be taken into account in a SPARQL query. We show that our extensions are technically feasible by providing benchmark results for RDFS querying in our prototype system GiaBATA, which uses Datalog coupled with a persistent relational database as a back-end for implementing SPARQL with dynamic rule-based inference. By employing different optimization techniques like magic set rewriting, our system remains competitive with state-of-the-art RDFS querying systems.

  4. Canonic FFT flow graphs for real-valued even/odd symmetric inputs

    NASA Astrophysics Data System (ADS)

    Lao, Yingjie; Parhi, Keshab K.

    2017-12-01

    Canonic real-valued fast Fourier transform (RFFT) structures have been proposed to reduce the arithmetic complexity by eliminating redundancies. In a canonic N-point RFFT, the number of signal values at each stage is canonic with respect to the number of signal values, i.e., N. The major advantage of the canonic RFFTs is that they require the least number of butterfly operations and only real datapaths when mapped to architectures. In this paper, we consider the FFT computation whose inputs are not only real but also even/odd symmetric, which indeed leads to the well-known discrete cosine and sine transforms (DCTs and DSTs). Novel algorithms for generating the flow graphs of canonic RFFTs with even/odd symmetric inputs are proposed. It is shown that the proposed algorithms lead to canonic structures with N/2 + 1 signal values at each stage for an N-point real even symmetric FFT (REFFT) and N/2 - 1 signal values at each stage for an N-point real odd symmetric FFT (ROFFT). In order to remove butterfly operations, several twiddle factor transformations are proposed in this paper. We also discuss the design of canonic REFFT for any composite length. Performances of the canonic REFFT/ROFFT are also discussed. It is shown that the flow graph of a canonic REFFT/ROFFT has fewer interconnections, fewer butterfly operations, and fewer twiddle factor operations, compared to prior works.

  5. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation of video frames from unmanned aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision, multiple-object detection and tracking in real time is an important research field that has gained much attention in recent years, aimed at finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object through a video, and object representation is the step that precedes tracking. Detecting and identifying multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling events that can result in missed objects. To address these problems, this paper proposes a novel approach. Video frames are segmented using region adjacency graphs of visual appearance and geometric properties. Matching is then performed between graph sequences using multigraph matching, after which matched regions are labeled by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations and shows significant improvement over existing work on the real-time detection of multiple moving objects.

  6. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  7. AGM: A DSL for mobile cloud computing based on directed graph

    NASA Astrophysics Data System (ADS)

    Tanković, Nikola; Grbac, Tihana Galinac

    2016-06-01

    This paper summarizes a novel approach for consuming a domain specific language (DSL) by transforming it to a directed graph representation persisted in a graph database. Using such a specialized database enables advanced navigation through the stored model, exposing only relevant subsets of meta-data to the different services and components involved. We applied this approach in a mobile cloud computing system and used it to model several mobile applications in the retail, supply chain management and merchandising domains. These applications are distributed in a Software-as-a-Service (SaaS) fashion and used by thousands of customers in Croatia. We report on lessons learned and propose further research on this topic.

  8. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
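
    Components 1 and 2 can be made concrete on a toy model: local factors multiply into the joint distribution, and a posterior follows from the basic rules of probability. A sketch by exact enumeration (the distributed approximation algorithms of component 3 only matter at larger scales); the rain/sprinkler numbers are invented:

    ```python
    import itertools

    # Tiny probabilistic graph: rain -> wet <- sprinkler.
    p_rain = {0: 0.8, 1: 0.2}
    p_sprk = {0: 0.7, 1: 0.3}
    p_dry_given = {(0, 0): 0.99, (0, 1): 0.10, (1, 0): 0.20, (1, 1): 0.01}

    def joint(r, s, w):
        """Product of the local factors = joint distribution P(r, s, w)."""
        p_dry = p_dry_given[(r, s)]
        return p_rain[r] * p_sprk[s] * (p_dry if w == 0 else 1.0 - p_dry)

    # Assimilate the observation wet = 1: compute P(rain = 1 | wet = 1).
    num = sum(joint(1, s, 1) for s in (0, 1))
    den = sum(joint(r, s, 1) for r, s in itertools.product((0, 1), repeat=2))
    print(round(num / den, 3))    # posterior probability of rain given wet
    ```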

  9. A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.

    PubMed

    Hu, Shoubo; Chen, Zhitang; Chan, Laiwan

    2018-05-01

    Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables whose observations correspond to the kernel embeddings of the cause-and-effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.

  10. Adaptive random walks on the class of Web graphs

    NASA Astrophysics Data System (ADS)

    Tadić, B.

    2001-09-01

    We study random walks with adaptive move strategies on a class of directed graphs with variable wiring diagram. The graphs are grown from evolution rules compatible with the dynamics of the world-wide Web [B. Tadić, Physica A 293, 273 (2001)], and are characterized by a pair of power-law distributions of out- and in-degree for each value of the parameter β, which measures the degree of rewiring in the graph. The walker adapts its move strategy according to locally available information on both the out-degree of the visited node and the in-degree of the target node. A standard random walk, on the other hand, uses the out-degree only. We compute the distribution of connected subgraphs visited by an ensemble of walkers, the average access time and the survival probability of the walks. We discuss these properties of the walk dynamics relative to the changes in the global graph structure when the control parameter β is varied. For β ≥ 3, corresponding to the world-wide Web, the access time of the walk to a given level of hierarchy on the graph is much shorter compared to the standard random walk on the same graph. By reducing the amount of rewiring towards the rigidity limit β → βc ≲ 0.1, corresponding to the range of naturally occurring biochemical networks, the survival probabilities of the adaptive and the standard random walk become increasingly similar. The adaptive random walk can be used as an efficient message-passing algorithm on this class of graphs for a large degree of rewiring.

  11. Circuitbot

    DTIC Science & Technology

    2016-03-01

    constraints problem. Game rules described valid moves, allowing the player to generate a memory graph and perform improved C program verification. Subject terms: formal verification, static analysis, abstract interpretation, pointer analysis, fixpoint iteration. ... 3.4.12 Example: Game Play; 3.4.13 Verification

  12. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and one polynomial is hard to make simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of "true" coordinates? In addition, is it preferable to compare models using such polynomials with traditional numerical models with even higher exponents? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
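
    The contrast between approximating coordinates and approximating the rule can be sketched for spherical Mercator northing, a stand-in simplification (the LRA-model itself targets EPSG 4326 to EPSG 32650 with ellipsoidal formulas): per-cell linear polynomials over a graticule give a forward transformation, and reading the same table backwards gives a consistent inverse.

    ```python
    import numpy as np

    R = 6378137.0                         # sphere radius (WGS 84 semi-major axis)

    def mercator_y(lat_deg):
        """Exact forward rule for spherical Mercator northing."""
        phi = np.radians(lat_deg)
        return R * np.log(np.tan(np.pi / 4.0 + phi / 2.0))

    grid = np.arange(0.0, 61.0, 1.0)      # 1-degree graticule (0..60 N)
    ys = mercator_y(grid)                 # the rule sampled at the grid nodes

    def forward(lat):                     # linear polynomial within each cell
        i = int(np.clip(np.searchsorted(grid, lat) - 1, 0, len(grid) - 2))
        w = (lat - grid[i]) / (grid[i + 1] - grid[i])
        return (1 - w) * ys[i] + w * ys[i + 1]

    def inverse(y):                       # same table read backwards
        i = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
        w = (y - ys[i]) / (ys[i + 1] - ys[i])
        return (1 - w) * grid[i] + w * grid[i + 1]

    lat = 39.9042
    print(forward(lat) - mercator_y(lat))   # cell-size-dependent approximation error
    print(inverse(forward(lat)) - lat)      # ~0: forward and inverse are consistent
    ```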

  13. CONSTRUCTING AND DERIVING RECIPROCAL TRIGONOMETRIC RELATIONS: A FUNCTIONAL ANALYTIC APPROACH

    PubMed Central

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires. PMID:19949509

  14. Constructing and deriving reciprocal trigonometric relations: a functional analytic approach.

    PubMed

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires.

  15. Domain configurations in dislocations embedded hexagonal manganite systems: From the view of graph theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Shaobo; Zhang, Dong; Deng, Shiqing

    Topological defects and their interactions often give rise to multiple types of emergent phenomena, from edge states in Skyrmions to disclination pairs in liquid crystals. In hexagonal manganites, partial edge dislocations, a prototypical topological defect, are ubiquitous, and they significantly alter the topologically protected domains and their behaviors. In this work, combining electron microscopy experiments and graph theory analysis, we report a systematic study of the connections and configurations of domains in this dislocation-embedded system. Rules for domain arrangement are established. The dividing line between domains, which can be attributed to the strain field of dislocations, is accurately described by a genus model from a higher dimension in graph theory. In conclusion, our results open a door to understanding domain patterns in topologically protected multiferroic systems.

  16. How mutation affects evolutionary games on graphs

    PubMed Central

    Allen, Benjamin; Traulsen, Arne; Tarnita, Corina E.; Nowak, Martin A.

    2011-01-01

    Evolutionary dynamics are affected by population structure, mutation rates and update rules. Spatial or network structure facilitates the clustering of strategies, which represents a mechanism for the evolution of cooperation. Mutation dilutes this effect. Here we analyze how mutation influences evolutionary clustering on graphs. We introduce new mathematical methods to evolutionary game theory, specifically the analysis of coalescing random walks via generating functions. These techniques allow us to derive exact identity-by-descent (IBD) probabilities, which characterize spatial assortment on lattices and Cayley trees. From these IBD probabilities we obtain exact conditions for the evolution of cooperation and other game strategies, showing the dual effects of graph topology and mutation rate. High mutation rates diminish the clustering of cooperators, hindering their evolutionary success. Our model can represent either genetic evolution with mutation, or social imitation processes with random strategy exploration. PMID:21473871

  17. Domain configurations in dislocations embedded hexagonal manganite systems: From the view of graph theory

    DOE PAGES

    Cheng, Shaobo; Zhang, Dong; Deng, Shiqing; ...

    2018-04-19

    Topological defects and their interactions often give rise to multiple types of emergent phenomena, from edge states in Skyrmions to disclination pairs in liquid crystals. In hexagonal manganites, partial edge dislocations, a prototypical topological defect, are ubiquitous, and they significantly alter the topologically protected domains and their behaviors. In this work, combining electron microscopy experiments and graph theory analysis, we report a systematic study of the connections and configurations of domains in this dislocation-embedded system. Rules for domain arrangement are established. The dividing line between domains, which can be attributed to the strain field of dislocations, is accurately described by a genus model from a higher dimension in graph theory. In conclusion, our results open a door to understanding domain patterns in topologically protected multiferroic systems.

  18. An overview of quality control practices in Ontario with particular reference to cholesterol analysis.

    PubMed

    Krishnan, S; Webb, S; Henderson, A R; Cheung, C M; Nazir, D J; Richardson, H

    1999-03-01

    The Laboratory Proficiency Testing Program (LPTP) assesses the analytical performance of all licensed laboratories in Ontario. The LPTP Enzymes, Cardiac Markers, and Lipids Committee conducted a "Patterns of Practice" survey to assess the in-house quality control (QC) practices of laboratories in Ontario using cholesterol as the QC paradigm. The survey was questionnaire-based seeking information on statistical calculations, software rules, review process and data retention, and so on. Copies of the in-house cholesterol QC graphs were requested. A total of 120 of 210 laboratories were randomly chosen to receive the questionnaires during 1995 and 1996; 115 laboratories responded, although some did not answer all questions. The majority calculate means and standard deviations (SD) every month, using anywhere from 4 to >100 data points. 65% use a fixed mean and SD, while 17% use means calculated from the previous month. A few use a floating or cumulative mean. Some laboratories that do not use fixed means use a fixed SD. About 90% use some form of statistical quality control rules. The most common rules used to detect random error are 1(3s)/R4s while 2(2s)/4(1s)/10x are used for systematic errors. About 20% did not assay any QC at levels >5.5 mmol/L. Quality control data are reviewed daily (technologists), weekly and monthly (supervisors/directors). Most laboratories retain their QC records for up to 3 years on paper and magnetic media. On some QC graphs the mean and SD, QC product lot number, or reference to action logs are not apparent. Quality control practices in Ontario are, therefore, disappointing. Improvement is required in the use of clinically appropriate concentrations of QC material and documentation on QC graphs.
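
    For readers unfamiliar with the rule notation, the sketch below illustrates two of the cited control rules (Westgard-style 1-3s and 2-2s) applied to a short run of QC results against a fixed mean and SD. The values are invented for illustration; the code is not part of the survey.

```python
# Sketch of two statistical QC rules named in the survey.
import numpy as np

def rule_1_3s(z):
    """Random-error flag: any single result beyond +/- 3 SD."""
    return np.abs(z) > 3

def rule_2_2s(z):
    """Systematic-error flag: two consecutive results beyond +/- 2 SD
    on the same side of the mean."""
    flags = np.zeros_like(z, dtype=bool)
    for i in range(1, len(z)):
        if abs(z[i]) > 2 and abs(z[i - 1]) > 2 and np.sign(z[i]) == np.sign(z[i - 1]):
            flags[i] = True
    return flags

mean, sd = 5.2, 0.15  # fixed target values, as most surveyed labs used
qc = np.array([5.25, 5.18, 5.55, 5.58, 5.70, 5.21])  # QC results in mmol/L
z = (qc - mean) / sd
print("1-3s violations at indices:", np.where(rule_1_3s(z))[0])
print("2-2s violations at indices:", np.where(rule_2_2s(z))[0])
```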

  19. Sequential visibility-graph motifs

    NASA Astrophysics Data System (ADS)

    Iacovacci, Jacopo; Lacasa, Lucas

    2016-04-01

    Visibility algorithms transform time series into graphs and encode dynamical information in their topology, paving the way for graph-theoretical time series analysis as well as building a bridge between nonlinear dynamics and network science. In this work we introduce and study the concept of sequential visibility-graph motifs, smaller substructures of n consecutive nodes that appear with characteristic frequencies. We develop a theory to compute in an exact way the motif profiles associated with general classes of deterministic and stochastic dynamics. We find that this simple property is indeed a highly informative and computationally efficient feature capable of distinguishing among different dynamics and robust against noise contamination. We finally confirm that it can be used in practice to perform unsupervised learning, by extracting motif profiles from experimental heart-rate series and being able, accordingly, to disentangle meditative from other relaxation states. Applications of this general theory include the automatic classification and description of physical, biological, and financial time series.
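
    As a concrete anchor for the visibility-algorithm idea, here is a minimal sketch of the natural visibility criterion: two samples are linked when every intermediate sample lies strictly below the straight line joining them. The motif-profile machinery of the paper is not reproduced here.

```python
# Natural visibility graph of a time series (brute-force sketch).
import itertools
import numpy as np

def visibility_graph(y):
    """Return the edge list of the natural visibility graph of series y."""
    t = np.arange(len(y))
    edges = []
    for a, b in itertools.combinations(range(len(y)), 2):
        # (a, b) are linked iff every intermediate point c lies strictly
        # below the straight line joining (t[a], y[a]) and (t[b], y[b]).
        if all(y[c] < y[a] + (y[b] - y[a]) * (t[c] - t[a]) / (t[b] - t[a])
               for c in range(a + 1, b)):
            edges.append((a, b))
    return edges

series = np.random.default_rng(0).random(50)
print(len(visibility_graph(series)), "edges")
```

    A sequential motif profile could then be obtained by sliding a window of n consecutive nodes over the resulting graph and tallying the induced adjacency patterns.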

  20. Bayesian network ensemble as a multivariate strategy to predict radiation pneumonitis risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkyu, E-mail: sangkyu.lee@mail.mcgill.ca; Ybarra, Norma; Jeyaseelan, Krishinima

    2015-05-15

    Purpose: Prediction of radiation pneumonitis (RP) has been shown to be challenging due to the involvement of a variety of factors including dose–volume metrics and radiosensitivity biomarkers. Some of these factors are highly correlated and might affect prediction results when combined. Bayesian network (BN) provides a probabilistic framework to represent variable dependencies in a directed acyclic graph. The aim of this study is to integrate the BN framework and a systems biology approach to detect possible interactions among RP risk factors and exploit these relationships to enhance both the understanding and prediction of RP. Methods: The authors studied 54 non-small-cell lung cancer patients who received curative 3D-conformal radiotherapy. Nineteen RP events were observed (common toxicity criteria for adverse events grade 2 or higher). Serum concentrations of the following four candidate biomarkers were measured at baseline and midtreatment: alpha-2-macroglobulin, angiotensin converting enzyme (ACE), transforming growth factor, and interleukin-6. Dose-volumetric and clinical parameters were also included as covariates. Feature selection was performed using a Markov blanket approach based on the Koller–Sahami filter. The Markov chain Monte Carlo technique estimated the posterior distribution of BN graphs built from the observed data of the selected variables and causality constraints. RP probability was estimated using a limited number of high posterior graphs (ensemble) and was averaged for the final RP estimate using Bayes' rule. A resampling method based on bootstrapping was applied to model training and validation in order to control under- and overfit pitfalls. Results: RP prediction power of the BN ensemble approach reached its optimum at a size of 200. The optimized performance of the BN model recorded an area under the receiver operating characteristic curve (AUC) of 0.83, which was significantly higher than multivariate logistic regression (0.77), mean heart dose (0.69), and a pre-to-midtreatment change in ACE (0.66). When RP prediction was made only with pretreatment information, the AUC ranged from 0.76 to 0.81 depending on the ensemble size. Bootstrap validation of graph features in the ensemble quantified confidence of association between variables in the graphs, where ten interactions were statistically significant. Conclusions: The presented BN methodology provides the flexibility to model hierarchical interactions between RP covariates, which is applied to probabilistic inference on RP. The authors' preliminary results demonstrate that such a framework combined with an ensemble method can possibly improve prediction of RP under real-life clinical circumstances such as missing data or treatment plan adaptation.
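
    The ensemble step admits a compact illustration. The sketch below, with invented numbers rather than the study's models, averages per-graph RP probabilities weighted by each graph's posterior, which is the Bayes-rule model averaging the abstract describes:

```python
# Posterior-weighted model averaging over an ensemble of BN graphs (sketch).
import numpy as np

def ensemble_rp_probability(graph_probs, graph_posteriors):
    """graph_probs[k]      = P(RP = 1 | data, graph k)  (per-graph prediction)
       graph_posteriors[k] = P(graph k | data)           (unnormalized is fine)"""
    w = np.asarray(graph_posteriors, dtype=float)
    w /= w.sum()  # renormalize over the retained high-posterior graphs
    return float(np.dot(w, graph_probs))

# Hypothetical three-graph ensemble: predictions and (unnormalized) posteriors.
p = ensemble_rp_probability([0.62, 0.48, 0.55], [0.05, 0.03, 0.02])
print(f"ensemble RP risk: {p:.3f}")
```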

  1. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

    An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor, which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.

  2. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  3. A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes

    NASA Astrophysics Data System (ADS)

    Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria

    In this paper we discuss the importance of ensuring that business processes are both robust and agile. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through some self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected in these processes. To this end we propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized as a set of rules that implement an organization's policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Post condition-post Event). This formalism allows translating a process into a graph of rules that can be analyzed in terms of reliability and flexibility.

  4. Using graph approach for managing connectivity in integrative landscape modelling

    NASA Astrophysics Data System (ADS)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

    In cultivated landscapes, landscape elements such as field boundaries, ditches, or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and graph evolution during simulation is possible (modification of connections or elements). This graph approach allows better genericity in landscape representation, management of complex connections, and easier development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. Graph representation can be managed (i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format, or (ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topological vector, and gdal raster libraries). The OpenFLUID-landr library has been developed (i) to be usable with no GIS expert skills (common GIS formats can be read, and simplified spatial management is provided), (ii) to ease the development of adapted rules for landscape discretization and graph creation that follow spatialized model requirements, and (iii) to allow model developers to manage dynamic and complex spatial topology. Graph management in OpenFLUID is illustrated with (i) examples of hydrological modelling on complex farmed landscapes and (ii) the new implementation of the Geo-MHYDAS tool, based on the OpenFLUID-landr library, which discretizes a landscape and creates the graph structure required by the MHYDAS model.

  5. Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.

    PubMed

    Lee, Donghwa; Myung, Hyun

    2014-07-11

    In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. The low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in the low dynamic environments, robots have difficulty recognizing the repositioning of objects unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environments then cause groups of false loop closing when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, that represent robot poses, are grouped according to the grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.

  6. Dynamics of Nearest-Neighbour Competitions on Graphs

    NASA Astrophysics Data System (ADS)

    Rador, Tonguç

    2017-10-01

    Considering a collection of agents representing the vertices of a graph endowed with integer points, we study the asymptotic dynamics of the rate of increase of their points according to a very simple rule: we randomly pick an edge of the graph, which unambiguously defines two agents, and give a point to the agent with the larger point count with probability p and to the laggard with probability q such that p+q=1. The model we present is the most general version of the nearest-neighbour competition model introduced by Ben-Naim, Vazquez and Redner. We show that the model combines aspects of hyperbolic partial differential equations - such as conservation laws - graph colouring, and hyperplane arrangements. We discuss the properties of the model for general graphs but confine the in-depth study to d-dimensional tori. We present a detailed study of the ring graph, including a chemical-potential approximation that calculates all of its statistics with rather accurate results. The two-dimensional torus, not studied in as much depth as the ring, is shown to possess critical behaviour in that the asymptotic speeds arrange themselves in two-coloured islands separated by borders of three other colours, and the sizes of the islands obey a power-law distribution. We also show that in the large-d limit the d-dimensional torus shows an inverse-sine law for the distribution of asymptotic speeds.
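
    The update rule is simple enough to simulate directly. A minimal sketch on the ring graph follows; the parameters are illustrative, and ties are broken deterministically here, which is a simplification rather than part of the original model.

```python
# Nearest-neighbour competition on a ring graph (direct simulation sketch).
import numpy as np

def simulate_ring(n=100, p=0.8, steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    points = np.zeros(n, dtype=np.int64)
    for _ in range(steps):
        i = int(rng.integers(n))      # random edge (i, i+1) on the ring
        j = (i + 1) % n
        # Leader gets a point with probability p, laggard with q = 1 - p.
        # Ties are resolved in favor of i (a simplifying assumption).
        leader, laggard = (i, j) if points[i] >= points[j] else (j, i)
        if rng.random() < p:
            points[leader] += 1
        else:
            points[laggard] += 1
    return points / steps             # empirical asymptotic speeds

speeds = simulate_ring()
print("speed range:", speeds.min(), "to", speeds.max())
```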

  7. Transformation of Arden Syntax's medical logic modules into ArdenML for a business rules management system.

    PubMed

    Jung, Chai Young; Choi, Jong-Ye; Jeong, Seong Jik; Cho, Kyunghee; Koo, Yong Duk; Bae, Jin Hee; Kim, Sukil

    2016-05-16

    Arden Syntax is a Health Level Seven International (HL7) standard language that is used for representing medical knowledge as logic statements. Arden Syntax Markup Language (ArdenML) is a new representation of Arden Syntax based on XML. Compilers are required to execute medical logic modules (MLMs) in the hospital environment. However, ArdenML may also replace the compiler. The purpose of this study is to demonstrate that MLMs, encoded in ArdenML, can be transformed into a commercial rule engine format through an XSLT stylesheet and made executable in a target system. The target rule engine selected was Blaze Advisor. We developed an XSLT stylesheet to transform MLMs in ArdenML into Structured Rules Language (SRL) in Blaze Advisor, through a comparison of syntax between the two languages. The stylesheet was then refined recursively, by building and applying rules collected from the billing and coding guidelines of the Korean health insurance service. Two nurse coders collected and verified the rules and two information technology (IT) specialists encoded the MLMs and built the XSLT stylesheet. Finally, the stylesheet was validated by importing the MLMs into Blaze Advisor and applying them to claims data. The language comparison revealed that Blaze Advisor requires the declaration of variables with explicit types. We used both integer and real numbers for numeric types in ArdenML. "IF∼THEN" statements and assignment statements in ArdenML become rules in Blaze Advisor. We designed an XSLT stylesheet to solve this issue. In addition, we maintained the order of rule execution in the transformed rules, and added two small programs to support variable declarations and action statements. A total of 1489 rules were reviewed during this study, of which 324 rules were collected. We removed duplicate rules and encoded 241 unique MLMs in ArdenML, which were successfully transformed into SRL and imported to Blaze Advisor via the XSLT stylesheet. When applied to 73,841 outpatients' insurance claims data, the review result was the same as that of the legacy system. We have demonstrated that ArdenML can replace a compiler for transforming MLMs into commercial rule engine format. While the proposed XSLT stylesheet requires refinement for general use, we anticipate that the development of further XSLT stylesheets will support various rule engines.
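
    As a generic illustration of this kind of pipeline, an XSLT transformation of an ArdenML document can be driven from Python with lxml. The file names and the stylesheet below are hypothetical placeholders, not the authors' artifacts.

```python
# Applying an XSLT stylesheet to an XML-encoded MLM (generic sketch).
from lxml import etree

mlm = etree.parse("mlm_example.ardenml.xml")   # an MLM encoded in ArdenML (XML)
xslt = etree.parse("ardenml_to_srl.xslt")      # hypothetical ArdenML -> SRL stylesheet
transform = etree.XSLT(xslt)                   # compile the stylesheet once
srl = transform(mlm)                           # produce rule-engine text
print(str(srl))
```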

  8. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    PubMed

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
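
    A toy example makes the restatement concrete. The sketch below encodes a two-site system as an s-t flow network with invented energies (not taken from the paper) and reads the optimal state assignment off the minimum cut:

```python
# "Maximum flow-minimum cut" restatement of a tiny two-site energy model.
import networkx as nx

G = nx.DiGraph()
# Source/sink encode the two possible site states (e.g., protonated vs.
# deprotonated); capacities are illustrative energetic penalties.
G.add_edge("s", "site1", capacity=2.0)
G.add_edge("s", "site2", capacity=1.0)
G.add_edge("site1", "t", capacity=1.5)
G.add_edge("site2", "t", capacity=2.5)
G.add_edge("site1", "site2", capacity=0.5)  # pairwise interaction energy

cut_value, (s_side, t_side) = nx.minimum_cut(G, "s", "t")
print("minimum energy (cut value):", cut_value)
print("state assignment:", s_side, "|", t_side)
```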

  9. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purvine, Emilie AH; Monson, Kyle E.; Jurrus, Elizabeth R.

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of maximum flow-minimum cut graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.

  10. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    PubMed Central

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A.

    2016-01-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of “maximum flow-minimum cut” graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered. PMID:27089174

  11. Modeling for (physical) biologists: an introduction to the rule-based approach

    PubMed Central

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-01-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138

  12. Ellipsoidal fuzzy learning for smart car platoons

    NASA Astrophysics Data System (ADS)

    Dickerson, Julie A.; Kosko, Bart

    1993-12-01

    A neural-fuzzy system combined supervised and unsupervised learning to find and tune fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy-rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids, or more certain rules. Sparse data gave larger ellipsoids, or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches; it locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart-car controller.

  13. 16 CFR 1025.33 - Production of documents and things.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Section 1025.33 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION GENERAL RULES OF PRACTICE FOR... (including writings, drawings, graphs, charts, photographs, phono-records, and any other data compilation..., custody, or control of the party upon whom the request is served, or (2) To permit entry upon designated...

  14. Fact Sheets and Additional Information Regarding the 2006 Particulate Matter (PM) National Ambient Air Quality Standards (NAAQS)

    EPA Pesticide Factsheets

    This page contains a fact sheet, a presentation providing an overview of the rule, and graphs and maps pertaining to the new standards that are supplementary to the October 2006 revision for the Particulate Matter (PM) NAAQS

  15. Descriptive statistics and spatial distributions of geochemical variables associated with manganese oxide-rich phases in the northern Pacific

    USGS Publications Warehouse

    Botbol, Joseph Moses; Evenden, Gerald Ian

    1989-01-01

    Tables, graphs, and maps are used to portray the frequency characteristics and spatial distribution of manganese oxide-rich phase geochemical data, to characterize the northern Pacific in terms of publicly available nodule geochemical data, and to develop data portrayal methods that will facilitate data analysis. Source data are a subset of the Scripps Institute of Oceanography's Sediment Data Bank. The study area is bounded by 0° N., 40° N., 120° E., and 100° W. and is arbitrarily subdivided into 14 20°x20° geographic subregions. Frequency distributions of trace metals characterized in the original raw data are graphed as ogives, and salient parameters are tabulated. All variables are transformed to enrichment values relative to the median concentration within their host subregions. Scatter plots of all pairs of original variables and their enrichment transforms are provided as an aid to the interpretation of correlations between variables. Gridded spatial distributions of all variables are portrayed as gray-scale maps. The use of tables and graphs to portray frequency statistics and gray-scale maps to portray spatial distributions is an effective way to prepare for and facilitate multivariate data analysis.
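
    The enrichment transform described here is straightforward to reproduce. A minimal sketch, with hypothetical column names and invented values:

```python
# Enrichment relative to the median concentration within each host subregion.
import pandas as pd

df = pd.DataFrame({
    "subregion": ["A", "A", "A", "B", "B"],
    "Mn_pct":    [18.2, 22.5, 30.1, 15.0, 24.3],   # illustrative concentrations
})
# Each observation divided by its subregion's median concentration.
df["Mn_enrichment"] = df["Mn_pct"] / df.groupby("subregion")["Mn_pct"].transform("median")
print(df)
```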

  16. Parallel solution of closely coupled systems

    NASA Technical Reports Server (NTRS)

    Utku, S.; Salama, M.

    1986-01-01

    The odd-even permutation and associated unitary transformations for reordering the coefficient matrix A are employed as means of breaking the strong seriality characteristic of closely coupled systems. The nested dissection technique is also reviewed, and the equivalence between reordering A and dissecting its network is established. The effect of transforming A with the odd-even permutation on its topology and the topology of its Cholesky factors is discussed. This leads to the construction of directed graphs showing the computational steps required for factoring A, their precedence relationships, and their sequential and concurrent assignment to the available processors. Expressions for the speed-up and efficiency of using N processors in parallel, relative to the sequential use of a single processor, are derived from the directed graph. Similar expressions are also derived when the number of available processors is fewer than required.

  17. Segmentation of touching handwritten Japanese characters using the graph theory method

    NASA Astrophysics Data System (ADS)

    Suwa, Misako

    2000-12-01

    Projection analysis methods have been widely used to segment Japanese character strings. However, if adjacent characters have overhanging strokes or a touching point does not correspond to the histogram minimum, these methods are prone to errors. In contrast, non-projection analysis methods proposed for use on numerals or alphabetic characters cannot simply be applied to Japanese characters because of differences in the structure of the characters. Based on the oversegmenting strategy, a new pre-segmentation method is presented in this paper: touching patterns are represented as graphs and touching strokes are regarded as the elements of proper edge cutsets. Using the graph theoretical technique, the cutset matrix is calculated. Then, by applying pruning rules, potential touching strokes are determined and the patterns are oversegmented. Moreover, this algorithm was confirmed in simulations to be valid for touching patterns with overhanging strokes and doubly connected patterns.

  18. Data and graph interpretation practices among preservice science teachers

    NASA Astrophysics Data System (ADS)

    Bowen, G. Michael; Roth, Wolff-Michael

    2005-12-01

    The interpretation of data and construction and interpretation of graphs are central practices in science, which, according to recent reform documents, science and mathematics teachers are expected to foster in their classrooms. However, are (preservice) science teachers prepared to teach inquiry with the purpose of transforming and analyzing data, and interpreting graphical representations? That is, are preservice science teachers prepared to teach data analysis and graph interpretation practices that scientists use by default in their everyday work? The present study was designed to answer these and related questions. We investigated the responses of preservice elementary and secondary science teachers to data and graph interpretation tasks. Our investigation shows that, despite considerable preparation, and for many, despite bachelor of science degrees, preservice teachers do not enact the (authentic) practices that scientists routinely do when asked to interpret data or graphs. Detailed analyses are provided of what data and graph interpretation practices actually were enacted. We conclude that traditional schooling emphasizes particular beliefs in the mathematical nature of the universe that make it difficult for many individuals to deal with data possessing the random variation found in measurements of natural phenomena. The results suggest that preservice teachers need more experience in engaging in data and graph interpretation practices originating in activities that provide the degree of variation in and complexity of data present in realistic investigations.

  19. From data towards knowledge: revealing the architecture of signaling systems by unifying knowledge mining and data mining of systematic perturbation data.

    PubMed

    Lu, Songjian; Jin, Bo; Cowart, L Ashley; Lu, Xinghua

    2013-01-01

    Genetic and pharmacological perturbation experiments, such as deleting a gene and monitoring gene expression responses, are powerful tools for studying cellular signal transduction pathways. However, it remains a challenge to automatically derive knowledge of a cellular signaling system at a conceptual level from systematic perturbation-response data. In this study, we explored a framework that unifies knowledge mining and data mining towards the goal. The framework consists of the following automated processes: 1) applying an ontology-driven knowledge mining approach to identify functional modules among the genes responding to a perturbation in order to reveal potential signals affected by the perturbation; 2) applying a graph-based data mining approach to search for perturbations that affect a common signal; and 3) revealing the architecture of a signaling system by organizing signaling units into a hierarchy based on their relationships. Applying this framework to a compendium of yeast perturbation-response data, we have successfully recovered many well-known signal transduction pathways; in addition, our analysis has led to many new hypotheses regarding the yeast signal transduction system; finally, our analysis automatically organized perturbed genes as a graph reflecting the architecture of the yeast signaling system. Importantly, this framework transformed molecular findings from a gene level to a conceptual level, which can be readily translated into computable knowledge in the form of rules regarding the yeast signaling system, such as "if genes involved in the MAPK signaling are perturbed, genes involved in pheromone responses will be differentially expressed."

  20. Numerical Estimation of Information Theoretic Measures for Large Data Sets

    DTIC Science & Technology

    2013-01-30


  1. Linking Models: Reasoning from Patterns to Tables and Equations

    ERIC Educational Resources Information Center

    Switzer, J. Matt

    2013-01-01

    Patterns are commonly used in middle years mathematics classrooms to teach students about functions and modelling with tables, graphs, and equations. Grade 6 students are expected to, "continue and create sequences involving whole numbers, fractions and decimals," and "describe the rule used to create the sequence." (Australian…

  2. Recently Developed Formulations of the Inverse Problem in Acoustics and Electromagnetics

    DTIC Science & Technology

    1974-12-01


  3. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems

    PubMed Central

    2016-01-01

    Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of “ODEs and formalized flow diagrams” as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler’s behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features. PMID:27270918
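
    The translation rule lends itself to a short demonstration. The sketch below uses a standard SIR model rather than an example from the paper: each flow draws a parcel from its source species and deposits it into its target species, and each species' ODE is the sum of in-flows minus out-flows.

```python
# Flow-diagram-to-ODE translation in the spirit of the schema (SIR sketch).
from scipy.integrate import solve_ivp

# Each flow: (source species, target species, rate as a function of the state).
flows = [
    ("S", "I", lambda y: 0.3 * y["S"] * y["I"]),  # infection
    ("I", "R", lambda y: 0.1 * y["I"]),           # recovery
]
species = ["S", "I", "R"]

def rhs(t, yvec):
    y = dict(zip(species, yvec))
    dy = {s: 0.0 for s in species}
    for src, dst, rate in flows:
        r = rate(y)
        dy[src] -= r   # parcel drawn from the source species...
        dy[dst] += r   # ...and deposited into the target species
    return [dy[s] for s in species]

sol = solve_ivp(rhs, (0, 100), [0.99, 0.01, 0.0], max_step=1.0)
print("final S, I, R:", sol.y[:, -1])
```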

  4. Design Options for Multimodal Web Applications

    NASA Astrophysics Data System (ADS)

    Stanciulescu, Adrian; Vanderdonckt, Jean

    The capabilities of multimodal applications running on the web are well delineated, since they are mainly constrained by what their underlying standard markup language offers, as opposed to hand-made multimodal applications. As experience in developing such multimodal web applications grows, the need arises to identify and define the major design options of such applications, to pave the way to a structured development life cycle. This paper provides a design space of independent design options for multimodal web applications based on three types of modalities - graphical, vocal, and tactile - and their combinations. On the one hand, these design options may provide designers with explicit guidance on what to decide (or not) for their future user interface while exploring various design alternatives. On the other hand, these design options have been implemented as graph transformations performed on a user interface model represented as a graph. Thanks to a transformation engine, designers can play with the different values of each design option, preview the results of the transformation, and obtain the corresponding code on demand.

  5. Constructing a Graph Database for Semantic Literature-Based Discovery.

    PubMed

    Hristovski, Dimitar; Kastrin, Andrej; Dinevski, Dejan; Rindflesch, Thomas C

    2015-01-01

    Literature-based discovery (LBD) generates discoveries, or hypotheses, by combining what is already known in the literature. Potential discoveries have the form of relations between biomedical concepts; for example, a drug may be determined to treat a disease other than the one for which it was intended. LBD views the knowledge in a domain as a network; a set of concepts along with the relations between them. As a starting point, we used SemMedDB, a database of semantic relations between biomedical concepts extracted with SemRep from Medline. SemMedDB is distributed as a MySQL relational database, which has some problems when dealing with network data. We transformed and uploaded SemMedDB into the Neo4j graph database, and implemented the basic LBD discovery algorithms with the Cypher query language. We conclude that storing the data needed for semantic LBD is more natural in a graph database. Also, implementing LBD discovery algorithms is conceptually simpler with a graph query language when compared with standard SQL.
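
    A minimal sketch of what a discovery query against such a graph store might look like, using the official Neo4j Python driver; the node labels, relation types, seed drug, and credentials are hypothetical placeholders rather than SemMedDB's actual schema.

```python
# One-step literature-based discovery over a Neo4j graph of subject-relation-
# object triples (schema and data are illustrative assumptions).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (x:Concept {name: $drug})-[:INTERACTS_WITH]->(y:Concept)
      -[:ASSOCIATED_WITH]->(z:Concept {semtype: 'dsyn'})
WHERE NOT (x)-[:TREATS]->(z)
RETURN DISTINCT z.name AS candidate_disease
"""

with driver.session() as session:
    # Candidate discoveries: diseases reachable through an intermediate
    # concept but not yet directly linked to the drug in the literature.
    for record in session.run(query, drug="metformin"):
        print(record["candidate_disease"])
driver.close()
```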

  6. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error-control coding, we transform the SBS problem into an LDPC-like problem through a factor graph instead of using conventional neural network approaches. Within the factor graph framework, soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processing nodes via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.

  7. Classification of pregnancy and labor contractions using a graph theory based analysis.

    PubMed

    Nader, N; Hassan, M; Falou, W; Diab, A; Al-Omar, S; Khalil, M; Marque, C

    2015-08-01

    In this paper, we propose a new framework to characterize electrohysterographic (EHG) signals recorded during pregnancy and labor. The approach is based on the analysis of the propagation of the uterine electrical activity. The processing pipeline includes (i) the estimation of the statistical dependencies between the different recorded EHG signals, (ii) the characterization of the obtained connectivity matrices using network measures, and (iii) the use of these measures in a clinical application: the classification between pregnancy and labor. Due to its robustness to volume conduction, we used the imaginary part of coherence to produce the connectivity matrix, which is then transformed into a graph. We evaluate the performance of several graph measures. We also compare the results with the parameters most used in the literature: the peak frequency combined with the propagation velocity (PV+PF). Our results show that the use of network measures is a promising tool to classify labor and pregnancy contractions, with a small superiority of the graph strength over PV+PF.
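
    Steps (i)-(iii) can be sketched compactly: estimate the imaginary part of coherency from cross- and auto-spectra, assemble a weighted connectivity matrix, and compute a node-strength measure on the resulting graph. The synthetic signals and parameters below are illustrative stand-ins for EHG channels.

```python
# Imaginary coherency -> connectivity matrix -> graph strength (sketch).
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(1)
fs, n = 200.0, 4000
common = rng.standard_normal(n)
# Four synthetic "channels": two share a common source (one lagged, so the
# coupling survives the imaginary-coherency measure), two are pure noise.
sig = np.stack([
    common + 0.5 * rng.standard_normal(n),
    np.roll(common, 5) + 0.5 * rng.standard_normal(n),
    rng.standard_normal(n),
    rng.standard_normal(n),
])

def imag_coherency(a, b):
    """Mean |Im(coherency)| across frequencies; insensitive to zero-lag mixing."""
    _, Sab = csd(a, b, fs=fs, nperseg=512)
    _, Saa = welch(a, fs=fs, nperseg=512)
    _, Sbb = welch(b, fs=fs, nperseg=512)
    return float(np.mean(np.abs(np.imag(Sab / np.sqrt(Saa * Sbb)))))

m = len(sig)
W = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        W[i, j] = W[j, i] = imag_coherency(sig[i], sig[j])

strength = W.sum(axis=1)  # node "strength": sum of edge weights per channel
print(np.round(strength, 3))
```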

  8. Transformation of Graphical ECA Policies into Executable PonderTalk Code

    NASA Astrophysics Data System (ADS)

    Romeikat, Raphael; Sinsel, Markus; Bauer, Bernhard

    Rules are becoming more and more important in business modeling and systems engineering and are recognized as a high-level programming paradigm. For the effective development of rules it is desirable to start at a high level, e.g. with graphical rules, and to refine them later into code of a particular rule language for implementation purposes. A model-driven approach is presented in this paper to transform graphical rules into executable code in a fully automated way. The focus is on event-condition-action policies as a special rule type. These are modeled graphically and translated into the PonderTalk language. The approach may be extended to integrate other rule types and languages as well.

  9. Algorithm Diversity for Resilent Systems

    DTIC Science & Technology

    2016-06-27

    Subject terms: computer security, software diversity, program transformation. The report describes a systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity ... worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different data structures.

  10. Enhanced image fusion using directional contrast rules in fuzzy transform domain.

    PubMed

    Nandal, Amita; Rosales, Hamurabi Gamboa

    2016-01-01

    In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed into blocks of the original size using the inverse FTR. These inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.

  11. Managing changes in distributed biomedical ontologies using hierarchical distributed graph transformation.

    PubMed

    Shaban-Nejad, Arash; Haarslev, Volker

    2015-01-01

    The issue of ontology evolution and change management is inadequately addressed by available tools and algorithms, mostly due to the lack of suitable knowledge representation formalisms to deal with temporal abstract notations and the overreliance on human factors. Also most of the current approaches have been focused on changes within the internal structure of ontologies and interactions with other existing ontologies have been widely neglected. In our research, after revealing and classifying some of the common alterations in a number of popular biomedical ontologies, we present a novel agent-based framework, Represent, Legitimate and Reproduce (RLR), to semi-automatically manage the evolution of bio-ontologies, with emphasis on the FungalWeb Ontology, with minimal human intervention. RLR assists and guides ontology engineers through the change management process in general and aids in tracking and representing the changes, particularly through the use of category theory and hierarchical graph transformation.

  12. Born approximation in linear-time invariant system

    NASA Astrophysics Data System (ADS)

    Gumjudpai, Burin

    2017-09-01

    An alternative way of finding the LTI system's solution with the Born approximation is investigated. We use the Born approximation in the LTI system and in the transformed LTI system in the form of a Helmholtz equation. General solutions are considered as infinite series or Feynman graphs. Slow-roll approximations are explored. Transforming the LTI system into a Helmholtz equation, an approximate general solution can be found for any given form of the force and its initial value.

  13. The Roadmaker's algorithm for the discrete pulse transform.

    PubMed

    Laurie, Dirk P

    2011-02-01

    The discrete pulse transform (DPT) is a decomposition of an observed signal into a sum of pulses, i.e., signals that are constant on a connected set and zero elsewhere. Originally developed for 1-D signal processing, the DPT has recently been generalized to more dimensions. Applications in image processing are currently being investigated. The time required to compute the DPT as originally defined, via the successive application of LULU operators (members of a class of minimax filters studied by Rohwer), has been a severe drawback to its applicability. This paper introduces a fast method for obtaining such a decomposition, called the Roadmaker's algorithm because it involves filling pits and razing bumps. It acts selectively only on those features actually present in the signal, flattening them in order of increasing size by subtracting an appropriate positive or negative pulse, which is then appended to the decomposition. The implementation described here covers 1-D signal processing as well as 2-D and 3-D image processing in a single framework. This is achieved by considering the signal or image as a function defined on a graph, with the geometry specified by the edges of the graph. Whenever a feature is flattened, nodes in the graph are merged, until eventually only one node remains. At that stage, a new set of edges on the same nodes as the original graph, forming a tree structure, defines the obtained decomposition. The Roadmaker's algorithm is shown to be equivalent to the DPT in the sense of obtaining the same decomposition. However, its simpler operators are not in general equivalent to the LULU operators in situations where those operators are not applied successively. A by-product of the Roadmaker's algorithm is that it yields a proof of the so-called Highlight Conjecture, stated as an open problem in 2006. We pay particular attention to algorithmic details and complexity, including a demonstration that in the 1-D case, and also in the case of a complete graph, the Roadmaker's algorithm has optimal complexity: it runs in time O(m), where m is the number of arcs in the graph.

  14. Not so Complex: Iteration in the Complex Plane

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2014-01-01

    The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…

  15. 46 CFR 502.206 - Production of documents and things and entry upon land for inspection and other purposes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... GENERAL AND ADMINISTRATIVE PROVISIONS RULES OF PRACTICE AND PROCEDURE Depositions, Written Interrogatories... inspect and copy any designated documents (including writings, drawings, graphs, charts, photographs... of § 502.203(a) and which are in the possession, custody or control of the party upon whom the...

  16. The Multi-Disciplinary Graduate Program in Educational Research. Final Report, Part II; Methodoloqical Trilogy.

    ERIC Educational Resources Information Center

    Lazarsfeld, Paul F., Ed.

    Part two of a seven-section, final report on the Multi-Disciplinary Graduate Program in Educational Research, this document contains discussions of quantification and reason analysis. Quantification is presented as a language consisting of sentences (graphs and tables), words, (classificatory instruments), and grammar (rules for constructing and…

  17. Algebraic Activities Aid Discovery Lessons

    ERIC Educational Resources Information Center

    Wallace-Gomez, Patricia

    2013-01-01

    After a unit on the rules for positive and negative numbers and the order of operations for evaluating algebraic expressions, many students believe that they understand these principles well enough, but they really do not. They clearly need more practice, but not more of the same kind of drill. Wallace-Gomez provides three graphing activities that…

  18. Analyzing and synthesizing phylogenies using tree alignment graphs.

    PubMed

    Smith, Stephen A; Brown, Joseph W; Hinchliff, Cody E

    2013-01-01

    Phylogenetic trees are used to analyze and visualize evolution. However, trees can be imperfect datatypes when summarizing multiple trees. This is especially problematic when accommodating for biological phenomena such as horizontal gene transfer, incomplete lineage sorting, and hybridization, as well as topological conflict between datasets. Additionally, researchers may want to combine information from sets of trees that have partially overlapping taxon sets. To address the problem of analyzing sets of trees with conflicting relationships and partially overlapping taxon sets, we introduce methods for aligning, synthesizing and analyzing rooted phylogenetic trees within a graph, called a tree alignment graph (TAG). The TAG can be queried and analyzed to explore uncertainty and conflict. It can also be synthesized to construct trees, presenting an alternative to supertrees approaches. We demonstrate these methods with two empirical datasets. In order to explore uncertainty, we constructed a TAG of the bootstrap trees from the Angiosperm Tree of Life project. Analysis of the resulting graph demonstrates that areas of the dataset that are unresolved in majority-rule consensus tree analyses can be understood in more detail within the context of a graph structure, using measures incorporating node degree and adjacency support. As an exercise in synthesis (i.e., summarization of a TAG constructed from the alignment trees), we also construct a TAG consisting of the taxonomy and source trees from a recent comprehensive bird study. We synthesized this graph into a tree that can be reconstructed in a repeatable fashion and where the underlying source information can be updated. The methods presented here are tractable for large scale analyses and serve as a basis for an alternative to consensus tree and supertree methods. Furthermore, the exploration of these graphs can expose structures and patterns within the dataset that are otherwise difficult to observe.

  19. Analyzing and Synthesizing Phylogenies Using Tree Alignment Graphs

    PubMed Central

    Smith, Stephen A.; Brown, Joseph W.; Hinchliff, Cody E.

    2013-01-01

    Phylogenetic trees are used to analyze and visualize evolution. However, trees can be imperfect datatypes when summarizing multiple trees. This is especially problematic when accommodating for biological phenomena such as horizontal gene transfer, incomplete lineage sorting, and hybridization, as well as topological conflict between datasets. Additionally, researchers may want to combine information from sets of trees that have partially overlapping taxon sets. To address the problem of analyzing sets of trees with conflicting relationships and partially overlapping taxon sets, we introduce methods for aligning, synthesizing and analyzing rooted phylogenetic trees within a graph, called a tree alignment graph (TAG). The TAG can be queried and analyzed to explore uncertainty and conflict. It can also be synthesized to construct trees, presenting an alternative to supertrees approaches. We demonstrate these methods with two empirical datasets. In order to explore uncertainty, we constructed a TAG of the bootstrap trees from the Angiosperm Tree of Life project. Analysis of the resulting graph demonstrates that areas of the dataset that are unresolved in majority-rule consensus tree analyses can be understood in more detail within the context of a graph structure, using measures incorporating node degree and adjacency support. As an exercise in synthesis (i.e., summarization of a TAG constructed from the alignment trees), we also construct a TAG consisting of the taxonomy and source trees from a recent comprehensive bird study. We synthesized this graph into a tree that can be reconstructed in a repeatable fashion and where the underlying source information can be updated. The methods presented here are tractable for large scale analyses and serve as a basis for an alternative to consensus tree and supertree methods. Furthermore, the exploration of these graphs can expose structures and patterns within the dataset that are otherwise difficult to observe. PMID:24086118

  20. Markov chain aggregation and its applications to combinatorial reaction networks.

    PubMed

    Ganguly, Arnab; Petrov, Tatjana; Koeppl, Heinz

    2014-09-01

    We consider a continuous-time Markov chain (CTMC) whose state space is partitioned into aggregates, and each aggregate is assigned a probability measure. A sufficient condition for defining a CTMC over the aggregates is presented as a variant of weak lumpability, which also characterizes when the measure over the original process can be recovered from that of the aggregated one. We show how the applicability of de-aggregation depends on the initial distribution. The application section illustrates how the developed theory aids in reducing CTMC models of biochemical systems, particularly in connection with protein-protein interactions. We assume that the model is written by a biologist in the form of site-graph-rewrite rules. Site-graph-rewrite rules compactly express that, often, only a local context of a protein (instead of a full molecular species) needs to be in a certain configuration in order to trigger a reaction event. This observation leads to suitable aggregate Markov chains with smaller state spaces, thereby providing a substantial reduction in computational complexity. This is further exemplified in two case studies: simple unbounded polymerization and early EGFR/insulin crosstalk.
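
    As a toy illustration of the aggregation idea, the sketch below checks the classical ordinary (strong) lumpability condition for a CTMC generator and builds the aggregated generator when it holds. This is a simpler, stricter condition than the weak-lumpability variant developed in the paper; the function name and tolerance are illustrative.

    ```python
    import numpy as np

    def lump(Q, parts, tol=1e-9):
        """Check ordinary (strong) lumpability of generator Q w.r.t. a state
        partition and, if it holds, return the aggregated generator.

        Q: (n, n) CTMC generator; parts: list of lists of state indices.
        Condition: for every pair of blocks (A, B), the total rate into B is
        identical for all states of A."""
        k = len(parts)
        Q = np.asarray(Q, dtype=float)
        Q_agg = np.zeros((k, k))
        for a, A in enumerate(parts):
            for b, B in enumerate(parts):
                rates = Q[np.ix_(A, B)].sum(axis=1)  # rate into B per state of A
                if np.ptp(rates) > tol:              # rates differ: not lumpable
                    return None
                Q_agg[a, b] = rates[0]
        return Q_agg

    # two symmetric states lump into one aggregate
    Q = [[-2.0, 1.0, 1.0], [1.0, -2.0, 1.0], [2.0, 0.0, -2.0]]
    print(lump(Q, [[0, 1], [2]]))
    ```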

  1. Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xuanhua; Luo, Xuan; Liang, Junling

    GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate the iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of the large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule) about coloring algorithms, as we observed across a large number of real-world graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated with a very high degree of parallelism without violating sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (CuSha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications and datasets, Frog significantly outperforms existing GPU-based graph processing systems except Gunrock and MapGraph. MapGraph achieves better performance than Frog when running BFS on RoadNet-CA. The comparison between Gunrock and Frog is inconclusive: Frog outperforms Gunrock by more than 1.04X when running PageRank and SSSP, while its advantage is not obvious when running BFS and CC on some datasets, especially RoadNet-CA.
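
    The color-class partitioning can be sketched in a few lines. The snippet below greedily colors a graph and splits the vertices into the few large color classes (which cover roughly 80% of vertices and can each be processed lock-free, one class at a time) and a long tail handled with synchronization; it illustrates the 80-20 observation, not Frog's actual hybrid coloring.

    ```python
    import networkx as nx

    def color_partition(G, coverage=0.8):
        """Greedy-color G, then return (hot, cold): 'hot' is a list of large
        color classes processed one class at a time without locks, 'cold' is
        the remaining vertices that need synchronized updates."""
        coloring = nx.greedy_color(G, strategy="largest_first")
        classes = {}
        for v, c in coloring.items():
            classes.setdefault(c, []).append(v)
        ordered = sorted(classes.values(), key=len, reverse=True)
        hot, covered, n = [], 0, G.number_of_nodes()
        for cls in ordered:
            if covered / n >= coverage:
                break
            hot.append(cls)
            covered += len(cls)
        cold = [v for cls in ordered[len(hot):] for v in cls]
        return hot, cold

    hot, cold = color_partition(nx.barabasi_albert_graph(10_000, 3))
    print(len(hot), "lock-free classes;", len(cold), "vertices need locking")
    ```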

  2. Automatic segmentation of pulmonary fissures in x-ray CT images using anatomic guidance

    NASA Astrophysics Data System (ADS)

    Ukil, Soumik; Sonka, Milan; Reinhardt, Joseph M.

    2006-03-01

    The pulmonary lobes are the five distinct anatomic divisions of the human lungs. The physical boundaries between the lobes are called the lobar fissures. Detection of lobar fissure positions in pulmonary X-ray CT images is of increasing interest for the early detection of pathologies, and also for the regional functional analysis of the lungs. We have developed a two-step automatic method for the accurate segmentation of the three pulmonary fissures. In the first step, an approximation of the actual fissure locations is made using a 3-D watershed transform on the distance map of the segmented vasculature. Information from the anatomically labeled human airway tree is used to guide the watershed segmentation. These approximate fissure boundaries are then used to define the region of interest (ROI) for a more exact 3-D graph search to locate the fissures. Within the ROI the fissures are enhanced by computing a ridgeness measure, and this is used as the cost function for the graph search. The fissures are detected as the optimal surface within the graph defined by the cost function, which is computed by transforming the problem to the problem of finding a minimum s-t cut on a derived graph. The accuracy of the lobar borders is assessed by comparing the automatic results to manually traced lobe segments. The mean distance error between manually traced and computer detected left oblique, right oblique and right horizontal fissures is 2.3 +/- 0.8 mm, 2.3 +/- 0.7 mm and 1.0 +/- 0.1 mm, respectively.

  3. Large-scale automated histology in the pursuit of connectomes.

    PubMed

    Kleinfeld, David; Bharioke, Arjun; Blinder, Pablo; Bock, Davi D; Briggman, Kevin L; Chklovskii, Dmitri B; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P; Lee, Wei-Chung Allen; Meyer, Hanno S; Micheva, Kristina D; Oberlaender, Marcel; Prohaska, Steffen; Reid, R Clay; Smith, Stephen J; Takemura, Shinya; Tsai, Philbert S; Sakmann, Bert

    2011-11-09

    How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.

  4. Large-Scale Automated Histology in the Pursuit of Connectomes

    PubMed Central

    Bharioke, Arjun; Blinder, Pablo; Bock, Davi D.; Briggman, Kevin L.; Chklovskii, Dmitri B.; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P.; Lee, Wei-Chung Allen; Meyer, Hanno S.; Micheva, Kristina D.; Oberlaender, Marcel; Prohaska, Steffen; Reid, R. Clay; Smith, Stephen J.; Takemura, Shinya; Tsai, Philbert S.; Sakmann, Bert

    2011-01-01

    How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity. PMID:22072665

  5. Temporal dynamics and impact of event interactions in cyber-social populations

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Qing; Li, Xiang

    2013-03-01

    The advance of information technologies provides powerful measures to digitize social interactions and facilitate quantitative investigations. To explore large-scale indoor interactions of a social population, we analyze 18 715 users' Wi-Fi access logs recorded in a Chinese university campus during 3 months, and define event interaction (EI) to characterize the concurrent interactions of multiple users inferred from their geographic coincidences: co-locating in the same small region at the same time. We propose three rules to construct a transmission graph, which depicts the topological and temporal features of event interactions. The vertex dynamics of the transmission graph show that the active durations of EIs follow truncated power-law distributions, independent of the number of involved individuals. The edge dynamics of the transmission graph show that the transmission durations present a truncated power-law pattern independent of the daily and weekly periodicities. Besides, in the aggregated transmission graph, low-degree vertices previously neglected in aggregated static networks may participate in large-degree EIs, which is verified by three data sets covering social populations of different sizes with various rendezvouses. This work highlights the temporal significance of event interactions in cyber-social populations.

  6. An analysis of multi-type relational interactions in FMA using graph motifs with disjointness constraints.

    PubMed

    Zhang, Guo-Qiang; Luo, Lingyun; Ogbuji, Chime; Joslyn, Cliff; Mejino, Jose; Sahoo, Satya S

    2012-01-01

    The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions for detecting logical inconsistencies as well as other anomalies represented by the motifs. MOCH represents patterns of multi-type interaction as small labeled (with multiple types of edges) sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web Technology, we performed exhaustive analyses of a variety of labeled sub-graph motifs. The quality assurance feature of MOCH comes from the distinct use of a subset of the edges of the graph motifs as constraints for disjointness, thereby bringing a rule-based flavor to the approach as well. With possible disjointness implied by antonyms, we performed manual inspection of the resulting FMA fragments and tracked down sources of abnormal inferred conclusions (logical inconsistencies), which are amenable to programmatic revision of the FMA. Our results demonstrate that MOCH provides a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation.

  7. An Analysis of Multi-type Relational Interactions in FMA Using Graph Motifs with Disjointness Constraints

    PubMed Central

    Zhang, Guo-Qiang; Luo, Lingyun; Ogbuji, Chime; Joslyn, Cliff; Mejino, Jose; Sahoo, Satya S

    2012-01-01

    The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions for detecting logical inconsistencies as well as other anomalies represented by the motifs. MOCH represents patterns of multi-type interaction as small labeled (with multiple types of edges) sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web Technology, we performed exhaustive analyses of a variety of labeled sub-graph motifs. The quality assurance feature of MOCH comes from the distinct use of a subset of the edges of the graph motifs as constraints for disjointness, thereby bringing a rule-based flavor to the approach as well. With possible disjointness implied by antonyms, we performed manual inspection of the resulting FMA fragments and tracked down sources of abnormal inferred conclusions (logical inconsistencies), which are amenable to programmatic revision of the FMA. Our results demonstrate that MOCH provides a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation. PMID:23304382
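
    A motif of the kind MOCH uses can be expressed directly as a SPARQL query and run against an RDF serialization of an ontology, e.g. with rdflib. The sketch below encodes a hypothetical two-edge motif with a disjointness-style constraint; the predicate IRIs and file name are placeholders, not the actual FMA vocabulary.

    ```python
    import rdflib

    # Hypothetical motif: ?x is a regional part of ?y while ?y is simultaneously
    # a constitutional part of ?x -- instances would be flagged for auditing.
    MOTIF = """
    PREFIX fma: <http://example.org/fma/>
    SELECT ?x ?y WHERE {
        ?x fma:regional_part_of ?y .
        ?y fma:constitutional_part_of ?x .
    }
    """

    g = rdflib.Graph()
    g.parse("fma.owl")             # placeholder path to an RDF/OWL dump
    for row in g.query(MOTIF):     # each binding is a candidate fragment to audit
        print(row.x, row.y)
    ```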

  8. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles when appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Approaches for obtaining consistency in code counting are presented, as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data are obtained, a mapping from the SEER-SEM output into the JPL Work Breakdown Structure (WBS) is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions with the SLOC, data description, brief mission description and the most relevant SEER-SEM parameter values is given to encapsulate the data used and calculated in the estimates. The rule-based expert system described provides the user with inputs sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS) and is addressed at the end of this paper.

  9. SPARQLog: SPARQL with Rules and Quantification

    NASA Astrophysics Data System (ADS)

    Bry, François; Furche, Tim; Marnette, Bruno; Ley, Clemens; Linse, Benedikt; Poppe, Olga

    SPARQL has become the gold standard for RDF query languages. Nevertheless, we believe there is further room for improving RDF query languages. In this chapter, we investigate the addition of rules and quantifier alternation to SPARQL. That extension, called SPARQLog, extends previous RDF query languages by arbitrary quantifier alternation: blank nodes may occur in the scope of all, some, or none of the universal variables of a rule. In addition, SPARQLog is aware of important RDF features such as the distinction between blank nodes, literals and IRIs, and the RDFS vocabulary. The semantics of SPARQLog is closed (every answer is an RDF graph), but lifts RDF's restrictions on literal and blank node occurrences for intermediary data. We show how to define a sound and complete operational semantics that can be implemented using existing logic programming techniques. While SPARQLog is Turing complete, we identify a decidable (in fact, polynomial time) fragment SwARQLog ensuring polynomial data-complexity, inspired by the notion of super-weak acyclicity in data exchange. Furthermore, we prove that SPARQLog with no universal quantifiers in the scope of existential ones (∀ ∃ fragment) is equivalent to full SPARQLog in the presence of graph projection. Thus, the convenience of arbitrary quantifier alternation comes, in fact, for free. These results, though here presented in the context of RDF querying, apply similarly in the more general setting of data exchange.

  10. Analyzing cross-college course enrollments via contextual graph mining

    PubMed Central

    Liu, Xiaozhong; Chen, Yan

    2017-01-01

    The ability to predict what courses a student may enroll in the coming semester plays a pivotal role in the allocation of learning resources, which is a hot topic in the domain of educational data mining. In this study, we propose an innovative approach to characterize students’ cross-college course enrollments by leveraging a novel contextual graph. Specifically, different kinds of variables, such as students, courses, colleges and diplomas, as well as various types of variable relations, are utilized to depict the context of each variable, and then the representation learning algorithm node2vec is applied to extract sophisticated graph-based features for the enrollment analysis. In this manner, the relations between any pair of variables can be measured quantitatively, which enables the variable type to transform from nominal to ratio. These graph-based features are examined by the random forest algorithm, and experiments on 24,663 students, 1,674 courses and 417,590 enrollment records demonstrate that the contextual graph can successfully improve the analysis of cross-college course enrollments, where three of the graph-based features have significantly stronger impacts on prediction accuracy than the others. The empirical results also indicate that the student’s course preference is the most important factor in predicting future course enrollments, which is consistent with previous studies that identify course interest as a key factor in course recommendations. PMID:29186171

  11. Analyzing cross-college course enrollments via contextual graph mining.

    PubMed

    Wang, Yongzhen; Liu, Xiaozhong; Chen, Yan

    2017-01-01

    The ability to predict what courses a student may enroll in the coming semester plays a pivotal role in the allocation of learning resources, which is a hot topic in the domain of educational data mining. In this study, we propose an innovative approach to characterize students' cross-college course enrollments by leveraging a novel contextual graph. Specifically, different kinds of variables, such as students, courses, colleges and diplomas, as well as various types of variable relations, are utilized to depict the context of each variable, and then the representation learning algorithm node2vec is applied to extract sophisticated graph-based features for the enrollment analysis. In this manner, the relations between any pair of variables can be measured quantitatively, which enables the variable type to transform from nominal to ratio. These graph-based features are examined by the random forest algorithm, and experiments on 24,663 students, 1,674 courses and 417,590 enrollment records demonstrate that the contextual graph can successfully improve the analysis of cross-college course enrollments, where three of the graph-based features have significantly stronger impacts on prediction accuracy than the others. The empirical results also indicate that the student's course preference is the most important factor in predicting future course enrollments, which is consistent with previous studies that identify course interest as a key factor in course recommendations.

  12. Theory and operational rules for the discrete Hankel transform.

    PubMed

    Baddour, Natalie; Chouinard, Ugo

    2015-04-01

    Previous definitions of a discrete Hankel transform (DHT) have focused on methods to approximate the continuous Hankel integral transform. In this paper, we propose and evaluate the theory of a DHT that is shown to arise from a discretization scheme based on the theory of Fourier-Bessel expansions. The proposed transform also possesses requisite orthogonality properties which lead to invertibility of the transform. The standard set of shift, modulation, multiplication, and convolution rules are derived. In addition to the theory of the actual manipulated quantities which stand in their own right, this DHT can be used to approximate the continuous forward and inverse Hankel transform in the same manner that the discrete Fourier transform is known to be able to approximate the continuous Fourier transform.
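
    A sketch of the Fourier-Bessel construction is short enough to show here. The symmetric kernel below is built from Bessel-function zeros and should be approximately involutory (T @ T ≈ I, with the error shrinking as N grows); the exact normalization and operational rules are as given in the paper, and this snippet is only an approximation of them.

    ```python
    import numpy as np
    from scipy.special import jn_zeros, jv

    def dht_matrix(n, N):
        """Symmetric (N-1)x(N-1) discrete Hankel transform kernel of order n,
        built on the first N positive zeros of the Bessel function J_n."""
        j = jn_zeros(n, N)              # j_{n,1} ... j_{n,N}
        jN, jm = j[-1], j[:-1]          # j_{n,N} plays the role of a band limit
        w = np.abs(jv(n + 1, jm))       # normalization weights
        return 2.0 * jv(n, np.outer(jm, jm) / jN) / (jN * np.outer(w, w))

    T = dht_matrix(0, 64)
    print(np.abs(T @ T - np.eye(len(T))).max())   # small: T is (nearly) its own inverse
    ```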

  13. Automatic determination of fault effects on aircraft functionality

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1989-01-01

    The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. Researchers show how this is done, as well as presenting a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is the fact that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories, and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities. Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.

  14. Querying graphs in protein-protein interactions networks using feedback vertex set.

    PubMed

    Blin, Guillaume; Sikora, Florian; Vialette, Stéphane

    2010-01-01

    Recent techniques rapidly increase the amount of our knowledge of interactions between proteins. The interpretation of this new information depends on our ability to retrieve known substructures in the data, the Protein-Protein Interaction (PPI) networks. From an algorithmic point of view, this is a hard task, since it often leads to NP-hard problems. To overcome this difficulty, many authors have provided tools for querying patterns with a restricted topology, i.e., paths or trees, in PPI networks. Such restriction leads to the development of fixed-parameter tractable (FPT) algorithms, which can be practicable for restricted query sizes. Unfortunately, Graph Homomorphism is a W[1]-hard problem, and hence no FPT algorithm can be found when patterns are in the shape of general graphs. However, Dost et al. gave an algorithm (which is not implemented) to query graphs of bounded treewidth in PPI networks (the treewidth of the query being involved in the time complexity). In this paper, we propose another algorithm for querying patterns in the shape of graphs, also based on dynamic programming and the color-coding technique. To transform graph queries into trees without loss of information, we use a feedback vertex set coupled with a node duplication mechanism. Hence, our algorithm is FPT for querying graphs with a bounded feedback vertex set size, giving an alternative to the treewidth parameter, which can be better or worse for a given query. We provide a Python implementation which allows us to validate our approach on real data. In particular, we retrieve some human queries in the shape of graphs in the fly PPI network.

  15. On Wiener polarity index of bicyclic networks.

    PubMed

    Ma, Jing; Shi, Yongtang; Wang, Zhen; Yue, Jun

    2016-01-11

    Complex networks are ubiquitous in biological, physical and social sciences. Network robustness research aims at finding a measure to quantify network robustness. A number of Wiener type indices have recently been incorporated as distance-based descriptors of complex networks. Wiener type indices are known to depend both on the network's number of nodes and its topology. The Wiener polarity index is also related to the cluster coefficient of networks. In this paper, based on some graph transformations, we determine the sharp upper bound of the Wiener polarity index among all bicyclic networks. These bounds help to understand the underlying quantitative graph measures in depth.
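
    The Wiener polarity index itself is straightforward to compute; for a graph G it counts the unordered vertex pairs at shortest-path distance exactly 3. A small sketch (the example bicyclic graph is arbitrary):

    ```python
    import networkx as nx

    def wiener_polarity(G):
        """W_p(G): number of unordered vertex pairs at distance exactly 3."""
        dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=3))
        return sum(d == 3 for u in dist for d in dist[u].values()) // 2

    # a bicyclic network: a 5-cycle and a 4-cycle sharing vertex 0
    G = nx.cycle_graph(5)
    G.add_edges_from([(0, 5), (5, 6), (6, 7), (7, 0)])
    print(wiener_polarity(G))
    ```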

  16. Ring system-based chemical graph generation for de novo molecular design

    NASA Astrophysics Data System (ADS)

    Miyao, Tomoyuki; Kaneko, Hiromasa; Funatsu, Kimito

    2016-05-01

    Generating chemical graphs in silico by combining building blocks is important and fundamental in virtual combinatorial chemistry. A premise in this area is that generated structures should be irredundant as well as exhaustive. In this study, we develop structure generation algorithms for combining ring systems as well as atom fragments. The proposed algorithms consist of three parts. First, chemical structures are generated through a canonical construction path. During structure generation, ring systems can be treated as reduced graphs having fewer vertices than the original ones. Second, diversified structures are generated by a simple rule-based generation algorithm. Third, the number of structures to be generated can be estimated with adequate accuracy without actual exhaustive generation. The proposed algorithms were implemented in the structure generator Molgilla. As a practical application, Molgilla generated chemical structures mimicking rosiglitazone in terms of a two-dimensional pharmacophore pattern. The strength of the algorithms lies in their simplicity and flexibility. Therefore, they may be applied to various computer programs for structure generation by combining building blocks.

  17. A hierarchical graph neuron scheme for real-time pattern recognition.

    PubMed

    Nasution, B B; Khan, A I

    2008-02-01

    The hierarchical graph neuron (HGN) implements a single-cycle memorization and recall operation through a novel algorithmic design. The HGN is an improvement on the previously published graph neuron (GN) algorithm. In this improved approach, it recognizes incomplete/noisy patterns. It also resolves the crosstalk problem, identified in previous publications, within closely matched patterns. To accomplish this, the HGN links multiple GN networks to filter noise and crosstalk out of pattern data inputs. Intrinsically, the HGN is a lightweight in-network processing algorithm which does not require expensive floating-point computations; hence, it is very suitable for real-time applications and tiny devices such as wireless sensor networks. This paper shows that the HGN's pattern matching capability and small response time remain insensitive to increases in the number of stored patterns. Moreover, the HGN does not require the definition of rules or the setting of thresholds by the operator to achieve the desired results, nor does it require heuristics entailing iterative operations for memorization and recall of patterns.

  18. Integrating Graphing Assignments into a Money and Banking Course Using FRED

    ERIC Educational Resources Information Center

    Staveley-O'Carroll, James

    2018-01-01

    Over the course of one semester, six empirical assignments that utilize FRED are used to introduce students of money and banking courses to the economic analysis required for the conduct of monetary policy. The first five assignments cover the following topics: inflation, bonds and stocks, monetary aggregates, the Taylor rule, and employment.…

  19. A Wavelet Analysis Approach for Categorizing Air Traffic Behavior

    NASA Technical Reports Server (NTRS)

    Drew, Michael; Sheth, Kapil

    2015-01-01

    In this paper two frequency domain techniques are applied to air traffic analysis. The Continuous Wavelet Transform (CWT), like the Fourier Transform, is shown to identify changes in historical traffic patterns caused by Traffic Management Initiatives (TMIs) and weather with the added benefit of detecting when in time those changes take place. Next, with the expectation that it could detect anomalies in the network and indicate the extent to which they affect traffic flows, the Spectral Graph Wavelet Transform (SGWT) is applied to a center based graph model of air traffic. When applied to simulations based on historical flight plans, it identified the traffic flows between centers that have the greatest impact on either neighboring flows, or flows between centers many centers away. Like the CWT, however, it can be difficult to interpret SGWT results and relate them to simulations where major TMIs are implemented, and more research may be warranted in this area. These frequency analysis techniques can detect off-nominal air traffic behavior, but due to the nature of air traffic time series data, so far they prove difficult to apply in a way that provides significant insight or specific identification of traffic patterns.

  20. Primary Place. Math Projects That Count.

    ERIC Educational Resources Information Center

    Buschman, Larry; And Others

    1993-01-01

    Offers elementary math-centered recycling activities and ideas on transforming throwaways into valuable classroom resources. The math activities teach estimating, counting, measuring, weighing, graphing, patterning, thinking, comparing, proportion, and dimensions. The recycling ideas present ways to use pieces of trash to create educational games.…

  1. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempt to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within a single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm is based on simplified assumptions about the optical simulation model, and therefore its usage on real layouts is limited. Recently, AMSL [2] also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to sub-optimal solutions; that approach also potentially generates too many stitches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G of the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution achieves a practical mask design which significantly improves the lithography quality on the wafer compared to rule-based decomposition.
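
    The rule-based baseline that the paper argues against is easy to state in code: build a conflict graph from the distance rule, then k-color it. The sketch below does exactly that with a naive backtracking colorer (k = 3 for TPL); the `dist` callback and the polygon representation are placeholders.

    ```python
    from itertools import combinations

    def conflict_graph(polygons, dmin, dist):
        """Edge between two features iff their distance is below dmin."""
        adj = {i: set() for i in range(len(polygons))}
        for i, j in combinations(range(len(polygons)), 2):
            if dist(polygons[i], polygons[j]) < dmin:
                adj[i].add(j)
                adj[j].add(i)
        return adj

    def k_color(adj, k, colors=None, v=0):
        """Backtracking k-coloring; returns {vertex: mask index} or None."""
        colors = {} if colors is None else colors
        if v == len(adj):
            return colors
        for c in range(k):
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                result = k_color(adj, k, colors, v + 1)
                if result is not None:
                    return result
                del colors[v]
        return None
    ```

    The model-based method replaces the fixed dmin test with pre-simulated pattern lookups, but the downstream coloring machinery is of the same flavor.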

  2. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    PubMed

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
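
    The two useful rules are simple to implement. The sketch below follows the spirit of the shift and crossings tests (the prediction limits are stated only approximately here; the paper derives the exact values), assuming a series with at least a handful of points off the median:

    ```python
    import math
    from statistics import median

    def run_chart_signals(y):
        """Return (shift_signal, crossings_signal) for a run chart of y."""
        med = median(y)
        useful = [v for v in y if v != med]        # points on the median are skipped
        n = len(useful)
        signs = [v > med for v in useful]
        same = [a == b for a, b in zip(signs, signs[1:])]
        longest, run = 1, 1
        for s in same:                             # longest run on one side
            run = run + 1 if s else 1
            longest = max(longest, run)
        crossings = sum(not s for s in same)
        # shift: unusually long run on one side of the median
        shift_signal = longest > round(math.log2(n) + 3)
        # crossings: fewer than the lower 5th percentile of Binomial(n-1, 1/2)
        def cdf(k):
            return sum(math.comb(n - 1, i) for i in range(k + 1)) / 2 ** (n - 1)
        lower = 0
        while cdf(lower) < 0.05:
            lower += 1
        return shift_signal, crossings < lower
    ```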

  3. A graph-based approach for the retrieval of multi-modality medical images.

    PubMed

    Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan

    2014-02-01

    In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects.

  4. Dynamics of tax evasion through an epidemic-like model

    NASA Astrophysics Data System (ADS)

    Brum, Rafael M.; Crokidakis, Nuno

    In this work, we study a model of tax evasion. We consider a fixed population divided into three compartments, namely honest tax payers, tax evaders, and a third class between the mentioned two, which we call susceptibles to become evaders. The transitions among those compartments are ruled by probabilities, similar to a model of epidemic spreading. These probabilities model social interactions among the individuals, as well as the government’s fiscalization. We simulate the model on fully-connected graphs, as well as on scale-free and random complex networks. For the fully-connected and random graph cases, we observe that the emergence of tax evaders in the population is associated with an active-absorbing nonequilibrium phase transition, which is absent in scale-free networks.
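
    For intuition, the fully-connected (mean-field) case can be simulated directly. The compartment labels follow the abstract; the transition structure and probabilities below are invented placeholders, not the rates studied in the paper.

    ```python
    import random

    def simulate(N=10_000, steps=200, p_contact=0.02, p_evade=0.05, p_audit=0.03):
        """Mean-field Monte Carlo: honest (H) agents meeting evaders become
        susceptible (S); susceptibles may start evading (E); audited evaders
        revert to honesty. All three probabilities are illustrative."""
        H, S, E = N - 10, 0, 10
        for _ in range(steps):
            hs = sum(random.random() < p_contact * E / N for _ in range(H))
            se = sum(random.random() < p_evade * E / N for _ in range(S))
            eh = sum(random.random() < p_audit for _ in range(E))
            H, S, E = H - hs + eh, S + hs - se, E + se - eh
        return H, S, E

    print(simulate())   # compartment sizes after one realization
    ```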

  5. Graph theoretical stable allocation as a tool for reproduction of control by human operators

    NASA Astrophysics Data System (ADS)

    van Nooijen, Ronald; Ertsen, Maurits; Kolechkina, Alla

    2016-04-01

    During the design of central control algorithms for existing water resource systems under manual control, it is important to consider the interaction with parts of the system that remain under manual control and to compare the proposed new system with the existing manual methods. In graph theory, the "stable allocation" problem has good solution algorithms and allows flow distribution problems to be formulated in terms of priorities. As a test case for this approach, we used the algorithm to derive water allocation rules for the Gezira Scheme, an irrigation system located between the Blue and White Niles south of Khartoum. In 1925, Gezira started with 300,000 acres; currently it covers close to two million acres.
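
    At its core the approach rests on stable matching; a toy one-to-one deferred-acceptance (Gale-Shapley) sketch is below, with canals "proposing" to offtakes in priority order. Real allocation problems match quantities rather than single partners, so this is only the skeleton of the idea; the names are invented.

    ```python
    def stable_match(proposer_prefs, acceptor_prefs):
        """Deferred acceptance with complete preference lists and equal counts."""
        free = list(proposer_prefs)                  # proposers without a match
        nxt = {p: 0 for p in proposer_prefs}         # next preference to try
        rank = {a: {p: i for i, p in enumerate(prefs)}
                for a, prefs in acceptor_prefs.items()}
        match = {}                                   # acceptor -> proposer
        while free:
            p = free.pop()
            a = proposer_prefs[p][nxt[p]]
            nxt[p] += 1
            if a not in match:
                match[a] = p
            elif rank[a][p] < rank[a][match[a]]:     # acceptor prefers newcomer
                free.append(match[a])
                match[a] = p
            else:
                free.append(p)
        return match

    canals = {"C1": ["A", "B"], "C2": ["A", "B"]}
    offtakes = {"A": ["C2", "C1"], "B": ["C1", "C2"]}
    print(stable_match(canals, offtakes))            # {'A': 'C2', 'B': 'C1'}
    ```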

  6. Directed differential connectivity graph of interictal epileptiform discharges

    PubMed Central

    Amini, Ladan; Jutten, Christian; Achard, Sophie; David, Olivier; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gh. Ali; Kahane, Philippe; Minotti, Lorella; Vercueil, Laurent

    2011-01-01

    In this paper, we study temporal couplings between interictal events of spatially remote regions in order to localize the leading epileptic regions from intracerebral electroencephalogram (iEEG). We aim to assess whether quantitative epileptic graph analysis during interictal period may be helpful to predict the seizure onset zone of ictal iEEG. Using wavelet transform, cross-correlation coefficient, and multiple hypothesis test, we propose a differential connectivity graph (DCG) to represent the connections that change significantly between epileptic and non-epileptic states as defined by the interictal events. Post-processings based on mutual information and multi-objective optimization are proposed to localize the leading epileptic regions through DCG. The suggested approach is applied on iEEG recordings of five patients suffering from focal epilepsy. Quantitative comparisons of the proposed epileptic regions within ictal onset zones detected by visual inspection and using electrically stimulated seizures, reveal good performance of the present method. PMID:21156385

  7. Improved segmentation of abnormal cervical nuclei using a graph-search based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Liu, Shaoxiong; Wang, Tianfu; Chen, Siping; Sonka, Milan

    2015-03-01

    Reliable segmentation of abnormal nuclei in cervical cytology is of paramount importance in automation-assisted screening techniques. This paper presents a general method for improving the segmentation of abnormal nuclei using a graph-search based approach. More specifically, the proposed method focuses on the improvement of the coarse (initial) segmentation. The improvement relies on a transform that maps round-like borders in the Cartesian coordinate system into lines in the polar coordinate system. Costs consisting of nucleus-specific edge and region information are assigned to the nodes. The globally optimal path in the constructed graph is then identified by dynamic programming. We have tested the proposed method on abnormal nuclei from two cervical cell image datasets, Herlev and H&E-stained liquid-based cytology (HELBC), and comparative experiments with recent state-of-the-art approaches demonstrate the superior performance of the proposed method.
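
    The polar-coordinate graph search reduces to a column-wise dynamic program. The sketch below finds a minimum-cost contour through a (theta, r) cost image under a one-bin smoothness constraint; the ridgeness-style cost and the closure constraint of the actual method are omitted.

    ```python
    import numpy as np

    def polar_graph_search(cost):
        """cost[t, r]: cost of placing the boundary at radius bin r for angle t.
        Each angular step may move at most one radial bin. Returns the radius
        index chosen for every angle."""
        T, R = cost.shape
        acc = cost.copy()
        back = np.zeros((T, R), dtype=int)
        for t in range(1, T):
            for r in range(R):
                lo, hi = max(0, r - 1), min(R, r + 2)
                prev = acc[t - 1, lo:hi]
                back[t, r] = lo + int(np.argmin(prev))
                acc[t, r] = cost[t, r] + prev.min()
        r = int(np.argmin(acc[-1]))                  # best endpoint
        path = [r]
        for t in range(T - 1, 0, -1):                # backtrack
            r = back[t, r]
            path.append(r)
        return path[::-1]

    print(polar_graph_search(np.random.rand(36, 20))[:10])
    ```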

  8. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and can help obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into one solvable in O(n^3) time. Experiments on the Stanford and UWA public data sets indicate that our method can obtain better results in terms of both accuracy and time cost compared with other point cloud registration methods.

  9. Fiber tracking of brain white matter based on graph theory.

    PubMed

    Lu, Meng

    2015-01-01

    Brain white matter tractography is reconstructed from diffusion-weighted magnetic resonance images. Due to the complex structure of brain white matter fiber bundles, fiber crossing and fiber branching are abundant in the human brain, and regular methods based on diffusion tensor imaging (DTI) cannot accurately handle these configurations, which are among the biggest problems in brain tractography. Therefore, this paper presents a novel brain white matter tractography method based on graph theory, in which the fiber tracking between two voxels is transformed into locating the shortest path in a graph. Besides, the presented method uses Q-ball imaging (QBI) as the source data instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings in one voxel using the orientation distribution function (ODF). Experiments showed that the presented method can accurately handle the problem of brain white matter fiber crossing and branching, and reconstruct brain tractography in both phantom data and real brain data.
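
    Once per-voxel costs are available, tracking amounts to a shortest-path query on a grid graph. A sketch follows (the cost field here is random; in the method it would be derived from the ODF):

    ```python
    import networkx as nx
    import numpy as np

    def track(cost, start, goal):
        """Minimum-cost 'tract' between two voxels on a 6-connected 3-D grid."""
        G = nx.grid_graph(dim=list(cost.shape))      # nodes are voxel tuples
        for u, v in G.edges:
            G.edges[u, v]["w"] = 0.5 * (cost[u] + cost[v])
        return nx.shortest_path(G, start, goal, weight="w")   # Dijkstra

    cost = np.random.rand(10, 10, 10) + 0.1          # stand-in for ODF-based costs
    print(track(cost, (0, 0, 0), (9, 9, 9))[:5])
    ```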

  10. Semiclassical theory of Landau levels and magnetic breakdown in topological metals

    NASA Astrophysics Data System (ADS)

    Alexandradinata, A.; Glazman, Leonid

    2018-04-01

    The Bohr-Sommerfeld quantization rule lies at the heart of the semiclassical theory of a Bloch electron in a magnetic field. This rule is predictive of Landau levels and de Haas-van Alphen oscillations for conventional metals, as well as for a host of topological metals which have emerged in the recent intercourse between band theory, crystalline symmetries, and topology. The essential ingredients in any quantization rule are connection formulas that match the semiclassical (WKB) wave function across regions of strong quantum fluctuations. Here, we propose (a) a multicomponent WKB wave function that describes transport within degenerate-band subspaces, and (b) the requisite connection formulas for saddle points and type-II Dirac points, where tunneling respectively occurs within the same band, and between distinct bands. (a) and (b) extend previous works by incorporating phase corrections that are subleading in powers of the field; these corrections include the geometric Berry phase, and account for the orbital magnetic moment and the Zeeman coupling. A comprehensive symmetry analysis is performed for such phase corrections occurring in closed orbits, which is applicable to solids in any (magnetic) space group. We have further formulated a graph-theoretic description of semiclassical orbits. This allows us to systematize the construction of quantization rules for a large class of closed orbits (with or without tunneling), as well as to formulate the notion of a topological invariant in semiclassical magnetotransport—as a quantity that is invariant under continuous deformations of the graph. Landau levels in the presence of tunneling are generically quasirandom, i.e., disordered on the scale of nearest-neighbor level spacings but having longer-ranged correlations; we develop a perturbative theory to determine Landau levels in such quasirandom spectra.

  11. Author Correction: Time-dependent memory transformation along the hippocampal anterior-posterior axis.

    PubMed

    Dandolo, Lisa C; Schwabe, Lars

    2018-05-24

    In the originally published version of this Article, the rightmost graph in Fig. 2c was inadvertently replaced with a duplicate of the central panel. This has now been corrected in both the PDF and HTML versions of the Article.

  12. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation; the simultaneous delineation of multiple objects, however, can save considerable user time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT underlies efficient implementations of some state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes time linear in the image's size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then applies diffusion filtering to smooth boundaries between objects (and background), and finally corrects possible disconnections of objects from their seeds. We evaluate DRIFT on 3D CT images of the thorax, segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.

  13. Entanglement and nonclassical properties of hypergraph states

    NASA Astrophysics Data System (ADS)

    Gühne, Otfried; Cuquet, Martí; Steinhoff, Frank E. S.; Moroder, Tobias; Rossi, Matteo; Bruß, Dagmar; Kraus, Barbara; Macchiavello, Chiara

    2014-08-01

    Hypergraph states are multiqubit states that form a subset of the locally maximally entangleable states and a generalization of the well-established notion of graph states. Mathematically, they can conveniently be described by a hypergraph that indicates a possible generation procedure of these states; alternatively, they can also be phrased in terms of a nonlocal stabilizer formalism. In this paper, we explore the entanglement properties and nonclassical features of hypergraph states. First, we identify the equivalence classes under local unitary transformations for up to four qubits, as well as important classes of five- and six-qubit states, and determine various entanglement properties of these classes. Second, we present general conditions under which the local unitary equivalence of hypergraph states can simply be decided by considering a finite set of transformations with a clear graph-theoretical interpretation. Finally, we consider the question of whether hypergraph states and their correlations can be used to reveal contradictions with classical hidden-variable theories. We demonstrate that various noncontextuality inequalities and Bell inequalities can be derived for hypergraph states.
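
    On the computational side, a hypergraph state is easy to build explicitly: start from the uniform superposition and apply a generalized controlled-Z on every hyperedge, i.e. flip the sign of each basis state whose bits are all 1 on that hyperedge. A small statevector sketch:

    ```python
    import numpy as np

    def hypergraph_state(n, hyperedges):
        """|H> = (prod_e CZ_e) |+>^n as a dense statevector, qubit 0 leftmost."""
        psi = np.full(2 ** n, 2 ** (-n / 2))
        idx = np.arange(2 ** n)
        for e in hyperedges:
            mask = sum(1 << (n - 1 - q) for q in e)
            psi[(idx & mask) == mask] *= -1.0        # sign flip on |1...1> of e
        return psi

    # 3-qubit hypergraph state with the single hyperedge {0, 1, 2}: CCZ |+++>
    print(hypergraph_state(3, [{0, 1, 2}]))
    ```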

  14. Impact of generalized Fourier's and Fick's laws on MHD 3D second grade nanofluid flow with variable thermal conductivity and convective heat and mass conditions

    NASA Astrophysics Data System (ADS)

    Ramzan, M.; Bilal, M.; Chung, Jae Dong; Lu, Dian Chen; Farooq, Umer

    2017-09-01

    A mathematical model has been established to study the magnetohydrodynamic second grade nanofluid flow past a bidirectional stretched surface. The flow is induced by Cattaneo-Christov thermal and concentration diffusion fluxes. Novel characteristics of Brownian motion and thermophoresis are accompanied by temperature dependent thermal conductivity and convective heat and mass boundary conditions. Apposite transformations are betrothed to transform a system of nonlinear partial differential equations to nonlinear ordinary differential equations. Analytic solutions of the obtained nonlinear system are obtained via a convergent method. Graphs are plotted to examine how velocity, temperature, and concentration distributions are affected by varied physical involved parameters. Effects of skin friction coefficients along the x- and y-direction versus various parameters are also shown through graphs and are well debated. Our findings show that velocities along both the x and y axes exhibit a decreasing trend for the Hartmann number. Moreover, temperature and concentration distributions are decreasing functions of thermal and concentration relaxation parameters.

  15. Loop series for discrete statistical models on graphs

    NASA Astrophysics Data System (ADS)

    Chertkov, Michael; Chernyak, Vladimir Y.

    2006-06-01

    In this paper we present the derivation details, logic, and motivation for the three loop calculus introduced in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Generating functions for each of the three interrelated discrete statistical models are expressed in terms of a finite series. The first term in the series corresponds to the Bethe-Peierls belief-propagation (BP) contribution; the other terms are labelled by loops on the factor graph. All loop contributions are simple rational functions of spin correlation functions calculated within the BP approach. We discuss two alternative derivations of the loop series. One approach implements a set of local auxiliary integrations over continuous fields with the BP contribution corresponding to an integrand saddle-point value. The integrals are replaced by sums in the complementary approach, briefly explained in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Local gauge symmetry transformations that clarify an important invariant feature of the BP solution are revealed in both approaches. The individual terms change under the gauge transformation while the partition function remains invariant. The requirement for all individual terms to be nonzero only for closed loops in the factor graph (as opposed to paths with loose ends) is equivalent to fixing the first term in the series to be exactly equal to the BP contribution. Further applications of the loop calculus to problems in statistical physics, computer and information sciences are discussed.
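
    In schematic form (a paraphrase of the abstract rather than the paper's exact notation), the series reads:

    ```latex
    % Loop series: the partition function equals its Bethe-Peierls (BP) value
    % times a finite sum over generalized loops C of the factor graph.
    \[
      Z \;=\; Z_{\mathrm{BP}} \Bigl( 1 + \sum_{C \in \mathcal{C}} r_C \Bigr),
    \]
    % where each r_C is a rational function of the spin correlation functions
    % computed within the BP approximation; the term 1 is the bare BP contribution.
    ```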

  16. Adaptive graph-based multiple testing procedures

    PubMed Central

    Klinglmueller, Florian; Posch, Martin; Koenig, Franz

    2016-01-01

    Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, it is not required to prespecify the adaptation rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733
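
    The underlying sequentially rejective procedure (before any adaptation) can be sketched compactly; the weight-update below follows the usual graph-based testing scheme, assuming initial weights summing to at most one and transition weights with row sums at most one.

    ```python
    import numpy as np

    def graph_mtp(p, alpha, w, G):
        """Sequentially rejective graph-based test. p: p-values; w: initial
        weights; G[j][k]: fraction of H_j's weight passed to H_k on rejection.
        Returns the indices of rejected hypotheses."""
        w, G = np.array(w, float), np.array(G, float)
        active, rejected = set(range(len(p))), []
        while True:
            cand = [j for j in active if w[j] > 0 and p[j] <= alpha * w[j]]
            if not cand:
                return rejected
            j = cand[0]
            active.discard(j)
            rejected.append(j)
            w2, G2 = w.copy(), G.copy()
            for k in active:
                w2[k] = w[k] + w[j] * G[j, k]        # redistribute H_j's weight
                for l in active - {k}:
                    d = 1.0 - G[k, j] * G[j, k]
                    G2[k, l] = (G[k, l] + G[k, j] * G[j, l]) / d if d > 1e-12 else 0.0
            w, G = w2, G2

    # two primary hypotheses splitting alpha, each passing weight onward
    p = [0.01, 0.04, 0.02, 0.20]
    w = [0.5, 0.5, 0.0, 0.0]
    G = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 1, 0, 0], [1, 0, 0, 0]]
    print(graph_mtp(p, 0.05, w, G))                  # rejects hypotheses 0, 2, 1
    ```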

  17. Random Evolution of Idiotypic Networks: Dynamics and Architecture

    NASA Astrophysics Data System (ADS)

    Brede, Markus; Behn, Ulrich

    The paper deals with modelling a subsystem of the immune system, the so-called idiotypic network (INW). INWs, conceived by N.K. Jerne in 1974, are functional networks of interacting antibodies and B cells. In principle, Jerne's framework provides solutions to many issues in immunology, such as immunological memory, mechanisms for antigen recognition and self/non-self discrimination. Explaining the interconnection between the elementary components, local dynamics, network formation and architecture, and possible modes of global system function appears to be an ideal playground for statistical mechanics. We present a simple cellular automaton model, based on a graph representation of the system. From a simplified description of idiotypic interactions, rules for the random evolution of networks of occupied and empty sites on these graphs are derived. In certain biologically relevant parameter ranges the resultant dynamics leads to stationary states. A stationary state is found to correspond to a specific pattern of network organization. It turns out that even these very simple rules give rise to a multitude of different kinds of patterns. We characterize these networks by classifying `static' and `dynamic' network-patterns. A type of `dynamic' network is found to display many features of real INWs.

  18. Bladder segmentation in MR images with watershed segmentation and graph cut algorithm

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Renisch, Steffen; Schadewaldt, Nicole; Schulz, Heinrich; Wiemker, Rafael

    2014-03-01

    Prostate and cervix cancer diagnosis and treatment planning based on MR images benefit from superior soft-tissue contrast compared to CT images. For these images, an automatic delineation of the prostate or cervix and the organs at risk, such as the bladder, is highly desirable. This paper describes a method for bladder segmentation that is based on a watershed transform on high image gradient values and gray value valleys, together with the classification of watershed regions into bladder contents and tissue by a graph cut algorithm. The obtained results are superior compared to a simple region-by-region classification.
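
    The two stages can be sketched with standard tooling: a watershed on the gradient image to get candidate regions, followed by region classification, here simplified to an intensity threshold in place of the paper's graph cut (function names from scikit-image; the dark-contents assumption is illustrative and depends on the MR sequence).

    ```python
    import numpy as np
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment_bladder(img, markers):
        """Watershed regions on the gradient image, then keep the regions whose
        mean intensity is low, as a crude stand-in for the graph-cut step."""
        regions = watershed(sobel(img), markers)
        threshold = np.percentile(img, 25)           # 'dark' cutoff (illustrative)
        mask = np.zeros(img.shape, dtype=bool)
        for label in np.unique(regions):
            sel = regions == label
            if img[sel].mean() < threshold:
                mask |= sel
        return mask
    ```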

  19. 76 FR 53763 - Immigration Benefits Business Transformation, Increment I

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-29

    ...The Department of Homeland Security (DHS) is amending its regulations to enable U.S. Citizenship and Immigration Services (USCIS) to migrate from a paper file-based, non-integrated systems environment to an electronic customer-focused, centralized case management environment for benefit processing. This transformation process will allow USCIS to streamline benefit processing, eliminate the capture and processing of redundant data, and reduce the number of and automate its forms. This transformation process will be a phased multi-year initiative to restructure USCIS business processes and related information technology systems. DHS is removing references to form numbers, form titles, expired regulatory provisions, and descriptions of internal procedures, many of which will change during transformation. DHS is also finalizing interim rules that permitted submission of benefit requests with an electronic signature when such requests are submitted in an electronic format rather than on a paper form and that removed references to filing locations for immigration benefits. In addition, in this rule DHS is publishing the final rule for six other interim rules published during the past several years, most of which received no public comments.

  20. Pyramid algorithms as models of human cognition

    NASA Astrophysics Data System (ADS)

    Pizlo, Zygmunt; Li, Zheng

    2003-06-01

    There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of those mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.

  1. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
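
    A small numpy sketch of the LERaG building block: forming the random walk graph Laplacian of a patch graph and taking the left eigenvectors associated with the smallest eigenvalues as the low-graph-frequency basis. The 3-node affinity matrix is hypothetical.

        import numpy as np

        W = np.array([[0.0, 1.0, 0.2],
                      [1.0, 0.0, 0.9],
                      [0.2, 0.9, 0.0]])           # affinities between patch pixels
        D = np.diag(W.sum(axis=1))
        L_rw = np.eye(3) - np.linalg.inv(D) @ W   # random walk graph Laplacian

        # left eigenvectors of L_rw are the right eigenvectors of its transpose
        evals, evecs = np.linalg.eig(L_rw.T)
        order = np.argsort(evals.real)
        print(evals.real[order])                  # small eigenvalues = low graph frequencies
        print(evecs.real[:, order[0]])            # first left eigenvector of the basis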

  2. Abstract Interpreters for Free

    NASA Astrophysics Data System (ADS)

    Might, Matthew

    In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.

  3. One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.

    PubMed

    Biswas, Sujoy Kumar; Milanfar, Peyman

    2016-03-01

    One-shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low-dimensional but discriminatory subspace. Unlike principal components, which preserve the global structure of the feature space, we seek a linear approximation to the Laplacian eigenmap that permits a locality-preserving embedding of high-dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of the Fourier transform combined with integral images to achieve superior runtime efficiency, which allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one-shot detection is training-free, and experiments on standard data sets confirm the efficacy of our model. Moreover, the low computation cost of the proposed (codebook-free) object detector facilitates straightforward query detection in large data sets, including movie videos.
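
    The decision rule itself is compact; a direct (unaccelerated) version, with random stand-ins for the descriptor matrices, might look like the following. The FFT/integral-image speedup described above is omitted.

        import numpy as np

        def matrix_cosine_similarity(A, B):
            # Frobenius inner product normalized by the Frobenius norms
            return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))

        rng = np.random.default_rng(0)
        query = rng.standard_normal((8, 4))                  # query descriptor matrix
        target = query + 0.1 * rng.standard_normal((8, 4))   # noisy matching patch
        print(matrix_cosine_similarity(query, target))       # near 1 for a good match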

  4. UV Spectrophotometric Simultaneous Determination of Paracetamol and Ibuprofen in Combined Tablets by Derivative and Wavelet Transforms

    PubMed Central

    Hoang, Vu Dang; Ly, Dong Thi Ha; Tho, Nguyen Huu; Minh Thi Nguyen, Hue

    2014-01-01

    The application of first-order derivative and wavelet transforms to UV spectra and ratio spectra was proposed for the simultaneous determination of ibuprofen and paracetamol in their combined tablets. A new hybrid approach on the combined use of first-order derivative and wavelet transforms to spectra was also discussed. In this application, DWT (sym6 and haar), CWT (mexh), and FWT were optimized to give the highest spectral recoveries. Calibration graphs in the linear concentration ranges of ibuprofen (12–32 mg/L) and paracetamol (20–40 mg/L) were obtained by measuring the amplitudes of the transformed signals. Our proposed spectrophotometric methods were statistically compared to HPLC in terms of precision and accuracy. PMID:24949492

  5. UV spectrophotometric simultaneous determination of paracetamol and ibuprofen in combined tablets by derivative and wavelet transforms.

    PubMed

    Hoang, Vu Dang; Ly, Dong Thi Ha; Tho, Nguyen Huu; Nguyen, Hue Minh Thi

    2014-01-01

    The application of first-order derivative and wavelet transforms to UV spectra and ratio spectra was proposed for the simultaneous determination of ibuprofen and paracetamol in their combined tablets. A new hybrid approach on the combined use of first-order derivative and wavelet transforms to spectra was also discussed. In this application, DWT (sym6 and haar), CWT (mexh), and FWT were optimized to give the highest spectral recoveries. Calibration graphs in the linear concentration ranges of ibuprofen (12-32 mg/L) and paracetamol (20-40 mg/L) were obtained by measuring the amplitudes of the transformed signals. Our proposed spectrophotometric methods were statistically compared to HPLC in terms of precision and accuracy.

  6. A Measure for the Cohesion of Weighted Networks.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    2003-01-01

    Discusses graph theory in information science, focusing on measures for the cohesion of networks. Illustrates how a set of weights between connected nodes can be transformed into a set of dissimilarity measures and presents an example of the new compactness measures for a cocitation and a bibliographic coupling network. (Author/LRW)

  7. Conformity hinders the evolution of cooperation on scale-free networks

    NASA Astrophysics Data System (ADS)

    Peña, Jorge; Volken, Henri; Pestelacci, Enea; Tomassini, Marco

    2009-07-01

    We study the effects of conformity, the tendency of humans to imitate locally common behaviors, in the evolution of cooperation when individuals occupy the vertices of a graph and engage in the one-shot prisoner’s dilemma or the snowdrift game with their neighbors. Two different graphs are studied: rings (one-dimensional lattices with cyclic boundary conditions) and scale-free networks of the Barabási-Albert type. The proposed evolutionary-graph model is studied both by means of Monte Carlo simulations and an extended pair-approximation technique. We find improved levels of cooperation when evolution is carried out on rings and individuals imitate according to both the traditional payoff bias and a conformist bias. More importantly, we show that scale-free networks are no longer powerful amplifiers of cooperation when fair amounts of conformity are introduced in the imitation rules of the players. Such weakening of the cooperation-promoting abilities of scale-free networks is the result of a less biased flow of information in scale-free topologies, making hubs more susceptible to being influenced by less-connected neighbors.

  8. Spectral mapping of brain functional connectivity from diffusion imaging.

    PubMed

    Becker, Cassiano O; Pequito, Sérgio; Pappas, George J; Miller, Michael B; Grafton, Scott T; Bassett, Danielle S; Preciado, Victor M

    2018-01-23

    Understanding the relationship between the dynamics of neural processes and the anatomical substrate of the brain is a central question in neuroscience. On the one hand, modern neuroimaging technologies, such as diffusion tensor imaging, can be used to construct structural graphs representing the architecture of white matter streamlines linking cortical and subcortical structures. On the other hand, temporal patterns of neural activity can be used to construct functional graphs representing temporal correlations between brain regions. Although some studies provide evidence that whole-brain functional connectivity is shaped by the underlying anatomy, the observed relationship between function and structure is weak, and the rules by which anatomy constrains brain dynamics remain elusive. In this article, we introduce a methodology to map the functional connectivity of a subject at rest from his or her structural graph. Using our methodology, we are able to systematically account for the role of structural walks in the formation of functional correlations. Furthermore, in our empirical evaluations, we observe that the eigenmodes of the mapped functional connectivity are associated with activity patterns of different cognitive systems.

  9. Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical Networks

    PubMed Central

    Yang, Jin; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.

    2009-01-01

    We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena. PMID:18851068
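
    A minimal Gillespie-style sketch of the idea that rules act as reaction generators, with propensities evaluated on the current state rather than on a pre-enumerated reaction list; the species counts, rate constants, and rules are hypothetical.

        import math, random

        random.seed(1)
        state = {"L": 100, "R": 50, "LR": 0}   # free ligand, free receptor, complex

        # each rule: (propensity as a function of state, state update when it fires)
        rules = [
            (lambda s: 1e-3 * s["L"] * s["R"],           # binding rule
             lambda s: s.update(L=s["L"] - 1, R=s["R"] - 1, LR=s["LR"] + 1)),
            (lambda s: 0.1 * s["LR"],                    # unbinding rule
             lambda s: s.update(L=s["L"] + 1, R=s["R"] + 1, LR=s["LR"] - 1)),
        ]

        t = 0.0
        while t < 10.0:
            props = [rate(state) for rate, _ in rules]
            total = sum(props)
            if total == 0:
                break
            t += -math.log(1.0 - random.random()) / total   # exponential waiting time
            r = random.random() * total                     # choose a rule proportionally
            for prop, (_, fire) in zip(props, rules):
                if r < prop:
                    fire(state)
                    break
                r -= prop
        print(t, state)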

  10. Synthesis of Polyferrocenylsilane Block Copolymers and their Crystallization-Driven Self-Assembly in Protic Solvents

    NASA Astrophysics Data System (ADS)

    Zhou, Hang

    Quantum walks are the quantum mechanical analogue of classical random walks. Discrete-time quantum walks have been introduced and studied mostly on the line Z or the higher-dimensional spaces Z^d, but rarely defined on graphs with fractal dimensions, because the coin operator depends on the position and the Fourier transform on the fractals is not defined. Inspired by the nature of classical walks, different quantum walks will be defined by choosing different shift and coin operators. When the coin operator is uniform, the results of classical walks will be obtained upon measurement at each step. Moreover, with measurement at each step, our results reveal more information about the classical random walks. In this dissertation, two graphs with fractal dimensions will be considered. The first is the Sierpinski gasket, a degree-4 regular graph with Hausdorff dimension d_f = ln 3/ln 2. The second is the Cantor graph, derived like the Cantor set, with Hausdorff dimension d_f = ln 2/ln 3. The definitions and amplitude functions of the quantum walks will be introduced. The main part of this dissertation is to derive a recursive formula to compute the amplitude Green function. The exit probability will be computed and compared with the classical results. When the generation of the graphs goes to infinity, the recursion of the walks will be investigated and the convergence rates will be obtained and compared with their classical counterparts.

  11. Scale-free networks as an epiphenomenon of memory

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Hamma, A.; Di Ventra, M.

    2015-01-01

    Many realistic networks are scale-free, with small characteristic path lengths, high clustering, and power-law degree distributions. They can be obtained from dynamical networks in which a preferential attachment process takes place. However, this mechanism is non-local, in the sense that it requires knowledge of the whole graph in order for the graph to be updated. Instead, if preferential attachment and realistic networks occur in physical systems, these features need to emerge from a local model. In this paper, we propose a local model and show that a possible ingredient (which is often underrated) for obtaining scale-free networks with local rules is memory. Such a model can be realised in solid-state circuits, using non-linear passive elements with memory such as memristors, and thus can be tested experimentally.

  12. Method of optimum channel switching in equipment of infocommunication network in conditions of cyber attacks to their telecommunication infrastructure.

    NASA Astrophysics Data System (ADS)

    Kochedykov, S. S.; Noev, A. N.; Dushkin, A. V.; Gubin, I. A.

    2018-05-01

    On the basis of mathematical graph theory, a method for the optimal switching of infocommunication networks under cyber attack is developed. The cornerstone of the method is the representation of the set of possible paths on the graph as a multilevel tree ordered by the rules of logical algebra. The optimization criterion is the maximum network transmission capacity, assessed via the Ford-Fulkerson theorem. The method is realized as a numerical algorithm, which can be used not only for design but also for the operational management of infocommunication networks when the functioning of their switching centers is disrupted.
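
    A brief sketch of the capacity assessment on a toy network, assuming networkx (whose max-flow routines implement Ford-Fulkerson-style augmenting-path algorithms); the topology and capacities are hypothetical.

        import networkx as nx

        G = nx.DiGraph()   # fragment of an infocommunication network
        G.add_edge("s", "a", capacity=10)
        G.add_edge("s", "b", capacity=5)
        G.add_edge("a", "b", capacity=4)
        G.add_edge("a", "t", capacity=7)
        G.add_edge("b", "t", capacity=8)

        flow_value, flow = nx.maximum_flow(G, "s", "t")
        print(flow_value)  # maximum transmission capacity between centers s and t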

  13. Expert System Approach For Generating And Evaluating Engine Design Alternatives

    NASA Astrophysics Data System (ADS)

    Shen, Stewart N. T.; Chew, Meng-Sang; Issa, Ghassan F.

    1989-03-01

    Artificial intelligence is becoming an increasingly important subject of study for computer scientists, engineering designers, and professionals in other fields. Even though AI technology is a relatively new discipline, many of its concepts have already found practical applications. Expert systems, in particular, have made significant contributions to technologies in such fields as business, medicine, engineering design, chemistry, and particle physics. This paper describes an expert system developed to aid the mechanical designer with the preliminary design of variable-stroke internal-combustion engines. The expert system accomplishes its task by generating and evaluating a large number of design alternatives represented in the form of graphs. Through the application of structural and design rules directly to the graphs, optimal and near-optimal preliminary engine design configurations are deduced.

  14. Estimation of the size of drug-like chemical space based on GDB-17 data.

    PubMed

    Polishchuk, P G; Madzhidov, T I; Varnek, A

    2013-08-01

    The goal of this paper is to estimate the number of realistic drug-like molecules which could ever be synthesized. Unlike previous studies based on exhaustive enumeration of molecular graphs or on combinatorial enumeration of preselected fragments, we used the results of constrained graph enumeration by Reymond to establish a correlation between the number of generated structures (M) and the number of heavy atoms (N): log M = 0.584 × N × log N + 0.356. The number of atoms limiting the drug-like chemical space of molecules which follow Lipinski's rules (N = 36) has been obtained from an analysis of the PubChem database. This results in M ≈ 10³³, which lies between the numbers estimated by Ertl (10²³) and by Bohacek (10⁶⁰).
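
    The quoted estimate is easy to reproduce, assuming (consistently with the quoted 10³³) that the logarithms are base 10:

        import math

        def log10_M(N):
            # correlation from the paper: log M = 0.584 * N * log N + 0.356
            return 0.584 * N * math.log10(N) + 0.356

        print(log10_M(36))   # about 33, i.e. M of roughly 10**33 molecules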

  15. Enumeration of Ring–Chain Tautomers Based on SMIRKS Rules

    PubMed Central

    2015-01-01

    A compound exhibits (prototropic) tautomerism if it can be represented by two or more structures that are related by a formal intramolecular movement of a hydrogen atom from one heavy atom position to another. When the movement of the proton is accompanied by the opening or closing of a ring, it is called ring–chain tautomerism. This type of tautomerism is well observed in carbohydrates, but it also occurs in other molecules such as warfarin. In this work, we present an approach that allows for the generation of all ring–chain tautomers of a given chemical structure. Based on Baldwin's rules, which estimate the likelihood of ring closure reactions occurring, we have defined a set of transform rules covering the majority of ring–chain tautomerism cases. The rules automatically detect substructures in a given compound that can undergo a ring–chain tautomeric transformation. Each transformation is encoded in SMIRKS line notation. All work was implemented in the chemoinformatics toolkit CACTVS. We report on the application of our ring–chain tautomerism rules to a large database of commercially available screening samples in order to identify ring–chain tautomers. PMID:25158156
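
    The mechanics of applying a SMIRKS/SMARTS-encoded transform can be sketched in RDKit (rather than the paper's CACTVS toolkit); the rule below, an amide coupling taken from the RDKit documentation, is only a stand-in for the paper's ring–chain tautomerism rules.

        from rdkit import Chem
        from rdkit.Chem import AllChem

        # the reaction SMARTS both detects the substructures and encodes the transform
        rxn = AllChem.ReactionFromSmarts('[C:1](=[O:2])-[OD1].[N!H0:3]>>[C:1](=[O:2])[N:3]')
        acid = Chem.MolFromSmiles('CC(=O)O')
        amine = Chem.MolFromSmiles('NC')
        products = rxn.RunReactants((acid, amine))
        print(Chem.MolToSmiles(products[0][0]))   # the amide product, CNC(C)=O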

  16. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    PubMed

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representations of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time, we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheelchair controller as a test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA analysis over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best basis selection technique. In the online game experiment with fourteen naive subjects, we obtained an average three-class classification rate of 63.0%. The application of a best basis selection method helps significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes.

  17. A Multilevel Gamma-Clustering Layout Algorithm for Visualization of Biological Networks

    PubMed Central

    Hruz, Tomas; Lucas, Christoph; Laule, Oliver; Zimmermann, Philip

    2013-01-01

    Visualization of large complex networks has become an indispensable part of systems biology, where organisms need to be considered as one complex system. The visualization of the corresponding network is challenging due to the size and density of edges. In many cases, the use of standard visualization algorithms can lead to high running times and poorly readable visualizations due to many edge crossings. We suggest an approach that analyzes the structure of the graph first and then generates a new graph which contains specific semantic symbols for regular substructures like dense clusters. We propose a multilevel gamma-clustering layout visualization algorithm (MLGA) which proceeds in three subsequent steps: (i) a multilevel γ-clustering is used to identify the structure of the underlying network, (ii) the network is transformed to a tree, and (iii) finally, the resulting tree which shows the network structure is drawn using a variation of a force-directed algorithm. The algorithm has a potential to visualize very large networks because it uses modern clustering heuristics which are optimized for large graphs. Moreover, most of the edges are removed from the visual representation which allows keeping the overview over complex graphs with dense subgraphs. PMID:23864855

  18. Ostracod body size trends do not follow either Bergmann's rule or Cope's rule during periods of constant temperature increase

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Seshadri, P.; Amin, V.; Heim, N. A.; Payne, J.

    2013-12-01

    Over time, organisms have adapted to changing environments by evolving to be larger or smaller. Scientists have described body-size trends using two generalized theories: Bergmann's rule states that body size is inversely related to temperature, and Cope's rule establishes an increase over time. Cope's rule has been hypothesized to be a temporal manifestation of Bergmann's rule, as the temperature of the Earth has consistently decreased over time while mean body size has increased. However, during times of constant temperature increase, Bergmann's rule and Cope's rule predict opposite effects on body size. Our goal was to clarify this relationship using two accessible proxies of historic temperature: atmospheric CO2 levels and paleo-latitude. We measured ostracod lengths throughout the Paleozoic and Mesozoic eras (using the Catalogue of Ostracoda) and utilized ostracod latitudinal information from the Paleobiology Database. By closely studying body-size trends during four time periods of constant CO2 increase across spectra of time and latitude, we were able to compare the effects of Cope's and Bergmann's rules. The correlations, p-values, and slopes of our graphs showed no clear relationship between body size and either rule in times of temperature increase, both latitudinally and temporally. Therefore, both Cope's and Bergmann's rules act on marine ostracods and neither is dominant, though our results argue more strongly against latitudinal variation in ostracod size.

  19. Modernization and multiscale databases at the U.S. geological survey

    USGS Publications Warehouse

    Morrison, J.L.

    1992-01-01

    The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. © 1992.

  20. The New Rule Paradigm Shift: Transforming At-Risk Programs by Matching Business Archetypes Strategies in the Global Market

    ERIC Educational Resources Information Center

    Stark, Paul S.

    2007-01-01

    The challenge was given to transform aviation-related programs to keep them from being eliminated. These programs were to be discontinued due to enrollment declines, costs, legislative mandates, lack of administrative support, and drastic state budget reductions. The New Rule was a paradigm shift of focus to the global market for program…

  1. Confidence of compliance: a Bayesian approach for percentile standards.

    PubMed

    McBride, G B; Ellis, J C

    2001-04-01

    Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
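
    A small sketch of the posterior calculation, assuming scipy: with a Beta(a, b) prior and x exceedances in n samples, the posterior for the true exceedance rate is Beta(a + x, b + n - x), and the confidence of compliance is its mass below the standard. The numbers are hypothetical.

        from scipy.stats import beta

        n, x = 50, 2        # samples taken, exceedances observed
        p0 = 0.05           # percentile standard: at most 5% exceedances
        a, b = 0.5, 0.5     # Jeffreys' prior, Beta(1/2, 1/2)

        confidence = beta.cdf(p0, a + x, b + n - x)
        print(confidence)   # posterior probability that the supply complies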

  2. The Fragment Network: A Chemistry Recommendation Engine Built Using a Graph Database.

    PubMed

    Hall, Richard J; Murray, Christopher W; Verdonk, Marcel L

    2017-07-27

    The hit validation stage of a fragment-based drug discovery campaign involves probing the SAR around one or more fragment hits. This often requires a search for similar compounds in a corporate collection or from commercial suppliers. The Fragment Network is a graph database that allows a user to efficiently search chemical space around a compound of interest. The result set is chemically intuitive, naturally grouped by substitution pattern and meaningfully sorted according to the number of observations of each transformation in medicinal chemistry databases. This paper describes the algorithms used to construct and search the Fragment Network and provides examples of how it may be used in a drug discovery context.

  3. Aspiration dynamics in structured population acts as if in a well-mixed one.

    PubMed

    Du, Jinming; Wu, Bin; Wang, Long

    2015-01-26

    Understanding the evolution of human interactive behaviors is important. Recent experimental results suggest that human cooperation in spatially structured populations is not enhanced as predicted in previous works when payoff-dependent imitation updating rules are used. This constraint opens up an avenue to shed light on how humans update their strategies in real life. Simulation studies show that, instead of comparison rules, self-evaluation driven updating rules may explain why spatial structure does not alter the evolutionary outcome. Though inspiring, theoretical results showing the existence of such an evolutionary updating rule have been lacking. Here we study aspiration dynamics and show that it does not alter the evolutionary outcome in various population structures. Under weak selection, by analytical approximation, we find that the favored strategy in regular graphs is invariant. Further, we show that this is because the criterion under which a strategy is favored is the same as that of a well-mixed population. By simulation, we show that this holds for random networks. Although how humans update their strategies remains an open question, our results provide a theoretical foundation for updating rules that may capture real human strategy updating.
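
    A sketch of one self-evaluation rule of the kind studied here: a player switches strategy with a probability that grows as payoff falls short of an aspiration level. The Fermi-type switching function and the parameter values are common modeling choices, not taken verbatim from the paper.

        import math

        def switch_probability(payoff, aspiration, selection=0.1):
            # probability of abandoning the current strategy (Fermi-type function)
            return 1.0 / (1.0 + math.exp(-selection * (aspiration - payoff)))

        payoff, aspiration = 2.0, 3.0
        p_switch = switch_probability(payoff, aspiration)
        print(p_switch)   # just above 1/2: the shortfall makes switching more likely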

  4. Counting surface-kernel epimorphisms from a co-compact Fuchsian group to a cyclic group with motivations from string theory and QFT

    NASA Astrophysics Data System (ADS)

    Bibak, Khodakhast; Kapron, Bruce M.; Srinivasan, Venkatesh

    2016-09-01

    Graphs embedded into surfaces have many important applications, in particular, in combinatorics, geometry, and physics. For example, ribbon graphs and their counting are of great interest in string theory and quantum field theory (QFT). Recently, Koch et al. (2013) [12] gave a refined formula for counting ribbon graphs and discussed its applications to several physics problems. An important factor in this formula is the number of surface-kernel epimorphisms from a co-compact Fuchsian group to a cyclic group. The aim of this paper is to give an explicit and practical formula for the number of such epimorphisms. As a consequence, we obtain an 'equivalent' form of Harvey's famous theorem on the cyclic groups of automorphisms of compact Riemann surfaces. Our main tool is an explicit formula for the number of solutions of restricted linear congruences, recently proved by Bibak et al. using properties of Ramanujan sums and of the finite Fourier transform of arithmetic functions.

  5. A Nonparametric Framework for Comparing Trends and Gaps across Tests

    ERIC Educational Resources Information Center

    Ho, Andrew Dean

    2009-01-01

    Problems of scale typically arise when comparing test score trends, gaps, and gap trends across different tests. To overcome some of these difficulties, test score distributions on the same score scale can be represented by nonparametric graphs or statistics that are invariant under monotone scale transformations. This article motivates and then…

  6. Kinetics of austenite-pearlite transformation in eutectoid carbon steel

    NASA Astrophysics Data System (ADS)

    Hawbolt, E. B.; Chau, B.; Brimacombe, J. K.

    1983-09-01

    The kinetics of the austenite-to-pearlite transformation have been measured under isothermal and continuous-cooling conditions on a eutectoid carbon (1080) steel using a diametral dilatometric technique. The isothermal transformation kinetics have been analyzed in terms of the Avrami equation containing the two parameters n and b; the initiation of transformation was characterized by an empirically determined transformation-start time, t_Av. The parameter n was found to be nearly constant, and neither n nor b was dependent on the cooling rate between T_A1 and the test temperature. Continuous-cooling tests were performed with cooling rates ranging from 7.5 to 108 °C per second, and the initiation of transformation was determined. Comparison of this transformation-start time for different cooling rates with the measured slow cooling of a test coupon immersed in a salt bath indicates that, particularly at lower temperatures, the transformation in the traditional T-T-T test specimen may not be isothermal. The additivity rule was found to predict accurately the time taken, relative to t_Av, to reach a given fraction of austenite transformed, even though there is some question whether the isokinetic condition was met above 660 °C. However, the additivity rule does not hold for the pretransformation or incubation period, as originally proposed by Scheil, and seriously overestimates the incubation time. Application of the additivity rule to the prediction of the transformation-finish time, based on transformation start at T_A1, also leads to overestimates, but these are less serious. The isothermal parameters n(T), b(T), and t_Av(T) have been used to predict continuous-cooling transformation kinetics which are in close agreement with measurements at four cooling rates ranging from 7.5 to 64 °C per second.
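
    A sketch of the two ingredients with a hypothetical rate parameter b(T) and cooling path: the Avrami equation X(t) = 1 - exp(-b t^n) gives the isothermal time to a fraction X, and Scheil-type additivity accumulates dt/tau along the cooling curve until the sum reaches one.

        import math

        n = 3.0                                 # Avrami exponent, roughly constant per the paper

        def b(T):                               # hypothetical rate parameter b(T)
            return 1e-3 * math.exp(-((T - 600.0) / 60.0) ** 2)

        def tau(T, X=0.5):
            # isothermal time to fraction X, inverted from X(t) = 1 - exp(-b t^n)
            return (-math.log(1.0 - X) / b(T)) ** (1.0 / n)

        # continuous cooling at 10 C/s from 720 C; additivity: sum of dt/tau(T) = 1
        T, t, dt, acc = 720.0, 0.0, 0.01, 0.0
        while acc < 1.0 and T > 300.0:
            acc += dt / tau(T)
            t += dt
            T -= 10.0 * dt
        print(t, T)   # predicted time and temperature at 50% transformed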

  7. Target-Based Maintenance of Privacy Preserving Association Rules

    ERIC Educational Resources Information Center

    Ahluwalia, Madhu V.

    2011-01-01

    In the context of association rule mining, the state-of-the-art in privacy preserving data mining provides solutions for categorical and Boolean association rules but not for quantitative association rules. This research fills this gap by describing a method based on discrete wavelet transform (DWT) to protect input data privacy while preserving…

  8. Rotating flow of a nanofluid due to an exponentially stretching surface with suction

    NASA Astrophysics Data System (ADS)

    Salleh, Siti Nur Alwani; Bachok, Norfifah; Arifin, Norihan Md

    2017-08-01

    An analysis of the rotating nanofluid flow past an exponentially stretching surface in the presence of suction is presented in this work. Three different types of nanoparticles, namely copper, titania, and alumina, are considered. The governing partial differential equations are reduced to a system of ordinary differential equations using similarity transformations in exponential form, and the resulting system is solved numerically with a shooting method in Maple. The physical effects of the rotation, suction, and nanoparticle volume fraction parameters on the flow and heat transfer are investigated and described in detail through graphs. Dual solutions are found to appear when the governing parameters lie in a certain range.
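
    A sketch of the shooting method on a classical stand-in problem (the Blasius boundary layer, f''' + f f''/2 = 0 with f(0) = f'(0) = 0 and f'(inf) = 1) rather than the paper's rotating-nanofluid system, assuming scipy:

        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def rhs(eta, y):                  # y = [f, f', f'']
            return [y[1], y[2], -0.5 * y[0] * y[2]]

        def miss(fpp0, eta_max=10.0):
            # integrate from the wall and measure the far-field residual f'(inf) - 1
            sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8)
            return sol.y[1, -1] - 1.0

        fpp0 = brentq(miss, 0.1, 1.0)     # shoot on the unknown wall value f''(0)
        print(fpp0)                       # about 0.332 for Blasius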

  9. Transformation rules and degradation of CAHs by Fentonlike oxidation in growth ring of water distribution network-A review

    NASA Astrophysics Data System (ADS)

    Zhong, D.; Ma, W. C.; Jiang, X. Q.; Yuan, Y. X.; Yuan, Y.; Wang, Z. Q.; Fang, T. T.; Huang, W. Y.

    2017-08-01

    Chlorinated hydrocarbons are widely used as organic solvents and chemical raw materials. After treatment, water polluted with trichloroethylene (TCE) or tetrachloroethylene (PCE) can meet water quality requirements, but even trace amounts of TCE/PCE remain harmful to humans and can cause cancer. A water distribution network is an extremely complicated system in which adsorption, desorption, flocculation, movement, transformation, and reduction occur, leading to changes in TCE/PCE concentrations and products. Therefore, it is important to investigate the transformation rules of TCE/PCE in water distribution networks. Moreover, growth rings, including deposits in drinking water pipes, can act as catalysts for Fenton-like reagents (H2O2). This review summarizes the transformation rules of CAHs in water distribution networks. It also evaluates the effectiveness and outcomes of CAH degradation by Fenton-like reagents catalyzed by growth rings. This review is important in solving the potential safety problems caused by TCE/PCE in water distribution networks.

  10. ER2OWL: Generating OWL Ontology from ER Diagram

    NASA Astrophysics Data System (ADS)

    Fahad, Muhammad

    Ontologies are a fundamental part of the Semantic Web. The goal of the W3C is to bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy systems have been documented with structured analysis and structured design (SASD), especially with simple or extended ER diagrams (ERDs). Such systems need upgrading to become part of the semantic web. In this paper, we present ERD-to-OWL-DL ontology transformation rules at a concrete level. These rules facilitate an easy and understandable transformation from ERD to OWL. The set of transformation rules is tested on a structured analysis and design example. The framework provides OWL ontologies as a semantic web foundation and helps software engineers upgrade the structured analysis and design artifact, the ERD, into components of the semantic web. Moreover, our transformation tool, ER2OWL, reduces the cost and time of building OWL ontologies by reusing existing entity-relationship models.
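
    A toy sketch of such mapping rules (entities to owl:Class, attributes to datatype properties, relationships to object properties), emitting Turtle for a hypothetical two-entity ER fragment; the rule set here is far smaller than the paper's.

        er = {
            "entities": {"Student": ["name", "studentId"], "Course": ["title"]},
            "relationships": [("Student", "enrolledIn", "Course")],
        }

        lines = ["@prefix : <http://example.org/onto#> .",
                 "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
                 "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> ."]
        for entity, attrs in er["entities"].items():
            lines.append(f":{entity} a owl:Class .")        # rule: entity -> class
            for attr in attrs:                              # rule: attribute -> datatype property
                lines.append(f":{attr} a owl:DatatypeProperty ; rdfs:domain :{entity} .")
        for src, rel, dst in er["relationships"]:           # rule: relationship -> object property
            lines.append(f":{rel} a owl:ObjectProperty ; rdfs:domain :{src} ; rdfs:range :{dst} .")
        print("\n".join(lines))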

  11. An Investigation and Interpretation of Selected Topics in Uncertainty Reasoning

    DTIC Science & Technology

    1989-12-01

    Characterizing secondary uncertainty as spurious evidence and including it in the inference process... it was shown that probability ratio graphs are a... in the inference process has great impact on the computational complexity of an inference process. An Investigation and Interpretation of... Systems," he outlines a five-step process that incorporates Bayesian reasoning in the development of the expert system rule base: 1. A group of

  12. Optimal response to attacks on the open science grids.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunay, M.; Leyffer, S.; Linderoth, J. T.

    2011-01-01

    Cybersecurity is a growing concern, especially in open grids, where attack propagation is easy because of prevalent collaborations among thousands of users and hundreds of institutions. The collaboration rules that typically govern large science experiments as well as social networks of scientists span across institutional security boundaries. A common concern is that the increased openness may allow malicious attackers to spread more readily around the grid. We consider how to optimally respond to attacks in open grid environments. To show how and why attacks spread more readily around the grid, we first discuss how collaborations manifest themselves in the grids and form the collaboration network graph, and how this collaboration network graph affects the security threat levels of grid participants. We present two mixed-integer program (MIP) models to find the optimal response to attacks in open grid environments, and also calculate the threat level associated with each grid participant. Given an attack scenario, our optimal response model aims to minimize the threat levels at unaffected participants while maximizing uninterrupted scientific production (continuing collaborations). By adopting some of the collaboration rules (e.g., suspending a collaboration or shutting down a site), the model finds the optimal response to subvert an attack scenario.

  13. Numerical calculation of the Fresnel transform.

    PubMed

    Kelly, Damien P

    2014-04-01

    In this paper, we address the problem of calculating Fresnel diffraction integrals using a finite number of uniformly spaced samples. General and simple sampling rules of thumb are derived that allow the user to calculate the distribution for any propagation distance. It is shown how these rules can be extended to fast-Fourier-transform-based algorithms to increase calculation efficiency. A comparison with other theoretical approaches is made.
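
    One common FFT-based evaluation propagates the field with the Fresnel transfer function; the grid, wavelength, and distance below are hypothetical and must respect the kind of sampling rules the paper derives.

        import numpy as np

        N, dx = 1024, 10e-6              # number of samples and sample pitch (m)
        wl, z = 633e-9, 0.1              # wavelength and propagation distance (m)

        x = (np.arange(N) - N // 2) * dx
        u0 = (np.abs(x) < 0.5e-3).astype(complex)    # 1 mm slit aperture

        fx = np.fft.fftfreq(N, dx)                   # spatial frequencies
        H = np.exp(-1j * np.pi * wl * z * fx**2)     # Fresnel transfer function
        uz = np.fft.ifft(np.fft.fft(u0) * H)         # propagated field
        print(np.abs(uz).max())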

  14. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation, adopting a graph cut energy minimization framework. Comprehensive coastline deformation analysis of available INSAR images helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events and may determine the level of devastation of the region. The graph cut based image registration technique detects devastation along the Tohoku coastline through changes in pixel intensity, carrying out regional segmentation of the change in the coastal boundary after the tsunami. The study applies transformation parameters to remotely sensed images, manually segmenting the images to recover the translation parameter from two images that differ by a rotation. Based on the image segmentation analysis, an area of 0.997 sq km in the Honshu region is found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. Analysis in MATLAB suggests that the proposed graph cut algorithm is robust and more accurate than other image registration methods. The method gives a realistic estimate of the recovered deformation field, in pixels, corresponding to coastline change, which may help formulate post-disaster needs assessment strategies for coastal belts damaged by strong shaking and tsunamis under disaster risk mitigation programs.

  15. An AI-based communication system for motor and speech disabled persons: design methodology and prototype testing.

    PubMed

    Sy, B K; Deller, J R

    1989-05-01

    An intelligent communication device is developed to assist the nonverbal, motor disabled in the generation of written and spoken messages. The device is centered on a knowledge base of the grammatical rules and message elements. A "belief" reasoning scheme based on both the information from external sources and the embedded knowledge is used to optimize the process of message search. The search for the message elements is conceptualized as a path search in the language graph, and a special frame architecture is used to construct and to partition the graph. Bayesian "belief" reasoning from the Dempster-Shafer theory of evidence is augmented to cope with time-varying evidence. An "information fusion" strategy is also introduced to integrate various forms of external information. Experimental testing of the prototype system is discussed.

  16. Foundations for Streaming Model Transformations by Complex Event Processing.

    PubMed

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP), and reactive (event-driven) transformation techniques. Complex event processing allows the identification of relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow, and one in the context of on-the-fly gesture recognition.

  17. A method for validating Rent's rule for technological and biological networks.

    PubMed

    Alcalde Cuesta, Fernando; González Sequeiros, Pablo; Lozano Rojo, Álvaro

    2017-07-14

    Rent's rule is an empirical power law introduced in an effort to describe and optimize the wiring complexity of computer logic graphs. It is known that brain and neuronal networks also obey Rent's rule, which is consistent with the idea that wiring costs play a fundamental role in brain evolution and development. Here we propose a method to validate this power law for a certain range of network partitions. The method is based on the bifurcation phenomenon that appears when the network is subjected to random alterations preserving its degree distribution. It has been tested on a set of VLSI circuits and real networks, including biological and technological ones. We also analyzed the effect of different types of random alterations on the Rentian scaling in order to test the influence of the degree distribution. Some network architectures are quite sensitive to these randomization procedures, with significant increases in the values of the Rent exponents.
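
    For reference, the power law itself is T = t * B**p (T terminals leaving a partition of B blocks), so a log-log fit recovers the Rent exponent from partition data; the counts below are hypothetical.

        import numpy as np

        B = np.array([4, 8, 16, 32, 64, 128])     # blocks per partition
        T = np.array([10, 16, 27, 45, 73, 120])   # terminals crossing each partition

        p, log_t = np.polyfit(np.log(B), np.log(T), 1)
        print(p, np.exp(log_t))                   # Rent exponent p and coefficient t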

  18. LSG: An External-Memory Tool to Compute String Graphs for Next-Generation Sequencing Data Assembly.

    PubMed

    Bonizzoni, Paola; Vedova, Gianluca Della; Pirola, Yuri; Previtali, Marco; Rizzi, Raffaella

    2016-03-01

    The large amount of short read data that has to be assembled in future applications, such as in metagenomics or cancer genomics, strongly motivates the investigation of disk-based approaches to index next-generation sequencing (NGS) data. Positive results in this direction stimulate the investigation of efficient external memory algorithms for de novo assembly from NGS data. Our article is also motivated by the open problem of designing a space-efficient algorithm to compute a string graph using an indexing procedure based on the Burrows-Wheeler transform (BWT). We have developed a disk-based algorithm for computing string graphs in external memory: the light string graph (LSG). LSG relies on a new representation of the FM-index that is exploited to keep the main memory requirement independent of the size of the data set. Moreover, we have developed a pipeline for genome assembly from NGS data that integrates LSG with the assembly step of SGA (Simpson and Durbin, 2012), a state-of-the-art string graph-based assembler, and uses BEETL for indexing the input data. LSG is open source software and is available online. We have analyzed our implementation on an 875-million-read whole-genome dataset, on which LSG built the string graph using only 1 GB of main memory (reducing memory occupation by a factor of 50 with respect to SGA), while requiring slightly more than twice the time of SGA. The analysis of the entire pipeline shows an important decrease in memory usage with only a moderate increase in running time.

  19. Eigenvector synchronization, graph rigidity and the molecule problem

    PubMed Central

    Cucuringu, Mihai; Singer, Amit; Cowburn, David

    2013-01-01

    The graph realization problem has received a great deal of attention in recent years, due to its importance in applications such as wireless sensor networks and structural biology. In this paper, we extend the previous work and propose the 3D-As-Synchronized-As-Possible (3D-ASAP) algorithm, for the graph realization problem in ℝ3, given a sparse and noisy set of distance measurements. 3D-ASAP is a divide and conquer, non-incremental and non-iterative algorithm, which integrates local distance information into a global structure determination. Our approach starts with identifying, for every node, a subgraph of its 1-hop neighborhood graph, which can be accurately embedded in its own coordinate system. In the noise-free case, the computed coordinates of the sensors in each patch must agree with their global positioning up to some unknown rigid motion, that is, up to translation, rotation and possibly reflection. In other words, to every patch, there corresponds an element of the Euclidean group, Euc(3), of rigid transformations in ℝ3, and the goal was to estimate the group elements that will properly align all the patches in a globally consistent way. Furthermore, 3D-ASAP successfully incorporates information specific to the molecule problem in structural biology, in particular information on known substructures and their orientation. In addition, we also propose 3D-spectral-partitioning (SP)-ASAP, a faster version of 3D-ASAP, which uses a spectral partitioning algorithm as a pre-processing step for dividing the initial graph into smaller subgraphs. Our extensive numerical simulations show that 3D-ASAP and 3D-SP-ASAP are very robust to high levels of noise in the measured distances and to sparse connectivity in the measurement graph, and compare favorably with similar state-of-the-art localization algorithms. PMID:24432187

  20. A Functional Analytic Approach To Computer-Interactive Mathematics

    PubMed Central

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed. PMID:15898471

  1. A functional analytic approach to computer-interactive mathematics.

    PubMed

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M; Ninness, Sharon K

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed.

  2. Topological visual mapping in robotics.

    PubMed

    Romero, Anna; Cazorla, Miguel

    2012-08-01

    A key problem in robotics is the construction of a map of the robot's environment. This map can be used in different tasks, such as localization, recognition, and obstacle avoidance. The simultaneous localization and mapping (SLAM) problem has attracted a lot of interest in the robotics community. This paper presents a new method for visual mapping using topological instead of metric information. For that purpose, we propose prior image segmentation into regions in order to group the extracted invariant features into graphs, so that each graph defines a single region of the image. Although other methods have been proposed for visual SLAM, our method is complete in the sense that it covers the whole process: it presents a new method for image matching; it defines a way to build the topological map; and it defines a matching criterion for loop closing. The matching process takes into account visual features and their structure using the graph transformation matching (GTM) algorithm, which allows us to perform the matching and to remove outliers. Then, using this image comparison method, we propose an algorithm for constructing topological maps. In the experimentation phase, we test the robustness of the method and its ability to construct topological maps. We have also introduced a new hysteresis behavior in order to solve some problems found while building the graph.

  3. Graphs to estimate an individualized risk of breast cancer.

    PubMed

    Benichou, J; Gail, M H; Mulvihill, J J

    1996-01-01

    Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a women's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and the proper use of them.

  4. Abstract rule learning in 11- and 14-month-old infants.

    PubMed

    Koulaguina, Elena; Shi, Rushen

    2013-02-01

    This study tests the hypothesis that distributional information can guide infants in the generalization of word order movement rules at the initial stage of language acquisition. Participants were 11- and 14-month-old infants. Stimuli were sentences in Russian, a language that was unknown to our infants. During training, the word order of each sentence was transformed following a consistent pattern (e.g., ABC-BAC). During the test phase, infants heard novel sentences that respected the trained rule and ones that violated it (i.e., a different transformation such as ABC-ACB). Stimulus words had highly variable phonological and morphological shapes. The available cue was the positional information of words and their non-adjacent relations across sentences. We found that 14-month-olds, but not 11-month-olds, showed evidence of abstract rule generalization to novel instances. The implications of this finding for early syntactic acquisition are discussed.

  5. SIGNATURES OF ILLICIT NUCLEAR PROCUREMENT NETWORKS: AN OVERVIEW OF PRELIMINARY APPROACHES AND RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webster, Jennifer B.; Erikson, Luke E.; Gastelum, Zoe N.

    2014-05-12

    The illicit trafficking of strategic nuclear commodities (defined here as the goods needed for a covert nuclear program, excluding special nuclear materials) poses a significant challenge to the international nuclear nonproliferation community. Export control regulations, both domestic and international, seek to inhibit the spread of strategic nuclear commodities by restricting their sale to parties that may use them for nefarious purposes. However, export controls alone are not sufficient for preventing the illicit transfer of strategic nuclear goods. There are two major pitfalls to relying solely on export control regulations to deter the proliferation of strategic goods. First, export control enforcement today relies heavily on the honesty and willingness of participants to adhere to the legal framework already in place. Second, current practices focus on the evaluation of single records, which allows the necessary goods to be purchased separately and hidden within the thousands of legitimate commerce transactions that occur each day, disregarding strategic information that spans several purchases. Our research presents two preliminary data-centric approaches for investigating procurement networks of strategic nuclear commodities. Pacific Northwest National Laboratory (PNNL) has been putting significant effort into nonproliferation activities as an institution, both in terms of the classical nuclear-material-focused approach and in the examination of other strategic goods necessary to implement a nuclear program. In particular, the PNNL Signature Discovery Initiative (SDI) has codified several scientific methodologies for the detection, characterization, and prediction of signatures that are indicative of a phenomenon of interest. The methodologies and tools developed under SDI have already been applied successfully to problems in bio-forensics, cyber security and power grid balancing, and the nonproliferation of strategic goods has now been made a challenge problem for testing these methodologies and tools. As a first step towards the detection and characterization of illicit procurement networks, our research examines procurement networks defined as systems of entities (people or companies) that enter into transactions of specific items with one another. Once we have defined such networks, we are interested in answering questions about their behavior and characterization: first, “Can we detect networks within large, noisy datasets?” and second, “To what extent can we compare multiple networks and identify their similarities?” As procurement networks can be naturally viewed as graphs, we have employed several graph analytic tools to aid in these tasks. In particular, Graphscape, an SDI tool, uses a novel method to approximate edit distance, a graph distance measure based on the number of changes needed to transform one graph into another, in order to measure how similar two given graphs are to each other. Given a set of graphs where vertices represent companies and edges represent a shipment from company A to company B, we can calculate an all-for-all comparison of graphs. In this way, we are able to determine which graphs are most similar, and which require more changes to transform one into the other. The set of graphs to be compared can be further specialized to provide more insight, e.g., using different time periods to explore events in a company's life cycle.
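
    The all-for-all comparison is easy to prototype on toy data. The sketch below uses NetworkX's exact graph edit distance in place of Graphscape's approximation (which exists precisely because the exact computation does not scale); the graph names and edges are invented:

```python
import itertools
import networkx as nx

# Toy shipment networks: vertices are companies, edges are shipments.
graphs = {
    "net1": nx.Graph([("A", "B"), ("B", "C"), ("C", "D")]),
    "net2": nx.Graph([("A", "B"), ("B", "C"), ("B", "D")]),
    "net3": nx.Graph([("A", "B")]),
}

# All-for-all comparison: a smaller edit distance (fewer node/edge
# insertions, deletions and substitutions) means more similar networks.
for (name1, g1), (name2, g2) in itertools.combinations(graphs.items(), 2):
    d = nx.graph_edit_distance(g1, g2)   # exact and exponential: toy sizes only
    print(f"{name1} vs {name2}: edit distance = {d}")
```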

  6. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming.

    PubMed

    Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina

    2012-05-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.
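
    The core idea admits a compact sketch, under the assumption that a seed point inside the feature is available: resample the image on a polar grid around the seed so the closed contour becomes a near-horizontal layer, then extract that layer with a column-by-column dynamic program. This is an illustrative simplification, not the published GTDP pipeline:

```python
import numpy as np

def unwrap_polar(img, center, n_theta=180, n_r=60):
    """Resample the image on a polar grid so that a closed contour
    around `center` becomes an approximately horizontal layer."""
    h, w = img.shape
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(1, min(h, w) / 2 - 1, n_r)
    ys = np.clip((center[0] + rs[:, None] * np.sin(thetas)).astype(int), 0, h - 1)
    xs = np.clip((center[1] + rs[:, None] * np.cos(thetas)).astype(int), 0, w - 1)
    return img[ys, xs]                         # shape (n_r, n_theta)

def min_cost_layer(cost):
    """Cheapest left-to-right path through the cost image, one row per
    column, moving at most one row between neighbouring columns."""
    n_r, n_t = cost.shape
    acc = cost.astype(float).copy()
    for j in range(1, n_t):
        for i in range(n_r):
            lo, hi = max(i - 1, 0), min(i + 2, n_r)
            acc[i, j] += acc[lo:hi, j - 1].min()
    path = [int(acc[:, -1].argmin())]
    for j in range(n_t - 1, 0, -1):
        lo, hi = max(path[-1] - 1, 0), min(path[-1] + 2, n_r)
        path.append(lo + int(acc[lo:hi, j - 1].argmin()))
    return path[::-1]      # one radius index per angle, i.e. the contour
```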

  7. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    PubMed Central

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique. PMID:22567602

  8. Progress Report for March, 1964; Cambridge Conference on School Mathematics; Feasibility Study No. 33.

    ERIC Educational Resources Information Center

    Lomon, Earle

    This report gives information regarding the mathematical classroom activities for the first six grades at Estabrook School from March 1964 to June 1965. A brief progress report is given regarding the instruction provided to teach such concepts as addition and subtraction, symmetry transformations of squares, open sentences and graphing,…

  9. Local Subspace Classifier with Transform-Invariance for Image Classification

    NASA Astrophysics Data System (ADS)

    Hotta, Seiji

    A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, transform-invariance can be handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used for other types of images, such as color images. Hence, kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified by experiments on handwritten digit and color image classification.
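
    The tangent distance ingredient reduces to a small least-squares problem: the distance from a test vector x to the affine subspace spanned by a prototype y and its tangent vectors. A minimal one-sided version (illustrative; the paper's variants and the kernelized KLSC differ):

```python
import numpy as np

def tangent_distance(x, y, tangents):
    """One-sided tangent distance: distance from x to the affine
    subspace {y + T a}, where the columns of T are tangent vectors of
    small transformations of the prototype y."""
    T = np.column_stack(tangents)
    a, *_ = np.linalg.lstsq(T, x - y, rcond=None)
    return np.linalg.norm(x - (y + T @ a))

# Toy example: the tangent of a one-pixel horizontal shift can be
# approximated by a finite difference of the flattened image.
img = np.random.rand(8, 8)
shift_tangent = (np.roll(img, 1, axis=1) - img).ravel()
x = np.roll(img, 1, axis=1).ravel()            # a shifted version of img
print(tangent_distance(x, img.ravel(), [shift_tangent]))  # ~0: shift explained
print(np.linalg.norm(x - img.ravel()))                    # Euclidean: larger
```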

  10. Determination of accuracy of winding deformation method using kNN based classifier used for 3 MVA transformer

    NASA Astrophysics Data System (ADS)

    Ahmed, Mustafa Wasir; Baishya, Manash Jyoti; Sharma, Sasanka Sekhor; Hazarika, Manash

    2018-04-01

    This paper presents a system for detecting faults in the power transformer winding, core and on-load tap changer (OLTC). The accuracy of winding deformation detection is determined using a kNN-based classifier. Winding deformation in power transformers can be measured using sweep frequency response analysis (SFRA), which can enhance diagnostic accuracy to a large degree. The results suggest that minor deformation faults can be detected in the frequency range of 1 mHz to 2 MHz. The values of the RCL parameters change when faults occur, and hence the frequency response of the winding changes accordingly. The SFRA data of the tested transformer are compared with a reference trace; the difference between the two graphs indicates faults in the transformer. Deviations between 1 mHz and 1 kHz indicate winding deformation, between 1 kHz and 100 kHz core deformation, and between 100 kHz and 2 MHz OLTC deformation.
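
    A minimal sketch of the classification step, assuming the per-band deviation between a measured SFRA trace and the reference trace is used as the feature vector (the feature design and numbers below are hypothetical):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: deviation from the reference trace in the
# low, mid and high frequency bands described in the paper.
X_train = np.array([
    [0.02, 0.01, 0.01],   # healthy
    [0.35, 0.04, 0.02],   # winding deformation (low band)
    [0.03, 0.40, 0.05],   # core deformation (mid band)
    [0.02, 0.05, 0.45],   # OLTC fault (high band)
])
y_train = ["healthy", "winding", "core", "oltc"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(clf.predict([[0.30, 0.06, 0.03]]))      # -> ['winding']
```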

  11. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain is able to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  12. Implementation of artificial intelligence rules in a data base management system

    NASA Technical Reports Server (NTRS)

    Feyock, S.

    1986-01-01

    The intelligent front end prototype was transformed into a RIM-integrated system, and a RIM-based expert system was written to demonstrate the developed capability. The use of rules to make the intelligent front end extensible, including the concept of demons and rule-manipulation rules, was investigated. Innovative approaches such as syntax programming were also to be considered.

  13. Integrating concepts and skills: Slope and kinematics graphs

    NASA Astrophysics Data System (ADS)

    Tonelli, Edward P., Jr.

    The concept of force is a foundational idea in physics. To predict the results of applying forces to objects, a student must be able to interpret data representing changes in distance, time, speed, and acceleration. Comprehension of kinematics concepts requires students to interpret motion graphs, where rates of change are represented as slopes of line segments. Studies have shown that a majority of students who show proficiency with mathematical concepts fail to interpret motion graphs accurately. The primary aim of this study was to examine how students apply their knowledge of slope when interpreting kinematics graphs. To answer the research questions, a mixed-methods research design, which included a survey and interviews, was adopted. Ninety-eight (N = 98) high school students completed surveys, which were quantitatively analyzed along with qualitative information collected from interviews of students (N = 15) and teachers (N = 2). The study showed that students who recalled methods for calculating slopes and speeds calculated slopes accurately but speeds inaccurately. When comparing slopes and speeds, most students resorted to calculating instead of visual inspection. Most students recalled and applied memorized rules. Students who calculated slopes and speeds inaccurately failed to recall methods of calculating them, but when comparing speeds, these students connected the concepts of distance and time to the line segments and the rates of change they represented. This study's findings will likely help mathematics and science educators better assist their students in applying their knowledge of slope to kinematics concepts.

  14. X-1A in flight with flight data superimposed

    NASA Image and Video Library

    1953-12-12

    This photo of the X-1A includes graphs of the flight data from Maj. Charles E. Yeager's Mach 2.44 flight on December 12, 1953. (This was only a few days short of the 50th anniversary of the Wright brothers' first powered flight.) After reaching Mach 2.44, then the highest speed ever reached by a piloted aircraft, the X-1A tumbled completely out of control. The motions were so violent that Yeager cracked the plastic canopy with his helmet. He finally recovered from an inverted spin and landed on Rogers Dry Lakebed. Among the data shown are Mach number and altitude (the two top graphs). The speed and altitude changes due to the tumble are visible as jagged lines. The third graph from the bottom shows the G-forces on the airplane. During the tumble, these twice reached 8 Gs, or 8 times the normal pull of gravity at sea level. (At these G-forces, a 200-pound human would, in effect, weigh 1,600 pounds if a scale were placed under him in the direction of the force vector.) Producing these graphs was a slow, difficult process. The raw data from on-board instrumentation was recorded on oscillograph film. Human computers then reduced the data and recorded it on data sheets, correcting for such factors as temperature and instrument errors. They used adding machines or slide rules for their calculations, pocket calculators being 20 years in the future.

  15. Model-based occluded object recognition using Petri nets

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Hura, Gurdeep S.

    1998-09-01

    This paper discusses the use of Petri nets to model the process of object matching between an image and a model under different 2D geometric transformations. Such matching finds applications in sensor-based robot control, flexible manufacturing systems, industrial inspection, and related areas. A description approach for object structure is presented based on a topological structure relation called the Point-Line Relation Structure (PLRS). It is shown how Petri nets can be used to model the matching process, and an optimal or near-optimal matching can be obtained by tracking the reachability graph of the net. Experimental results show that objects can be successfully identified and located under 2D transformations such as translations, rotations, scale changes, and distortions due to partial occlusion.

  16. The TSP-approach to approximate solving the m-Cycles Cover Problem

    NASA Astrophysics Data System (ADS)

    Gimadi, Edward Kh.; Rykov, Ivan; Tsidulko, Oxana

    2016-10-01

    In the m-Cycles Cover problem, it is required to find a collection of m vertex-disjoint cycles that covers all vertices of the graph such that the total weight of the edges in the cover is minimum (or maximum). The problem is a generalization of the Traveling Salesman Problem (TSP) and is strongly NP-hard. We discuss a TSP-approach that gives polynomial approximate solutions for this problem: it transforms an approximation TSP algorithm into an approximation m-CCP algorithm. In this paper, we present a number of successful transformations with proven performance guarantees for the obtained solutions.

  17. On the Nature of Syntactic Irregularity.

    ERIC Educational Resources Information Center

    Lakoff, George

    This dissertation is an attempt to characterize the notion "exception to a rule of grammar" within the context of Chomsky's conception of grammar as given in "Aspects of the Theory of Syntax." This notion depends on a prior notion of "rule government"--in each phrase marker on which a transformational rule may…

  18. Transformers and the Electric Utility System

    ERIC Educational Resources Information Center

    Roman, Harry T.

    2005-01-01

    For electric energy to get from the generating station to a home, it must pass through a transformer, a device that can change voltage levels easily. This article describes how transformers work, covering the following topics: (1) the magnetism-electricity link; (2) transformer basics; (3) the energy seesaw; (4) the turns ratio rule; and (5)…
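
    Item (4), the turns ratio rule, is a one-line calculation for an ideal transformer: the secondary voltage scales with the ratio of secondary to primary turns.

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal-transformer turns ratio rule: Vs / Vp = Ns / Np."""
    return v_primary * n_secondary / n_primary

# Step-down example: 2400 V across a 100-turn primary with a
# 10-turn secondary yields 240 V.
print(secondary_voltage(2400.0, 100, 10))
```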

  19. 76 FR 45471 - Energy Efficiency Standards for Distribution Transformers; Notice of Intent To Negotiate Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-29

    ... EERE-2010-BT-STD-0048] RIN 1904-AC04 Energy Efficiency Standards for Distribution Transformers; Notice...-type distribution transformers. The purpose of the subcommittee will be to discuss and, if possible, reach consensus on a proposed rule for the energy efficiency of distribution transformers, as authorized...

  20. Entraining the topology and the dynamics of a network of phase oscillators

    NASA Astrophysics Data System (ADS)

    Sendiña-Nadal, I.; Leyva, I.; Buldú, J. M.; Almendral, J. A.; Boccaletti, S.

    2009-04-01

    We show that the topology and dynamics of a network of unsynchronized Kuramoto oscillators can be simultaneously controlled by means of a forcing mechanism which yields a phase locking of the oscillators to that of an external pacemaker in connection with the reshaping of the network's degree distribution. The entrainment mechanism is based on the addition, at regular time intervals, of unidirectional links from oscillators that follow the dynamics of a pacemaker to oscillators in the pristine graph whose phases hold a prescribed phase relationship. Such a dynamically based rule in the attachment process leads to the emergence of a power-law shape in the final degree distribution of the graph whenever the network is entrained to the dynamics of the pacemaker. We show that the emergence of a scale-free distribution in connection with the success of the entrainment process is a robust feature, characterizing different initial network configurations and parameters.
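
    A minimal numerical sketch of the setup: Kuramoto oscillators on a random graph, with unidirectional pacemaker links added at regular intervals. For brevity the attachment target here is random, a crude stand-in for the paper's phase-relationship rule; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k_c, dt, steps = 50, 1.0, 0.01, 20000
omega = rng.normal(0.0, 1.0, n)           # natural frequencies
omega_pm = 2.0                            # pacemaker frequency
theta = rng.uniform(0, 2 * np.pi, n)
adj = rng.random((n, n)) < 0.1            # pristine random graph
pm_link = np.zeros(n, dtype=bool)         # unidirectional pacemaker -> node

theta_pm = 0.0
for t in range(steps):
    if t % 1000 == 0:                     # periodically attach a new link
        pm_link[rng.integers(n)] = True
    # dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i) + forcing
    coupling = (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    forcing = pm_link * np.sin(theta_pm - theta)
    theta += dt * (omega + k_c * coupling + k_c * forcing)
    theta_pm += dt * omega_pm

r = abs(np.exp(1j * theta).mean())        # order parameter, -> 1 if entrained
print(f"phase coherence r = {r:.2f}")
```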

  1. Gamma ray interaction studies of organic nonlinear optical materials in the energy range 122 keV-1330 keV

    NASA Astrophysics Data System (ADS)

    Awasarmol, V. V.; Gaikwad, D. K.; Raut, S. D.; Pawar, P. P.

    The mass attenuation coefficients (μm) of organic nonlinear optical materials, measured at photon energies of 122-1330 keV, were investigated on the basis of the mixture rule and compared with values obtained from the WinXCOM program. Good agreement is observed between the theoretical and experimental values for the samples. All samples were irradiated with six radioactive sources (57Co, 133Ba, 22Na, 137Cs, 54Mn and 60Co) using a transmission arrangement. Effective atomic and electron numbers or electron densities (Zeff and Neff), the molar extinction coefficient (ε), the mass energy absorption coefficient (μen/ρ) and the effective atomic energy absorption cross section (σa,en) were determined experimentally and theoretically using the obtained μm values for the investigated samples, and graphs were plotted. The graphs show that these quantities decrease with increasing photon energy for all samples.
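
    The mixture rule itself is a weight-fraction-weighted sum over the constituent elements. A small sketch (the elemental coefficients below are placeholders; in practice they are taken from tabulations such as WinXCOM at the photon energy of interest):

```python
def mixture_mu_m(weight_fractions, elemental_mu_m):
    """Mixture rule: mu_m(compound) = sum_i w_i * mu_m(element_i)."""
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-3
    return sum(w * elemental_mu_m[el] for el, w in weight_fractions.items())

# Placeholder elemental coefficients (cm^2/g) at some fixed energy:
mu_el = {"H": 0.1263, "C": 0.0625, "N": 0.0626, "O": 0.0629}
# Weight fractions of urea, CH4N2O (molar mass ~60.06 g/mol):
urea = {"H": 0.0671, "C": 0.2000, "N": 0.4664, "O": 0.2664}
print(mixture_mu_m(urea, mu_el))
```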

  2. DiversePathsJ: diverse shortest paths for bioimage analysis.

    PubMed

    Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael

    2018-02-01

    We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not linked to a specific biological problem and can be applied to a large variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software makes it possible to obtain a collection of diverse shortest paths under user-defined constraints through a convenient and user-friendly interface. It can be used alone or be integrated into larger image analysis pipelines. http://bigwww.epfl.ch/algorithms/diversepathsj. michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  3. Surface Hold Advisor Using Critical Sections

    NASA Technical Reports Server (NTRS)

    Law, Caleb Hoi Kei (Inventor); Hsiao, Thomas Kun-Lung (Inventor); Mittler, Nathan C. (Inventor); Couluris, George J. (Inventor)

    2013-01-01

    The Surface Hold Advisor Using Critical Sections is a system and method for providing hold advisories to surface controllers to prevent gridlock and resolve crossing and merging conflicts among vehicles traversing a vertex-edge graph representing a surface traffic network on an airport surface. The Advisor performs pair-wise comparisons of current position and projected path of each vehicle with other surface vehicles to detect conflicts, determine critical sections, and provide hold advisories to traffic controllers recommending vehicles stop at entry points to protected zones around identified critical sections. A critical section defines a segment of the vertex-edge graph where vehicles are in crossing or merging or opposite direction gridlock contention. The Advisor detects critical sections without reference to scheduled, projected or required times along assigned vehicle paths, and generates hold advisories to prevent conflicts without requiring network path direction-of-movement rules and without requiring rerouting, rescheduling or other network optimization solutions.

  4. Constructing Temporally Extended Actions through Incremental Community Detection

    PubMed Central

    Li, Ge

    2018-01-01

    Hierarchical reinforcement learning works on temporally extended actions or skills to facilitate learning. How to automatically form such abstraction is challenging, and many efforts tackle this issue in the options framework. While various approaches exist to construct options from different perspectives, few of them concentrate on options' adaptability during learning. This paper presents an algorithm to create options and enhance their quality online. Both aspects operate on detected communities of the learning environment's state transition graph. We first construct options from initial samples as the basis of online learning. Then a rule-based community revision algorithm is proposed to update graph partitions, based on which existing options can be continuously tuned. Experimental results in two problems indicate that options from initial samples may perform poorly in more complex environments, and our presented strategy can effectively improve options and get better results compared with flat reinforcement learning. PMID:29849543

  5. Trading Rules on Stock Markets Using Genetic Network Programming with Reinforcement Learning and Importance Index

    NASA Astrophysics Data System (ADS)

    Mabu, Shingo; Hirasawa, Kotaro; Furuzuki, Takayuki

    Genetic Network Programming (GNP) is an evolutionary computation method which represents its solutions using graph structures. Since GNP can create quite compact programs and has an implicit memory function, it has been shown that GNP works well especially in dynamic environments. In addition, a study on creating trading rules on stock markets using GNP with an Importance Index (GNP-IMX) has been carried out; IMX is a new element which serves as a criterion for decision making. In this paper, we combine GNP-IMX with Actor-Critic (GNP-IMX&AC) to create trading rules on stock markets. Evolution-based methods can only revise their programs between generations, after fitness values have been calculated; reinforcement learning, by contrast, can change programs within a generation, so trading rules can be created more efficiently. In the simulation, the proposed method is trained using the stock prices of 10 brands in 2002 and 2003. The generalization ability is then tested using the stock prices in 2004. The simulation results show that the proposed method can obtain larger profits than GNP-IMX without AC and Buy&Hold.

  6. Combining human and machine intelligence to derive agents' behavioral rules for groundwater irrigation

    NASA Astrophysics Data System (ADS)

    Hu, Yao; Quinn, Christopher J.; Cai, Ximing; Garfinkle, Noah W.

    2017-11-01

    For agent-based modeling, the major challenges in deriving agents' behavioral rules arise from agents' bounded rationality and data scarcity. This study proposes a "gray box" approach to address these challenges by incorporating expert domain knowledge (i.e., human intelligence) with machine learning techniques (i.e., machine intelligence). Specifically, we propose using directed information graphs (DIG), boosted regression trees (BRT), and domain knowledge to infer causal factors and identify behavioral rules from data. A case study is conducted to investigate farmers' pumping behavior in the Midwest, U.S.A. Results show that the four factors identified by the DIG algorithm (corn price, underlying groundwater level, monthly mean temperature, and precipitation) have the main causal influences on agents' decisions about monthly groundwater irrigation depth. The agent-based model is then developed based on the behavioral rules represented by three DIGs and modeled by BRTs, and coupled with a physically-based groundwater model to investigate the impacts of agents' pumping behavior on the underlying groundwater system in the context of coupled human and environmental systems.
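
    A minimal sketch of the BRT modelling step on synthetic data, with the four DIG-identified factors as inputs (the data-generating rule below is invented purely for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 500
corn_price = rng.uniform(3, 8, n)        # $/bushel
gw_level = rng.uniform(10, 60, n)        # depth to groundwater, m
temperature = rng.uniform(10, 35, n)     # monthly mean, deg C
precip = rng.uniform(0, 200, n)          # monthly total, mm
X = np.column_stack([corn_price, gw_level, temperature, precip])

# Invented behavioural rule plus noise: pump more when it is hot and
# dry and prices are high, less when the water table is deep.
y = (2.0 * corn_price + 0.5 * temperature - 0.02 * precip
     - 0.1 * gw_level + rng.normal(0, 1, n))

brt = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X, y)
print(brt.predict([[6.0, 30.0, 30.0, 20.0]]))   # predicted pumping depth
```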

  7. Quality Leadership and Quality Control

    PubMed Central

    Badrick, Tony

    2003-01-01

    Different quality control rules detect different analytical errors with varying levels of efficiency, depending on the type of error present, its prevalence and the number of observations. The efficiency of a rule can be gauged by inspection of a power function graph. Control rules are only part of a process and not an end in themselves; just as important are the trouble-shooting systems employed when a failure occurs. 'Average of patient normals' may develop as a useful adjunct to conventional serum-based quality control programmes. Acceptable error can be based on various criteria; biological variation is probably the most sensible. Once determined, acceptable error can be used to set limits in quality control rule systems. A key aspect of an organisation is leadership, which links the various components of the quality system. Leadership is difficult to characterise, but its key aspects include trust, setting an example, developing staff and, critically, setting the vision for the organisation. Organisations also have internal characteristics such as the degree of formalisation, centralisation, and complexity. Medical organisations can have internal tensions because of the dichotomy between the bureaucratic and the shadow medical structures.
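
    As a concrete instance of such a rule (shown here purely for illustration; the article discusses rule systems generally), the classical 1-3s control rule rejects a run when any control observation falls outside three standard deviations. Tighter limits catch smaller errors but raise more false rejections, which is exactly the trade-off a power function graph displays.

```python
def rule_1_3s(observations, mean, sd):
    """1-3s control rule: reject the run if any control observation
    falls outside mean +/- 3 standard deviations."""
    return any(abs(x - mean) > 3 * sd for x in observations)

# Control serum with an established mean of 5.0 and SD of 0.2:
print(rule_1_3s([5.1, 4.9, 5.2], mean=5.0, sd=0.2))  # False: in control
print(rule_1_3s([5.1, 5.7, 5.0], mean=5.0, sd=0.2))  # True: 5.7 breaches 3 SD
```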

  8. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    PubMed

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Language (DDL) and Data Manipulation Language (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network-represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions, etc.; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: a human disease association network, a drug-target interaction network and a protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discovery in future network and systems biology studies.

  9. Poor textural image tie point matching via graph theory

    NASA Astrophysics Data System (ADS)

    Yuan, Xiuxiao; Chen, Shiyu; Yuan, Wei; Cai, Yang

    2017-07-01

    Feature matching aims to find corresponding points to serve as tie points between images. Robust matching is still a challenging task when input images are characterized by low contrast or contain repetitive patterns, occlusions, or homogeneous textures. In this paper, a novel feature matching algorithm based on graph theory is proposed. This algorithm integrates both geometric and radiometric constraints into an edge-weighted (EW) affinity tensor. Tie points are then obtained by high-order graph matching. Four pairs of poor textural images covering forests, deserts, bare lands, and urban areas are tested. For comparison, three state-of-the-art matching techniques, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), and features from accelerated segment test (FAST), are also used. The experimental results show that the matching recall obtained by SIFT, SURF, and FAST varies from 0 to 35% in different types of poor textures. However, through the integration of both geometry and radiometry and the EW strategy, the recall obtained by the proposed algorithm is better than 50% in all four image pairs. The better matching recall improves the number of correct matches, dispersion, and positional accuracy.

  10. Left ventricle segmentation via graph cut distribution matching.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron

    2009-01-01

    We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in near real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and the computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.
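
    The photometric prior rests on a distribution similarity; the Bhattacharyya coefficient at its core is a one-line computation (a sketch of the measure only, not of the paper's first-order graph cut approximation):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms: 1.0 for
    identical distributions, 0.0 for disjoint support."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum(np.sqrt(p * q)))

# Intensity histogram of a candidate region vs. the model learned from
# the first frame; a higher coefficient means a better photometric match.
model = [2, 10, 30, 40, 15, 3]
candidate = [1, 12, 28, 42, 14, 3]
print(bhattacharyya(candidate, model))    # close to 1.0
```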

  11. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
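
    For intuition, a graph is a disjoint union of cliques exactly when it contains no induced path on three vertices (a P3). The classical O(3^k) branching on the solution size k, sketched below, is the baseline that the alternative parameterizations discussed above aim to improve upon:

```python
import itertools
import networkx as nx

def conflict_triple(g):
    """Return a P3 (u-v-w with edges uv, vw but not uw), or None.
    A graph is a disjoint union of cliques iff no P3 exists."""
    for v in g:
        for u, w in itertools.combinations(g[v], 2):
            if not g.has_edge(u, w):
                return u, v, w
    return None

def cluster_editing(g, k):
    """Decide whether <= k edge edits turn g into a cluster graph,
    by 3-way branching on a conflict triple."""
    triple = conflict_triple(g)
    if triple is None:
        return True
    if k == 0:
        return False
    u, v, w = triple
    branches = [(g.remove_edge, g.add_edge, u, v),   # delete uv
                (g.remove_edge, g.add_edge, v, w),   # delete vw
                (g.add_edge, g.remove_edge, u, w)]   # insert uw
    for edit, undo, a, b in branches:
        edit(a, b)
        if cluster_editing(g, k - 1):
            return True
        undo(a, b)                                   # backtrack
    return False

print(cluster_editing(nx.path_graph(4), 1))   # True: delete the middle edge
```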

  12. Investigation of Learning Behaviors and Achievement of Vocational High School Students Using an Ubiquitous Physics Tablet PC App

    NASA Astrophysics Data System (ADS)

    Purba, Siska Wati Dewi; Hwang, Wu-Yuin

    2017-06-01

    In this study, we designed and developed an app called Ubiquitous-Physics (U-Physics) for mobile devices such as tablet PCs or smartphones to help students learn the principles behind the simple pendulum in physics. The unique characteristic of U-Physics is the use of sensors on mobile devices to collect acceleration and velocity data during pendulum swings. The collected data are transformed to facilitate students' understanding of the pendulum's period. U-Physics helped students understand the effects of pendulum mass, length, and angle on its period. In addition, U-Physics was equipped with an annotation function, such as textual annotation, to help students interpret and understand the concepts and phenomena of the simple pendulum. U-Physics also generated graphs automatically to show the period while the pendulum was swinging. Results showed a significant positive correlation between interpreting graphs and applying formulas, indicating that the ability to interpret graphs plays an important role in scientific learning; we therefore strongly recommend that physics teachers use graphs to enrich students' understanding. We also found a negative correlation between pair coherence and interpreting graphs. It may be that most of the participants (vocational high school students) had limited skill or confidence in physics problem solving, so they often sought help from teachers or their high-achieving peers. In addition, the findings indicated that U-Physics can enhance students' achievement over a three-week period. We hope that this app can be used globally to learn physics in the future.

  13. Graph Theoretic Foundations of Multibody Dynamics Part I: Structural Properties

    PubMed Central

    Jain, Abhinandan

    2011-01-01

    This is the first part of two papers that use concepts from graph theory to obtain a deeper understanding of the mathematical foundations of multibody dynamics. The key contribution is the development of a unifying framework that shows that key analytical results and computational algorithms in multibody dynamics are a direct consequence of structural properties and require minimal assumptions about the specific nature of the underlying multibody system. This first part focuses on identifying the abstract graph theoretic structural properties of spatial operator techniques in multibody dynamics. The second paper exploits these structural properties to develop a broad spectrum of analytical results and computational algorithms. Towards this, we begin with the notion of graph adjacency matrices and generalize it to define block-weighted adjacency (BWA) matrices and their 1-resolvents. Previously developed spatial operators are shown to be special cases of such BWA matrices and their 1-resolvents. These properties are shown to hold broadly for serial and tree topology multibody systems. Specializations of the BWA and 1-resolvent matrices are referred to as spatial kernel operators (SKO) and spatial propagation operators (SPO). These operators and their special properties provide the foundation for the analytical and algorithmic techniques developed in the companion paper. We also use the graph theory concepts to study the topology induced sparsity structure of these operators and the system mass matrix. Similarity transformations of these operators are also studied. While the detailed development is done for the case of rigid-link multibody systems, the extension of these techniques to a broader class of systems (e.g. deformable links) is illustrated. PMID:22102790

  14. Application of Graph Theory in an Intelligent Tutoring System for Solving Mathematical Word Problems

    ERIC Educational Resources Information Center

    Nabiyev, Vasif V.; Çakiroglu, Ünal; Karal, Hasan; Erümit, Ali K.; Çebi, Ayça

    2016-01-01

    This study is aimed to construct a model to transform word "motion problems" in to an algorithmic form in order to be processed by an intelligent tutoring system (ITS). First; categorizing the characteristics of motion problems, second; suggesting a model for the categories were carried out. In order to solve all categories of the…

  15. Cheating Heisenberg: Achieving certainty in wideband spectrography

    NASA Astrophysics Data System (ADS)

    Fulop, Sean

    2003-10-01

    The spectrographic analysis of sound has been with us some 58 years, and one of the key properties of the process is the trade-off in resolution between the time and frequency dimensions in the computed graph. While spectrography has greatly advanced the development of phonetics, the uncertainty principle has always been a source of frustration to phoneticians because so many of the interesting features of speech must be observed by computing Fourier spectra over very short time frames, i.e., using a "wideband" spectrogram. Since the uncertainty relation between time and frequency is unbreakable, the only option for improvement is to make a new kind of spectrogram that does not graph time and frequency. An algorithm is described and demonstrated which computes a new kind of spectrogram in which Fourier transform frequency is replaced by the channelized instantaneous frequency, and time is adjusted by the local group delay. The theory behind this procedure was clarified in Nelson [J. Acoust. Soc. Am. 110, 2575-2592 (2001)]. The resulting wideband spectrograms show dramatically improved resolution of speech features, which will be demonstrated with sample figures. It is thus suggested that phoneticians should be more interested in the instantaneous frequency spectrum than in the Fourier transform.

  16. Effective field theory dimensional regularization

    NASA Astrophysics Data System (ADS)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.

  17. Electronic structures of elements according to ionization energies.

    PubMed

    Zadeh, Dariush H

    2017-11-28

    The electronic structures of elements in the periodic table were analyzed using available experimental ionization energies. Two new parameters were defined to carry out the study. The first parameter, the apparent nuclear charge (ANC), quantifies the overall charge of the nucleus and inner electrons observed by an outer electron during the ionization process. This parameter was utilized to define a second parameter, which presents the shielding ability of an electron against the nuclear charge. This second parameter, the electron shielding effect (ESE), provides insight into the electronic structure of atoms. This article avoids any sort of approximation, interpolation or extrapolation. First, experimental ionization energies were used to obtain the two aforementioned parameters. The second parameter (ESE) was then graphed against the electron number of each element and used to read off the corresponding electronic structure. The ESE shows spikes/peaks at the end of each electronic shell, indicating when an electronic shell closes and a new one starts. The electronic structures of elements in the periodic table were mapped using this methodology. These graphs did not show complete agreement with the previously known "Aufbau" filling rule, and a new filling rule is suggested based on the present observations. Finally, a new way to organize elements in the periodic table is suggested. The two earlier concepts of effective nuclear charge and shielding factor are also briefly discussed and compared numerically to demonstrate the capability of the new approach.
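
    The abstract does not spell out the ANC formula, but a hydrogen-like reading conveys the flavour: treating the ionization energy as E = 13.6 eV x Z_app^2 / n^2 yields an apparent charge seen by the outgoing electron. The sketch below rests entirely on that assumption and is illustrative only:

```python
import math

RYDBERG_EV = 13.6057

def apparent_nuclear_charge(ionization_energy_ev, n):
    """Hydrogen-like estimate: E = RYDBERG * Z_app^2 / n^2, so
    Z_app = n * sqrt(E / RYDBERG). (Assumed form, for illustration;
    the article's exact ANC definition may differ.)"""
    return n * math.sqrt(ionization_energy_ev / RYDBERG_EV)

# Helium, first ionization energy 24.587 eV, n = 1: the partner 1s
# electron shields the Z = 2 nucleus only partially.
z_app = apparent_nuclear_charge(24.587, 1)
print(f"Z_app = {z_app:.3f}; shielding by the other 1s electron = {2 - z_app:.3f}")
```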

  18. The Shrinkage Model And Expert System Of Plastic Lens Formation

    NASA Astrophysics Data System (ADS)

    Chang, Rong-Seng

    1988-06-01

    Shrinkage causes both appearance and dimension defects in injected plastic lenses. We have built a model of state equations, with the help of a finite element analysis program, to estimate the volume change (shrinkage and swelling) under combinations of injection variables such as pressure and temperature; a personal computer expert system has then been built to make that knowledge conveniently available to the user in model design, process planning, process operation and other work. The domain knowledge is represented by an R-graph (relationship graph) model which states the relationships between variables and equations. This model can be compared with other models in the expert system: if the user has a better model for solving the shrinkage problem, the program evaluates it automatically, and a learning file is triggered by the expert system to guide the user in updating the knowledge base and replacing the old model with the better one. Rubin's model and Gilmore's model have been input to the expert system, and conflicts are resolved both by the user and from the deeper knowledge base. Examples of a cube prism and a convex lens are shown in this paper. The program is written in the MULISP language on an IBM PC-AT. The natural language interface provides English explanations of 'know why' and 'know how' and automatic English translation of the equation rules and the production rules.

  19. Wallace State's New Rules of Business: Affirming the Truths of Intentional Transformation

    ERIC Educational Resources Information Center

    Johnson, Mell

    2007-01-01

    Wallace State Community College in Hanceville, Alabama, took the Community College Futures Assembly challenge for the 2006 Bellwether Award from FAST COMPANY's release of "The Rules of Business: Timeless Truths from the Best Minds in Business" to identify its own substantive question for this year's competition: "The New Rules of…

  20. The effect of velocity slip and multiple convective boundary conditions in a Darcian porous media with microorganism past a vertical stretching/shrinking sheet

    NASA Astrophysics Data System (ADS)

    Latiff, Nur Amalina Abdul; Yahya, Elisa; Ismail, Ahmad Izani Md.; Amirsom, Ardiana; Basir, Faisal

    2017-08-01

    An analysis is carried out to study the steady mixed convective boundary layer flow of a nanofluid in a Darcian porous medium with microorganisms past a vertical stretching/shrinking sheet. Heat generation/absorption and chemical reaction effects are incorporated in the model. The partial differential equations are transformed into a system of ordinary differential equations by using similarity transformations generated by scaling group transformations. The transformed equations with boundary conditions are solved numerically. The effects of controlling parameters such as velocity slip, Darcy number, heat generation/absorption and chemical reaction on the skin friction factor, heat transfer, mass transfer and microorganism transfer are shown and discussed through graphs. Comparisons of the numerical solutions with existing results in the literature show very good agreement.

  1. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-07-17

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs where nodes correspond to the different data points, mutations and diseases for instance, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet lab investigations. Generally, our software can be applied to any kind of data that can be modelled as a bi-partite graph. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.

  2. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-06-01

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs where nodes correspond to the different data points, mutations and diseases for instance, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet lab investigations. Generally, our software can be applied to any kind of data that can be modelled as a bi-partite graph. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.

  3. On the degree distribution of horizontal visibility graphs associated with Markov processes and dynamical systems: diagrammatic and variational approaches

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas

    2014-09-01

    Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, making it possible (i) to perform empirical time series analysis and signal processing and (ii) to characterize classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes, for all degrees, the corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As test cases we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
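
    The horizontal visibility mapping itself is simple to implement directly: indices i and j are linked iff every intermediate value lies strictly below both endpoints. A direct sketch, checked against the known i.i.d. result P(k) = (1/3)(2/3)^(k-2), which implies a mean degree of 4:

```python
import numpy as np

def horizontal_visibility_graph(series):
    """Edge (i, j) iff x_k < min(x_i, x_j) for every i < k < j."""
    x = np.asarray(series, float)
    edges = set()
    for i in range(len(x) - 1):
        edges.add((i, i + 1))          # consecutive points always see each other
        blocker = x[i + 1]             # running max of the values between i and j
        for j in range(i + 2, len(x)):
            if blocker < min(x[i], x[j]):
                edges.add((i, j))
            blocker = max(blocker, x[j])
    return edges

rng = np.random.default_rng(0)
edges = horizontal_visibility_graph(rng.random(1000))
deg = np.bincount(np.asarray(list(edges)).ravel())
print(f"empirical mean degree = {deg.mean():.2f}")   # close to 4 for i.i.d. data
```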

  4. Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.

    PubMed

    Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume

    2015-09-14

    Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method for compressing data issued from high throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring kmer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing information pertinent to downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq or metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole genome sequencing dataset, LEON divided the original file size by more than 20. LEON is an open source software, distributed under the GNU Affero GPL License, available for download at http://gatb.inria.fr/software/leon/.
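
    A toy sketch of the probabilistic de Bruijn graph idea (not LEON's implementation): the k-mers of the read set go into a Bloom filter, and a node's graph neighbours are queried by extending its suffix with each possible nucleotide. False positives are possible, which is one reason the encoder also records explicit bifurcation information.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership with false positives only."""
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

k, read = 5, "ACGTACGTGACG"
bloom = BloomFilter()
for i in range(len(read) - k + 1):
    bloom.add(read[i:i + k])                 # insert every k-mer of the read

node = "ACGTA"
successors = [node[1:] + b for b in "ACGT" if node[1:] + b in bloom]
print(successors)    # de Bruijn neighbours of the node (modulo false positives)
```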

  5. Unsupervised spatiotemporal analysis of fMRI data using graph-based visualizations of self-organizing maps.

    PubMed

    Katwal, Santosh B; Gore, John C; Marois, Rene; Rogers, Baxter P

    2013-09-01

    We present novel graph-based visualizations of self-organizing maps for unsupervised functional magnetic resonance imaging (fMRI) analysis. A self-organizing map is an artificial neural network model that transforms high-dimensional data into a low-dimensional (often a 2-D) map using unsupervised learning. However, a postprocessing scheme is necessary to correctly interpret similarity between neighboring node prototypes (feature vectors) on the output map and delineate clusters and features of interest in the data. In this paper, we used graph-based visualizations to capture fMRI data features based upon 1) the distribution of data across the receptive fields of the prototypes (density-based connectivity); and 2) temporal similarities (correlations) between the prototypes (correlation-based connectivity). We applied this approach to identify task-related brain areas in an fMRI reaction time experiment involving a visuo-manual response task, and we correlated the time-to-peak of the fMRI responses in these areas with reaction time. Visualization of self-organizing maps outperformed independent component analysis and voxelwise univariate linear regression analysis in identifying and classifying relevant brain regions. We conclude that the graph-based visualizations of self-organizing maps help in advanced visualization of cluster boundaries in fMRI data enabling the separation of regions with small differences in the timings of their brain responses.

  6. A Framework for an Automatic Seamline Engine

    NASA Astrophysics Data System (ADS)

    Al-Durgham, M.; Downey, M.; Gehrke, S.; Beshah, B. T.

    2016-06-01

    Seamline generation is a crucial last step in the ortho-image mosaicking process. In particular, it is required to conceal residual geometric and radiometric imperfections that stem from various sources. Temporal differences in the acquired data will cause the scene content and illumination conditions to vary; these variations can be modelled successfully, but one is left with micro-differences that do need to be considered in seamline generation. Another cause of discrepancies originates from the rectification surface, as it will not model the actual terrain, and especially human-made objects, perfectly. The quality of the image orientation will also contribute to the overall differences between adjacent ortho-rectified images. Our approach takes the aforementioned differences into consideration in designing a seamline engine. We have identified the following essential behaviours of the seamline in our engine: 1) seamlines must pass through the path of least resistance, i.e., overlap areas with low radiometric differences; 2) seamlines must not intersect with breaklines, as that would lead to visible geometric artefacts; and 3) shorter seamlines are generally favourable, as they also result in faster operator review and, where necessary, interactive editing cycles. The engine design also permits alteration of the above rules for special cases. Although our preliminary experiments are geared towards line imaging systems (i.e., the Leica ADS family), our seamline engine remains sensor-agnostic; hence, our design is capable of mosaicking images from various sources with minimal effort. The main idea behind this engine is the use of graph cuts which, in spirit, is based on the max-flow min-cut theorem. The main advantage of using graph cut theory is that the generated solution is global in the energy minimization sense. In addition, graph cuts allow for a highly scalable design where a set of rules contributes to a cost function which, in turn, influences the path of minimum resistance for the seamlines. In this paper, the authors present an approach for achieving quality seamlines relatively quickly, with emphasis on generating truly seamless ortho-mosaics.
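
    A toy illustration of the underlying graph cut formulation (a sketch of the general idea, not of the engine described above): pixels of a small overlap strip are nodes, the left border is pinned to image A and the right border to image B, and neighbour capacities are derived from radiometric differences, so the minimum cut follows the path of least resistance.

```python
import numpy as np
import networkx as nx

# Per-pixel radiometric difference |A - B| in a 4 x 6 overlap strip;
# columns 2-3 are the cheap place to switch between the two images.
diff = np.array([[9, 8, 1, 2, 8, 9],
                 [9, 7, 2, 1, 9, 9],
                 [8, 9, 1, 2, 8, 8],
                 [9, 8, 2, 1, 9, 9]], dtype=float)
h, w = diff.shape

g = nx.Graph()
for y in range(h):
    g.add_edge("A", (y, 0), capacity=float("inf"))      # pinned to image A
    g.add_edge((y, w - 1), "B", capacity=float("inf"))  # pinned to image B
for y in range(h):
    for x in range(w):
        for dy, dx in ((0, 1), (1, 0)):                 # 4-neighbourhood
            if y + dy < h and x + dx < w:
                g.add_edge((y, x), (y + dy, x + dx),
                           capacity=diff[y, x] + diff[y + dy, x + dx])

cut_value, (a_side, b_side) = nx.minimum_cut(g, "A", "B")
print(cut_value)    # 12.0: the seam crosses between the low-difference columns
```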

  7. 76 FR 57007 - Efficiency and Renewables Advisory Committee, Appliance Standards Subcommittee, Negotiated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... Distribution Transformers AGENCY: Department of Energy, Office of Energy Efficiency and Renewable Energy... Rulemaking Working Group for Low-Voltage Dry-Type Distribution Transformers (hereafter ``LV Group''). The LV... proposed rule for regulating the energy efficiency of distribution transformers, as authorized by the...

  8. 75 FR 652 - Energy Conservation Program: Certification, Compliance, and Enforcement Requirements for Certain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-05

    ... manufacturer certification for distribution transformers. DATES: This rule is effective February 4, 2010 except... 2005--Commercial Equipment D. Distribution Transformers E. General Requirements IV. Procedural... distribution transformers that DOE proposed in the July 2006 NOPR. II. Summary of Today's Action DOE adopts...

  9. Characterizing complex networks through statistics of Möbius transformations

    NASA Astrophysics Data System (ADS)

    Jaćimović, Vladimir; Crnkić, Aladin

    2017-04-01

    It is now well known that the dynamics of large populations of globally (all-to-all) coupled oscillators can be reduced to low-dimensional submanifolds (the WS transformation and the OA ansatz). Marvel et al. (2009) described an intriguing algebraic structure standing behind this reduction: oscillators evolve by the action of the group of Möbius transformations. Of course, the dynamics in complex networks of coupled oscillators is highly complex and not reducible. Still, a closer look reveals that even in complex networks some (possibly overlapping) groups of oscillators evolve by Möbius transformations. In this paper, we study properties of the network by identifying Möbius transformations in the dynamics of oscillators. This enables us to introduce some new (statistical) concepts that characterize the network. In particular, the notion of coherence of the network (or a subnetwork) is proposed. This conceptual approach is meaningful for a broad class of networks, including those with time-delayed, noisy or mixed interactions. Several simple (random) graphs are studied to illustrate the meaning of the concepts introduced in the paper.
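
    For reference, the Möbius (fractional linear) transformations in question have the standard form

    ```latex
    w \;=\; \frac{az + b}{cz + d}, \qquad a, b, c, d \in \mathbb{C}, \quad ad - bc \neq 0,
    ```

    and in the reductions cited above the relevant subgroup is the one mapping the unit disk onto itself.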

  10. Relativistic corrections to a generalized sum rule

    NASA Astrophysics Data System (ADS)

    Sinky, H.; Leung, P. T.

    2006-09-01

    Relativistic corrections to a previously established generalized sum rule are obtained using the Foldy-Wouthuysen transformation. This sum rule, derived previously by Wang [Phys. Rev. A 60, 262 (1999)] for a nonrelativistic system, contains both the well-known Thomas-Reiche-Kuhn and Bethe sum rules, for which relativistic corrections have been obtained in the literature. Our results for the generalized formula are applied to recover several results obtained previously in the literature, as well as to another sum rule whose relativistic corrections will be obtained.
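
    For orientation, the nonrelativistic Thomas-Reiche-Kuhn sum rule that the generalized formula contains reads, in its standard form,

    ```latex
    \sum_n f_{n0} = N, \qquad f_{n0} \;=\; \frac{2m}{\hbar^2}\,(E_n - E_0)\,\bigl|\langle n|x|0\rangle\bigr|^2,
    ```

    where N is the number of electrons; the relativistic corrections discussed above modify this result at order (v/c)^2.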

  11. Automating the Transformational Development of Software. Volume 1.

    DTIC Science & Technology

    1983-03-01

    DRACO system [Neighbors 80] uses meta-rules to derive information about which new transformations will be applicable after a particular transformation has...transformation over another. The new model, as incorporated in a system called Glitter, explicitly represents transformation goals, methods, and selection...done anew for each new problem (compare this with Neighbors' Draco system [Neighbors 80], which attempts to reuse domain analysis). o Is the user

  12. Quadratic stabilisability of multi-agent systems under switching topologies

    NASA Astrophysics Data System (ADS)

    Guan, Yongqiang; Ji, Zhijian; Zhang, Lin; Wang, Long

    2014-12-01

    This paper addresses the stabilisability of multi-agent systems (MASs) under switching topologies. Necessary and/or sufficient conditions are presented in terms of graph topology. These conditions explicitly reveal how the intrinsic dynamics of the agents, the communication topology and the external control input jointly affect stabilisability. With an appropriate selection of the agents to which the external inputs are applied and a suitable design of neighbour-interaction rules via a switching topology, an MAS is proved to be stabilisable even if none of its uncertain subsystems is stabilisable individually. In addition, a method is proposed to constructively design a switching rule for MASs with norm-bounded time-varying uncertainties. The switching rules designed via this method do not rely on the uncertainties, and the switched MAS is quadratically stabilisable via decentralised external self-feedback for all uncertainties. With respect to applications of the stabilisability results, formation control and cooperative tracking control are addressed. Numerical simulations are presented to demonstrate the effectiveness of the proposed results.

  13. A modeling of dynamic storage assignment for order picking in beverage warehousing with Drive-in Rack system

    NASA Astrophysics Data System (ADS)

    Hadi, M. Z.; Djatna, T.; Sugiarto

    2018-04-01

    This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that the performance of the system can be improved. The study constructs a graph model to represent drive-in rack storage positions and then combines association rule mining, class-based storage policies and an arrangement-rule algorithm to determine an appropriate storage location and arrangement of the products according to dynamic orders from customers. The performance of the proposed model is measured in terms of rule adjacency accuracy, travel distance (for the picking process) and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation and its performance is compared with that of other storage assignment methods. The results indicate that the proposed model outperforms the other storage assignment methods.

  14. Development of an expert system for analysis of Shuttle atmospheric revitalization and pressure control subsystem anomalies

    NASA Technical Reports Server (NTRS)

    Lafuse, Sharon A.

    1991-01-01

    The paper describes the Shuttle Leak Management Expert System (SLMES), a preprototype expert system developed to enable the ECLSS subsystem manager to analyze subsystem anomalies and to formulate flight procedures based on flight data. The SLMES combines rule-based expert system technology with traditional FORTRAN-based software in an integrated system. SLMES analyzes the data using rules and, when it detects a problem that requires simulation, it sets up the input for the FORTRAN-based simulation program ARPCS2AT2, which predicts the cabin total pressure and composition as a function of time. The program simulates the pressure control system, the crew oxygen masks, the airlock repress/depress valves, and the leakage. When the simulation has completed, other SLMES rules are triggered to examine the results of the simulation against the flight data and to suggest methods for correcting the problem. Results are then presented in the form of graphs and tables.

  15. An Integrated Ransac and Graph Based Mismatch Elimination Approach for Wide-Baseline Image Matching

    NASA Astrophysics Data System (ADS)

    Hasheminasab, M.; Ebadi, H.; Sedaghat, A.

    2015-12-01

    In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, but wide-baseline image matching remains difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local-feature-similarity principle use feature descriptors to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale and remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint estimated with the RANSAC (random sample consensus) method is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in matching results selected by epipolar geometry and RANSAC. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers; it runs into difficulties, however, when mismatched points are surrounded by the same local neighbor structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, the RANSAC algorithm is applied to reduce the outliers. Finally, GTM based on the adjacent K-NN graph is implemented to remove the remaining mismatches. Four different close-range image datasets with changes in viewpoint are utilized to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
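
    The first two steps of such a pipeline can be sketched with OpenCV (an illustration of ours, not the authors' implementation; the file names, ratio threshold, and RANSAC parameters are assumptions, and the final GTM step is omitted):

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
    img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Step 1: SIFT keypoints and descriptors, matched with a ratio test
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Step 2: epipolar-constraint outlier rejection via RANSAC
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    print(len(inliers), "matches survive the epipolar/RANSAC step")
    ```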

  16. Determining the Basis of Homodesmotic Reactions of Cyclic Organic Compounds by Means of Graph Theory

    NASA Astrophysics Data System (ADS)

    Khursan, S. L.; Ismagilova, A. S.; Akhmetyanova, A. I.

    2018-07-01

    Comparative calculations based on the use of homodesmotic reactions (HDRs), isodesmic processes with the additional requirement of group balance, are used to analyze the thermochemical characteristics of cyclic organic compounds, exemplified by bicyclo[2.1.0]pentene-2. To avoid confusion in selecting HDRs, an algorithm is developed for determining the HDR basis, i.e., the set of all possible independent homodesmotic reactions. The algorithm for constructing the set of HDRs is based on an analysis and transformations of the bond graph of groups for the investigated chemical compound. The use of graph theory allows us to automate the procedure for deriving the basis of homodesmotic reactions and to obtain a visual geometric interpretation of the basis, which is important for subsequent physicochemical analysis. The energetics of bicyclo[2.1.0]pentene-2 is investigated using the proposed approach, and the independent basis of HDRs is found to include 19 formal transformations. Standard enthalpies for the test compound and the participants of the homodesmotic reactions are calculated using the G3 composite approach. Thermochemical analysis of the obtained data allows us to determine the standard enthalpy of formation of the bicycle (Δf H° = 336.4 kJ/mol) and the Δf H° values of a number of cyclic and acyclic alkenes and alkadienes that are products of theoretical decomposition of the test compound. The proposed method is shown to be extremely effective in analyzing the effects of nonbonded interactions in the structure of organic molecules. The ring strain energy of the bicycle is calculated for the test compound: E_S = 295.2 ± 2.2 kJ/mol.

  17. Nonlinear Dynamics of River Runoff Elucidated by Horizontal Visibility Graphs

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Rosso, Osvaldo A.

    2017-04-01

    We investigate a set of long-term river runoff time series at daily resolution from Brazil, monitored by the Agencia Nacional de Aguas. A total of 150 time series was obtained, with an average length of 65 years. Both long-term trends and human influence (water management, e.g. for power production) on the dynamical behaviour are analyzed. We use Horizontal Visibility Graphs (HVGs) to determine the individual temporal networks for the time series, and extract their degree and their distance (shortest path length) distributions. Statistical and information-theoretic properties of these distributions are calculated: robust estimators of skewness and kurtosis, the maximum degree occurring in the time series, the Shannon entropy, permutation complexity and Fisher information. For the latter, we also compare the information measures obtained from the degree distributions to those computed from the original time series directly, to investigate the impact of graph construction on the dynamical properties as reflected in these measures. Focus is, on one hand, on universal properties of the HVG common to all runoff series and, on the other, on site-specific aspects. Results demonstrate that the assumption of power-law behaviour for the degree distribution does not generally hold, and that management has a significant impact on this distribution. We also show that a specific pretreatment of the time series conventional in hydrology, the elimination of seasonality by a separate z-transformation for each calendar day, is highly detrimental to the nonlinear behaviour: it changes long-term correlations and shifts the overall dynamics towards more random behaviour. Analysis based on the transformed data easily leads to spurious results and bears a high risk of misinterpretation.
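
    The HVG construction itself is compact enough to state as code (a sketch using the standard definition: two points are linked when every sample strictly between them lies below both endpoints):

    ```python
    def horizontal_visibility_graph(series):
        """Edge list of the HVG of a time series.

        Nodes are time indices; i and j (i < j) are linked when
        x_k < min(x_i, x_j) for all i < k < j.
        Degree and shortest-path distributions follow from the edges.
        """
        n = len(series)
        edges = []
        for i in range(n - 1):
            edges.append((i, i + 1))        # neighbours always see each other
            top = series[i + 1]             # running max between i and j
            for j in range(i + 2, n):
                if top < min(series[i], series[j]):
                    edges.append((i, j))
                top = max(top, series[j])
                if top >= series[i]:        # the view from i is blocked for good
                    break
        return edges

    print(horizontal_visibility_graph([3, 1, 2, 5, 1, 4]))
    ```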

  18. 76 FR 63566 - Efficiency and Renewables Advisory Committee, Appliance Standards Subcommittee, Negotiated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    ... Medium- and Low-Voltage Dry-Type Distribution Transformers AGENCY: Department of Energy, Office of Energy... Dry-Type and the second addressing Low-Voltage Dry-Type Distribution Transformers. The Liquid Immersed... proposed rule for regulating the energy efficiency of distribution transformers, as authorized by the...

  19. Evolutionary games on cycles with strong selection

    NASA Astrophysics Data System (ADS)

    Altrock, P. M.; Traulsen, A.; Nowak, M. A.

    2017-02-01

    Evolutionary games on graphs describe how strategic interactions and population structure determine evolutionary success, quantified by the probability that a single mutant takes over a population. Graph structures, compared to the well-mixed case, can act as amplifiers or suppressors of selection by increasing or decreasing the fixation probability of a beneficial mutant. Properties of the associated mean fixation times can be more intricate, especially when selection is strong. The intuition is that fixation of a beneficial mutant happens fast in a dominance game, that fixation takes very long in a coexistence game, and that strong selection eliminates demographic noise. Here we show that these intuitions can be misleading in structured populations. We analyze mean fixation times on the cycle graph under strong frequency-dependent selection for two different microscopic evolutionary update rules (death-birth and birth-death). We establish exact analytical results for fixation times under strong selection and show that there are coexistence games in which fixation occurs in time polynomial in the population size. Depending on the underlying game, we observe persistence of demographic noise even under strong selection if the process is driven by random death before selection for birth of an offspring (death-birth updating). In contrast, if selection for an offspring occurs before random removal (birth-death updating), then strong selection can remove demographic noise almost entirely.

  20. Dynamical Graph Theory Networks Methods for the Analysis of Sparse Functional Connectivity Networks and for Determining Pinning Observability in Brain Networks

    PubMed Central

    Meyer-Bäse, Anke; Roberts, Rodney G.; Illan, Ignacio A.; Meyer-Bäse, Uwe; Lobbes, Marc; Stadlbauer, Andreas; Pinker-Domenig, Katja

    2017-01-01

    Neuroimaging in combination with graph theory has been successful in analyzing the functional connectome. However, almost all analyses are performed based on static graph theory, and the derived quantitative graph measures can only describe a snapshot of the disease over time. Neurodegenerative disease evolution is poorly understood, and treatment strategies are consequently only of limited efficiency. Fusing modern dynamic graph network theory techniques and modeling strategies at different time scales with pinning observability of complex brain networks will lay the foundation for a transformational paradigm in neurodegenerative disease research regarding disease evolution at the patient level, treatment response evaluation, and revealing central mechanisms in a network that drive alterations in these diseases. We model and analyze brain networks as two-time-scale sparse dynamic graph networks, with hubs (clusters) representing the fast sub-system and the interconnections between hubs the slow sub-system. Alterations in brain function as seen in dementia can be dynamically modeled by determining the clusters in which disturbance inputs have entered and the impact they have on the large-scale dementia dynamic system. Observing a small fraction of specific nodes in dementia networks such that the others can be recovered is accomplished by the novel concept of pinning observability. In addition, how to control this complex network seems to be crucial in understanding the progressive abnormal neural circuits in many neurodegenerative diseases. Detecting the controlling regions in the networks, which serve as key nodes to control the aberrant dynamics of the networks to a desired state and thus influence the progressive abnormal behavior, will have a huge impact in understanding and developing therapeutic solutions and will also provide useful information about the trajectory of the disease. In this paper, we present the theoretical framework and derive the necessary conditions for (1) area aggregation and time-scale modeling in brain networks and for (2) pinning observability of nodes in dynamic graph networks. Simulation examples are given to illustrate the theoretical concepts. PMID:29051730

  2. Graph theoretical modeling of baby brain networks.

    PubMed

    Zhao, Tengda; Xu, Yuehua; He, Yong

    2018-06-12

    The human brain undergoes explosive growth during the prenatal period and the first few postnatal years, establishing an early infrastructure for the later development of behaviors and cognition. Revealing the developmental rules during this early phase is essential to understanding the emergence of brain function and the origin of developmental disorders. Graph-theoretical network modeling in combination with multiple neuroimaging probes provides an important research framework to explore the early development of the topological wiring and organizational paradigms of the brain. Here, we reviewed studies which employed neuroimaging and graph-theoretical modeling to investigate brain network development from approximately 20 gestational weeks to 2 years of age. Specifically, the structural and functional brain networks evolve to highly efficient topological architectures in the early stage, with the structural network remaining ahead of and paving the way for the development of the functional network. The brain network develops in a heterogeneous order, from primary to higher-order systems, and with a tendency from network segregation to network integration in the prenatal and postnatal periods. The early brain network topologies are able to predict certain cognitive and behavioral performance in later life, and their impairments are likely to continue into childhood and even adulthood. These macroscopic topological changes are found to be associated with possible microstructural maturation, such as axonal growth and myelination. Collectively, this review provides a detailed delineation of the early changes of the baby brain in the graph-theoretical modeling framework, which opens up a new avenue to understand the developmental principles of the connectome. Copyright © 2018. Published by Elsevier Inc.

  3. A New Adaptive Structural Signature for Symbol Recognition by Using a Galois Lattice as a Classifier.

    PubMed

    Coustaty, M; Bertet, K; Visani, M; Ogier, J

    2011-08-01

    In this paper, we propose a new approach for symbol recognition using structural signatures and a Galois lattice as a classifier. The structural signatures are based on topological graphs computed from segments which are extracted from the symbol images by using an adapted Hough transform. These structural signatures, which can be seen as dynamic paths that carry high-level information, are robust to various transformations. They are classified by using a Galois lattice as a classifier. The performance of the proposed approach is evaluated on the GREC'03 symbol database, and the experimental results we obtain are encouraging.

  4. Transforming Graph Data for Statistical Relational Learning

    DTIC Science & Technology

    2012-10-01

    Jordan, 2003), PLSA (Hofmann, 1999), ... Classification via RMN (Taskar et al., 2003) or SVM (Hasan, Chaoji, Salem, & Zaki, 2006) ... Hierarchical... dimensionality reduction methods such as Principal Component Analysis (PCA), Principal Factor Analysis (PFA), and... clustering algorithm. Journal of the Royal Statistical Society, Series C, Applied Statistics, 28, 100-108. Hasan, M. A., Chaoji, V., Salem, S., & Zaki, M

  5. Suggestions to authors of the reports of the United States Geological Survey

    USGS Publications Warehouse

    Hansen, Wallace R.

    1991-01-01

    Suggestions to Authors (STA) is the writing style guide for U.S. Geological Survey (USGS) technical reports and maps. STA is also widely distributed in print outside the USGS as a basic scientific writing style guide for scientists, students, and editors. Its goal is to help writers present information as clearly as possible: it explains punctuation rules, suggests phrasing, offers examples of citation styles, and outlines report organization, table and graph design, and details of map design.

  6. Optimum operation of restoration techniques for eutrophic water bodies

    NASA Astrophysics Data System (ADS)

    Hagen, N. M.; Kleeberg, H.-B.

    1994-05-01

    Operating rules have been applied in water resources management for a long time in order to control and supply a required quantity (volume) of water; they have to guarantee the optimum management of the reservoir(s). The quality of the stored water was satisfactory for the desired uses up to the 1960s; owing to the deterioration of reservoir water quality through human impacts, however, it has required increased attention since then. Eutrophication of stagnant waters is still an unsolved problem. By means of various restoration techniques, e.g., dilution/flushing or hypolimnetic withdrawal, the quality of the stored water can be improved. Continuous operation or appropriate time- or depth-variant operating rules are required to achieve this goal. The paper presents such rules for long-term operation. They have been established for the first time and can be represented in two- or three-dimensional graphs, depending on the number of included components (e.g., actual water storage and quality). The 'quality operating rules' take into account the dynamics of the processes in aquatic ecosystems. Simplifications with regard to application and acceptance (e.g., clarity) are developed and tested. The general validity and efficiency of the operating rules have been proved in a case study (a multi-purpose reservoir) and for a fictitious lake.

  7. Inferring the Limit Behavior of Some Elementary Cellular Automata

    NASA Astrophysics Data System (ADS)

    Ruivo, Eurico L. P.; de Oliveira, Pedro P. B.

    Cellular automata locally define dynamical systems, discrete in space, time and in the state variables, capable of displaying arbitrarily complex global emergent behavior. One core question in the study of cellular automata refers to their limit behavior, that is, to the global dynamical features in an infinite time evolution. Previous works have shown that for finite time evolutions, the dynamics of one-dimensional cellular automata can be described by regular languages and, therefore, by finite automata. Such studies have shown the existence of growth patterns in the evolution of such finite automata for some elementary cellular automata rules and also inferred the limit behavior of such rules based upon the growth patterns; however, the results on the limit behavior were obtained manually, by direct inspection of the structures that arise during the time evolution. Here we present the formalization of an automatic method to compute such structures. Based on this, the rules of the elementary cellular automata space were classified according to the existence of a growth pattern in their finite automata. Also, we present a method to infer the limit graph of some elementary cellular automata rules, derived from the analysis of the regular expressions that describe their behavior in finite time. Finally, we analyze some attractors of two rules for which we could not compute the whole limit set.
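
    For concreteness, one synchronous update of an elementary CA under standard Wolfram rule numbering can be written as follows (a generic sketch, not the authors' finite-automata machinery):

    ```python
    def eca_step(cells, rule):
        """One synchronous update of an elementary cellular automaton.

        cells: list of 0/1 states on a ring; rule: Wolfram rule number 0-255.
        Each cell's new state is the bit of `rule` indexed by its 3-cell
        neighbourhood read as a binary number.
        """
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    # Rule 110 from a single seed, a few steps
    state = [0] * 10 + [1] + [0] * 10
    for _ in range(5):
        print("".join(".#"[c] for c in state))
        state = eca_step(state, 110)
    ```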

  8. The detection of problem analytes in a single proficiency test challenge in the absence of the Health Care Financing Administration rule violations.

    PubMed

    Cembrowski, G S; Hackney, J R; Carey, N

    1993-04-01

    The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices, having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable, as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron, was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, i.e., graphs of the probability of error detection versus the magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same ±1-SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the ±3-SDI limit or by the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule itself (having at least two observations exceeding the same ±1-SDI limit). Random error can also be signaled by one observation exceeding the ±1.5-SDI limit or by the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
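
    The recommended rules transcribe directly into code (our transcription of the thresholds stated above; the function and variable names are ours):

    ```python
    def screen_pt_results(sdi, sg_si_ratio):
        """Apply the screening rules to five proficiency-test results in SDI
        units. Returns a list of flags; empty if nothing is signaled."""
        flags = []
        # Screen: two or more observations beyond the same +/-1 SDI limit
        if sum(x > 1 for x in sdi) >= 2 or sum(x < -1 for x in sdi) >= 2:
            mean = sum(sdi) / len(sdi)
            spread = max(sdi) - min(sdi)
            if 1.0 <= sg_si_ratio <= 1.5:
                if abs(mean) > 1.0:
                    flags.append("systematic error")
                if any(abs(x) > 3 for x in sdi) or spread > 4:
                    flags.append("random error")
            else:  # higher sg/si: the screen itself signals an error
                flags.append("systematic or random error")
                if any(abs(x) > 1.5 for x in sdi) or spread > 3:
                    flags.append("random error")
        return flags

    print(screen_pt_results([1.6, 1.4, 1.1, 0.8, 0.9], sg_si_ratio=1.2))
    ```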

  9. Safety leadership at construction sites: the importance of rule-oriented and participative leadership.

    PubMed

    Grill, Martin; Pousette, Anders; Nielsen, Kent; Grytnes, Regine; Törner, Marianne

    2017-07-01

    Objectives The construction industry accounted for >20% of all fatal occupational accidents in Europe in 2014. Leadership is an essential antecedent to occupational safety. The aim of the present study was to assess the influence of transformational, active transactional, rule-oriented, participative, and laissez-faire leadership on safety climate, safety behavior, and accidents in the Swedish and Danish construction industry. Sweden and Denmark are similar countries but have a large difference in occupational accident rates. Methods A questionnaire study was conducted among a random sample of construction workers in both countries: 811 construction workers from 85 sites responded, resulting in site and individual response rates of 73% and 64%, respectively. Results The results indicated that transformational, active transactional, rule-oriented and participative leadership predict positive safety outcomes, whereas laissez-faire leadership predicts negative safety outcomes. For example, rule-oriented leadership predicts a superior safety climate (β=0.40, P<0.001), enhanced safety behavior (β=0.15, P<0.001), and fewer accidents [odds ratio (OR) 0.78, 95% confidence interval (95% CI) 0.62-0.98]. The effect of rule-oriented leadership on workers' safety behavior was moderated by the level of participative leadership (β=0.10, P<0.001), suggesting that when rules and plans are established in a collaborative manner, workers' motivation to comply with safety regulations and participate in proactive safety activities is elevated. The influence of leadership behaviors on safety outcomes was largely similar in Sweden and Denmark. Rule-oriented and participative leadership were more common in the Swedish than in the Danish construction industry, which may partly explain the difference in occupational accident rates. Conclusions Applying less laissez-faire leadership and more transformational, active transactional, participative and rule-oriented leadership appears to be an effective way for construction site managers to improve occupational safety in the industry.

  10. Automated crystallographic ligand building using the medial axis transform of an electron-density isosurface.

    PubMed

    Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T

    2005-10-01

    Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.

  11. Structured sparse linear graph embedding.

    PubMed

    Wang, Haixian

    2012-03-01

    Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the projection-bases learning as a regression-type optimization problem, and the structured sparsity regularization is then applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structural information of the data. The SSLGE technique provides a unified framework for discovering structured sparse subspaces. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. ``Models'' CAVEAT EMPTOR!!!: ``Toy Models Too-Often Yield Toy-Results''!!!: Statistics, Polls, Politics, Economics, Elections!!!: GRAPH/Network-Physics: ``Equal-Distribution for All'' TRUMP-ED BEC ``Winner-Take-All'' ``Doctor Livingston I Presume?''

    NASA Astrophysics Data System (ADS)

    Preibus-Norquist, R. N. C.-Grover; Bush-Romney, G. W.-Willard-Mitt; Dimon, J. P.; Adelson-Koch, Sheldon-Charles-David-Sheldon; Krugman-Axelrod, Paul-David; Siegel, Edward Carl-Ludwig; D. N. C./O. F. P./''47''%/50% Collaboration; R. N. C./G. O. P./''53''%/49% Collaboration; Nyt/Wp/Cnn/Msnbc/Pbs/Npr/Ft Collaboration; Ftn/Fnc/Fox/Wsj/Fbn Collaboration; Lb/Jpmc/Bs/Boa/Ml/Wamu/S&P/Fitch/Moodys/Nmis Collaboration

    2013-03-01

    ``Models''? CAVEAT EMPTOR!!!: ``Toy Models Too-Often Yield Toy-Results''!!!: Goldenfeld[``The Role of Models in Physics'', in Lects.on Phase-Transitions & R.-G.(92)-p.32-33!!!]: statistics(Silver{[NYTimes; Bensinger, ``Math-Geerks Clearly-Defeated Pundits'', LATimes, (11/9/12)])}, polls, politics, economics, elections!!!: GRAPH/network/net/...-PHYSICS Barabasi-Albert[RMP (02)] (r,t)-space VERSUS(???) [Where's the Inverse/ Dual/Integral-Transform???] (Benjamin)Franklin(1795)-Fourier(1795; 1897;1822)-Laplace(1850)-Mellin (1902) Brillouin(1922)-...(k,)-space, {Hubbard [The World According to Wavelets,Peters (96)-p.14!!!/p.246: refs.-F2!!!]},and then (2) Albert-Barabasi[]Bose-Einstein quantum-statistics(BEQS) Bose-Einstein CONDENSATION (BEC) versus Bianconi[pvt.-comm.; arXiv:cond-mat/0204506; ...] -Barabasi [???] Fermi-Dirac

  13. Multiple degree of freedom object recognition using optical relational graph decision nets

    NASA Technical Reports Server (NTRS)

    Casasent, David P.; Lee, Andrew J.

    1988-01-01

    Multiple-degree-of-freedom object recognition concerns objects with no stable rest position with all scale, rotation, and aspect distortions possible. It is assumed that the objects are in a fairly benign background, so that feature extractors are usable. In-plane distortion invariance is provided by use of a polar-log coordinate transform feature space, and out-of-plane distortion invariance is provided by linear discriminant function design. Relational graph decision nets are considered for multiple-degree-of-freedom pattern recognition. The design of Fisher (1936) linear discriminant functions and synthetic discriminant function for use at the nodes of binary and multidecision nets is discussed. Case studies are detailed for two-class and multiclass problems. Simulation results demonstrate the robustness of the processors to quantization of the filter coefficients and to noise.

  14. Graph-Theoretic Analysis of Monomethyl Phosphate Clustering in Ionic Solutions.

    PubMed

    Han, Kyungreem; Venable, Richard M; Bryant, Anne-Marie; Legacy, Christopher J; Shen, Rong; Li, Hui; Roux, Benoît; Gericke, Arne; Pastor, Richard W

    2018-02-01

    All-atom molecular dynamics simulations combined with graph-theoretic analysis reveal that clustering of the monomethyl phosphate dianion (MMP²⁻) is strongly influenced by the types and combinations of cations in the aqueous solution. Although Ca²⁺ promotes the formation of stable and large MMP²⁻ clusters, K⁺ alone does not. Nonetheless, clusters are larger and their link lifetimes are longer in mixtures of K⁺ and Ca²⁺. This "synergistic" effect depends sensitively on the Lennard-Jones interaction parameters between Ca²⁺ and the phosphorus oxygen and correlates with the hydration of the clusters. The pronounced MMP²⁻ clustering effect of Ca²⁺ in the presence of K⁺ is confirmed by Fourier transform infrared spectroscopy. The characterization of the cation-dependent clustering of MMP²⁻ provides a starting point for understanding cation-dependent clustering of phosphoinositides in cell membranes.
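
    The underlying graph-theoretic clustering idea can be sketched generically (our illustration, not the authors' analysis code; the cutoff and coordinate layout are assumptions): molecules become nodes, pairs closer than a cutoff become edges, and clusters are connected components.

    ```python
    import numpy as np
    import networkx as nx

    def cluster_sizes(coords, cutoff=5.0):
        """coords: (N, 3) array of molecular positions (e.g., one site per
        solute molecule). Returns cluster sizes, largest first."""
        n = len(coords)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        G = nx.Graph()
        G.add_nodes_from(range(n))
        G.add_edges_from((i, j) for i in range(n) for j in range(i + 1, n)
                         if d[i, j] < cutoff)
        return sorted((len(c) for c in nx.connected_components(G)), reverse=True)

    rng = np.random.default_rng(1)
    print(cluster_sizes(rng.uniform(0, 30, size=(50, 3)), cutoff=5.0))
    ```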

  15. Translating expert system rules into Ada code with validation and verification

    NASA Technical Reports Server (NTRS)

    Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam

    1991-01-01

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code, and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system and which detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte Carlo techniques based upon a constraint-based description of the required performance for the system.

  16. Self-Organized Critical Behavior:. the Evolution of Frozen Spin Networks Model in Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Chen, Jian-Zhen; Zhu, Jian-Yang

    In quantum gravity, we study the evolution of a two-dimensional planar open frozen spin network, in which the color (i.e., twice the spin of an edge) labeling an edge changes but the underlying graph remains fixed. The evolution rule mainly considered, the random edge model, depends on choosing an edge randomly and changing its color by an even integer. Since the change of color generally violates the gauge invariance conditions imposed on the system, a detailed propagation rule is needed, and it can be defined in many ways. Here, we provide one new propagation rule, in which the even integer involved is not a constant as in previous works, but changeable with a certain probability. In the random edge model, we do find that the evolution of the system under this propagation rule exhibits power-law behavior, which is suggestive of self-organized criticality (SOC), and this is the first verification of SOC behavior in such an evolution model for a frozen spin network. Furthermore, the increase of the average color of the spin network in time can show the nature of inflation for the universe.

  17. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    PubMed

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are now obtained and manually processing massive numbers of images is becoming unrealistic, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable; thus, many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as the BMC is considered to be related to the mechanisms of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment the BMC, we classified the texture pattern using wavelet transformation and a support vector machine. We also integrated the result of texture pattern classification into the graph-cuts-based image segmentation method, because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph cuts and texture pattern classification performs well without manual inputs by a user.
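
    The wavelet texture features feeding such a classifier can be sketched with PyWavelets (a generic illustration of ours; the patch size, wavelet choice, and energy features are assumptions):

    ```python
    import numpy as np
    import pywt

    def texture_features(patch, wavelet="haar"):
        """Energy of each 2-D wavelet sub-band of an image patch; these
        per-patch vectors would be fed to a classifier such as an SVM."""
        cA, (cH, cV, cD) = pywt.dwt2(patch, wavelet)
        return np.array([np.mean(np.square(band)) for band in (cA, cH, cV, cD)])

    patch = np.random.default_rng(0).random((32, 32))
    print(texture_features(patch))
    ```

    In the setting described above, the resulting class scores would then enter the graph-cuts energy as a regional term alongside the usual boundary (spatial-continuity) term.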

  18. A computational exploration of complementary learning mechanisms in the primate ventral visual pathway.

    PubMed

    Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M

    2016-02-01

    In order to develop transformation-invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway, in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using the two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Data characteristic analysis of air conditioning load based on fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Li, Min; Zhang, Yanchi; Xie, Da

    2018-04-01

    With the development of the economy and the improvement of people's living standards, air conditioning equipment is more and more popular, and the influence of air conditioning load on the power grid is becoming more and more serious. In this context it is necessary to study the characteristics of air conditioning load. This paper analyzes air conditioning power consumption data from an office building. The data are transformed with the fast Fourier transform in data analysis software, and a series of plots is drawn from the transformed data; the characteristics of each plot are analyzed separately. Hidden regularities of these data, which are hard to find in the time domain, are mined from the frequency-domain perspective.
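
    The frequency-domain view described here amounts to inspecting the FFT magnitude spectrum of the load series; below is a minimal sketch with synthetic data (the sampling rate, cycle amplitudes, and noise level are invented for illustration):

    ```python
    import numpy as np

    hours = np.arange(24 * 30)                           # one month, hourly
    load = 100 + 30 * np.sin(2 * np.pi * hours / 24)     # daily cycle
    load = load + 10 * np.sin(2 * np.pi * hours / (24 * 7))  # weekly cycle
    load = load + np.random.default_rng(0).normal(0, 5, hours.size)

    spectrum = np.abs(np.fft.rfft(load - load.mean()))
    freqs = np.fft.rfftfreq(hours.size, d=1.0)           # cycles per hour

    # Report the strongest periodicities, in hours
    top = np.argsort(spectrum)[-2:]
    print("dominant periods [h]:", sorted(1 / freqs[top]))
    ```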

  20. Speed of evolution on graphs

    NASA Astrophysics Data System (ADS)

    Sui, Xiukai; Wu, Bin; Wang, Long

    2015-12-01

    The likelihood that a mutant fixates in the wild population, i.e., the fixation probability, has been intensively studied in evolutionary game theory, where individuals' fitness is frequency dependent. However, it is of limited interest when fixation takes very long, so the speed of evolution becomes an important issue. In general, it is still unclear how fixation times are affected by the population structure, although fixation times have already been addressed in well-mixed populations. Here we address this issue theoretically by pair approximation and diffusion approximation on regular graphs. It is shown (i) that under neutral selection, both unconditional and conditional fixation times are shortened by increasing the number of neighbors; (ii) that under weak selection, for the simplified prisoner's dilemma game, if the benefit-to-cost ratio exceeds the degree of the graph, then the unconditional fixation time of a single cooperator is longer than in the neutral case; and (iii) that under weak selection, for the conditional fixation time, limited neighborhood size dilutes the counterintuitive stochastic slowdown which was found in well-mixed populations. Interestingly, we find that all of our results can be interpreted as those of a well-mixed population with a transformed payoff matrix. This interpretation is valid for both death-birth and birth-death processes on graphs. It bridges the fixation time in structured populations and that in well-mixed populations, and thus opens an avenue for investigating the challenging fixation times in structured populations via known results for well-mixed populations.
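
    As a companion to the analytics, fixation on a cycle is easy to simulate; below is a Monte Carlo sketch of ours (not the paper's code) for the neutral death-birth process:

    ```python
    import random

    def conditional_fixation_time(n, runs=2000, seed=0):
        """Fixation probability and mean conditional fixation time of a single
        neutral mutant on a cycle of n sites under death-birth updating."""
        rng = random.Random(seed)
        times, fixed = [], 0
        for _ in range(runs):
            state = [0] * n
            state[0] = 1
            mutants, t = 1, 0
            while 0 < mutants < n:
                i = rng.randrange(n)                   # random death...
                j = (i + rng.choice((-1, 1))) % n      # ...copy a neighbour
                mutants += state[j] - state[i]
                state[i] = state[j]
                t += 1
            if mutants == n:
                fixed += 1
                times.append(t)
        return fixed / runs, sum(times) / max(len(times), 1)

    p_fix, t_fix = conditional_fixation_time(20)
    print(f"fixation probability ~ {p_fix:.3f} (neutral expectation 1/20 = 0.05)")
    print(f"mean conditional fixation time ~ {t_fix:.0f} elementary updates")
    ```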

  1. Radiological health risks for exploratory class missions in space

    NASA Technical Reports Server (NTRS)

    Nachtwey, D. Stuart; Yang, Tracy Chui-Hsu

    1991-01-01

    The radiation risks to crewmembers on missions to the moon and Mars are studied. A graph is presented of the cross section as a function of linear energy transfer (LET) for cell inactivation and neoplastic cell transformation. Alternatives to conventional approaches to radiation protection using dose and Q are presented with attention given to a hybrid of the conventional system for particles with LET less than 100 keV/micron.

  2. Nurses and electronic health records in a Canadian hospital: examining the social organisation and programmed use of digitised nursing knowledge.

    PubMed

    Campbell, Marie L; Rankin, Janet M

    2017-03-01

    Institutional ethnography (IE) is used to examine transformations in a professional nurse's work associated with her engagement with a hospital's electronic health record (EHR) which is being updated to integrate professional caregiving and produce more efficient and effective health care. We review in the technical and scholarly literature the practices and promises of information technology and, especially of its applications in health care, finding useful the more critical and analytic perspectives. Among the latter, scholarship on the activities of economising is important to our inquiry into the actual activities that transform 'things' (in our case, nursing knowledge and action) into calculable information for objective and financially relevant decision-making. Beginning with an excerpt of observational data, we explicate observed nurse-patient interactions, discovering in them traces of institutional ruling relations that the nurse's activation of the EHR carries into the nursing setting. The EHR, we argue, materialises and generalises the ruling relations across institutionally located caregivers; its authorised information stabilises their knowing and acting, shaping health care towards a calculated effective and efficient form. Participating in the EHR's ruling practices, nurses adopt its ruling standpoint; a transformation that we conclude needs more careful analysis and debate. © 2016 Foundation for the Sociology of Health & Illness.

  3. PSG-EXPERT. An expert system for the diagnosis of sleep disorders.

    PubMed

    Fred, A; Filipe, J; Partinen, M; Paiva, T

    2000-01-01

    This paper describes PSG-EXPERT, an expert system in the domain of sleep disorders exploring polysomnographic data. The developed software tool is addressed from two points of view: (1) as an integrated environment for the development of diagnosis-oriented expert systems, and (2) as an auxiliary diagnosis tool in the particular domain of sleep disorders. Developed on a Windows platform, this software tool extends one of the most popular shells, CLIPS (C Language Integrated Production System), with the following features: a backward-chaining engine; graph-based explanation facilities; a knowledge editor including a fuzzy fact editor and a rules editor, with fact-rule integrity checking; a belief revision mechanism; and a built-in case generator and validation module. It therefore provides graphical support for knowledge acquisition, editing, explanation and validation. From an application-domain point of view, PSG-EXPERT is an auxiliary diagnosis system for sleep disorders based on polysomnographic data that aims at assisting the medical expert in the diagnosis task by providing automatic analysis of polysomnographic data, summarising the results of this analysis in a report of major findings and possible diagnoses consistent with the polysomnographic data. Sleep disorders classification follows the International Classification of Sleep Disorders. Major features of the system include: browsing of patient data records; structured navigation of sleep disorder descriptions according to ASDA definitions; internet links to related pages; diagnoses consistent with polysomnographic data; a graphical user interface including graph-based explanatory facilities; uncertainty modelling and belief revision; production of reports; and connection to remote databases.

  4. Emergent 1D Ising Behavior in an Elementary Cellular Automaton Model

    NASA Astrophysics Data System (ADS)

    Kassebaum, Paul G.; Iannacchione, Germano S.

    The fundamental nature of an evolving one-dimensional (1D) Ising model is investigated with an elementary cellular automaton (CA) simulation. The emergent CA simulation employs an ensemble of cells in one spatial dimension, each cell capable of two microstates, interacting with simple nearest-neighbor rules and incorporating an external field. The behavior of the CA model provides insight into the dynamics of coupled two-state systems not expressible by exact analytical solutions. For instance, state progression graphs show the causal dynamics of a system through time in relation to the system's entropy. Unique graphical analysis techniques are introduced through difference patterns, diffusion patterns, and state progression graphs of the 1D ensemble visualizing the evolution. All analyses are consistent with the known behavior of the 1D Ising system. The CA simulation and the new pattern recognition techniques are scalable (in dimension, complexity, and size) and have many potential applications, such as complex design of materials, control of agent systems, and evolutionary mechanism design.

  5. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    NASA Astrophysics Data System (ADS)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
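
    The graph side of this design can be illustrated with a toy molecular graph and a simple pattern query (our toy example; RMG's actual data structures and reaction-template matching are far richer):

    ```python
    # A toy molecular graph for methanol as an adjacency list: atom -> bonded atoms
    methanol = {
        "C1": ["H1", "H2", "H3", "O1"],
        "O1": ["C1", "H4"],
        "H1": ["C1"], "H2": ["C1"], "H3": ["C1"], "H4": ["O1"],
    }

    def atoms_of(element, graph):
        return [a for a in graph if a.startswith(element)]

    def abstractable_hydrogens(graph):
        """H atoms bonded to carbon -- candidate sites for H-abstraction,
        the kind of reactive-site matching a mechanism generator performs."""
        return [h for h in atoms_of("H", graph)
                if any(nbr.startswith("C") for nbr in graph[h])]

    print(abstractable_hydrogens(methanol))   # ['H1', 'H2', 'H3']
    ```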

  6. Deductibles in health insurance

    NASA Astrophysics Data System (ADS)

    Dimitriyadis, I.; Öney, Ü. N.

    2009-11-01

    This study is an extension of a simulation study that was developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation, and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, where the insured is assumed to be an exponential-utility decision maker while the insurer's pricing rule is a PH-transform, is also treated.
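
    For reference, Wang's proportional-hazards (PH) transform prices a nonnegative risk X with survival function S(x) as

    ```latex
    \pi_\rho(X) \;=\; \int_0^{\infty} \bigl[S(x)\bigr]^{1/\rho}\,dx, \qquad \rho \ge 1,
    ```

    where ρ is the risk-aversion index; ρ = 1 recovers the net premium E[X], so the loading grows with ρ rather than proportionally.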

  7. Phases, phase equilibria, and phase rules in low-dimensional systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, T., E-mail: timfrol@berkeley.edu; Mishin, Y., E-mail: ymishin@gmu.edu

    2015-07-28

    We present a unified approach to the thermodynamic description of one-, two-, and three-dimensional phases and phase transformations among them. The approach is based on a rigorous definition of a phase applicable to thermodynamic systems of any dimensionality. Within this approach, the same thermodynamic formalism can be applied for the description of phase transformations in bulk systems, interfaces, and line defects separating interface phases. For both lines and interfaces, we rigorously derive an adsorption equation, the phase coexistence equations, and other thermodynamic relations expressed in terms of generalized line and interface excess quantities. As a generalization of the Gibbs phase rule for bulk phases, we derive phase rules for lines and interfaces and predict the maximum number of phases that may coexist in systems of the respective dimensionality.
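
    For orientation, the classical Gibbs phase rule for bulk phases, which the paper generalizes to interfaces and line defects, is

    ```latex
    F \;=\; C - P + 2,
    ```

    where F is the number of thermodynamic degrees of freedom, C the number of components, and P the number of coexisting bulk phases.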

  8. Engineering rules for evaluating the efficiency of multiplexing traffic streams

    NASA Astrophysics Data System (ADS)

    Klincewicz, John G.

    2004-09-01

    It is common, either for a telecommunications service provider or for a corporate enterprise, to have multiple data networks. For example, both an IP network and an ATM or Frame Relay network could be in operation to serve different applications. This can result in parallel transport links between the same two locations, each carrying data traffic under a different protocol. In this paper, we consider some practical engineering rules, for particular situations, to evaluate whether or not it is advantageous to combine these parallel traffic streams onto a single transport link. Combining the streams requires additional overhead (a so-called "cell tax") but, in at least some situations, can result in more efficient use of modular transport capacity. Simple graphs can be used to summarize the analysis. Some interesting, and perhaps unexpected, observations can be made.
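
    The flavor of such a rule can be shown with a toy calculation (all numbers are illustrative assumptions, not values from the paper):

    ```python
    import math

    module = 155.0           # modular link capacity, Mb/s (one OC-3, say)
    ip, fr = 70.0, 60.0      # two parallel traffic streams, Mb/s
    cell_tax = 0.10          # fractional overhead added when multiplexing

    # Separate links: each stream rounds up to its own whole module.
    separate = math.ceil(ip / module) + math.ceil(fr / module)
    # Combined link: pay the cell tax, but share the modular capacity.
    combined = math.ceil((ip + fr) * (1 + cell_tax) / module)
    print(f"separate: {separate} modules, combined: {combined} module(s)")
    ```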

  9. A ECG Signal Gathering and Displaying System Based on AVR

    NASA Astrophysics Data System (ADS)

    Ning, Li; Ruilan, Zhang; Jian, Liu; Xiaochen, Wang; Shuying, Chen; Zhuolin, Lang

    2017-12-01

    This article introduces a system based on the AVR to acquire ECG data. The system uses the A/D function of the ATmega8 chip and a dot-matrix graphic LCD to implement ECG acquisition. The design details the hardware composition and software programming of the system, which realizes real-time acquisition, amplification, filtering, A/D conversion, and LCD display. Since the AVR includes an A/D conversion function and supports embedded C programming, it reduces the peripheral circuitry and decreases the time needed to design and debug the system.

  10. 76 FR 67400 - Capital Project Management

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...-0030] RIN 2132-AA92 Capital Project Management AGENCY: Federal Transit Administration (FTA), DOT... extending the comment period on its proposed rule for Capital Project Management to December 2, 2011, to...) proposing to transform the current FTA rule for project management oversight into a discrete set of...

  11. Bilinearity in Spatiotemporal Integration of Synaptic Inputs

    PubMed Central

    Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David

    2014-01-01

    Neurons process information via integration of synaptic inputs from dendrites. Many experimental results demonstrate dendritic integration could be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third, bilinear term proportional to their product, with a proportionality coefficient. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integration, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse.
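
    A minimal numerical sketch of the bilinear rule as stated: the summed response equals the two separately elicited postsynaptic potentials plus a bilinear cross term. The waveform and coefficient value are illustrative, not from the paper's cable model.

        import numpy as np

        t = np.linspace(0.0, 50.0, 501)   # time (ms)

        def psp(t0, amp):
            # Toy postsynaptic potential: exponential decay from onset t0.
            return amp * np.exp(-np.clip(t - t0, 0.0, None) / 10.0) * (t >= t0)

        v1 = psp(5.0, 2.0)     # EPSP elicited alone (mV)
        v2 = psp(12.0, -1.0)   # IPSP elicited alone (mV)
        k = -0.05              # bilinear coefficient; depends on timing/location

        v_sum = v1 + v2 + k * v1 * v2   # bilinear spatiotemporal integration rule
        print(round(float(v_sum.max()), 3))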

  12. Compiling standardized information from clinical practice: using content analysis and ICF Linking Rules in a goal-oriented youth rehabilitation program.

    PubMed

    Lustenberger, Nadia A; Prodinger, Birgit; Dorjbal, Delgerjargal; Rubinelli, Sara; Schmitt, Klaus; Scheel-Sailer, Anke

    2017-09-23

    The objective was to illustrate how routinely written narrative admission and discharge reports of a rehabilitation program for eight youths with chronic neurological health conditions can be transformed to the International Classification of Functioning, Disability and Health (ICF). First, a qualitative content analysis was conducted by building meaningful units from text segments of the reports assigned to the five elements of the Rehab-Cycle®: goal; assessment; assignment; intervention; evaluation. Second, the meaningful units were linked to the ICF using the refined ICF Linking Rules. With the first step of transformation, the emphasis of the narrative reports changed to a process-oriented interdisciplinary layout, revealing three thematic blocks of goals: mobility, self-care, and mental and social functions. The 95 unique linked ICF codes could be grouped into clinically meaningful, goal-centered ICF codes. Between the two independent linkers, the agreement rate improved after complementing the rules with additional agreements. The ICF Linking Rules can be used to compile standardized health information from narrative reports if these are structured beforehand. The process requires time and expertise. To implement the ICF into common practice, the findings provide a starting point for reporting rehabilitation that builds upon existing practice and adheres to international standards. Implications for Rehabilitation This study provides evidence that routinely collected health information from rehabilitation practice can be transformed to the International Classification of Functioning, Disability and Health by using the "ICF Linking Rules"; however, this requires time and expertise. The Rehab-Cycle®, including assessments, assignments, goal setting, interventions and goal evaluation, serves as a feasible framework for structuring this rehabilitation program and ensures that the complexity of local practice is appropriately reflected. The refined "ICF Linking Rules" lead to a standardized transformation of narrative text and thus higher quality and increased transparency. As a next step, the resulting format of goal codes supplemented by goal-clarifying codes could be validated to strengthen the implementation of the International Classification of Functioning, Disability and Health into rehabilitation routine while respecting the variety of clinical practice.

  13. A new approach to the method of source-sink potentials for molecular conduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickup, Barry T., E-mail: B.T.Pickup@sheffield.ac.uk, E-mail: P.W.Fowler@sheffield.ac.uk; Fowler, Patrick W., E-mail: B.T.Pickup@sheffield.ac.uk, E-mail: P.W.Fowler@sheffield.ac.uk; Borg, Martha

    2015-11-21

    We re-derive the tight-binding source-sink potential (SSP) equations for ballistic conduction through conjugated molecular structures in a form that avoids singularities. This enables derivation of new results for families of molecular devices in terms of eigenvectors and eigenvalues of the adjacency matrix of the molecular graph. In particular, we define the transmission of electrons through individual molecular orbitals (MO) and through MO shells. We make explicit the behaviour of the total current and individual MO and shell currents at molecular eigenvalues. A rich variety of behaviour is found. A SSP device has specific insulation or conduction at an eigenvalue of the molecular graph (a root of the characteristic polynomial) according to the multiplicities of that value in the spectra of four defined device polynomials. Conduction near eigenvalues is dominated by the transmission curves of nearby shells. A shell may be inert or active. An inert shell does not conduct at any energy, not even at its own eigenvalue. Conduction may occur at the eigenvalue of an inert shell, but is then carried entirely by other shells. If a shell is active, it carries all conduction at its own eigenvalue. For bipartite molecular graphs (alternant molecules), orbital conduction properties are governed by a pairing theorem. Inertness of shells for families such as chains and rings is predicted by selection rules based on node counting and degeneracy.
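
    The spectral ingredients of the analysis are easy to reproduce: the eigenvalues of the adjacency matrix of the molecular graph. A minimal sketch for a six-membered ring (the benzene carbon skeleton), illustrating the +/- pairing the abstract attributes to bipartite (alternant) graphs:

        import numpy as np

        n = 6
        A = np.zeros((n, n))
        for i in range(n):                  # ring bonds i -- (i+1 mod n)
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

        evals = np.linalg.eigvalsh(A)
        print(np.round(evals, 6))   # [-2, -1, -1, 1, 1, 2]: paired +/- spectrum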

  14. Performance of children with developmental dyslexia on high and low topological entropy artificial grammar learning task.

    PubMed

    Katan, Pesia; Kahta, Shani; Sasson, Ayelet; Schiff, Rachel

    2017-07-01

    Graph complexity as measured by topological entropy has been previously shown to affect performance on artificial grammar learning tasks among typically developing children. The aim of this study was to examine the effect of graph complexity on implicit sequential learning among children with developmental dyslexia. Our goal was to determine whether children's performance depends on the complexity level of the grammar system learned. We conducted two artificial grammar learning experiments that compared performance of children with developmental dyslexia with that of age- and reading level-matched controls. Experiment 1 was a high topological entropy artificial grammar learning task that aimed to establish implicit learning phenomena in children with developmental dyslexia using previously published experimental conditions. Experiment 2 was a lower topological entropy variant of that task. Results indicated that given a high topological entropy grammar system, children with developmental dyslexia, like the reading level-matched control group, had substantial difficulty in performing the task as compared to typically developing children, who exhibited intact implicit learning of the grammar. On the other hand, when tested on a lower topological entropy grammar system, all groups performed above chance level, indicating that children with developmental dyslexia were able to identify rules from a given grammar system. The results reinforce the significance of graph complexity when experimenting with artificial grammar learning tasks, particularly with dyslexic participants.
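
    For reference, the topological entropy of a grammar's transition graph is the logarithm of the spectral radius of its adjacency matrix; a minimal sketch with an illustrative two-state grammar (not one from the study):

        import numpy as np

        A = np.array([[1, 1],
                      [1, 0]])   # allowed state-to-state transitions

        entropy = float(np.log(np.max(np.abs(np.linalg.eigvals(A)))))
        print(round(entropy, 4))  # log of the golden ratio, ~0.4812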

  15. Fuzzy α-minimum spanning tree problem: definition and solutions

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan

    2016-04-01

    In this paper, the minimum spanning tree problem is investigated on the graph with fuzzy edge weights. The notion of fuzzy α-minimum spanning tree is presented based on the credibility measure, and then the solutions of the fuzzy α-minimum spanning tree problem are discussed under different assumptions. First, we respectively assume that all the edge weights are triangular fuzzy numbers and trapezoidal fuzzy numbers, and prove that the fuzzy α-minimum spanning tree problem can be transformed to a classical problem on a crisp graph in these two cases, which can be solved by classical algorithms such as the Kruskal algorithm and the Prim algorithm in polynomial time. Subsequently, as for the case that the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Some numerical examples are also provided for illustrating the effectiveness of the proposed solutions.
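
    A minimal sketch of the crisp reduction for the triangular case, assuming the standard credibility expected value (a + 2b + c)/4 of a triangular fuzzy number (a, b, c) as the crisp weight, followed by Kruskal's algorithm via networkx; the edge data are illustrative:

        import networkx as nx

        fuzzy_edges = {                      # illustrative triangular weights
            ("u", "v"): (1.0, 2.0, 4.0),
            ("v", "w"): (2.0, 3.0, 3.5),
            ("u", "w"): (0.5, 1.0, 1.5),
        }

        G = nx.Graph()
        for (i, j), (a, b, c) in fuzzy_edges.items():
            G.add_edge(i, j, weight=(a + 2.0 * b + c) / 4.0)

        T = nx.minimum_spanning_tree(G, algorithm="kruskal")
        print(sorted(T.edges()))             # crisp MST of the defuzzified graph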

  16. Computing the shape of brain networks using graph filtration and Gromov-Hausdorff metric.

    PubMed

    Lee, Hyekyoung; Chung, Moo K; Kang, Hyejin; Kim, Boong-Nyun; Lee, Dong Soo

    2011-01-01

    The difference between networks has been often assessed by the difference of global topological measures such as the clustering coefficient, degree distribution and modularity. In this paper, we introduce a new framework for measuring the network difference using the Gromov-Hausdorff (GH) distance, which is often used in shape analysis. In order to apply the GH distance, we define the shape of the brain network by piecing together the patches of locally connected nearest neighbors using the graph filtration. The shape of the network is then transformed to an algebraic form called the single linkage matrix. The single linkage matrix is subsequently used in measuring network differences using the GH distance. As an illustration, we apply the proposed framework to compare the FDG-PET based functional brain networks out of 24 attention deficit hyperactivity disorder (ADHD) children, 26 autism spectrum disorder (ASD) children and 11 pediatric control subjects.
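
    A minimal sketch of the pipeline on toy data: derive the single linkage matrix from a distance matrix via the dendrogram's cophenetic distances, then compare two networks by the maximum entrywise difference of the single linkage matrices (halved, as a GH-style bound). The random distances stand in for the connectivity-derived distances used in the paper.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, cophenet
        from scipy.spatial.distance import squareform

        def single_linkage_matrix(dist_square):
            # Single linkage dendrogram of the filtration; cophenetic
            # distances give the network's "shape" in algebraic form.
            Z = linkage(squareform(dist_square, checks=False), method="single")
            return squareform(cophenet(Z))

        rng = np.random.default_rng(0)
        D1 = squareform(rng.random(15))      # two toy 6-node networks
        D2 = squareform(rng.random(15))

        SL1, SL2 = single_linkage_matrix(D1), single_linkage_matrix(D2)
        print(np.abs(SL1 - SL2).max() / 2.0) # GH-style network difference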

  17. Fitting of Hadron Mass Spectra and Contributions to Perturbation Theory of Conformal Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Luna Acosta, German Aurelio

    The masses of observed hadrons are fitted according to the kinematic predictions of Conformal Relativity. The hypothesis gives a remarkably good fit. The isospin SU(2) gauge-invariant Lagrangian L_{πNN}(x,λ) is used in the calculation of dσ/dΩ to second-order Feynman graphs for simplified models of πN → πN. The resulting infinite mass sums over the nucleon (Conformal) families are done via the Generalized Sommerfeld-Watson Transform Theorem. Even though the models are too simple to be realistic, they indicate that if Δ-internal lines were to be included, second-order Feynman graphs may reproduce the experimental data qualitatively. The energy dependence of the propagator and couplings in Conformal QFT is different from that of ordinary QFT. Suggestions for further work are made in the areas of ultra-violet divergences and OPEC calculations.

  18. Cross-layer shared protection strategy towards data plane in software defined optical networks

    NASA Astrophysics Data System (ADS)

    Xiong, Yu; Li, Zhiqiang; Zhou, Bin; Dong, Xiancun

    2018-04-01

    In order to ensure reliable data transmission on the data plane and minimize resource consumption, a novel protection strategy towards data plane is proposed in software defined optical networks (SDON). Firstly, we establish a SDON architecture with hierarchical structure of data plane, which divides the data plane into four layers for getting fine-grained bandwidth resource. Then, we design the cross-layer routing and resource allocation based on this network architecture. Through jointly considering the bandwidth resource on all the layers, the SDN controller could allocate bandwidth resource to working path and backup path in an economical manner. Next, we construct auxiliary graphs and transform the shared protection problem into the graph vertex coloring problem. Therefore, the resource consumption on backup paths can be reduced further. The simulation results demonstrate that the proposed protection strategy can achieve lower protection overhead and higher resource utilization ratio.
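
    A minimal sketch of the reduction mentioned above, with an illustrative conflict relation: backup paths that cannot share spare capacity become adjacent vertices, and a greedy vertex coloring groups the paths that may share resources.

        import networkx as nx

        conflict = nx.Graph()
        conflict.add_edges_from([
            ("bp1", "bp2"),   # their working paths share a risk, so they
            ("bp2", "bp3"),   # cannot share the same protection resources
        ])
        conflict.add_node("bp4")  # conflicts with no other backup path

        colors = nx.greedy_color(conflict, strategy="largest_first")
        print(colors)  # paths with the same color may share spare capacity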

  19. Alteration of a motor learning rule under mirror-reversal transformation does not depend on the amplitude of visual error.

    PubMed

    Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi

    2015-05-01

    Humans' sophisticated motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but is overcome by alterations in the error correction rule during the trials. To isolate factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the rule alteration timing between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation and suddenly decreased after 3-5 trials of increase. The increase leveled off at different amplitudes in the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule is not dependent on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by statistical properties of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  20. Mexican Civil-Military Relations: Stability and Strength in an Uncertain Environment

    DTIC Science & Technology

    2011-10-28

    colonial rule ended in 1821, Mexico was an unstable yet independent state until General Porfirio Diaz established a dictatorship in 1876, running the...ruthlessness that he ruled for 34 years. The Mexican people needed stability, and Diaz' approach to governance offered economic reforms...Factbook (Washington: CSIS Americas Program, 1999), 3. transformed the ruling party's structure by establishing four "corporate sectors" within

  1. ReactPRED: a tool to predict and analyze biochemical reactions.

    PubMed

    Sivakumar, Tadi Venkata; Giri, Varun; Park, Jin Hwan; Kim, Tae Yong; Bhaduri, Anirban

    2016-11-15

    Biochemical pathways engineering is often used to synthesize or degrade target chemicals. In silico screening of the biochemical transformation space allows predicting feasible reactions constituting these pathways. Current enabling tools are customized to predict reactions based on pre-defined biochemical transformations or reaction rule sets. Reaction rule sets are usually curated manually and tailored to specific applications; they are not exhaustive. In addition, current systems are incapable of regulating and refining data with an aim to tune specificity and sensitivity. A robust and flexible tool that allows automated reaction rule set creation along with regulated pathway prediction and analysis is needed; ReactPRED aims to address this. ReactPRED is an open source, flexible, and customizable tool enabling users to predict biochemical reactions and pathways. The tool allows automated reaction rule creation from a user-defined reaction set. Additionally, reaction rule degree and rule tolerance features allow refinement of predicted data. It is available as a flexible graphical user interface and a console application. ReactPRED is available at: https://sourceforge.net/projects/reactpred/. Contact: anirban.b@samsung.com or ty76.kim@samsung.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
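
    ReactPRED itself is a standalone GUI/console tool, so purely as an illustration of what a reaction rule is, here is a minimal sketch using RDKit: a generic SMARTS-encoded ketone-reduction rule applied to acetone. The rule and substrate are illustrative, not drawn from ReactPRED's rule sets.

        from rdkit import Chem
        from rdkit.Chem import AllChem

        rule = AllChem.ReactionFromSmarts("[C:1]=[O:2]>>[C:1][O:2]")  # C=O -> C-OH
        substrate = Chem.MolFromSmiles("CC(=O)C")                     # acetone

        products = rule.RunReactants((substrate,))
        # Isopropanol skeleton, e.g. ['CC(C)O']
        print([Chem.MolToSmiles(p[0]) for p in products])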

  2. Synthesis for Structure Rewriting Systems

    NASA Astrophysics Data System (ADS)

    Kaiser, Łukasz

    The description of a single state of a modelled system is often complex in practice, but few procedures for synthesis address this problem in depth. We study systems in which a state is described by an arbitrary finite structure, and changes of the state are represented by structure rewriting rules, a generalisation of term and graph rewriting. Both the environment and the controller are allowed to change the structure in this way, and the question we ask is how a strategy for the controller that ensures a given property can be synthesised.

  3. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  4. Software For Monitoring A Computer Network

    NASA Technical Reports Server (NTRS)

    Lee, Young H.

    1992-01-01

    SNMAT is rule-based expert-system computer program designed to assist personnel in monitoring status of computer network and identifying defective computers, workstations, and other components of network. Also assists in training network operators. Network for SNMAT located at Space Flight Operations Center (SFOC) at NASA's Jet Propulsion Laboratory. Intended to serve as data-reduction system providing windows, menus, and graphs, enabling users to focus on relevant information. SNMAT expected to be adaptable to other computer networks; for example in management of repair, maintenance, and security, or in administration of planning systems, billing systems, or archives.

  5. Implicit transfer of spatial structure in visuomotor sequence learning.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2014-11-01

    Implicit learning and transfer in sequence learning are essential in daily life. Here, we investigated the implicit transfer of visuomotor sequences following a spatial transformation. In the two experiments, participants used trial and error to learn a sequence consisting of several button presses, known as the m×n task (Hikosaka et al., 1995). After this learning session, participants learned another sequence in which the button configuration was spatially transformed in one of the following ways: mirrored, rotated, and random arrangement. Our results showed that even when participants were unaware of the transformation rules, accuracy of transfer session in the mirrored and rotated groups was higher than that in the random group (i.e., implicit transfer occurred). Both those who noticed the transformation rules and those who did not (i.e., explicit and implicit transfer instances, respectively) showed faster performance in the mirrored sequences than in the rotated sequences. Taken together, the present results suggest that people can use their implicit visuomotor knowledge to spatially transform sequences and that implicit transfers are modulated by a transformation cost, similar to that in explicit transfer. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. CONSTRAINTS ON VARIABLES IN SYNTAX.

    ERIC Educational Resources Information Center

    ROSS, JOHN ROBERT

    In attempting to define "syntactic variable," the author bases his discussion on the assumption that syntactic facts are a collection of two types of rules--context-free phrase structure rules (generating underlying or deep phrase markers) and grammatical transformations, which map underlying phrase markers onto superficial (or surface) phrase…

  7. Presenting Germany's drug pricing rule as a cost-per-QALY rule.

    PubMed

    Gandjour, Afschin

    2012-06-01

    In Germany, the Institute for Quality and Efficiency in Health Care (IQWiG) makes recommendations for ceiling prices of drugs based on an evaluation of the relationship between costs and effectiveness. To set ceiling prices, IQWiG uses the following decision rule: the incremental cost-effectiveness ratio of a new drug compared with the next effective intervention should not be higher than that of the next effective intervention compared to its comparator. The purpose of this paper is to show that IQWiG's decision rule can be presented as a cost-per-QALY rule by using equity-weighted QALYs. This transformation shows where both rules share commonalities. Furthermore, it makes the underlying ethical implications of IQWiG's decision rule transparent and open to debate.
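
    A minimal sketch of the rule's arithmetic, with illustrative numbers: the reference ICER of the previous step on the efficiency frontier caps the new drug's ICER, which yields a ceiling cost.

        def ceiling_cost(cost_prev, eff_prev, cost_comp, eff_comp, eff_new):
            # Reference ICER: previous intervention vs. its comparator.
            icer_prev = (cost_prev - cost_comp) / (eff_prev - eff_comp)
            # Highest cost at which the new drug's ICER does not exceed it.
            return cost_prev + icer_prev * (eff_new - eff_prev)

        # comparator -> current drug -> new drug; effects in (equity-weighted) QALYs
        print(ceiling_cost(cost_prev=30000.0, eff_prev=3.0,
                           cost_comp=10000.0, eff_comp=2.0,
                           eff_new=3.5))   # 40000.0 = ceiling cost of the new drug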

  8. Analytic solution for American strangle options using Laplace-Carson transforms

    NASA Astrophysics Data System (ADS)

    Kang, Myungjoo; Jeon, Junkee; Han, Heejae; Lee, Somin

    2017-06-01

    A strangle has been an important options strategy when the trader believes there will be a large movement in the underlying asset but is uncertain which way the movement will be. In this paper, we derive an analytic formula for the price of American strangle options. American strangle options can be mathematically formulated into free boundary problems involving two early exercise boundaries. By using the Laplace-Carson Transform (LCT), we can derive the nonlinear system of equations satisfied by the transformed values of the two free boundaries. We then solve this nonlinear system using Newton's method and finally obtain the free boundaries and option values using numerical Laplace inversion techniques. We also derive the Greeks for American strangle options as well as the value of perpetual American strangle options. Furthermore, we present various graphs of the free boundaries and option values according to changes of the parameters.

  9. Homotopy perturbation method with Laplace Transform (LT-HPM) for solving Lane-Emden type differential equations (LETDEs).

    PubMed

    Tripathi, Rajnee; Mishra, Hradyesh Kumar

    2016-01-01

    In this communication, we describe the Homotopy Perturbation Method with Laplace Transform (LT-HPM), which is used to solve Lane-Emden type differential equations. Lane-Emden type differential equations are very difficult to solve numerically. Here we implement this method for two linear homogeneous, two linear nonhomogeneous, and four nonlinear homogeneous Lane-Emden type differential equations and make appropriate comparisons with exact solutions. In several of the examples, the method gives power-series results closer to the exact solutions than other existing methods. The Laplace transform is used to accelerate the convergence of the power series, and the results, shown in tables and graphs, are in good agreement with other existing methods in the literature. The results show that LT-HPM is very effective and easy to implement.

  10. Pseudo-extravasation rate constant of dynamic susceptibility contrast-MRI determined from pharmacokinetic first principles.

    PubMed

    Li, Xin; Varallyay, Csanad G; Gahramanov, Seymur; Fu, Rongwei; Rooney, William D; Neuwelt, Edward A

    2017-11-01

    Dynamic susceptibility contrast-magnetic resonance imaging (DSC-MRI) is widely used to obtain informative perfusion imaging biomarkers, such as the relative cerebral blood volume (rCBV). The related post-processing software packages for DSC-MRI are available from major MRI instrument manufacturers and third-party vendors. One unique aspect of DSC-MRI with low-molecular-weight gadolinium (Gd)-based contrast reagent (CR) is that CR molecules leak into the interstitium space and therefore confound the DSC signal detected. Several approaches to correct this leakage effect have been proposed throughout the years. Amongst the most popular is the Boxerman-Schmainda-Weisskoff (BSW) K2 leakage correction approach, in which the K2 pseudo-first-order rate constant quantifies the leakage. In this work, we propose a new method for the BSW leakage correction approach. Based on the pharmacokinetic interpretation of the data, the commonly adopted R2* expression accounting for contributions from both intravascular and extravasating CR components is transformed using a method mathematically similar to Gjedde-Patlak linearization. Then, the leakage rate constant (KL) can be determined as the slope of the linear portion of a plot of the transformed data. Using the DSC data of high-molecular-weight (~750 kDa), iron-based, intravascular Ferumoxytol (FeO), the pharmacokinetic interpretation of the new paradigm is empirically validated. The primary objective of this work is to empirically demonstrate that a linear portion often exists in the graph of the transformed data. This linear portion provides a clear definition of the Gd CR pseudo-leakage rate constant, which equals the slope derived from the linear segment. A secondary objective is to demonstrate that transformed points from the initial transient period during the CR wash-in often deviate from the linear trend of the linearized graph. The inclusion of these points will have a negative impact on the accuracy of the leakage rate constant, and even make it time dependent. Copyright © 2017 John Wiley & Sons, Ltd.
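
    A minimal sketch of the estimation step on synthetic data: after the Patlak-like transformation, the leakage rate constant is the slope of the linear portion, and early wash-in points are excluded before fitting. All numbers are illustrative.

        import numpy as np

        t = np.linspace(0.0, 90.0, 91)          # time (s)
        K_L_true = 0.012
        y = 0.3 + K_L_true * t                  # transformed data: linear part
        y[:15] += 0.2 * np.exp(-t[:15] / 4.0)   # wash-in transient to exclude

        linear = t > 20.0                       # keep only the linear portion
        K_L, intercept = np.polyfit(t[linear], y[linear], 1)
        print(round(float(K_L), 4))             # ~0.012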

  11. A manifold learning approach to target detection in high-resolution hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.

    Imagery collected from airborne platforms and satellites provide an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led towards a focus on target detection, and in the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.

  12. Willpower and Personal Rules.

    ERIC Educational Resources Information Center

    Benabou, Roland; Tirole, Jean

    2004-01-01

    We develop a theory of internal commitments or "personal rules" based on self-reputation over one's willpower, which transforms lapses into precedents that undermine future self-restraint. The foundation for this mechanism is the imperfect recall of past motives and feelings, leading people to draw inferences from their past actions. The degree of…

  13. Notions of "Generation" in Rhetorical Studies.

    ERIC Educational Resources Information Center

    Young, Richard

    A study of the meanings of "generation," a popular term in current rhetorical jargon, reveals important developments in the art and theory of rhetoric. As now used, it refers without clear distinction to rule-governed, heuristic, and trial-and-error procedures. The rule-governed procedures of transformation grammar are being employed to…

  14. Computer-Based Linguistic Analysis.

    ERIC Educational Resources Information Center

    Wright, James R.

    Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…

  15. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that glueing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on R+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  16. From Feynman rules to conserved quantum numbers, I

    NASA Astrophysics Data System (ADS)

    Nogueira, P.

    2017-05-01

    In the context of Quantum Field Theory (QFT) there is often the need to find sets of graph-like diagrams (the so-called Feynman diagrams) for a given physical model. If negative, the answer to the related problem 'Are there any diagrams with this set of external fields?' may settle certain physical questions at once. Here the latter problem is formulated in terms of a system of linear diophantine equations derived from the Lagrangian density, from which necessary conditions for the existence of the required diagrams may be obtained. Those conditions are equalities that look like either linear diophantine equations or linear modular (i.e. congruence) equations, and may be found by means of fairly simple algorithms that involve integer computations. The diophantine equations so obtained represent (particle) number conservation rules, and are related to the conserved (additive) quantum numbers that may be assigned to the fields of the model.
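
    A minimal sketch of the necessary condition in practice: each conserved additive quantum number yields a linear equation over the external fields, and a nonzero total rules out any diagram with those legs. The charge assignments below are illustrative, not derived from a particular Lagrangian.

        # field -> (electric charge, lepton number), illustrative assignments
        charges = {
            "e-":     (-1, +1),
            "e+":     (+1, -1),
            "photon": ( 0,  0),
        }

        def diagrams_possible(external_fields):
            # Each additive quantum number must sum to zero over all legs.
            totals = (sum(charges[f][k] for f in external_fields) for k in range(2))
            return all(q == 0 for q in totals)

        print(diagrams_possible(["e-", "e+", "photon"]))  # True
        print(diagrams_possible(["e-", "photon"]))        # False: charge violated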

  17. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    PubMed

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.

  18. Expert systems for automated maintenance of a Mars oxygen production system

    NASA Astrophysics Data System (ADS)

    Huang, Jen-Kuang; Ho, Ming-Tsang; Ash, Robert L.

    1992-08-01

    Application of expert system concepts to a breadboard Mars oxygen processor unit have been studied and tested. The research was directed toward developing the methodology required to enable autonomous operation and control of these simple chemical processors at Mars. Failure detection and isolation was the key area of concern, and schemes using forward chaining, backward chaining, knowledge-based expert systems, and rule-based expert systems were examined. Tests and simulations were conducted that investigated self-health checkout, emergency shutdown, and fault detection, in addition to normal control activities. A dynamic system model was developed using the Bond-Graph technique. The dynamic model agreed well with tests involving sudden reductions in throughput. However, nonlinear effects were observed during tests that incorporated step function increases in flow variables. Computer simulations and experiments have demonstrated the feasibility of expert systems utilizing rule-based diagnosis and decision-making algorithms.

  19. An enhancement of ROC curves made them clinically relevant for diagnostic-test comparison and optimal-threshold determination.

    PubMed

    Subtil, Fabien; Rabilloud, Muriel

    2015-07-01

    Receiver operating characteristic (ROC) curves are often used to compare continuous diagnostic tests or determine the optimal threshold of a test; however, they do not consider the costs of misclassifications or the disease prevalence. The ROC graph was extended to allow for these aspects. Two new lines are added to the ROC graph: a sensitivity line and a specificity line. Their slopes depend on the disease prevalence and on the ratio of the net benefit of treating a diseased subject to the net cost of treating a nondiseased one. First, these lines help researchers determine the range of specificities within which comparison of partial areas under the curves is clinically relevant. Second, the point of the ROC curve farthest from the specificity line is shown to be the optimal threshold in terms of expected utility. This method was applied: (1) to determine the optimal threshold of the ratio of specific immunoglobulin G (IgG) to total IgG for the diagnosis of congenital toxoplasmosis and (2) to select, between two markers, the more accurate for the diagnosis of left ventricular hypertrophy in hypertensive subjects. The two additional lines transform the statistically valid ROC graph into a clinically relevant tool for test selection and threshold determination. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman M.; Zaremba, Marek B.

    2003-03-01

    Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a quite rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. It consists of changing positions of existing vertices according to the minimum of the mean orthogonal distances and, eventually, adding new vertices in between if a given accuracy is not yet satisfied. Vertices of initial piecewise-linear skeletons are extracted by using a multi-scale image relevance function. The relevance function is an image local operator that has local maxima at the centers of the objects of interest.

  1. Rule-based graph theory to enable exploration of the space system architecture design space

    NASA Astrophysics Data System (ADS)

    Arney, Dale Curtis

    The primary goal of this research is to improve upon system architecture modeling in order to enable the exploration of design space options. A system architecture is the description of the functional and physical allocation of elements and the relationships, interactions, and interfaces between those elements necessary to satisfy a set of constraints and requirements. The functional allocation defines the functions that each system (element) performs, and the physical allocation defines the systems required to meet those functions. Trading the functionality between systems leads to the architecture-level design space that is available to the system architect. The research presents a methodology that enables the modeling of complex space system architectures using a mathematical framework. To accomplish the goal of improved architecture modeling, the framework meets five goals: technical credibility, adaptability, flexibility, intuitiveness, and exhaustiveness. The framework is technically credible, in that it produces an accurate and complete representation of the system architecture under consideration. The framework is adaptable, in that it provides the ability to create user-specified locations, steady states, and functions. The framework is flexible, in that it allows the user to model system architectures to multiple destinations without changing the underlying framework. The framework is intuitive for user input while still creating a comprehensive mathematical representation that maintains the necessary information to completely model complex system architectures. Finally, the framework is exhaustive, in that it provides the ability to explore the entire system architecture design space. After an extensive search of the literature, graph theory presents a valuable mechanism for representing the flow of information or vehicles within a simple mathematical framework. Graph theory has been used in developing mathematical models of many transportation and network flow problems in the past, where nodes represent physical locations and edges represent the means by which information or vehicles travel between those locations. In space system architecting, expressing the physical locations (low-Earth orbit, low-lunar orbit, etc.) and steady states (interplanetary trajectory) as nodes and the different means of moving between the nodes (propulsive maneuvers, etc.) as edges formulates a mathematical representation of this design space. The selection of a given system architecture using graph theory entails defining the paths that the systems take through the space system architecture graph. A path through the graph is defined as a list of edges that are traversed, which in turn defines functions performed by the system. A structure to compactly represent this information is a matrix, called the system map, in which the column indices are associated with the systems that exist and row indices are associated with the edges, or functions, to which each system has access. Several contributions have been added to the state of the art in space system architecture analysis. The framework adds the capability to rapidly explore the design space without the need to limit trade options or the need for user interaction during the exploration process. The unique mathematical representation of a system architecture, through the use of the adjacency, incidence, and system map matrices, enables automated design space exploration using stochastic optimization processes. 
The innovative rule-based graph traversal algorithm ensures functional feasibility of each system architecture that is analyzed, and the automatic generation of the system hierarchy eliminates the need for the user to manually determine the relationships between systems during or before the design space exploration process. Finally, the rapid evaluation of system architectures for various mission types enables analysis of the system architecture design space for multiple destinations within an evolutionary exploration program. (Abstract shortened by UMI.).
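
    A minimal sketch of the representation described above, with illustrative node, edge, and system names: locations and steady states are nodes, transfers are edges, and a binary system map records which system can perform which edge (function); a candidate architecture is feasible only if every edge of the mission path is covered.

        import numpy as np
        import networkx as nx

        arch = nx.DiGraph()
        edges = [("Earth", "LEO"), ("LEO", "LLO"), ("LLO", "LunarSurface")]
        arch.add_edges_from(edges)

        systems = ["LaunchVehicle", "TransferStage", "Lander"]
        system_map = np.array([   # rows: edges (functions); columns: systems
            [1, 0, 0],            # Earth -> LEO        : launch vehicle
            [0, 1, 0],            # LEO -> LLO          : transfer stage
            [0, 0, 1],            # LLO -> LunarSurface : lander
        ])

        path = nx.shortest_path(arch, "Earth", "LunarSurface")
        path_edges = list(zip(path, path[1:]))
        feasible = all(system_map[edges.index(e)].any() for e in path_edges)
        print(path, feasible)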

  2. Fully Automated Segmentation of Fluid/Cyst Regions in Optical Coherence Tomography Images With Diabetic Macular Edema Using Neutrosophic Sets and Graph Algorithms.

    PubMed

    Rashno, Abdolreza; Koozekanani, Dara D; Drayna, Paul M; Nazari, Behzad; Sadri, Saeed; Rabbani, Hossein; Parhi, Keshab K

    2018-05-01

    This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new correction operation is introduced to compute the remaining sets in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myeloid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach for estimating the number of clusters in an automated manner. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between middle layers. The proposed method is evaluated using two publicly available datasets (Duke and Optima) and a third local dataset from the UMN clinic, which is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. Also, the proposed algorithm outperforms a prior method for the Optima dataset by 6%, 22%, and 23% with respect to the dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm achieves sensitivity of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and University of Minnesota (UMN) datasets, respectively.

  3. Brain Network Analysis: Separating Cost from Topology Using Cost-Integration

    PubMed Central

    Ginestet, Cedric E.; Nichols, Thomas E.; Bullmore, Ed T.; Simmons, Andrew

    2011-01-01

    A statistically principled way of conducting brain network analysis is still lacking. Comparison of different populations of brain networks is hard because topology is inherently dependent on wiring cost, where cost is defined as the number of edges in an unweighted graph. In this paper, we evaluate the benefits and limitations associated with using cost-integrated topological metrics. Our focus is on comparing populations of weighted undirected graphs that differ in mean association weight, using global efficiency. Our key result shows that integrating over cost is equivalent to controlling for any monotonic transformation of the weight set of a weighted graph. That is, when integrating over cost, we eliminate the differences in topology that may be due to a monotonic transformation of the weight set. Our result holds for any unweighted topological measure, and for any choice of distribution over cost levels. Cost-integration is therefore helpful in disentangling differences in cost from differences in topology. By contrast, we show that the use of the weighted version of a topological metric is generally not a valid approach to this problem. Indeed, we prove that, under weak conditions, the use of the weighted version of global efficiency is equivalent to simply comparing weighted costs. Thus, we recommend the reporting of (i) differences in weighted costs and (ii) differences in cost-integrated topological measures with respect to different distributions over the cost domain. We demonstrate the application of these techniques in a re-analysis of an fMRI working memory task. We also provide a Monte Carlo method for approximating cost-integrated topological measures. Finally, we discuss the limitations of integrating topology over cost, which may pose problems when some weights are zero, when multiplicities exist in the ranks of the weights, and when one expects subtle cost-dependent topological differences, which could be masked by cost-integration. PMID:21829437
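
    A minimal sketch of cost-integration on a toy weighted graph: threshold at every cost (edge count), compute the unweighted metric at each cost, and average over costs; random weights stand in for association weights.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        W = nx.complete_graph(20)
        for u, v in W.edges():
            W[u][v]["weight"] = rng.random()     # association weights

        # Cost k keeps the k strongest edges of the weighted graph.
        ranked = sorted(W.edges(data="weight"), key=lambda e: -e[2])
        effs = []
        for k in range(1, len(ranked) + 1):
            G = nx.Graph()
            G.add_nodes_from(W)
            G.add_edges_from((u, v) for u, v, _ in ranked[:k])
            effs.append(nx.global_efficiency(G))

        # Cost-integrated global efficiency, uniform distribution over cost.
        print(round(float(np.mean(effs)), 4))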

  4. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms a MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study, this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed on two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.

  5. Noninvasive assessment of extracellular and intracellular dehydration in healthy humans using the resistance-reactance-score graph method.

    PubMed

    Heavens, Kristen R; Charkoudian, Nisha; O'Brien, Catherine; Kenefick, Robert W; Cheuvront, Samuel N

    2016-03-01

    Few dehydration assessment measures provide accurate information; most are based on reference change values and very few are diagnostically accurate from a single observation or measure. Bioelectrical impedance may lack the precision to detect common forms of dehydration in healthy individuals. Limitations in bioimpedance may be addressed by a unique resistance-reactance (RXc)-score graph method, which transforms vector components into z scores for use with any impedance analyzer in any population. We tested whether the RXc-score graph method provides accurate single or serial assessments of dehydration when compared with gold-standard measures of total body water by using stable isotope dilution (deuterium oxide) combined with body-weight changes. We retrospectively analyzed data from a previous study in which 9 healthy young men participated in 3 trials: euhydration (EUH), extracellular dehydration (ED; via a diuretic), and intracellular dehydration (ID; via exercise in the heat). Participants lost 4-5% of their body weight during the dehydration trials; volume loss was similar between trials (ID compared with ED group: 3.5 ± 0.8 compared with 3.0 ± 0.6 L; P > 0.05). Despite significant losses of body water, most RXc vector scores for ED and ID groups were classified as "normal" (within the 75% population tolerance ellipse). However, directional displacement of vectors was consistent with loss of volume in both ED and ID conditions compared with the EUH condition and tended to be longer in ED than in ID conditions (P = 0.054). We conclude that, whereas individual RXc-score graph values do not provide accurate detection of dehydration from single measurements, directional changes in vector values from serial measurements are consistent with fluid loss for both ED and ID conditions. The RXc-score graph method may therefore alert clinicians to changes in hydration state, which may bolster the interpretation of other recognized change measures of hydration. © 2016 American Society for Nutrition.

  6. Spatio-Semantic Comparison of Large 3D City Models in CityGML Using a Graph Database

    NASA Astrophysics Data System (ADS)

    Nguyen, S. H.; Yao, Z.; Kolbe, T. H.

    2017-10-01

    A city may have multiple CityGML documents recorded at different times or surveyed by different users. To analyse the city's evolution over a given period of time, as well as to update or edit the city model without negating modifications made by other users, it is of utmost importance to first compare, detect and locate spatio-semantic changes between CityGML datasets. This is however difficult due to the fact that CityGML elements belong to a complex hierarchical structure containing multi-level deep associations, which can basically be considered as a graph. Moreover, CityGML allows multiple syntactic ways to define an object leading to syntactic ambiguities in the exchange format. Furthermore, CityGML is capable of including not only 3D urban objects' graphical appearances but also their semantic properties. Since to date, no known algorithm is capable of detecting spatio-semantic changes in CityGML documents, a frequent approach is to replace the older models completely with the newer ones, which not only costs computational resources, but also loses track of collaborative and chronological changes. Thus, this research proposes an approach capable of comparing two arbitrarily large-sized CityGML documents on both semantic and geometric level. Detected deviations are then attached to their respective sources and can easily be retrieved on demand. As a result, updating a 3D city model using this approach is much more efficient as only real changes are committed. To achieve this, the research employs a graph database as the main data structure for storing and processing CityGML datasets in three major steps: mapping, matching and updating. The mapping process transforms input CityGML documents into respective graph representations. The matching process compares these graphs and attaches edit operations on the fly. Found changes can then be executed using the Web Feature Service (WFS), the standard interface for updating geographical features across the web.

  7. Distributed rewiring model for complex networking: The effect of local rewiring rules on final structural properties.

    PubMed

    López Chavira, Magali Alexander; Marcelín-Jiménez, Ricardo

    2017-01-01

    The study of complex networks has become an important subject over the last decades. It has been shown that these structures have special features, such as their diameter, or their average path length, which in turn are the explanation of some functional properties in a system such as its fault tolerance, its fragility before attacks, or the ability to support routing procedures. In the present work, we study some of the forces that help a network to evolve to the point where structural properties are settled. Although our work is mainly focused on the possibility of applying our ideas to Information and Communication Technologies systems, we consider that our results may contribute to understanding different scenarios where complex networks have become an important modeling tool. Using a discrete event simulator, we get each node to discover the shortcuts that may connect it with regions away from its local environment. Based on this partial knowledge, each node can rewire some of its links, which allows modifying the topology of the entire underlying graph to achieve new structural properties. We proposed a distributed rewiring model that creates networks with features similar to those found in complex networks. Although each node acts in a distributed way and seeking to reduce only the trajectories of its packets, we observed a decrease of diameter and an increase in clustering coefficient in the global structure compared to the initial graph. Furthermore, we can find different final structures depending on slight changes in the local rewiring rules.
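
    A minimal sketch of this kind of experiment, with a random (Watts-Strogatz-like) rewiring rule standing in for the paper's traffic-driven local rule: start from a ring lattice, rewire a few links to long-range targets, and track diameter and clustering.

        import random
        import networkx as nx

        random.seed(0)
        G = nx.watts_strogatz_graph(n=200, k=6, p=0.0)   # pure ring lattice
        print(nx.diameter(G), round(nx.average_clustering(G), 3))

        # Rewire 5% of the edges to random long-range targets.
        for u, v in random.sample(list(G.edges()), G.number_of_edges() // 20):
            w = random.randrange(200)
            if w not in (u, v) and not G.has_edge(u, w):
                G.remove_edge(u, v)
                G.add_edge(u, w)

        if nx.is_connected(G):   # a few shortcuts shrink the diameter sharply
            print(nx.diameter(G), round(nx.average_clustering(G), 3))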

  8. Computing Fourier integral operators with caustics

    NASA Astrophysics Data System (ADS)

    Caday, Peter

    2016-12-01

    Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards and forward and inverse discrete Fourier transforms, which can be computed in O(N^{n + (n-1)/2} log N) time for an n-dimensional FIO. The same preprocessed data also allows computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm's output, and easily extendible MATLAB/C++ source code is available from the author.

  9. YouTube, Critical Pedagogy, and Media Activism

    ERIC Educational Resources Information Center

    Kellner, Douglas; Kim, Gooyong

    2010-01-01

    Critical pedagogy views education as a form of cultural politics that is fundamental to social transformation, aiming to cultivate human agency and transformative activity. The explosion of information and communication technologies (ICTs) has provided ordinary people with unprecedented opportunities to take on the ruling educational power…

  10. A Machine Learning Approach to Discover Rules for Expressive Performance Actions in Jazz Guitar Music.

    PubMed

    Giraldo, Sergio I; Ramirez, Rafael

    2016-01-01

    Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules.
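
    A minimal sketch of inducing an interpretable performance-rule model with a shallow decision tree (scikit-learn assumed). The feature names and the synthetic target below are stand-ins for the paper's score-aligned note descriptors, used only to show the rule-induction shape of the pipeline.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor, export_text

      rng = np.random.default_rng(0)
      # Stand-in note descriptors: duration (beats), pitch (MIDI), metrical strength.
      X = rng.uniform([0.25, 40, 0.0], [2.0, 90, 1.0], size=(200, 3))
      # Synthetic target: performed/score duration ratio (real data would come
      # from recordings aligned with their scores).
      y = 1.0 + 0.2 * (X[:, 2] > 0.5) - 0.1 * (X[:, 0] > 1.0) + rng.normal(0, 0.02, 200)

      model = DecisionTreeRegressor(max_depth=2).fit(X, y)
      # The tree prints as human-readable IF-THEN performance rules.
      print(export_text(model, feature_names=["duration", "pitch", "strength"]))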

  11. A Machine Learning Approach to Discover Rules for Expressive Performance Actions in Jazz Guitar Music

    PubMed Central

    Giraldo, Sergio I.; Ramirez, Rafael

    2016-01-01

    Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules. PMID:28066290

  12. Angular momentum conservation law in light-front quantum field theory

    DOE PAGES

    Chiu, Kelly Yu-Ju; Brodsky, Stanley J.

    2017-03-31

    We prove the Lorentz invariance of the angular momentum conservation law and the helicity sum rule for relativistic composite systems in the light-front formulation. We explicitly show that j^3, the z-component of the angular momentum, remains unchanged under Lorentz transformations generated by the light-front kinematical boost operators. The invariance of j^3 under Lorentz transformations is a feature unique to the front form. Applying the Lorentz invariance of the angular quantum number in the front form, we obtain a selection rule for the orbital angular momentum which can be used to eliminate certain interaction vertices in QED and QCD. We also generalize the selection rule to any renormalizable theory and show that there exists an upper bound on the change of orbital angular momentum in scattering processes at any fixed order in perturbation theory.

  13. Features of development process displacement of earth’s surface when dredging coal in Eastern Donbas

    NASA Astrophysics Data System (ADS)

    Posylniy, Yu V.; Versilov, S. O.; Shurygin, D. N.; Kalinchenko, V. M.

    2017-10-01

    The results of studies of the process of the earth's surface displacement under the influence of adjacent longwalls are presented. It is established that the actual distributions of soil subsidence along the dip and the rise of the seam, with the same boundaries of the settlement process, differ both from each other and from the subsidence distribution recommended by the rules of structure protection. Applying a new boundary criterion - a relative subsidence of 0.03 - allows one to pass from two distributions to a single one, which still differs from the subsidence distribution of the protection rules. The use of a new geometrical element - a virtual point of the subsidence trough - allows one to transform the actual distribution of subsidence into the model distribution of the structure-protection rules. When the subsidence curves are transformed, the boundary points vary and, consequently, so do the boundary angles.

  14. Angular momentum conservation law in light-front quantum field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, Kelly Yu-Ju; Brodsky, Stanley J.

    We prove the Lorentz invariance of the angular momentum conservation law and the helicity sum rule for relativistic composite systems in the light-front formulation. We explicitly show that j^3, the z-component of the angular momentum, remains unchanged under Lorentz transformations generated by the light-front kinematical boost operators. The invariance of j^3 under Lorentz transformations is a feature unique to the front form. Applying the Lorentz invariance of the angular quantum number in the front form, we obtain a selection rule for the orbital angular momentum which can be used to eliminate certain interaction vertices in QED and QCD. We also generalize the selection rule to any renormalizable theory and show that there exists an upper bound on the change of orbital angular momentum in scattering processes at any fixed order in perturbation theory.

  15. Angular momentum conservation law in light-front quantum field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, Kelly Yu-Ju; Brodsky, Stanley J.

    We prove the Lorentz invariance of the angular momentum conservation law and the helicity sum rule for relativistic composite systems in the light-front formulation. We explicitly show that j^3, the z-component of the angular momentum, remains unchanged under Lorentz transformations generated by the light-front kinematical boost operators. The invariance of j^3 under Lorentz transformations is a feature unique to the front form. Applying the Lorentz invariance of the angular quantum number in the front form, we obtain a selection rule for the orbital angular momentum which can be used to eliminate certain interaction vertices in QED and QCD. We also generalize the selection rule to any renormalizable theory and show that there exists an upper bound on the change of orbital angular momentum in scattering processes at any fixed order in perturbation theory.

  16. Extended canonical field theory of matter and space-time

    NASA Astrophysics Data System (ADS)

    Struckmeier, J.; Vasak, D.; Stoecker, H.

    2015-11-01

    Any physical theory that follows from an action principle should be invariant in its form under mappings of the reference frame in order to comply with the general principle of relativity. The required form-invariance of the action principle implies that the mapping must constitute a particular extended canonical transformation. In the realm of the covariant Hamiltonian formulation of field theory, the term "extended" implies that not only the fields but also the space-time geometry is subject to transformation. A canonical transformation maintains the general form of the action principle by simultaneously defining the appropriate transformation rules for the fields, the conjugate momentum fields, and the transformation rule for the Hamiltonian. Provided that the given system of fields exhibits a particular global symmetry, the associated extended canonical transformation determines an amended Hamiltonian that is form-invariant under the corresponding local symmetry. This will be worked out for a Hamiltonian system of scalar and vector fields that is presupposed to be form-invariant under space-time transformations x^μ ↦ X^μ with ∂X^μ/∂x^ν = const., hence under global space-time transformations such as the Poincaré transformation. The corresponding amended system that is form-invariant under local space-time transformations, ∂X^μ/∂x^ν ≠ const., then describes the coupling of the fields to the space-time geometry and thus yields the dynamics of space-time that is associated with the given physical system. Non-zero spin matter thereby determines the space-time curvature via a well-defined source term in a covariant Poisson-type equation for the Riemann tensor.

  17. Inference for Transition Network Grammars,

    DTIC Science & Technology

    1976-01-01

    A language L(G) is said to be structurally complete if each rewriting rule… The power of an augmented transition network (ATN)… Clearly, a context-sensitive grammar can be represented as a context-free grammar (base) plus a set of transformational rules… are the foundations of grammars of different complexities. The CSL is obtained by applying…

  18. Identifying new persistent and bioaccumulative organics among chemicals in commerce. III: byproducts, impurities, and transformation products.

    PubMed

    Howard, Philip H; Muir, Derek C G

    2013-05-21

    The goal of this series of studies was to identify commercial chemicals that might be persistent and bioaccumulative (PB) and that were not being considered in current wastewater and aquatic environmental measurement programs. In this study, we focus on chemicals that are not on commercial chemical lists such as the U.S. EPA's Inventory Update Rule but may be found as byproducts or impurities in commercial chemicals, or are likely transformation products from commercial chemical use. We evaluated the 610 chemicals from our earlier publication as well as high production volume chemicals and identified 320 chemicals (39 byproducts and impurities, and 281 transformation products) that could be potential PB chemicals. Four examples are discussed in detail; these chemicals had a fair amount of information on their commercial synthesis and on the byproducts and impurities that might be found in the commercial product. Unfortunately, for many of the 610 chemicals, as well as the transformation products, little or no information was available. Use of computer-aided software to predict transformation pathways, in combination with biodegradation rules of thumb and some basic organic chemistry, has allowed 281 potential PB transformation products to be suggested for some of the 610 commercial chemicals; additional PB transformation products were not selected, since microbial degradation often results in less persistent and less bioaccumulative metabolites.

  19. The norms, rules and motivational values driving sustainable remediation of contaminated environments: A study of implementation.

    PubMed

    Prior, Jason

    2016-02-15

    Efforts to achieve sustainability are transforming the norms, rules and values that affect the remediation of contaminated environments. This is altering the ways in which remediation impacts the total environment. Despite this transformation, few studies have provided systematic insights into the diverse norms and rules that drive the implementation of sustainable remediation at contaminated sites, and no studies have investigated how values motivate compliance with these norms and rules. This study is a systematic analysis of the rules, norms and motivational values embedded in sustainable remediation processes at three sites across Australia, using in-depth interviews conducted with 18 participants between 2011 and 2014, through the application of Crawford and Ostrom's Institutional Grammar and Schwartz's value framework. These approaches offered methods for identifying the rules, norms, and motivational values that guided participants' actions within remediation processes at these sites. The findings identify a core set of 16 norms and 18 rules (sanctions) used by participants to implement sustainable remediation at the sites. These norms and rules define the position of participants within the process, provide means for incorporating sustainability into established remediation practices, and define the scope of outcomes that constitute sustainable remediation. The findings revealed that motivational values focused on public interest and self-interest influenced participants' compliance with norms and rules, and also showed a strong interdependence between the norms and rules (sanctions) within the remediation processes and the normative principles operating within the broader domain of environmental management and planning. The paper concludes with a discussion of: the system of norms operating within sustainable remediation (which far exceed those associated with ESD); their link, through rules (sanctions), to contemporary styles of regulatory enforcement; and the underlying balance of public-interest and self-interest values that drives participants' involvement in sustainable remediation.

  20. Reusable rocket engine turbopump health monitoring system, part 3

    NASA Technical Reports Server (NTRS)

    Perry, John G.

    1989-01-01

    Degradation mechanisms and sensor identification/selection resulted in a list of degradation modes and a list of sensors that are utilized in the diagnosis of these degradation modes. The sensor list is divided into primary and secondary indicators of the corresponding degradation modes. The signal conditioning requirements are discussed, describing the methods of producing the Space Shuttle Main Engine (SSME) post-hot-fire test data to be utilized by the Health Monitoring System. Development of the diagnostic logic and algorithms is also presented. The knowledge engineering approach, as utilized, includes the knowledge acquisition effort, characterization of the expert's problem-solving strategy, conceptual definition of the form of the applicable knowledge base and rule base, and identification of an appropriate inferencing mechanism for the problem domain. The resulting logic flow graphs detail the diagnosis/prognosis procedure as followed by the experts. The nature and content of required support data and databases are also presented. The distinction between deep and shallow types of knowledge is identified. Computer coding of the Health Monitoring System is shown to follow the logical inferencing of the logic flow graphs/algorithms.

  1. Coordination of networked systems on digraphs with multiple leaders via pinning control

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Lewis, Frank L.

    2012-02-01

    It is well known that achieving consensus among a group of multi-vehicle systems by local distributed control is feasible if and only if all nodes in the communication digraph are reachable from a single (root) node. In this article, we consider the more general case in which the communication digraph of the networked multi-vehicle systems is weakly connected and has two or more zero-in-degree, strongly connected subgraphs, i.e. there are two or more leader groups. Based on the pinning control strategy, the feasibility problem of achieving second-order controlled consensus is studied. First, a necessary and sufficient condition is given for the case of fixed topology. Then the method to design the controller and the rule to choose the pinned vehicles are discussed. The proposed approach allows us to extend several existing results for undirected graphs to directed balanced graphs. A sufficient condition is proposed for the case where the coupling topology is variable. As an illustrative example, a second-order controlled consensus scheme is applied to coordinate the movement of networked multiple mobile robots.

  2. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    DOE PAGES

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; ...

    2016-02-24

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
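
    An illustration of the two core data structures named here, as a sketch: a molecular graph stored as an adjacency mapping and a Benson-style group-additivity sum. The group values are approximate textbook-style numbers, not RMG's database, and RMG's real Molecule and thermo classes carry far more detail.

      # Illustrative molecular graph for ethanol, CH3-CH2-OH: atoms are nodes,
      # bonds are edges (hydrogens listed for completeness).
      ethanol = {
          "C1": ["C2", "H", "H", "H"],
          "C2": ["C1", "O1", "H", "H"],
          "O1": ["C2", "H"],
      }

      # Benson-style group additivity: sum per-group contributions to estimate a
      # thermochemical property.  Values are illustrative placeholders (kJ/mol).
      GROUP_DH = {"C-(C)(H)3": -42.2, "C-(C)(O)(H)2": -33.9, "O-(C)(H)": -158.6}

      def enthalpy(groups):
          """Estimate the enthalpy of formation from group counts."""
          return sum(GROUP_DH[g] * n for g, n in groups.items())

      print(len(ethanol), "heavy atoms;",
            enthalpy({"C-(C)(H)3": 1, "C-(C)(O)(H)2": 1, "O-(C)(H)": 1}), "kJ/mol")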

  3. Pattern recognition tool based on complex network-based approach

    NASA Astrophysics Data System (ADS)

    Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir

    2013-02-01

    This work proposes a generalization of the method introduced by the authors in 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and using complex-network rules to characterize it, we generalize the technique: the work proposes a mathematical tool for characterizing signals, curves, and sets of points. To evaluate the descriptive power of the proposal, an experiment on plant identification based on leaf-vein images is conducted. Leaf venation is a taxonomic characteristic used for plant identification, and these structures are complex and difficult to represent as signals or curves, and hence to analyze with classical pattern-recognition approaches. Here, we model the veins as a set of points and build graphs from them. As features, we use degree and joint-degree measurements in a dynamic evolution. The results demonstrate that the technique has good discriminative power and can be used for plant identification, as well as for other complex pattern-recognition tasks.
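
    A minimal sketch of the general idea, assuming Euclidean points and a growing distance threshold as the "dynamic evolution"; the paper's exact graph construction and joint-degree features may differ.

      import numpy as np

      def degree_signature(points, radii):
          """Model a point set as a sequence of graphs: connect points closer
          than a radius, and record the mean degree as the threshold evolves."""
          pts = np.asarray(points, dtype=float)
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          sig = []
          for r in radii:
              A = (d < r) & ~np.eye(len(pts), dtype=bool)   # adjacency at radius r
              sig.append(A.sum(axis=1).mean())              # mean-degree feature
          return np.array(sig)

      pts = np.random.default_rng(1).random((50, 2))        # stand-in for vein points
      print(degree_signature(pts, radii=[0.05, 0.1, 0.2, 0.4]))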

  4. Artificial intelligence approach to planning the robotic assembly of large tetrahedral truss structures

    NASA Technical Reports Server (NTRS)

    Homemdemello, Luiz S.

    1992-01-01

    An assembly planner for tetrahedral truss structures is presented. To overcome the difficulties due to the large number of parts, the planner exploits the simplicity and uniformity of the shapes of the parts and the regularity of their interconnection. The planning automation is based on the computational formalism known as production system. The global data base consists of a hexagonal grid representation of the truss structure. This representation captures the regularity of tetrahedral truss structures and their multiple hierarchies. It maps into quadratic grids and can be implemented in a computer by using a two-dimensional array data structure. By maintaining the multiple hierarchies explicitly in the model, the choice of a particular hierarchy is only made when needed, thus allowing a more informed decision. Furthermore, testing the preconditions of the production rules is simple because the patterned way in which the struts are interconnected is incorporated into the topology of the hexagonal grid. A directed graph representation of assembly sequences allows the use of both graph search and backtracking control strategies.
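
    The hexagonal-grid global database maps naturally onto a 2D array, which a sketch can make concrete. The axial-coordinate offsets below are one common convention, not necessarily the paper's encoding, and the precondition test is an invented example of the constant-time pattern checks the production rules rely on.

      # Axial-coordinate hexagonal grid stored in a 2D array: each cell has six
      # neighbors, found with constant offsets.
      HEX_NEIGHBORS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

      def neighbors(q, r, size):
          for dq, dr in HEX_NEIGHBORS:
              nq, nr = q + dq, r + dr
              if 0 <= nq < size and 0 <= nr < size:
                  yield nq, nr

      # A production rule's precondition becomes a simple pattern test, e.g.
      # "all neighboring nodes of (q, r) are already assembled":
      def ready(grid, q, r):
          return all(grid[nq][nr] == "assembled"
                     for nq, nr in neighbors(q, r, len(grid)))

      grid = [["assembled"] * 4 for _ in range(4)]
      print(ready(grid, 1, 1))   # True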

  5. Microstructure of warm rolling and pearlitic transformation of ultrafine-grained GCr15 steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Jun-Jie; Lian, Fu-Liang; Liu, Hong-Ji

    2014-09-15

    Pearlitic transformation mechanisms have been investigated in ultrafine-grained GCr15 steel. The ultrafine-grained steel, whose grain size was less than 1 μm, was prepared by thermo-mechanical treatment at 873 K and then annealing at 923 K for 2 h. Pearlitic transformation was conducted by reheating the ultrafine-grained samples at 1073 K and 1123 K for different periods of time and then cooling in air. Scanning electron microscope observation shows that normal lamellar pearlite cannot form (granular cementite and ferrite form instead) when the grain size is approximately less than 4 (± 0.6) μm, which yields a critical grain size for normal lamellar pearlitic transformation in this chromium-alloyed steel. The result confirms that grain size has a great influence on pearlitic transformation by increasing the diffusion rate of carbon atoms in the ultrafine-grained steel, and the addition of chromium does not change this pearlitic phase-transformation rule. Meanwhile, the grain growth rate is reduced by chromium alloying, which is beneficial for forming fine grains during austenitizing, thus facilitating pearlitic transformation by divorced eutectoid transformation. Moreover, chromium can form a relatively high gradient in the frontier of the undissolved carbide, which promotes carbide formation there, i.e., chromium promotes divorced eutectoid transformation. - Highlights: • Ultrafine-grained GCr15 steel was obtained by warm rolling and annealing technology. • Reduction of grain size changes pearlite morphology from lamellar to granular. • Adding Cr does not change the normal pearlitic phase-transformation rule in UFG steel. • Cr carbide resists grain growth and facilitates pearlitic transformation by DET.

  6. Information fusion-based approach for studying influence on Twitter using belief theory.

    PubMed

    Azaza, Lobna; Kirgizov, Sergey; Savonnet, Marinette; Leclercq, Éric; Gastineau, Nicolas; Faiz, Rim

    2016-01-01

    Influence on Twitter has recently become a hot research topic, since this micro-blogging service is widely used to share and disseminate information. Some users are more able than others to influence and persuade peers. Studying the most influential users thus makes it possible to reach a large-scale information diffusion area, something very useful in marketing or political campaigns. In this study, we propose a new approach for multi-level influence assessment on multi-relational networks such as Twitter. We define a social graph to model the relationships between users as a multiplex graph where users are represented by nodes and links model the different relations between them (e.g., retweets, mentions, and replies). We explore how relations between nodes in this graph reveal influence degree and propose a generic computational model to assess the influence degree of a given node. The model is based on the conjunctive combination rule from belief functions theory to combine the different types of relations. We evaluate the proposed method on a large amount of data gathered from Twitter during the European Elections 2014 and deduce the top influential candidates. The results show that our model is flexible enough to consider multiple interaction combinations according to social scientists' needs or requirements, and that the numerical results of the belief theory are accurate. We also evaluate the approach on the CLEF RepLab 2014 data set and show that it leads to quite interesting results.
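
    The conjunctive combination rule the model builds on is easy to sketch. Below, the (unnormalized) conjunctive rule combines two mass functions; the mass assignments for the retweet and mention relations are invented for illustration.

      from itertools import product

      def conjunctive(m1, m2):
          """Unnormalized conjunctive combination of two mass functions whose
          focal elements are frozensets over {influencer, passive}."""
          out = {}
          for (a, x), (b, y) in product(m1.items(), m2.items()):
              out[a & b] = out.get(a & b, 0.0) + x * y
          return out

      INF, PAS = frozenset({"inf"}), frozenset({"pas"})
      BOTH = INF | PAS
      # Illustrative masses induced from two relation types for one user.
      m_retweets = {INF: 0.6, BOTH: 0.4}
      m_mentions = {INF: 0.3, PAS: 0.2, BOTH: 0.5}
      print(conjunctive(m_retweets, m_mentions))
      # Mass on the empty set (if any) measures conflict between the relations.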

  7. Sponsors' and investigative staffs' perceptions of the current investigational new drug safety reporting process in oncology trials.

    PubMed

    Perez, Raymond; Archdeacon, Patrick; Roach, Nancy; Goodwin, Robert; Jarow, Jonathan; Stuccio, Nina; Forrest, Annemarie

    2017-06-01

    The Food and Drug Administration's final rule on investigational new drug application safety reporting, effective from 28 March 2011, clarified the reporting requirements for serious and unexpected suspected adverse reactions occurring in clinical trials. The Clinical Trials Transformation Initiative released recommendations in 2013 to assist implementation of the final rule; however, anecdotal reports and data from a Food and Drug Administration audit indicated that a majority of reports being submitted were still uninformative and did not result in actionable changes. Clinical Trials Transformation Initiative investigated remaining barriers and potential solutions to full implementation of the final rule by polling and interviewing investigators, clinical research staff, and sponsors. In an opinion-gathering effort, two discrete online surveys designed to assess challenges and motivations related to management of expedited (7- to 15-day) investigational new drug safety reporting processes in oncology trials were developed and distributed to two populations: investigators/clinical research staff and sponsors. Data were collected for approximately 1 year. Twenty hour-long interviews were also conducted with Clinical Trials Transformation Initiative-nominated participants who were considered to have extensive knowledge of and experience with the topic. Interviewees included 13 principal investigators/study managers/research team members and 7 directors/vice presidents of pharmacovigilance operations from 5 large global pharmaceutical companies. The investigative sites' responses indicate that too many individual reports are still being submitted, which are time-consuming to process and provide little value for patient safety assessments or for informing actionable changes. Fewer but higher-quality reports would be more useful, and investigators and staff would benefit from sponsors' "filtering" of reports and increased sponsor communication. Sponsors replied that their greatest challenges include (1) lack of global harmonization in reporting rules, (2) determining causality, and (3) fear of regulatory repercussions. Interaction with the Food and Drug Administration has helped improve sponsors' adherence to the final rule, and sponsors would benefit from increased communication with the Food and Drug Administration and educational materials. The goal of the final rule is to minimize uninformative safety reports so that important safety signals can be captured and communicated early enough in a clinical program to make changes that help ensure patient safety. Investigative staff and sponsors acknowledge that the rule has not been fully implemented, although they agree with its intention. Clinical Trials Transformation Initiative will use the results from the surveys and interviews to develop new recommendations and educational materials that will be available to sponsors to increase compliance with the final rule and facilitate discussion between sponsors, investigators, and Food and Drug Administration representatives.

  8. Sponsors’ and investigative staffs' perceptions of the current investigational new drug safety reporting process in oncology trials

    PubMed Central

    Perez, Raymond; Archdeacon, Patrick; Roach, Nancy; Goodwin, Robert; Jarow, Jonathan; Stuccio, Nina; Forrest, Annemarie

    2017-01-01

    Background/aims: The Food and Drug Administration’s final rule on investigational new drug application safety reporting, effective from 28 March 2011, clarified the reporting requirements for serious and unexpected suspected adverse reactions occurring in clinical trials. The Clinical Trials Transformation Initiative released recommendations in 2013 to assist implementation of the final rule; however, anecdotal reports and data from a Food and Drug Administration audit indicated that a majority of reports being submitted were still uninformative and did not result in actionable changes. Clinical Trials Transformation Initiative investigated remaining barriers and potential solutions to full implementation of the final rule by polling and interviewing investigators, clinical research staff, and sponsors. Methods: In an opinion-gathering effort, two discrete online surveys designed to assess challenges and motivations related to management of expedited (7- to 15-day) investigational new drug safety reporting processes in oncology trials were developed and distributed to two populations: investigators/clinical research staff and sponsors. Data were collected for approximately 1 year. Twenty hour-long interviews were also conducted with Clinical Trials Transformation Initiative–nominated participants who were considered to have extensive knowledge of and experience with the topic. Interviewees included 13 principal investigators/study managers/research team members and 7 directors/vice presidents of pharmacovigilance operations from 5 large global pharmaceutical companies. Results: The investigative sites’ responses indicate that too many individual reports are still being submitted, which are time-consuming to process and provide little value for patient safety assessments or for informing actionable changes. Fewer but higher-quality reports would be more useful, and investigators and staff would benefit from sponsors’ “filtering” of reports and increased sponsor communication. Sponsors replied that their greatest challenges include (1) lack of global harmonization in reporting rules, (2) determining causality, and (3) fear of regulatory repercussions. Interaction with the Food and Drug Administration has helped improve sponsors’ adherence to the final rule, and sponsors would benefit from increased communication with the Food and Drug Administration and educational materials. Conclusion: The goal of the final rule is to minimize uninformative safety reports so that important safety signals can be captured and communicated early enough in a clinical program to make changes that help ensure patient safety. Investigative staff and sponsors acknowledge that the rule has not been fully implemented, although they agree with its intention. Clinical Trials Transformation Initiative will use the results from the surveys and interviews to develop new recommendations and educational materials that will be available to sponsors to increase compliance with the final rule and facilitate discussion between sponsors, investigators, and Food and Drug Administration representatives. PMID:28345368

  9. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of a potential application of quantum computing to a scientific data-optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case as a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) is applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be solved as QUBO instances defined on the Chimera graphs of the quantum computer.
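
    A toy sketch of the QUBO step: a small least-squares analysis problem over binary variables is rewritten as x^T Q x and solved by brute force here. The matrices are invented; the paper's actual mapping goes through the binary wavelet transform onto Chimera-graph annealing hardware.

      import itertools
      import numpy as np

      # Toy analysis problem min ||A x - b||^2 with binary x, rewritten as the
      # QUBO x^T Q x (the constant b^T b is dropped).
      A = np.array([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
      b = np.array([2.9, 2.1, 1.0])

      # Since x_i^2 = x_i for binary variables, the linear terms -2 A^T b fold
      # onto the diagonal; off-diagonal entries are pairwise couplings.
      Q = A.T @ A - 2.0 * np.diag(A.T @ b)

      best = min((np.array(x) for x in itertools.product([0, 1], repeat=2)),
                 key=lambda x: x @ Q @ x)     # an annealer would sample this minimum
      print(best, best @ Q @ best + b @ b)    # binary solution and its residual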

  10. Analytical formulation of cellular automata rules using data models

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.

    2009-05-01

    We present a unique method for converting traditional cellular automata (CA) rules into analytical function form. CA rules have been successfully used for morphological image processing and volumetric shape recognition and classification. Further, the use of CA rules as analog models to the physical and biological sciences can be significantly extended if analytical (as opposed to discrete) models could be formulated. We show that such transformations are possible. We use as our example John Horton Conway's famous "Game of Life" rule set. We show that using Data Modeling, we are able to derive both polynomial and bi-spectrum models of the IF-THEN rules that yield equivalent results. Further, we demonstrate that the "Game of Life" rule set can be modeled using the multi-fluxion, yielding a closed form nth order derivative and integral. All of the demonstrated analytical forms of the CA rule are general and applicable to real-time use.
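
    The premise, that an IF-THEN CA rule set admits a closed-form counterpart, can be illustrated with the standard boolean-arithmetic form of Conway's rule; note this is a generic sketch, not the paper's Data Model polynomials or multi-fluxion construction.

      import numpy as np

      def life_step(grid):
          """One Game of Life update written as a single expression on the grid
          rather than IF-THEN rules: birth where n == 3, survival where a live
          cell has n == 2."""
          n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0))
          # Closed form in 0/1 arithmetic: (n == 3) + grid * (n == 2).
          return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

      glider = np.zeros((8, 8), int)
      glider[[0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1
      print(life_step(glider))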

  11. Matching next-to-leading order predictions to parton showers in supersymmetric QCD

    DOE PAGES

    Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin; ...

    2016-02-03

    We present a fully automated framework based on the FeynRules and MadGraph5_aMC@NLO programs that allows for accurate simulations of supersymmetric QCD processes at the LHC. Starting directly from a model Lagrangian that features squark and gluino interactions, event generation is achieved at the next-to-leading order in QCD, matching short-distance events to parton showers and including the subsequent decay of the produced supersymmetric particles. As an application, we study the impact of higher-order corrections in gluino pair-production in a simplified benchmark scenario inspired by current gluino LHC searches.

  12. Matching next-to-leading order predictions to parton showers in supersymmetric QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin

    We present a fully automated framework based on the FeynRules and MadGraph5_aMC@NLO programs that allows for accurate simulations of supersymmetric QCD processes at the LHC. Starting directly from a model Lagrangian that features squark and gluino interactions, event generation is achieved at the next-to-leading order in QCD, matching short-distance events to parton showers and including the subsequent decay of the produced supersymmetric particles. As an application, we study the impact of higher-order corrections in gluino pair-production in a simplified benchmark scenario inspired by current gluino LHC searches.

  13. Uncovering the overlapping community structure of complex networks by maximal cliques

    NASA Astrophysics Data System (ADS)

    Li, Junqiu; Wang, Xingyuan; Cui, Yaozu

    2014-12-01

    In this paper, a unique algorithm is proposed to detect overlapping communities in unweighted and weighted networks with considerable accuracy. The notions of maximal clique, overlapping vertex, bridge vertex and isolated vertex are introduced. First, all the maximal cliques are extracted by an algorithm based on depth-first and breadth-first searching. Then two maximal cliques can be merged into a larger sub-graph according to given rules. In addition, the proposed algorithm successfully finds overlapping vertices and bridge vertices between communities. Experimental results using real-world network data show that the performance of the proposed algorithm is satisfactory.
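
    A sketch of the pipeline, assuming networkx: enumerate maximal cliques, then greedily merge pairs whose relative overlap passes a threshold. The merge rule here is a simplified stand-in for the paper's rules.

      import networkx as nx

      def clique_communities(G, overlap=0.5):
          """Extract maximal cliques, then merge two cliques into a larger
          sub-graph when their relative overlap exceeds a threshold."""
          comms = [set(c) for c in nx.find_cliques(G)]
          merged = True
          while merged:
              merged = False
              for i in range(len(comms)):
                  for j in range(i + 1, len(comms)):
                      small = min(len(comms[i]), len(comms[j]))
                      if len(comms[i] & comms[j]) / small >= overlap:
                          comms[i] |= comms.pop(j)   # shared vertices are overlapping
                          merged = True
                          break
                  if merged:
                      break
          return comms

      G = nx.karate_club_graph()
      print(clique_communities(G, overlap=0.6)[:3])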

  14. Determining a human cardiac pacemaker using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Varnavsky, A. N.; Antonenco, A. V.

    2017-01-01

    The paper presents a method for identifying a human cardiac pacemaker using the combined application of a nonlinear integral transformation and fuzzy logic, which allows the analysis to be carried out in real time. A fuzzy inference system is proposed; membership functions and fuzzy production rules are defined. It is shown that the ratio of the truth degree of the winning rule's condition to the truth degree of any other rule's condition is at least 3.
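
    A sketch of the ingredients named here: triangular membership functions, rule truth degrees, and the winner-to-runner-up ratio check. All membership parameters and class names are invented for illustration.

      def tri(x, a, b, c):
          """Triangular membership function on [a, c] peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def classify(rr_interval):
          """Toy fuzzy inference on an RR interval (s); parameters invented."""
          truth = {
              "sinus":   tri(rr_interval, 0.6, 0.8, 1.2),
              "ectopic": tri(rr_interval, 0.2, 0.45, 0.7),
          }
          winner = max(truth, key=truth.get)
          others = [v for k, v in truth.items() if k != winner]
          ratio = truth[winner] / max(max(others), 1e-9)
          return winner, ratio   # the paper reports winner/other ratios >= 3

      print(classify(0.85))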

  15. QCD Sum Rules and Models for Generalized Parton Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anatoly Radyushkin

    2004-10-01

    I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs, and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities, and the other is based on the local duality relation. Possible ways of improving these Ansätze are briefly discussed.

  16. Development of acceptance criteria for batches of silane primer for external tank thermal protection system bonding applications

    NASA Technical Reports Server (NTRS)

    Mikes, F.

    1984-01-01

    Silane primers for use as thermal protection on external tanks were subjected to various analytical techniques to determine the most effective testing method for silane lot evaluation. The analytical methods included high performance liquid chromatography, gas chromatography, thermogravimetry (TGA), and Fourier transform infrared spectroscopy (FTIR). It is suggested that FTIR be used as the method for silane lot evaluation. Chromatograms, TGA profiles, bar graphs showing IR absorbances, and FTIR spectra are presented.

  17. Double-tick realization of binary control program

    NASA Astrophysics Data System (ADS)

    Kobylecki, Michał; Kania, Dariusz

    2016-12-01

    This paper presents a procedure for the hardware implementation of binary control algorithms compliant with the IEC 61131-3 standard. The described transformation, based on set calculus and graphs, allows the original control program to be translated into a fully equivalent form whose architecture executes in two clock ticks. The proposed method enables the efficient implementation of binary control in an FPGA using the standardized Ladder Diagram (LD) programming language.

  18. A Whole Word and Number Reading Machine Based on Two Dimensional Low Frequency Fourier Transforms

    DTIC Science & Technology

    1990-12-01

    …they are energy normalized. The normalization process accounts for brightness variations and is equivalent to graphing each 2DFT onto the surface of an n-… determined empirically (trial and error). Each set is energy normalized based on the number of coefficients within the set. Therefore, the actual… using the 6-font-group case with the top 1000 words, where the energy has been renormalized based on the particular number of coefficients being used.

  19. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.

  20. SPROC: A multiple-processor DSP IC

    NASA Technical Reports Server (NTRS)

    Davis, R.

    1991-01-01

    A large, single-chip, multiple-processor, digital signal processing (DSP) integrated circuit (IC) fabricated in HP-Cmos34 is presented. The innovative architecture is best suited for analog and real-time systems characterized by both parallel signal data flows and concurrent logic processing. The IC is supported by a powerful development system that transforms graphical signal flow graphs into production-ready systems in minutes. Automatic compiler partitioning of tasks among four on-chip processors gives the IC the signal processing power of several conventional DSP chips.

  1. Judge Rules Plagiarism-Detection Tool Falls under "Fair Use"

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    Judge Claude M. Hilton, of the U.S. District Court in Alexandria, Virginia, in March found that scanning the student papers for the purpose of detecting plagiarism is a "highly transformative" use that falls under the fair-use provision of copyright law. He ruled that the company "makes no use of any work's particular expressive or creative…

  2. Some Lexical Redundancy Rules for English Nouns.

    ERIC Educational Resources Information Center

    Starosta, Stanley

    In line with current thinking in transformational grammar, syntax as a system can and should be studied before a study is made of the use of that system. Chomsky's lexical redundancy rule is an area for further study, possibly to come closer to defining and achieving explanatory adequacy. If it is observed that English nouns come in two types,…

  3. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain's ability to emulate similar graph/network models is noted; this implies an important paradigm shift in our knowledge about the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described by a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them amenable to analysis by higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain otherwise puzzling results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
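
    One of the mid-level operations named here, clustering / separation of figure from ground, can be expressed as a graph transformation. A minimal sketch: pixels are nodes, equal neighboring values are edges, and connected components become "figures".

      from collections import deque

      def group(image):
          """Perceptual grouping as a graph transformation: connected components
          over a pixel-adjacency graph separate figures from the ground (zeros)."""
          h, w = len(image), len(image[0])
          seen, figures = set(), []
          for start in ((y, x) for y in range(h) for x in range(w) if image[y][x]):
              if start in seen:
                  continue
              comp, queue = [], deque([start])
              seen.add(start)
              while queue:
                  y, x = queue.popleft()
                  comp.append((y, x))
                  for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                      if 0 <= ny < h and 0 <= nx < w and image[ny][nx] \
                              and (ny, nx) not in seen:
                          seen.add((ny, nx))
                          queue.append((ny, nx))
              figures.append(comp)
          return figures   # more abstract structures for higher-level analysis

      img = [[1, 1, 0, 0],
             [0, 1, 0, 1],
             [0, 0, 0, 1]]
      print(len(group(img)), "figures")   # 2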

  4. Riding the Hype Wave: Evaluating new AI Techniques for their Applicability in Earth Science

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Zhang, J.; Maskey, M.; Lee, T. J.

    2016-12-01

    Every few years a new technology rides the hype wave generated by the computer science community. Converts to the new technology, surfacing from both the science community and the informatics community, promulgate that it can radically improve or even change the existing scientific process. Recent examples of new technologies following in the footsteps of "big data" include deep learning algorithms and knowledge graphs. Deep learning algorithms mimic the human brain and process information through multiple stages of transformation and representation. These algorithms are able to learn complex functions that map pixels directly to outputs without relying on human-crafted features, and they solve some of the complex classification problems that exist in science. Similarly, knowledge graphs aggregate information around defined topics, enabling users to resolve a query without having to navigate and assemble information manually. Knowledge graphs could potentially be used in scientific research to assist in hypothesis formulation, testing, and review. The challenge for the Earth science research community is to evaluate these new technologies by asking the right questions and considering what-if scenarios. What is the new technology enabling or providing that is innovative and different? Can one justify the adoption costs with respect to the research returns? Since nothing comes for free, utilizing a new technology entails adoption costs that may outweigh the benefits. Furthermore, these technologies may require significant computing infrastructure in order to be utilized effectively. Results from two different projects will be presented along with lessons learned from testing these technologies. The first project primarily evaluates deep learning techniques for different applications of image retrieval within Earth science, while the second project builds a prototype knowledge graph constructed for Hurricane science.

  5. Developing a modular architecture for creation of rule-based clinical diagnostic criteria.

    PubMed

    Hong, Na; Pathak, Jyotishman; Chute, Christopher G; Jiang, Guoqian

    2016-01-01

    With recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnosis criteria has become one of the goals of the International Classification of Diseases (ICD)-11 revision. However, few studies have been done on building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies. The architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages the Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria data for rule generation. As a result, the first draft of the DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings, and Signs and Symptoms are the two most commonly used element types. All 6 templates were successfully parsed and populated into their corresponding domain-specific ontologies, and 14 rules (93.3 %) passed the rule validation. Our efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.
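
    A schematic sketch of the translation module's job, assuming a simplified criterion dictionary in place of the QDM templates; the class and property names and the SWRL rendering below are invented placeholders, not the paper's DCUO, though swrlb:greaterThan is a standard SWRL built-in.

      def to_swrl(criterion):
          """Render a structured diagnostic criterion as a SWRL-style rule
          string (names are illustrative placeholders)."""
          atoms = []
          for i, c in enumerate(criterion["conditions"]):
              atoms.append(f'{c["property"]}(?p, ?v{i}), '
                           f'swrlb:greaterThan(?v{i}, {c["threshold"]})')
          return f'Patient(?p), {", ".join(atoms)} -> {criterion["diagnosis"]}(?p)'

      criterion = {
          "diagnosis": "Hypertension",
          "conditions": [
              {"property": "hasSystolicBP", "threshold": 140},
              {"property": "hasDiastolicBP", "threshold": 90},
          ],
      }
      print(to_swrl(criterion))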

  6. A Computer Program for Testing Grammars On-Line.

    ERIC Educational Resources Information Center

    Gross, Louis N.

    This paper describes a computer system which is intended to aid the linguist in building a transformational grammar. The program operates as a rule tester, performing three services for the user through sets of functions which allow the user to--specify, change, and print base trees (to which transformations would apply); define transformations…

  7. Chinese Passives: Transformational or Lexical?

    ERIC Educational Resources Information Center

    Zhang, Jiuwu; Wen, Xiaohong

    Analysis of Chinese passive constructions indicates two types. The first is a verbal or syntactic passive because it is derived through a transformational rule. The second is a lexical passive that has certain properties in common with the predicate adjectives in both Chinese and English and is derived through the semantic function and in lexical…

  8. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as the wavelet and Contourlet transforms, are usually used for image fusion. This work presents a new image fusion framework utilizing an area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
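
    A sketch of the two fusion rules, assuming PyWavelets and SciPy, with a single-level wavelet standing in for the dual-tree Contourlet transform: low-pass bands fused by a local-standard-deviation weighted average, high-pass bands by max-absolute selection.

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter

      def local_std(a, size=7):
          """Area-based (local window) standard deviation."""
          m = uniform_filter(a, size)
          return np.sqrt(np.maximum(uniform_filter(a * a, size) - m * m, 0.0))

      def fuse(img1, img2):
          """Weighted-average low-pass fusion, max-absolute high-pass fusion."""
          cA1, hi1 = pywt.dwt2(img1, "db2")
          cA2, hi2 = pywt.dwt2(img2, "db2")
          s1, s2 = local_std(cA1), local_std(cA2)
          w = s1 / (s1 + s2 + 1e-12)                # weight by local activity
          cA = w * cA1 + (1 - w) * cA2
          hi = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                     for a, b in zip(hi1, hi2))
          return pywt.idwt2((cA, hi), "db2")

      rng = np.random.default_rng(0)
      a, b = rng.random((64, 64)), rng.random((64, 64))
      print(fuse(a, b).shape)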

  9. Phase transitions in the q -voter model with noise on a duplex clique

    NASA Astrophysics Data System (ADS)

    Chmiel, Anna; Sznajd-Weron, Katarzyna

    2015-11-01

    We study a nonlinear q-voter model with stochastic noise, interpreted in the social context as independence, on a duplex network. To study the role of multilevelness in this model, we propose three methods of transferring the model from a mono- to a multiplex network. They take into account two criteria: one related to the status of independence (LOCAL vs GLOBAL) and one related to peer pressure (AND vs OR). In order to examine the influence of the presence of more than one level in the social network, we perform simulations on a particularly simple multiplex: a duplex clique, which consists of two fully overlapping complete graphs (cliques). Solving the rate equation numerically and simultaneously conducting Monte Carlo simulations, we provide evidence that even a simple rearrangement into a duplex topology may lead to significant changes in the observed behavior. However, qualitative changes in the phase transitions can be observed for only one of the considered rules: LOCAL&AND. For this rule the phase transition becomes discontinuous for q = 5, whereas on a monoplex such behavior is observed for q = 6. Interestingly, only this rule admits construction of realistic variants of the model, in line with recent social experiments.
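
    A Monte Carlo sketch under one simplified reading of the LOCAL&AND variant (how independence and the two q-panels interact is condensed here, and parameters are illustrative): each level independently proposes an opinion, and the agent switches only when both levels agree.

      import random

      def step(spins, q, p, n):
          """One update on a duplex clique: per level, noise with prob. p or a
          unanimous q-panel; the AND rule requires both levels to concur."""
          i = random.randrange(n)
          proposals = []
          for _level in range(2):                    # two fully overlapping cliques
              if random.random() < p:                # independence (noise)
                  proposals.append(random.choice([-1, 1]))
              else:
                  panel = random.sample([j for j in range(n) if j != i], q)
                  vals = {spins[j] for j in panel}
                  proposals.append(vals.pop() if len(vals) == 1 else spins[i])
          if proposals[0] == proposals[1]:           # AND rule across levels
              spins[i] = proposals[0]

      n, q, p = 200, 5, 0.1
      spins = [1] * n
      for _ in range(200 * n):
          step(spins, q, p, n)
      print("magnetization:", sum(spins) / n)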

  10. Real-time Geographic Information System (GIS) for Monitoring the Area of Potential Water Level Using Rule Based System

    NASA Astrophysics Data System (ADS)

    Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro

    2018-02-01

    Management of water resources based on a Geographic Information System can provide substantial benefits for water availability planning. Monitoring areas of potential water level is needed in the development, agriculture, and energy sectors, among others. In this research, a water resource information system is developed using a real-time Geographic Information System concept for web-based monitoring of the potential water level of an area, applying a rule-based system method. The GIS consists of hardware, software, and a database. Following the web-based GIS architecture, this study uses a set of networked computers running an Apache web server, the PHP programming language, and a MySQL database. An ultrasonic wireless sensor system is used as the water level data input; it also provides time and geographic location information. The GIS maps the five sensor locations. Sensor data are processed through a rule-based system to determine the potential water level class of the area. The resulting water level monitoring information can be displayed on thematic maps by overlaying more than one layer, as tables generated from the database, and as graphs based on event timing and water level values.
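
    A minimal sketch of the rule-based classification layer, with invented thresholds and class names; in the described system the readings would come from the ultrasonic sensors via the MySQL database before being mapped in the GIS.

      # Ordered rules: the first matching predicate assigns the class.
      RULES = [
          (lambda level: level >= 4.0, "danger"),
          (lambda level: level >= 2.5, "alert"),
          (lambda level: level >= 1.0, "watch"),
          (lambda level: True,         "normal"),
      ]

      def classify(level_m):
          """Return the potential-water-level class for one sensor reading (m)."""
          return next(label for test, label in RULES if test(level_m))

      readings = {"sensor-1": 0.7, "sensor-2": 2.9, "sensor-3": 4.3}
      for sid, level in readings.items():
          print(sid, level, "->", classify(level))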

  11. Generalized serial search code acquisition - The equivalent circular state diagram approach

    NASA Technical Reports Server (NTRS)

    Polydoros, A.; Simon, M. K.

    1984-01-01

    A transform-domain method for deriving the generating function of the acquisition process resulting from an arbitrary serial search strategy is presented. The method relies on equivalent circular state diagrams, uses Mason's formula from flow-graph theory, and employs a minimum number of required parameters. The transform-domain approach is briefly described and the concept of equivalent circular state diagrams is introduced and exploited to derive the generating function and resulting mean acquisition time for three particular cases of interest, the continuous/center Z search, the broken/center Z search, and the expanding window search. An optimization of the latter technique is performed whereby the number of partial windows which minimizes the mean acquisition time is determined. The numerical results satisfy certain intuitive predictions and provide useful design guidelines for such systems.

  12. A departure from cognitivism: Implications of Chomsky's second revolution in linguistics.

    PubMed

    Schoneberger, T

    2000-01-01

    In 1957 Noam Chomsky published Syntactic Structures, expressing views characterized as constituting a "revolution" in linguistics. Chomsky proposed that the proper subject matter of linguistics is not the utterances of speakers, but what speakers and listeners know. To that end, he theorized that what they know is a system of rules that underlie actual performance. This theory became known as transformational grammar. In subsequent versions of this theory, rules continued to play a dominant role. However, in 1980 Chomsky began a second revolution by proposing the elimination of rules in a new theory: the principles-and-parameters approach. Subsequent writings finalized the abandonment of rules. Given the centrality of rules to cognitivism, this paper argues that Chomsky's second revolution constitutes a departure from cognitivism.

  13. Research on the relation of EEG signal chaos characteristics with high-level intelligence activity of human brain

    PubMed Central

    2010-01-01

    Using phase-space reconstruction techniques for one-dimensional and multi-dimensional time series, the quantitative criterion rules of system chaos, and a neural network, we analyze, compute, and classify electroencephalogram (EEG) signals of five kinds of human consciousness activities (relaxation, mental arithmetic of multiplication, mental composition of a letter, visualizing a 3-dimensional object being revolved about an axis, and visualizing numbers being written or erased on a blackboard). Through comparative studies of the determinacy, the phase graph, the power spectra, the approximate entropy, the correlation dimension, and the Lyapunov exponent of the EEG signals of the five kinds of consciousness activities, the following conclusions are drawn: (1) The statistical results of the determinacy computation indicate that chaotic characteristics may lie in human consciousness activities, and the central tendency measure (CTM) is consistent with the phase graph, so it can be used as a way of partitioning the EEG attractor. (2) The analyses of the power spectra show that the ideology of a single subject is almost identical, but the frequency channels of different consciousness activities differ slightly. (3) The approximate entropy differs between subjects. Under the same conditions, the larger a subject's approximate entropy, the better the subject's innovation. (4) The results of the correlation dimension and the Lyapunov exponent indicate that activities of the human brain exist in attractors with fractional dimensions. (5) The nonlinear quantitative criterion rule, united with the neural network, can classify different kinds of consciousness activities well. The classification results indicate that arithmetic consciousness activity has a better differentiation degree than abstract activity. PMID:20420714

  14. Research on the relation of EEG signal chaos characteristics with high-level intelligence activity of human brain.

    PubMed

    Wang, Xingyuan; Meng, Juan; Tan, Guilin; Zou, Lixian

    2010-04-27

    Using phase-space reconstruction techniques for one-dimensional and multi-dimensional time series, the quantitative criterion rules of system chaos, and a neural network, we analyze, compute, and classify electroencephalogram (EEG) signals of five kinds of human consciousness activities (relaxation, mental arithmetic of multiplication, mental composition of a letter, visualizing a 3-dimensional object being revolved about an axis, and visualizing numbers being written or erased on a blackboard). Through comparative studies of the determinacy, the phase graph, the power spectra, the approximate entropy, the correlation dimension, and the Lyapunov exponent of the EEG signals of the five kinds of consciousness activities, the following conclusions are drawn: (1) The statistical results of the determinacy computation indicate that chaotic characteristics may lie in human consciousness activities, and the central tendency measure (CTM) is consistent with the phase graph, so it can be used as a way of partitioning the EEG attractor. (2) The analyses of the power spectra show that the ideology of a single subject is almost identical, but the frequency channels of different consciousness activities differ slightly. (3) The approximate entropy differs between subjects. Under the same conditions, the larger a subject's approximate entropy, the better the subject's innovation. (4) The results of the correlation dimension and the Lyapunov exponent indicate that activities of the human brain exist in attractors with fractional dimensions. (5) The nonlinear quantitative criterion rule, united with the neural network, can classify different kinds of consciousness activities well. The classification results indicate that arithmetic consciousness activity has a better differentiation degree than abstract activity.

  15. Random walk on lattices: Graph-theoretic approach to simulating long-range diffusion-attachment growth models

    NASA Astrophysics Data System (ADS)

    Limkumnerd, Surachate

    2014-03-01

    Interest in thin-film fabrication for industrial applications has driven both theoretical and computational aspects of modeling its growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to be within the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models by repeatedly applying the local diffusion rules for each hop to simulate a large diffusion length can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
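
    The long-time attachment probabilities described here can be illustrated with the standard absorbing-Markov-chain identity B = (I - Q)^(-1) R, where Q is the (substochastic) transient-to-transient block and R the transient-to-absorbing block; the 1D biased walk below is a toy stand-in for the lattice model.

        import numpy as np

        def absorption_probabilities(Q, R):
            """B[i, j]: probability that a walker starting at transient site i
            is eventually absorbed (attaches) at absorbing site j."""
            n = Q.shape[0]
            return np.linalg.solve(np.eye(n) - Q, R)

        p = 0.6                                   # rightward diffusion bias
        Q = np.array([[0, p, 0], [1-p, 0, p], [0, 1-p, 0]], float)
        R = np.array([[1-p, 0], [0, 0], [0, p]], float)
        print(absorption_probabilities(Q, R))     # each row sums to 1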

  16. Beauty vector meson decay constants from QCD sum rules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucha, Wolfgang; Melikhov, Dmitri

    We present the outcomes of a very recent investigation of the decay constants of nonstrange and strange heavy-light beauty vector mesons, with special emphasis on the ratio of any such decay constant to the decay constant of the corresponding pseudoscalar meson, by means of Borel-transformed QCD sum rules. Our results suggest that both these ratios are below unity.

  17. A comparison study of atlas-based 3D cardiac MRI segmentation: global versus global and local transformations

    NASA Astrophysics Data System (ADS)

    Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.

    2016-03-01

    Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations based on an atlas of the left ventricle from a population of patient MRI images and to refine it using a well-developed technique based on graph cuts. Here we quantitatively compare the segmentations obtained from the global and global-plus-local atlases and refined using graph-cut-based techniques against the expert segmentations, according to several similarity metrics, including the Dice correlation coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
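
    Two of the cited overlap metrics are easy to state directly on binary masks; the sketch below (with hypothetical masks) computes the Dice and Jaccard coefficients used in the comparison.

        import numpy as np

        def dice(a, b):
            """Dice coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def jaccard(a, b):
            """Jaccard coefficient (intersection over union)."""
            a, b = a.astype(bool), b.astype(bool)
            return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

        seg = np.zeros((64, 64), int); seg[16:48, 16:48] = 1   # hypothetical result
        ref = np.zeros((64, 64), int); ref[20:52, 16:48] = 1   # hypothetical expert mask
        print(f"Dice={dice(seg, ref):.3f}  Jaccard={jaccard(seg, ref):.3f}")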

  18. Cavernous Transformation of Portal Vein Secondary to Portal Vein Thrombosis: A Case Report

    PubMed Central

    Ramos, Radhames; Park, Yoojin; Shazad, Ghulamullah; Garcia, Christine A.; Cohen, Ronny

    2012-01-01

    There are few reported cases of cavernous transformation of the portal vein (CTPV) in adults. We present a case of a 58-year-old male who was found to have this complication due to portal vein thrombosis (PVT). A 58-year-old African American male with chronic alcohol and tobacco use presented with a 25-day history of weakness, generalized malaise, nausea and vomiting associated with progressively worsening anorexia and weight loss. The patient was admitted for severe anemia in conjunction with abnormal liver function tests and electrolyte abnormalities, and to rule out end-stage liver disease or hepatic malignancy. The work-up for anemia showed no significant colon abnormalities, cholecystitis, liver cirrhosis, or liver abnormalities but could not rule out malignancy. An esophagogastroduodenoscopy (EGD) was suspicious for a mass compressing the stomach and small bowel. After further work-up, the hepatic mass was diagnosed as a cavernous transformation of the portal vein (CTPV), a very rare complication of portal vein thrombosis (PVT). CTPV is a rare and incurable complication of PVT that should be considered as one of the differential diagnoses of a hepatic mass. Keywords: Cavernous transformation of the portal vein; Portal vein thrombosis; Portal hypertension; Hyperbilirubinemia; Hepatic mass. PMID:22383935

  19. Symmetry rules for the indirect nuclear spin-spin coupling tensor revisited

    NASA Astrophysics Data System (ADS)

    Buckingham, A. D.; Pyykkö, P.; Robert, J. B.; Wiesenfeld, L.

    The symmetry rules of Buckingham and Love (1970), relating the number of independent components of the indirect spin-spin coupling tensor J to the symmetry of the nuclear sites, are shown to require modification if the two nuclei are exchanged by a symmetry operation. In that case, the anti-symmetric part of J does not transform as a second-rank polar tensor under symmetry operations that interchange the coupled nuclei and may be called an anti-tensor. New rules are derived and illustrated by simple molecular models.

  20. An Algebraic Approach to the Study and Optimization of the Set of Rules of a Conditional Rewrite System

    NASA Astrophysics Data System (ADS)

    Makhortov, S. D.

    2018-03-01

    An algebraic system containing the semantics of a set of rules of the conditional equational theory (or the conditional term rewriting system) is introduced. The following basic questions are considered for the given model: existence of logical closure, structure of logical closure, possibility of equivalent transformations, and construction of logical reduction. The obtained results can be applied to the analysis and automatic optimization of the corresponding set of rules. The basis for the given research is the theory of lattices and binary relations.

  1. Influence of topology in the mobility enhancement of pulse-coupled oscillator synchronization

    NASA Astrophysics Data System (ADS)

    Beardo, A.; Prignano, L.; Sagarra, O.; Díaz-Guilera, A.

    2017-12-01

    In this work we revisit the nonmonotonic behavior (NMB) of synchronization time with velocity reported for systems of mobile pulse-coupled oscillators (PCOs). We devise a control parameter that allows us to predict in which range of velocities NMB may occur, and we uncover the conditions for the emergence of NMB based on specific features of the connectivity rule. Specifically, our results show that if the connectivity rule is such that the interaction patterns are sparse and, more importantly, include a large fraction of nonreciprocal interactions, then the system will display NMB. We furthermore provide a microscopic explanation relating the presence of such features of the connectivity patterns to the existence of local clusters unable to synchronize, termed frustrated clusters, for which we also give a precise definition in terms of simple graph concepts. We conclude that, if the probability of finding a frustrated cluster in a system of moving PCOs is high enough, NMB occurs in a predictable range of velocities.

  2. Syntactic sequencing in Hebbian cell assemblies.

    PubMed

    Wennekers, Thomas; Palm, Günther

    2009-12-01

    Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components, which allows the modeling of aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks, which cause a directed flow of activity through the neural state space. We provide parameter regimes that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode where it generates temporal grammatical patterns continuously.

  3. Generating Stock Trading Rules Using Genetic Network Programming with Flag Nodes and Adjustment of Importance Indexes

    NASA Astrophysics Data System (ADS)

    Mabu, Shingo; Chen, Yan; Hirasawa, Kotaro

    Genetic Network Programming (GNP) is an evolutionary algorithm which represents its solutions using graph structures. Since GNP can create quite compact programs and has an implicit memory function, GNP works well especially in dynamic environments. In addition, a study on creating trading rules for stock markets using GNP with Importance Index (GNP-IMX) has been done. The IMX is one of the criteria for decision making; however, the values of IMXs must be determined by our experience/knowledge. Therefore, in this paper, IMXs are adjusted appropriately during stock trading in order to predict the rise and fall of stocks. Moreover, newly defined flag nodes are introduced to GNP, which can appropriately judge the current situation of the stock prices and also contribute to the use of many kinds of nodes in a GNP program. In the simulation, programs are evolved using the stock prices of 20 companies. Then the generalization ability is tested and compared with GNP without flag nodes, GNP without IMX adjustment, and Buy&Hold.

  4. Performing transformation: reflections of a lesbian academic couple.

    PubMed

    Gibson, Michelle; Meem, Deborah T

    2005-01-01

    We experience queer literacy as a kind of collision between the traditional and the transformative. Queer literacy is an acquired literacy of transformation, where the established rules of behavior and discourse are both challenged and transcended. As a lesbian academic couple in a privileged intellectual, political, and social location, we can move out of the traditional realm (through the closet) into an otherworldly queer space where knowledge and identity are destabilized. Moving in and out of queer transformative space requires a kind of blind faith: faith that believes in what the mind can neither see nor prove.

  5. Literature mining of protein-residue associations with graph rules learned through distant supervision.

    PubMed

    Ravikumar, Ke; Liu, Haibin; Cohn, Judith D; Wall, Michael E; Verspoor, Karin

    2012-10-05

    We propose a method for automatic extraction of protein-specific residue mentions from the biomedical literature. The method searches text for mentions of amino acids at specific sequence positions and attempts to correctly associate each mention with a protein also named in the text. The methods presented in this work will enable improved protein functional site extraction from articles, ultimately supporting protein function prediction. Our method made use of linguistic patterns for identifying the amino acid residue mentions in text. Further, we applied an automated graph-based method to learn syntactic patterns corresponding to protein-residue pairs mentioned in the text. We finally present an approach to automated construction of relevant training and test data using the distant supervision model. The performance of the method was assessed by extracting protein-residue relations from a new automatically generated test set of sentences containing high-confidence examples found using distant supervision. It achieved an F-measure of 0.84 on the automatically created silver corpus and 0.79 on a manually annotated gold data set for this task, outperforming previous methods. The primary contributions of this work are to (1) demonstrate the effectiveness of distant supervision for automatic creation of training data for protein-residue relation extraction, substantially reducing the effort and time involved in manual annotation of a data set and (2) show that the graph-based relation extraction approach we used generalizes well to the problem of protein-residue association extraction. This work paves the way towards effective extraction of protein functional residues from the literature.

  6. Operational Monitoring of Data Production at KNMI

    NASA Astrophysics Data System (ADS)

    van de Vegte, John; Kwidama, Anecita; van Moosel, Wim; Oosterhof, Rijk; de Wit, Ronny; Klein Ikkink, Henk Jan; Som de Cerff, Wim; Verhoef, Hans; Koutek, Michal; Duin, Frank; van der Neut, Ian; Verhagen, Robert; Wollerich, Rene

    2016-04-01

    Within KNMI a new fully automated system for monitoring the KNMI operational data production systems is being developed: PRISMA (PRocessflow Infrastructure Surveillance and Monitoring Application). Currently the KNMI operational (24/7) production systems consist of over 60 applications, running on different hardware systems and platforms. They are interlinked for the production of numerous data products, which are delivered to internal and external customers. Traditionally these applications are individually monitored by different applications or not at all, complicating root cause and impact analysis. Also, the underlying hardware and network is monitored via an isolated application. The goal of the PRISMA system is to enable production chain monitoring, which enables root cause analysis (what is the root cause of the disruption) and impact analysis (what downstream products/customers will be affected). The PRISMA system will make it possible to reduce existing monitoring applications and provides one interface for monitoring the data production. For modeling and storing the state of the production chains a graph database is used. The model is automatically updated by the applications and systems which are to be monitored. The graph model enables root cause and impact analysis. In the PRISMA web interface, interaction with the graph model is accomplished via a graphical representation. The presentation will focus on aspects of: • Modeling real-world computers, applications, and products to a conceptual model; • Architecture of the system; • Configuration information and (real-world) event handling of the to-be-monitored objects; • Implementation rules for root cause and impact analysis; • How PRISMA was developed (methodology, facts, results); • Presentation of the PRISMA system as it now looks and works.
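
    On a graph model of a production chain, impact analysis is a downstream reachability query and root-cause analysis an upstream one; the sketch below uses NetworkX on an invented four-application chain (the node names are hypothetical, not KNMI's).

        import networkx as nx

        # Hypothetical slice of a production chain: applications feed products.
        G = nx.DiGraph([("radar_ingest", "composite"), ("obs_decoder", "composite"),
                        ("composite", "nowcast"), ("nowcast", "customer_feed")])

        print("impact of failure:", nx.descendants(G, "radar_ingest"))   # downstream
        print("root-cause candidates:", nx.ancestors(G, "nowcast"))      # upstream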

  7. Derivatives in discrete mathematics: a novel graph-theoretical invariant for generating new 2/3D molecular descriptors. I. Theory and QSPR application

    NASA Astrophysics Data System (ADS)

    Marrero-Ponce, Yovani; Santiago, Oscar Martínez; López, Yoan Martínez; Barigye, Stephen J.; Torrens, Francisco

    2012-11-01

    In this report, we present a new mathematical approach for describing chemical structures of organic molecules at the atomic-molecular level, proposing for the first time the use of the concept of the derivative (∂) of a molecular graph (MG) with respect to a given event (E) to obtain a new family of molecular descriptors (MDs). With this purpose, a new matrix representation of the MG, which generalizes graph theory's traditional incidence matrix, is introduced. This matrix, denominated the generalized incidence matrix, Q, arises from the Boolean representation of molecular sub-graphs that participate in the formation of the graph molecular skeleton MG and could be complete (representing all possible connected sub-graphs) or constitute sub-graphs of determined orders or types, as well as a combination of these. The Q matrix is non-quadratic and unsymmetrical in nature; its columns (n) and rows (m) are conditions (letters) and collections of conditions (words) with which the event occurs. This non-quadratic and unsymmetrical matrix is transformed, by algebraic manipulation, into a quadratic and symmetric matrix known as the relations frequency matrix, F, which characterizes the participation intensity of the conditions (letters) in the events (words). With F, we calculate the derivative over a pair of atomic nuclei. The local index for the atomic nucleus i, Δi, can therefore be obtained as a linear combination of all the pair derivatives of the atomic nucleus i with all the rest of the j atomic nuclei. Here, we also define new strategies that generalize the present form of obtaining global or local (group or atom-type) invariants from atomic contributions (local vertex invariants, LOVIs). In this respect, metric (norm), mean, and statistical invariants are introduced. These invariants are applied to a vector whose components are the values Δi for the atomic nuclei of the molecule or its fragments. Moreover, with the purpose of differentiating among different atoms, an atomic weighting scheme (atom-type labels) is used in the formation of the matrix Q or in the LOVIs state. The obtained indices were utilized to describe the partition coefficient (Log P) and the reactivity index (Log K) of 34 derivatives of 2-furylethylenes. In all cases, our MDs showed better statistical results than those previously obtained using some of the most commonly used families of MDs in chemometric practice. Therefore, it has been demonstrated that the proposed MDs are useful in molecular design and permit obtaining simpler and more robust mathematical models than the majority of those reported in the literature. All this range of mentioned possibilities opens "the doors" to the creation of a new family of MDs using the graph derivative, and avails a new tool for QSAR/QSPR and molecular diversity/similarity studies.

  8. Derivatives in discrete mathematics: a novel graph-theoretical invariant for generating new 2/3D molecular descriptors. I. Theory and QSPR application.

    PubMed

    Marrero-Ponce, Yovani; Santiago, Oscar Martínez; López, Yoan Martínez; Barigye, Stephen J; Torrens, Francisco

    2012-11-01

    In this report, we present a new mathematical approach for describing chemical structures of organic molecules at the atomic-molecular level, proposing for the first time the use of the concept of the derivative (∂) of a molecular graph (MG) with respect to a given event (E), to obtain a new family of molecular descriptors (MDs). With this purpose, a new matrix representation of the MG, which generalizes graph theory's traditional incidence matrix, is introduced. This matrix, denominated the generalized incidence matrix, Q, arises from the Boolean representation of molecular sub-graphs that participate in the formation of the graph molecular skeleton MG and could be complete (representing all possible connected sub-graphs) or constitute sub-graphs of determined orders or types, as well as a combination of these. The Q matrix is non-quadratic and unsymmetrical in nature; its columns (n) and rows (m) are conditions (letters) and collections of conditions (words) with which the event occurs. This non-quadratic and unsymmetrical matrix is transformed, by algebraic manipulation, into a quadratic and symmetric matrix known as the relations frequency matrix, F, which characterizes the participation intensity of the conditions (letters) in the events (words). With F, we calculate the derivative over a pair of atomic nuclei. The local index for the atomic nucleus i, Δ(i), can therefore be obtained as a linear combination of all the pair derivatives of the atomic nucleus i with all the rest of the j atomic nuclei. Here, we also define new strategies that generalize the present form of obtaining global or local (group or atom-type) invariants from atomic contributions (local vertex invariants, LOVIs). In this respect, metric (norm), mean, and statistical invariants are introduced. These invariants are applied to a vector whose components are the values Δ(i) for the atomic nuclei of the molecule or its fragments. Moreover, with the purpose of differentiating among different atoms, an atomic weighting scheme (atom-type labels) is used in the formation of the matrix Q or in the LOVIs state. The obtained indices were utilized to describe the partition coefficient (Log P) and the reactivity index (Log K) of 34 derivatives of 2-furylethylenes. In all cases, our MDs showed better statistical results than those previously obtained using some of the most commonly used families of MDs in chemometric practice. Therefore, it has been demonstrated that the proposed MDs are useful in molecular design and permit obtaining simpler and more robust mathematical models than the majority of those reported in the literature. All this range of mentioned possibilities opens "the doors" to the creation of a new family of MDs using the graph derivative, and avails a new tool for QSAR/QSPR and molecular diversity/similarity studies.

  9. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
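
    As a baseline for comparison, a minimal sketch of the conventional FFT inversion (trapezoidal Bromwich with the standard step Δω = 2π/(NΔt), not the paper's π/(mT) refinement) is given below, checked against F(s) = 1/(s+1), whose inverse is e^(-t).

        import numpy as np

        def laplace_invert_fft(F, N=4096, dt=0.01, a=0.5):
            """Conventional FFT inversion of a Laplace transform via the
            trapezoidal rule on the Bromwich contour Re(s) = a."""
            dw = 2 * np.pi / (N * dt)
            n = np.arange(N)
            n_signed = np.where(n < N // 2, n, n - N)    # frequencies -N/2 .. N/2-1
            c = F(a + 1j * n_signed * dw)
            t = n * dt
            f = (dw / (2 * np.pi)) * np.exp(a * t) * (N * np.fft.ifft(c)).real
            return t, f

        t, f = laplace_invert_fft(lambda s: 1.0 / (s + 1.0))
        print(np.max(np.abs(f[1:200] - np.exp(-t[1:200]))))   # small away from t = 0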

  10. Neoliberal Justice and the Transformation of the Moral: The Privatization of the Right to Health Care in Colombia.

    PubMed

    Abadía-Barrero, César Ernesto

    2016-03-01

    Neoliberal reforms have transformed the legislative scope and everyday dynamics around the right to health care from welfare state social contracts to insurance markets administered by transnational financial capital. This article presents experiences of health care-seeking treatment, judicial rulings about the right to health care, and market-based health care legislation in Colombia. When insurance companies deny services, citizens petition the judiciary to issue a writ affirming their right to health care. The judiciary evaluates the finances of all relevant parties to rule whether a service should be provided and who should be responsible for the costs. A 2011 law claimed that citizens who demand, physicians who prescribe, and judges who grant uncovered services use the system's limited economic resources and undermine the state's capacity to expand coverage to the poor. This article shows how the consolidation of neoliberal ideology in health care requires the transformation of moral values around life. © 2015 by the American Anthropological Association.

  11. S-F graphic representation analysis of photoelectric facula focometer poroo-plate glass

    NASA Astrophysics Data System (ADS)

    Tong, Yilin; Han, Xuecai

    2016-10-01

    Optical system focal length is usually measured using the magnification method, in which poroo-plate glass is used as the base element of the focometer. On the basis of an analysis of the accuracy of the magnification method for measuring optical lens focal length, an expression relating the ruling span of the poroo-plate glass to the focal length of the measured optical system was deduced, an efficient method to produce the S-F graph with AUTOCAD was developed, the selection principle of focometer parameters was analyzed, and applied examples for designing poroo-plate glass on the S-F figure were obtained.

  12. Simulator for concurrent processing data flow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.

    1992-01-01

    A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.

  13. Protein annotation from protein interaction networks and Gene Ontology.

    PubMed

    Nguyen, Cao D; Gardiner, Katheleen J; Cios, Krzysztof J

    2011-10-01

    We introduce a novel method for annotating protein function that combines Naïve Bayes and association rules, and takes advantage of the underlying topology in protein interaction networks and the structure of graphs in the Gene Ontology. We apply our method to proteins from the Human Protein Reference Database (HPRD) and show that, in comparison with other approaches, it predicts protein functions with significantly higher recall with no loss of precision. Specifically, it achieves 51% precision and 60% recall versus 45% and 26% for Majority and 24% and 61% for χ²-statistics, respectively. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. A Risk Assessment System with Automatic Extraction of Event Types

    NASA Astrophysics Data System (ADS)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  15. A departure from cognitivism: Implications of Chomsky's second revolution in linguistics

    PubMed Central

    Schoneberger, Ted

    2000-01-01

    In 1957 Noam Chomsky published Syntactic Structures, expressing views characterized as constituting a “revolution” in linguistics. Chomsky proposed that the proper subject matter of linguistics is not the utterances of speakers, but what speakers and listeners know. To that end, he theorized that what they know is a system of rules that underlie actual performance. This theory became known as transformational grammar. In subsequent versions of this theory, rules continued to play a dominant role. However, in 1980 Chomsky began a second revolution by proposing the elimination of rules in a new theory: the principles-and-parameters approach. Subsequent writings finalized the abandonment of rules. Given the centrality of rules to cognitivism, this paper argues that Chomsky's second revolution constitutes a departure from cognitivism. PMID:22477214

  16. Unsteady boundary layer flow and heat transfer of a Casson fluid past an oscillating vertical plate with Newtonian heating.

    PubMed

    Hussanan, Abid; Zuki Salleh, Mohd; Tahar, Razman Mat; Khan, Ilyas

    2014-01-01

    In this paper, the heat transfer effect on the unsteady boundary layer flow of a Casson fluid past an infinite oscillating vertical plate with Newtonian heating is investigated. The governing equations are transformed into a system of linear partial differential equations using appropriate non-dimensional variables. The resulting equations are solved analytically by using the Laplace transform method, and the expressions for velocity and temperature are obtained. They satisfy all imposed initial and boundary conditions and reduce to some well-known solutions for Newtonian fluids. Numerical results for velocity, temperature, skin friction and Nusselt number are shown in various graphs and discussed for the embedded flow parameters. It is found that velocity decreases as the Casson parameter increases, and the thermal boundary layer thickness increases with increasing Newtonian heating parameter.

  17. Parallel approach on sorting of genes in search of optimal solution.

    PubMed

    Kumar, Pranav; Sahoo, G

    2018-05-01

    An important tool for comparative genome analysis is the rearrangement event that can transform one given genome into another. To find a minimum sequence of fissions and fusions, we propose an algorithm and show a transformation example for converting the source genome into the target genome. The proposed algorithm uses a circular sequence, i.e., a "cycle graph", in place of a mapping. The main concept of the algorithm is based on the optimal result of a permutation. These sorting processes are performed in constant running time by representing the permutation in the form of cycles. In biological instances it has been observed that transposition occurs with half the frequency of reversal. In this paper we do not deal with reversal; instead, we start with the rearrangements of fission and fusion as well as transposition. Copyright © 2017 Elsevier Inc. All rights reserved.
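
    The cycle representation at the heart of the approach is easy to extract; the helper below decomposes a permutation into disjoint cycles (it stops short of any particular fission/fusion distance formula, which the paper derives from such cycles).

        def cycles(perm):
            """Disjoint-cycle decomposition of a permutation of 0..n-1."""
            seen, out = set(), []
            for start in range(len(perm)):
                if start in seen:
                    continue
                cyc, j = [], start
                while j not in seen:
                    seen.add(j)
                    cyc.append(j)
                    j = perm[j]
                out.append(cyc)
            return out

        print(cycles([2, 0, 1, 3, 5, 4]))   # [[0, 2, 1], [3], [4, 5]]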

  18. Heat Transfer Analysis for Stationary Boundary Layer Slip Flow of a Power-Law Fluid in a Darcy Porous Medium with Plate Suction/Injection

    PubMed Central

    Aziz, Asim; Ali, Yasir; Aziz, Taha; Siddique, J. I.

    2015-01-01

    In this paper, we investigate the slip effects on the boundary layer flow and heat transfer characteristics of a power-law fluid past a porous flat plate embedded in a Darcy-type porous medium. The nonlinear coupled system of partial differential equations governing the flow and heat transfer of a power-law fluid is transformed into a system of nonlinear coupled ordinary differential equations by applying a suitable similarity transformation. The resulting system of ordinary differential equations is solved numerically using the Matlab bvp4c solver. Numerical results are presented in the form of graphs, and the effects of the power-law index, the velocity and thermal slip parameters, the permeability parameter, and the suction/injection parameter on the velocity and temperature profiles are examined. PMID:26407162
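
    The workflow (similarity reduction, then a boundary-value solver such as bvp4c) can be sketched with SciPy's solve_bvp on the classical Blasius problem f''' + 0.5 f f'' = 0, a Newtonian stand-in for the paper's more involved power-law system.

        import numpy as np
        from scipy.integrate import solve_bvp

        def rhs(eta, y):                    # y = [f, f', f'']
            return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

        def bc(y0, yinf):                   # f(0) = 0, f'(0) = 0, f'(inf) = 1
            return np.array([y0[0], y0[1], yinf[1] - 1.0])

        eta = np.linspace(0, 10, 100)       # eta = 10 approximates infinity
        y_init = np.zeros((3, eta.size)); y_init[1] = eta / eta[-1]
        sol = solve_bvp(rhs, bc, eta, y_init)
        print(sol.status, sol.y[2, 0])      # f''(0) close to the classical 0.332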

  19. Rotating flow of carbon nanotube over a stretching surface in the presence of magnetic field: a comparative study

    NASA Astrophysics Data System (ADS)

    Acharya, Nilankush; Das, Kalidas; Kundu, Prabir Kumar

    2018-04-01

    In this work, we investigate the rotating flow of carbon nanotubes passing over a stretching sheet. Two types of carbon nanotube, i.e. the single-wall carbon nanotube (SWCNT) and the multi-wall carbon nanotube (MWCNT), have been employed to illustrate the fine points of the flow. Suitable transformations have been employed to reduce the governing partial differential equations to non-dimensional ordinary ones. The transformed equations have been solved by the RK-4 procedure. The effects of the key flow factors on the velocity and temperature profiles have been illustrated through tables and graphs and examined with physical reasoning. Our investigation confirms that the fluid temperature increases with the rotation parameter, while the Nusselt number decreases under the influence of the magnetic parameter.
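
    The RK-4 procedure named here is the classical fourth-order Runge-Kutta stepper; a generic version (applied to y' = -y as a check, not to the nanofluid system itself) looks as follows.

        import numpy as np

        def rk4(f, y0, t0, t1, n):
            """Classical RK4 integration of y' = f(t, y) from t0 to t1 in n steps."""
            t, y = t0, np.asarray(y0, float)
            h = (t1 - t0) / n
            for _ in range(n):
                k1 = f(t, y)
                k2 = f(t + h/2, y + h/2 * k1)
                k3 = f(t + h/2, y + h/2 * k2)
                k4 = f(t + h, y + h * k3)
                y = y + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)
                t += h
            return y

        print(rk4(lambda t, y: -y, [1.0], 0.0, 1.0, 100))   # ~ exp(-1) = 0.3679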

  20. Tables of the Inverse Laplace Transform of the Function e^(-s^β).

    PubMed

    Dishon, Menachem; Bendler, John T; Weiss, George H

    1990-01-01

    The inverse transform, g(t) = L^(-1)(e^(-s^β)), 0 < β < 1, is a stable law that arises in a number of different applications in chemical physics, polymer physics, solid-state physics, and applied mathematics. Because of its important applications, a number of investigators have suggested approximations to g(t). However, there have so far been no accurately calculated values available for checking or other purposes. We present here tables, accurate to six figures, of g(t) for a number of values of β between 0.25 and 0.999. In addition, since g(t), regarded as a function of β, is uni-modal with a peak occurring at t = t_max, we both tabulate and graph t_max and 1/g(t_max) as a function of β, as well as giving polynomial approximations to 1/g(t_max).

  1. Tables of the Inverse Laplace Transform of the Function e^(-s^β)

    PubMed Central

    Dishon, Menachem; Bendler, John T.; Weiss, George H.

    1990-01-01

    The inverse transform, g(t) = L^(-1)(e^(-s^β)), 0 < β < 1, is a stable law that arises in a number of different applications in chemical physics, polymer physics, solid-state physics, and applied mathematics. Because of its important applications, a number of investigators have suggested approximations to g(t). However, there have so far been no accurately calculated values available for checking or other purposes. We present here tables, accurate to six figures, of g(t) for a number of values of β between 0.25 and 0.999. In addition, since g(t), regarded as a function of β, is uni-modal with a peak occurring at t = t_max, we both tabulate and graph t_max and 1/g(t_max) as a function of β, as well as giving polynomial approximations to 1/g(t_max). PMID:28179785
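
    Values of g(t) can be reproduced today with a general-purpose numerical inverter; recent mpmath versions ship one, and for β = 1/2 there is a closed form to check against, g(t) = t^(-3/2) e^(-1/(4t)) / (2√π).

        import mpmath as mp

        mp.mp.dps = 25
        beta = mp.mpf("0.5")
        g = lambda t: mp.invertlaplace(lambda s: mp.exp(-s**beta), t, method="talbot")

        t = mp.mpf("1.0")
        exact = t**mp.mpf("-1.5") * mp.exp(-1 / (4 * t)) / (2 * mp.sqrt(mp.pi))
        print(g(t), exact)   # the two values should agree to many digits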

  2. Graph Theory Meets Ab Initio Molecular Dynamics: Atomic Structures and Transformations at the Nanoscale

    NASA Astrophysics Data System (ADS)

    Pietrucci, Fabio; Andreoni, Wanda

    2011-08-01

    Social permutation invariant coordinates are introduced describing the bond network around a given atom. They originate from the largest eigenvalue and the corresponding eigenvector of the contact matrix, are invariant under permutation of identical atoms, and bear a clear signature of an order-disorder transition. Once combined with ab initio metadynamics, these coordinates are shown to be a powerful tool for the discovery of low-energy isomers of molecules and nanoclusters as well as for a blind exploration of isomerization, association, and dissociation reactions.
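
    The construction can be sketched in a few lines: build a smooth contact matrix from pair distances, then take its principal eigenpair; the logistic switching function and its parameters below are illustrative choices, not necessarily the paper's.

        import numpy as np

        def contact_eigcoord(positions, r0=1.6):
            """Principal eigenpair of a soft contact matrix; sorting the
            per-atom values makes the coordinates permutation invariant."""
            d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
            C = 1.0 / (1.0 + np.exp(4.0 * (d - r0)))   # ~1 if bonded, ~0 otherwise
            np.fill_diagonal(C, 0.0)
            w, v = np.linalg.eigh(C)                   # symmetric matrix -> eigh
            lam, vec = w[-1], np.abs(v[:, -1])         # largest eigenvalue/eigenvector
            return lam, np.sort(lam * vec)

        pos = np.random.default_rng(1).normal(size=(8, 3))
        print(contact_eigcoord(pos))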

  3. Distributed consensus for discrete-time heterogeneous multi-agent systems

    NASA Astrophysics Data System (ADS)

    Zhao, Huanyu; Fei, Shumin

    2018-06-01

    This paper studies the consensus problem for a class of discrete-time heterogeneous multi-agent systems. Two kinds of consensus algorithms will be considered. The heterogeneous multi-agent systems considered are converted into equivalent error systems by a model transformation. Then we analyse the consensus problem of the original systems by analysing the stability problem of the error systems. Some sufficient conditions for consensus of heterogeneous multi-agent systems are obtained by applying algebraic graph theory and matrix theory. Simulation examples are presented to show the usefulness of the results.

  4. Comment legitimer une innovation theorique en grammaire transformationelle: la theorie des traces (How to Legitimize a Theoretical Innovation in Transformational Grammar: The Trace Theory)

    ERIC Educational Resources Information Center

    Pollock, J. Y.

    1976-01-01

    Taking as an example the "trace theory" of movement rules developed at MIT, the article shows the conditions to which a theoretical innovation must conform in order to be considered legitimate in the context of transformational grammar's "Extended Standard Theory." (Text is in French.) (CDSH/AM)

  5. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    PubMed

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which increases the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Lastly, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
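
    A generic pair of multiplicative updates with graph (manifold) regularization, in the spirit of graph-regularized NMF rather than this paper's exact improved cost function, can be written as follows.

        import numpy as np

        def gnmf(X, W, k=3, lam=1.0, iters=200, eps=1e-9):
            """min ||X - U V^T||_F^2 + lam * tr(V^T L V) with L = D - W,
            solved by multiplicative updates that keep U, V nonnegative."""
            m, n = X.shape
            rng = np.random.default_rng(0)
            U, V = rng.random((m, k)), rng.random((n, k))
            D = np.diag(W.sum(axis=1))
            for _ in range(iters):
                U *= (X @ V) / (U @ (V.T @ V) + eps)
                V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
            return U, V

        X = np.abs(np.random.default_rng(1).random((20, 12)))   # toy nonnegative data
        W = np.ones((12, 12)) - np.eye(12)                      # trivial sample graph
        U, V = gnmf(X, W)
        print(np.linalg.norm(X - U @ V.T))                      # reconstruction error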

  6. Directed acyclic graphs (DAGs): an aid to assess confounding in dental research.

    PubMed

    Merchant, Anwar T; Pitiphat, Waranuch

    2002-12-01

    Confounding, a special type of bias, occurs when an extraneous factor is associated with the exposure and independently affects the outcome. In order to get an unbiased estimate of the exposure-outcome relationship, we need to identify potential confounders, collect information on them, design appropriate studies, and adjust for confounding in data analysis. However, it is not always clear which variables to collect information on and adjust for in the analyses. Inappropriate adjustment for confounding can even introduce bias where none existed. Directed acyclic graphs (DAGs) provide a method to select potential confounders and minimize bias in the design and analysis of epidemiological studies. DAGs have been used extensively in expert systems and robotics. Robins (1987) introduced the application of DAGs in epidemiology to overcome shortcomings of traditional methods to control for confounding, especially as they related to unmeasured confounding. DAGs provide a quick and visual way to assess confounding without making parametric assumptions. We introduce DAGs, starting with definitions and rules for basic manipulation, stressing applications more than theory. We then demonstrate their application in the control of confounding through examples of observational and cross-sectional epidemiological studies.
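
    A first-pass screen for confounders on a DAG is to look for common causes of the exposure and the outcome, which takes two reachability queries; the toy graph below is invented, and the full back-door criterion involves blocking paths, not just this set.

        import networkx as nx

        # Toy dental-research DAG: smoking causes both the exposure and the outcome.
        dag = nx.DiGraph([("smoking", "periodontitis"), ("smoking", "heart_disease"),
                          ("periodontitis", "heart_disease"), ("flossing", "periodontitis")])
        exposure, outcome = "periodontitis", "heart_disease"

        # Causes of the outcome NOT acting through the exposure:
        g_minus = dag.copy(); g_minus.remove_node(exposure)
        candidates = nx.ancestors(dag, exposure) & nx.ancestors(g_minus, outcome)
        print("candidate confounders:", candidates)   # {'smoking'}, not 'flossing'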

  7. Growth dominates choice in network percolation

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Waagen, Alex; D'Souza, Raissa M.

    2013-09-01

    The onset of large-scale connectivity in a network (i.e., percolation) often has a major impact on the function of the system. Traditionally, graph percolation is analyzed by adding edges to a fixed set of initially isolated nodes. Several years ago, it was shown that adding nodes as well as edges to the graph can yield an infinite order transition, which is much smoother than the traditional second-order transition. More recently, it was shown that adding edges via a competitive process to a fixed set of initially isolated nodes can lead to a delayed, extremely abrupt percolation transition with a significant jump in large but finite systems. Here we analyze a process that combines both node arrival and edge competition. If started from a small collection of seed nodes, we show that the impact of node arrival dominates: although we can significantly delay percolation, the transition is of infinite order. Thus, node arrival can mitigate the trade-off between delay and abruptness that is characteristic of explosive percolation transitions. This realization may inspire new design rules where network growth can temper the effects of delay, creating opportunities for network intervention and control.
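
    The combined process (node arrival with some probability, otherwise product-rule edge competition) is straightforward to simulate with union-find; the sketch below is schematic, with the step count and arrival probability chosen for illustration.

        import random

        def percolation_growth(steps=200_000, p_node=0.5, seed=0):
            """Node arrival vs product-rule edge competition; returns the
            final relative size of the largest component."""
            rng = random.Random(seed)
            parent, size = [], []
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]    # path halving
                    x = parent[x]
                return x
            for _ in range(steps):
                if rng.random() < p_node or len(parent) < 4:
                    parent.append(len(parent)); size.append(1)   # node arrival
                else:
                    n = len(parent)
                    e1, e2 = rng.sample(range(n), 2), rng.sample(range(n), 2)
                    # product rule: keep the edge with the smaller cluster-size product
                    pick = e1 if (size[find(e1[0])] * size[find(e1[1])] <=
                                  size[find(e2[0])] * size[find(e2[1])]) else e2
                    ra, rb = find(pick[0]), find(pick[1])
                    if ra != rb:
                        size[ra] += size[rb]; parent[rb] = ra
            return max(size[find(i)] for i in range(len(parent))) / len(parent)

        print(percolation_growth())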

  8. Resistance and Security Index of Networks: Structural Information Perspective of Network Security

    NASA Astrophysics Data System (ADS)

    Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng

    2016-06-01

    Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G, written R(G), as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is R(G) = H^1(G) - H^2(G), where H^1(G) and H^2(G) are the one- and two-dimensional structure entropies of G, respectively. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G, that is, R(G)/H^1(G). We show that the resistance and security index are both well-defined measures for the security of the networks.

  9. Resistance and Security Index of Networks: Structural Information Perspective of Network Security.

    PubMed

    Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng

    2016-06-03

    Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G, written R(G), as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is R(G) = H^1(G) - H^2(G), where H^1(G) and H^2(G) are the one- and two-dimensional structure entropies of G, respectively. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G, that is, R(G)/H^1(G). We show that the resistance and security index are both well-defined measures for the security of the networks.

  10. Resistance and Security Index of Networks: Structural Information Perspective of Network Security

    PubMed Central

    Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng

    2016-01-01

    Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G, written R(G), as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is R(G) = H^1(G) - H^2(G), where H^1(G) and H^2(G) are the one- and two-dimensional structure entropies of G, respectively. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G, that is, R(G)/H^1(G). We show that the resistance and security index are both well-defined measures for the security of the networks. PMID:27255783
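
    The one-dimensional structure entropy that enters the resistance law is, in Li and Pan's framework, the Shannon entropy of the random-walk stationary distribution d_i/2m; a direct transcription follows (the two-dimensional version additionally needs a module partition, omitted here).

        import math
        import networkx as nx

        def h1(G):
            """One-dimensional structure entropy:
            H^1(G) = -sum_i (d_i/2m) * log2(d_i/2m)."""
            two_m = 2 * G.number_of_edges()
            return -sum((d / two_m) * math.log2(d / two_m)
                        for _, d in G.degree() if d > 0)

        print(h1(nx.karate_club_graph()))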

  11. Exact sum rules for inhomogeneous strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amore, Paolo, E-mail: paolo.amore@gmail.com

    2013-11-15

    We derive explicit expressions for the sum rules of the eigenvalues of inhomogeneous strings with arbitrary density and with different boundary conditions. We show that the sum rule of order N may be obtained in terms of a diagrammatic expansion, with (N−1)!/2 independent diagrams. These sum rules are used to derive upper and lower bounds to the energy of the fundamental mode of an inhomogeneous string; we also show that it is possible to improve these approximations by taking into account the asymptotic behavior of the spectrum and applying the Shanks transformation to the sequence of approximations obtained to the different orders. We discuss three applications of these results. Highlights: • We derive an explicit expression for the sum rules of an inhomogeneous string. • We obtain a diagrammatic representation for the sum rules of a given order. • We obtain precise bounds on the lowest eigenvalue of the string.
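
    The Shanks transformation used to sharpen the bounds replaces a sequence A_n by S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n); a quick sketch on the slowly converging Leibniz series for π shows the effect.

        import math

        def shanks(seq):
            """One pass of the Shanks transformation over a sequence."""
            return [(seq[i+1] * seq[i-1] - seq[i] ** 2)
                    / (seq[i+1] + seq[i-1] - 2 * seq[i])
                    for i in range(1, len(seq) - 1)]

        partial = [4 * sum((-1) ** k / (2 * k + 1) for k in range(n + 1))
                   for n in range(10)]
        print(partial[-1], shanks(partial)[-1], shanks(shanks(partial))[-1], math.pi)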

  12. Blastoid Body Size - Changes from the Carboniferous to the End-Permian

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Tolosa, R.; Heim, N. A.; Payne, J.

    2013-12-01

    Climate, known for affecting biodiversity within genera of animal species, is often addressed as a major variable of geological systems. The Mississippian subperiod of the Carboniferous was noted for its lush, tropical climate that sustained a variety of biological life. In contrast, the Permian was marked primarily by an ice age that had started earlier, during the Pennsylvanian. The blastoids, a class of the phylum Echinodermata, were in existence from the Silurian (443.4 Ma) to the end of the Permian (252.28 Ma). This study focused on whether climate affected blastoid theca size over the span of those one hundred million years between the Mississippian and the Permian, or whether it was simply a negligible factor. We analyzed size data from the Treatise on Invertebrate Paleontology and correlated it with both Cope's Rule, which states that size increases with geologic time, and Bergmann's Rule, which states that latitude and temperature are catalysts for size change. CO2 levels from known records served as a proxy for global temperature. Our results indicated that the blastoids increased in size by 59% over geologic time, consistent with Cope's Rule. According to our graphs in R, there was an inverse relationship between volume and climate: size decreased as temperature increased, which follows Bergmann's Rule. However, we also wanted to observe spatial factors regarding Bergmann's Rule, such as paleolatitude and paleolongitude. This information was taken from the Paleobiology Database and showed that a majority of the blastoids were found near the equator, which, according to the other part of Bergmann's Rule, suggests that they would therefore increase in size. Further tests implied strong correlations between temperature, volume, and paleolocation. We ultimately believe that although Cope's Rule is in effect, Bergmann's mechanisms for size may not apply to the blastoids due to the environments in which the blastoids lived or their anatomical compositions.

  13. Integrative gene network construction to analyze cancer recurrence using semi-supervised learning.

    PubMed

    Park, Chihyun; Ahn, Jaegyoon; Kim, Hyunjin; Park, Sanghyun

    2014-01-01

    The prognosis of cancer recurrence is an important research area in bioinformatics and is challenging due to the small sample sizes compared to the vast number of genes. There have been several attempts to predict cancer recurrence. Most studies employed a supervised approach, which uses only a few labeled samples. Semi-supervised learning can be a great alternative to solve this problem. There have been few attempts based on manifold assumptions to reveal the detailed roles of identified cancer genes in recurrence. In order to predict cancer recurrence, we proposed a novel semi-supervised learning algorithm based on a graph regularization approach. We transformed the gene expression data into a graph structure for semi-supervised learning and integrated protein interaction data with the gene expression data to select functionally-related gene pairs. Then, we predicted the recurrence of cancer by applying a regularization approach to the constructed graph containing both labeled and unlabeled nodes. The average improvement rate of accuracy for three different cancer datasets was 24.9% compared to existing supervised and semi-supervised methods. We performed functional enrichment on the gene networks used for learning. We identified that those gene networks are significantly associated with cancer-recurrence-related biological functions. Our algorithm was developed with standard C++ and is available in Linux and MS Windows formats in the STL library. The executable program is freely available at: http://embio.yonsei.ac.kr/~Park/ssl.php.
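
    One standard way to realize graph-regularized semi-supervised prediction is the closed form f = (I + αL)^(-1) y; the toy graph and labels below are invented stand-ins for the gene-expression network and recurrence labels.

        import numpy as np

        def ssl_graph_regularization(W, y_labeled, labeled_idx, alpha=1.0):
            """Scores f minimizing ||f - y||^2 + alpha * f^T L f over the graph."""
            n = W.shape[0]
            L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
            y = np.zeros(n)
            y[labeled_idx] = y_labeled            # +1 recurrence, -1 non-recurrence
            return np.sign(np.linalg.solve(np.eye(n) + alpha * L, y))

        W = np.array([[0, 1, 1, 0, 0], [1, 0, 1, 0, 0], [1, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1], [0, 0, 0, 1, 0]], float)
        print(ssl_graph_regularization(W, [1, -1], [0, 4]))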

  14. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    NASA Astrophysics Data System (ADS)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  15. Managing curriculum transformation within strict university governance structures: an example from Damascus University Medical School.

    PubMed

    Kayyal, Mohammad; Gibbs, Trevor

    2012-01-01

    As the world of medical education moves forward, it becomes increasingly clear that the transformative process is not equally easy for all. Across the globe, there appear to be many barriers that obstruct or threaten innovation and change, most of which pose almost insurmountable problems for many schools. If transformative education is to result in an equitable raising of standards across such an unlevel playing field, schools have to find ways of overcoming these barriers. One seemingly common barrier to development occurs when medical schools are trapped within strict university governance structures: rules and regulations that are frequently inappropriate and obstructive to the transformation that must occur in today's medical educational paradigm. The Faculty of Medicine at Damascus University, one of the oldest and foremost medical schools in the Middle East, is one such school, where rigid rules and regulations and traditional values are obstructing transformative change. This paper describes the problems, which the authors believe to be common to many, and explores how attempts have been made to overcome them and move the school into the twenty-first century. The ultimate purpose of this paper is to raise awareness of the issue, share the lessons learned in order to assist others who are experiencing similar problems, and possibly create opportunities for dialogue between schools.

  16. Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification

    NASA Astrophysics Data System (ADS)

    Wang, X. P.; Hu, Y.; Chen, J.

    2018-04-01

    Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple-graph-based label propagation method that combines an adjacency graph with a similarity graph. We propose to construct the similarity graph from similarity probabilities, which exploit the probable label similarity among examples. The adjacency graph is built with a common manifold learning method, which effectively improves the classification accuracy of hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites the adjacency graph and the similarity graph, produces better classification results in the label propagation framework than other manifold-learning-based and sparse-representation-based graph Laplacians.
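
    A minimal sketch of the coupled-graph idea, under the assumption that the two graphs enter the propagation step through the sum of their Laplacians; the feature matrix, class-probability estimates, and the rbf_graph helper are hypothetical stand-ins, not the authors' construction.

        import numpy as np

        def rbf_graph(X, k=3, sigma=1.0):
            """k-NN adjacency graph with Gaussian (RBF) edge weights."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            W = np.exp(-d2 / (2 * sigma**2))
            np.fill_diagonal(W, 0.0)
            # keep only each node's k strongest neighbours, then symmetrize
            weakest = np.argsort(W, axis=1)[:, :-k]
            for i, row in enumerate(weakest):
                W[i, row] = 0.0
            return np.maximum(W, W.T)

        def laplacian(W):
            return np.diag(W.sum(1)) - W

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 5))                  # hypothetical spectral features
        P = rng.random((30, 4))
        P /= P.sum(1, keepdims=True)                  # hypothetical class-probability estimates
        W_adj = rbf_graph(X)                          # feature-space adjacency graph
        W_sim = P @ P.T                               # label-similarity graph (prob. two samples share a class)
        np.fill_diagonal(W_sim, 0.0)
        L_couple = laplacian(W_adj) + laplacian(W_sim)  # coupled Laplacian for propagation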

  17. Limit of validity of Ostwald's rule of stages in a statistical mechanical model of crystallization.

    PubMed

    Hedges, Lester O; Whitelam, Stephen

    2011-10-28

    We have only rules of thumb with which to predict how a material will crystallize, chief among which is Ostwald's rule of stages. It states that the first phase to appear upon transformation of a parent phase is the one closest to it in free energy. Although sometimes upheld, the rule is without theoretical foundation and is not universally obeyed, highlighting the need for microscopic understanding of crystallization controls. Here we study in detail the crystallization pathways of a prototypical model of patchy particles. The range of crystallization pathways it exhibits is richer than can be predicted by Ostwald's rule, but a combination of simulation and analytic theory reveals clearly how these pathways are selected by microscopic parameters. Our results suggest strategies for controlling self-assembly pathways in simulation and experiment.

  18. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs, and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.
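
    A minimal sketch of the multi-centrality ingredient, assuming the multi-centrality matrix simply stacks several per-node centrality scores column-wise before a standard PCA; the details of MC-GPCA in the paper may differ, and the karate-club graph is only a stand-in dataset.

        import networkx as nx
        import numpy as np

        G = nx.karate_club_graph()                       # stand-in graph
        feats = [nx.degree_centrality(G),
                 nx.closeness_centrality(G),
                 nx.betweenness_centrality(G),
                 nx.pagerank(G)]
        # Stack the centralities into an n-by-4 multi-centrality matrix.
        M = np.array([[f[v] for f in feats] for v in G.nodes()])
        M -= M.mean(axis=0)                              # center the features
        # PCA via SVD: columns of Vt.T are principal axes, U*S the node scores.
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        scores = U * S                                   # node embedding
        outlier = np.argmax(np.linalg.norm(scores[:, :2], axis=1))
        print("most anomalous node:", list(G.nodes())[outlier])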

  19. Graphs, matrices, and the GraphBLAS: Seven good reasons

    DOE PAGES

    Kepner, Jeremy; Bader, David; Buluç, Aydın; ...

    2015-01-01

    The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istcbigdata.org/GraphBlas) is being developed to bring the potential of matrix based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with analysis of graphs.
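
    The matrix view of graph algorithms that the GraphBLAS formalizes can be illustrated with breadth-first search as repeated sparse matrix-vector products; this sketch uses plain SciPy rather than a GraphBLAS implementation, with a hypothetical five-node graph.

        import numpy as np
        import scipy.sparse as sp

        # Adjacency of a small directed graph; edge i -> j stored as A[j, i] = 1,
        # so that A @ x pushes a frontier vector x forward by one hop.
        edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
        rows = [j for _, j in edges]
        cols = [i for i, _ in edges]
        A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(5, 5))

        frontier = np.zeros(5); frontier[0] = 1.0
        visited = frontier > 0
        levels = {0: 0}
        depth = 0
        while frontier.any():
            depth += 1
            nxt = (A @ frontier > 0) & ~visited      # one semiring-style step
            for v in np.flatnonzero(nxt):
                levels[v] = depth
            visited |= nxt
            frontier = nxt.astype(float)
        print(levels)   # {0: 0, 1: 1, 2: 2, 3: 2, 4: 3}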

  20. Adjusting protein graphs based on graph entropy.

    PubMed

    Peng, Sheng-Lung; Tsay, Yu-Wei

    2014-01-01

    Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling instead of looking for a minimum superimposition of pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified, so a criterion is needed to determine whether a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that graph entropy helps assess the conformational soundness of a protein graph model, and it indirectly contributes to protein structural comparison when the protein graph is sound.
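
    For illustration, one simple graph-entropy variant is the Shannon entropy of the normalized degree sequence; the paper's specific entropy measure for protein graphs may differ, so the sketch below is only indicative of how such a criterion can separate regular graphs from hub-dominated ones.

        import math
        import networkx as nx

        def degree_entropy(G):
            """Shannon entropy of the normalized degree sequence, one simple
            graph-entropy variant (the paper's exact measure may differ)."""
            degs = [d for _, d in G.degree() if d > 0]
            total = sum(degs)
            return -sum((d / total) * math.log2(d / total) for d in degs)

        # Compare a highly regular graph with a hub-dominated one.
        print(degree_entropy(nx.cycle_graph(10)))   # all vertices alike -> higher entropy (~3.32)
        print(degree_entropy(nx.star_graph(9)))     # one hub dominates  -> lower entropy (~2.59)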

  2. Deciding where to attend: Large-scale network mechanisms underlying attention and intention revealed by graph-theoretic analysis.

    PubMed

    Liu, Yuelu; Hong, Xiangfei; Bengson, Jesse J; Kelley, Todd A; Ding, Mingzhou; Mangun, George R

    2017-08-15

    The neural mechanisms by which intentions are transformed into actions remain poorly understood. We investigated the network mechanisms underlying spontaneous voluntary decisions about where to focus visual-spatial attention (willed attention). Graph-theoretic analysis of two independent datasets revealed that regions activated during willed attention form a set of functionally-distinct networks corresponding to the frontoparietal network, the cingulo-opercular network, and the dorsal attention network. Contrasting willed attention with instructed attention (where attention is directed by external cues), we observed that the dorsal anterior cingulate cortex was allied with the dorsal attention network in instructed attention, but shifted connectivity during willed attention to interact with the cingulo-opercular network, which then mediated communications between the frontoparietal network and the dorsal attention network. Behaviorally, greater connectivity in network hubs, including the dorsolateral prefrontal cortex, the dorsal anterior cingulate cortex, and the inferior parietal lobule, was associated with faster reaction times. These results, shown to be consistent across the two independent datasets, uncover the dynamic organization of functionally-distinct networks engaged to support intentional acts. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A python tool for the implementation of domain-specific languages

    NASA Astrophysics Data System (ADS)

    Dejanović, Igor; Vaderna, Renata; Milosavljević, Gordana; Simić, Miloš; Vuković, Željko

    2017-07-01

    In this paper we describe textX, a meta-language and a tool for building Domain-Specific Languages. It is implemented in Python using the Arpeggio PEG (Parsing Expression Grammar) parser library. From a single language description (grammar), textX builds both a parser and a meta-model (a.k.a. abstract syntax) for the language. The parser is used to parse textual representations of models conforming to the meta-model; as a result of parsing, a Python object graph is automatically created whose structure conforms to the meta-model defined by the grammar. This approach frees the developer from the need to manually analyse a parse tree and transform it into another, more suitable representation. The textX library is independent of any integrated development environment and can easily be integrated into any Python project. The textX tool works as a grammar interpreter: the parser is configured at run time from the grammar. textX is a free and open-source project available on GitHub.
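
    A small usage sketch of the textX workflow the abstract describes: a single grammar string yields a meta-model, whose parser turns program text directly into a Python object graph. The toy "point" DSL here is hypothetical.

        from textx import metamodel_from_str

        # A tiny DSL of named points. textX builds both the parser and the
        # meta-model from this single grammar description.
        grammar = """
        Model: points+=Point;
        Point: 'point' name=ID x=INT ',' y=INT ';';
        """
        mm = metamodel_from_str(grammar)
        model = mm.model_from_str("point a 1, 2; point b 3, 4;")
        for p in model.points:      # the parsed object graph; no parse-tree walking needed
            print(p.name, p.x, p.y)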

  4. Incorporation of bentonite clay in cassava starch films for the reduction of water vapor permeability.

    PubMed

    Monteiro, M K S; Oliveira, V R L; Santos, F K G; Barros Neto, E L; Leite, R H L; Aroucha, E M M; Silva, R R; Silva, K N O

    2018-03-01

    A complete 2³ factorial design was applied to identify the influence of the cassava starch (A), glycerol (B), and modified clay (C) content on the water vapor permeability (WVP) of cassava starch films with bentonite clay added as a filler; the clay surface was modified by ion exchange with cetyltrimethyl ammonium bromide. The films were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), atomic force microscopy (AFM), and scanning electron microscopy (SEM). The factorial analysis suggested a mathematical model that predicts the optimal condition for minimizing WVP. The influence of each individual factor and interaction on the WVP was investigated with a Pareto chart and response surfaces, and the optimization was established with the desirability function. The order of statistical significance of the investigated effects on the WVP observed in the Pareto chart was C > B > A > BC > AC. The AB, BC, and AC interactions showed that the modified clay was the factor of greatest significance. Copyright © 2017 Elsevier Ltd. All rights reserved.
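
    For readers unfamiliar with 2³ factorial analysis, the sketch below estimates main and interaction effects from coded ±1 factor levels; the eight WVP response values are placeholders, not the study's data.

        import numpy as np
        from itertools import product

        # Coded design matrix for a full 2^3 factorial: A = starch, B = glycerol,
        # C = modified clay, each at levels -1/+1.
        design = np.array(list(product([-1, 1], repeat=3)))
        # Hypothetical WVP responses for the 8 runs (placeholder values only).
        y = np.array([4.1, 3.2, 3.9, 3.0, 3.5, 2.4, 3.3, 2.1])

        for name, col in zip("ABC", design.T):
            effect = y[col == 1].mean() - y[col == -1].mean()
            print(f"main effect {name}: {effect:+.2f}")
        # Interaction effects use element-wise products of columns, e.g. B*C:
        bc = design[:, 1] * design[:, 2]
        print(f"interaction BC: {y[bc == 1].mean() - y[bc == -1].mean():+.2f}")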

  5. Quantifying the web browser ecosystem

    PubMed Central

    Ferdman, Sela; Minkov, Einat; Gefen, David

    2017-01-01

    Contrary to the assumption that web browsers are designed to support the user, an examination of 900,000 distinct PCs shows that web browsers comprise a complex ecosystem with millions of addons collaborating and competing with each other. It is possible for addons to "sneak in" through third-party installations or to get "kicked out" by their competitors without user involvement. This study examines that ecosystem quantitatively by constructing a large-scale graph with nodes corresponding to users, addons, and words (terms) that describe addon functionality. Analyzing addon interactions at the user level using the Personalized PageRank (PPR) random-walk measure shows that the graph demonstrates ecological resilience. Adapting the PPR model to analyze the browser ecosystem at the level of addon manufacturer, the study shows that some addon companies are in symbiosis while others clash with each other, as shown by analyzing the behavior of 18 prominent addon manufacturers. The results may offer insight into how other evolving internet ecosystems behave and suggest a methodology for measuring this behavior. Specifically, applying such a methodology could transform the addon market. PMID:28644833
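
    A minimal sketch of the Personalized PageRank measure on a toy user-addon-term graph (the nodes and edges here are hypothetical, not the study's data): restarting the random walk at one user ranks every other node by its proximity to that user.

        import networkx as nx

        # Hypothetical miniature graph; the study's node sets are users,
        # addons, and descriptive terms.
        G = nx.Graph()
        G.add_edges_from([("user1", "addonA"), ("user1", "addonB"),
                          ("user2", "addonB"), ("user2", "addonC"),
                          ("addonA", "toolbar"), ("addonC", "toolbar")])

        # Personalized PageRank: restart the random walk at user1 and rank the
        # rest of the graph by proximity to that user.
        restart = {n: (1.0 if n == "user1" else 0.0) for n in G}
        ppr = nx.pagerank(G, alpha=0.85, personalization=restart)
        for node, score in sorted(ppr.items(), key=lambda kv: -kv[1]):
            print(f"{node:8s} {score:.3f}")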

  6. Robust head pose estimation via supervised manifold learning.

    PubMed

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, with the assumption that, with the pose being the only variable, the face images should lie in a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (the pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction, as well as the graph weights, by constraining them with the pose-angle information. For the third stage, we present a supervised neighborhood-based linear feature transformation algorithm that keeps data points with similar pose angles close together and data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.
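
    A rough sketch of the supervised neighborhood-construction stage, assuming the pose labels enter as a penalty blended into the pairwise distance before k-NN selection; the blending rule, features, and the parameter lam are hypothetical, not the authors' exact redefinition.

        import numpy as np

        def supervised_knn(X, angles, k=5, lam=0.5):
            """Build a k-NN graph whose distances blend appearance distance
            with pose-angle difference (a hypothetical blend; the paper's
            redefinition may differ)."""
            feat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            pose = np.abs(angles[:, None] - angles[None, :])
            dist = (1 - lam) * feat + lam * pose      # pose-constrained distance
            np.fill_diagonal(dist, np.inf)
            return np.argsort(dist, axis=1)[:, :k]    # neighbours share similar poses

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 64))        # hypothetical face-image features
        angles = rng.uniform(-90, 90, 100)    # pose labels of training samples
        print(supervised_knn(X, angles)[0])   # indices of sample 0's neighbours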

  7. Transforming phylogenetic networks: Moving beyond tree space.

    PubMed

    Huber, Katharina T; Moulton, Vincent; Wu, Taoyang

    2016-09-07

    Phylogenetic networks are a generalization of phylogenetic trees that are used to represent reticulate evolution. Unrooted phylogenetic networks form a special class of such networks, which naturally generalize unrooted phylogenetic trees. In this paper we define two operations on unrooted phylogenetic networks, one of which is a generalization of the well-known nearest-neighbor interchange (NNI) operation on phylogenetic trees. We show that any unrooted phylogenetic network can be transformed into any other such network using only these operations. This generalizes the well-known fact that any phylogenetic tree can be transformed into any other such tree using only NNI operations. It also allows us to define a generalization of tree space and to define some new metrics on unrooted phylogenetic networks. To prove our main results, we employ some fascinating new connections between phylogenetic networks and cubic graphs that we have recently discovered. Our results should be useful in developing new strategies to search for optimal phylogenetic networks, a topic that has recently generated some interest in the literature, as well as for providing new ways to compare networks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Characterizing Containment and Related Classes of Graphs,

    DTIC Science & Technology

    1985-01-01

    [Abstract not available; the retrieved record consists of fragmented bibliography entries. Recoverable citations include: Golumbic, M. C., D. Rotem and J. Urrutia, "Comparability graphs and intersection graphs," Discrete Math. 43 (1983) 37-40; Scheinerman, E. R., "Intersection classes and multiple intersection parameters of graphs"; Golumbic, M. C., "Containment graphs and intersection graphs," Discrete Math., to appear; and an entry on representations of circular-arc and interval graphs, Canad. Jour. of Math. 16 (1964) 539-548.]

  9. A Collection of Features for Semantic Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliassi-Rad, T; Fodor, I K; Gallagher, B

    2007-05-02

    Semantic graphs are commonly used to represent data from one or more data sources. Such graphs extend traditional graphs by imposing types on both nodes and links. This type information defines permissible links among specified nodes and can be represented as a graph commonly referred to as an ontology or schema graph. Figure 1 depicts an ontology graph for data from the National Association of Securities Dealers. Each node type and link type may also have a list of attributes. To capture the increased complexity of semantic graphs, concepts derived for standard graphs have to be extended. This document briefly explains features commonly used to characterize graphs, and their extensions to semantic graphs. The document is divided into two sections: Section 2 contains the feature descriptions for static graphs, and Section 3 extends the features to semantic graphs that vary over time.
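
    The node- and link-typing described above can be mimicked with attributed graphs: a schema (ontology) listing permissible (node type, link type, node type) triples gates every edge insertion. The triples below are hypothetical, not the NASD ontology from the report.

        import networkx as nx

        # Ontology (schema): which link types are permitted between node types.
        schema = {("person", "owns", "account"), ("account", "trades", "security")}

        def add_typed_edge(G, u, u_type, v, v_type, link_type):
            """Insert an edge only if the schema permits this typed triple."""
            if (u_type, link_type, v_type) not in schema:
                raise ValueError(f"schema forbids {u_type} -{link_type}-> {v_type}")
            G.add_node(u, type=u_type)
            G.add_node(v, type=v_type)
            G.add_edge(u, v, key=link_type, type=link_type)

        G = nx.MultiDiGraph()
        add_typed_edge(G, "alice", "person", "acct42", "account", "owns")
        add_typed_edge(G, "acct42", "account", "ACME", "security", "trades")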

  10. Graphing the order of the sexes: constructing, recalling, interpreting, and putting the self in gender difference graphs.

    PubMed

    Hegarty, Peter; Lemieux, Anthony F; McQueen, Grant

    2010-03-01

    Graphs seem to connote facts more than words or tables do. Consequently, they seem unlikely places to spot implicit sexism at work. Yet, in 6 studies (N = 741), women and men constructed (Study 1) and recalled (Study 2) gender difference graphs with men's data first, and graphed powerful groups (Study 3) and individuals (Study 4) ahead of weaker ones. Participants who interpreted graph order as evidence of author "bias" inferred that the author graphed his or her own gender group first (Study 5). Women's, but not men's, preferences to graph men first were mitigated when participants graphed a difference between themselves and an opposite-sex friend prior to graphing gender differences (Study 6). Graph production and comprehension are affected by beliefs and suppositions about the groups represented in graphs to a greater degree than cognitive models of graph comprehension or realist models of scientific thinking have yet acknowledged.

  11. Rewrite Systems, Pattern Matching, and Code Generation

    DTIC Science & Technology

    1988-06-09

    Transformations. "Quien a buen árbol se arrima, buena sombra le cobija" ("He who leans on a good tree is sheltered by good shade") [Old Spanish saying]. Trees are hierarchical mathematical objects. Their ... subtrees of a tree may match one or more rewrite rules. Traditional research in term rewrite systems is concerned with determining if a given system ... be simulated by sets of rewrite rules. Non-local conditions are described in an awkward way, since the only way to transmit information is indirectly.

  12. Combinatorics of transformations from standard to non-standard bases in Brauer algebras

    NASA Astrophysics Data System (ADS)

    Chilla, Vincenzo

    2007-05-01

    Transformation coefficients between standard bases for irreducible representations of the Brauer centralizer algebra $\mathfrak{B}_f(x)$ and split bases adapted to the $\mathfrak{B}_{f_1}(x) \times \mathfrak{B}_{f_2}(x) \subset \mathfrak{B}_f(x)$ subalgebra ($f_1 + f_2 = f$) are considered. After providing the suitable combinatorial background, based on the definition of the $i$-coupling relation on nodes of the subduction grid, we introduce a generalized version of the subduction graph which extends the one given in Chilla (2006 J. Phys. A: Math. Gen. 39 7657) for symmetric groups. Thus, we can describe the structure of the subduction system arising from the linear method and give an outline of the form of the solution space. An ordering relation on the grid is also given and then, as in the case of symmetric groups, the choices of the phases and of the free factors governing the multiplicity separations are discussed.

  13. Influence of magnetic field on chemically reactive blood flow through stenosed bifurcated arteries

    NASA Astrophysics Data System (ADS)

    Hossain, Khan Enaet; Haque, Md. Mohidul

    2017-06-01

    The dynamic response of mass transfer in chemically reactive blood flow through bifurcated arteries under stenotic conditions is studied numerically in the presence of a uniform magnetic field. The blood flowing through the artery is assumed to be incompressible, fully developed, and Newtonian. The nonlinear unsteady flow phenomena are governed by the Navier-Stokes and concentration equations. These equations, together with the appropriate boundary conditions describing the present biomechanical problem, are transformed using a radial transformation, and the numerical results are obtained using a finite difference technique. Effects of the stenosed bifurcation and an externally applied magnetic field on the blood flow with chemical reaction are discussed with the help of graphs. All the flow characteristics are found to be affected by the presence of the chemical reaction and by exposure to magnetic fields of different intensities. Finally, some important findings of the problem are summarized.

  14. Equilibrium statistical mechanics on correlated random graphs

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Agliari, Elena

    2011-02-01

    Biological and social networks have recently attracted great attention from physicists. Among several aspects, two main ones may be stressed: a non-trivial topology of the graph describing the mutual interactions between agents and, typically, imitative, weighted interactions. Despite such aspects being widely accepted and empirically confirmed, the schemes currently exploited in order to generate the expected topology are based on a priori assumptions and, in most cases, implement constant intensities for links. Here we propose a simple shift $[-1,+1] \to [0,+1]$ in the definition of patterns in a Hopfield model: a straightforward effect is the conversion of frustration into dilution. In fact, we show that by varying the bias of the pattern distribution, the network topology (generated by the reciprocal affinities among agents, i.e. the Hebbian rule) crosses various well-known regimes, ranging from fully connected, to an extreme dilution scenario, then to completely disconnected. These features, as well as small-world properties, are, in this context, emergent and no longer imposed a priori. The model is also investigated thoroughly from a thermodynamic perspective: the Ising model defined on the resulting graph is analytically solved (at a replica symmetric level) by extending the double stochastic stability technique, and presented together with its fluctuation theory for a picture of criticality. Overall, our findings show that, at least at equilibrium, dilution (of whatever kind) simply decreases the strength of the coupling felt by the spins, but leaves the paramagnetic/ferromagnetic flavors unchanged. The main difference with respect to previous investigations is that, within our approach, replicas do not appear: instead of (multi-)overlaps as order parameters, we introduce a class of magnetizations on all the possible subgraphs belonging to the main one investigated; as a consequence, for these objects a closure for a self-consistent relation is achieved.
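
    The conversion of frustration into dilution can be seen directly in a few lines: with patterns drawn from {0, 1} with bias a, the Hebbian couplings are non-negative and many vanish, so the bias tunes the graph's connectivity. A minimal sketch (the parameter values are arbitrary):

        import numpy as np

        rng = np.random.default_rng(2)
        N, P, a = 200, 10, 0.2               # agents, patterns, pattern bias
        # Patterns drawn from {0, 1} instead of {-1, +1}: entries are 1 with prob. a.
        xi = (rng.random((P, N)) < a).astype(float)

        # Hebbian couplings J_ij = sum_mu xi_i^mu xi_j^mu: an entry is zero
        # whenever two agents share no active pattern, so frustration turns
        # into dilution and the bias a controls connectivity.
        J = xi.T @ xi
        np.fill_diagonal(J, 0.0)
        print(f"fraction of connected pairs: {(J > 0).mean():.2f}")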

  15. Integrating Semantic Information in Metadata Descriptions for a Geoscience-wide Resource Inventory.

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Gupta, A.; Valentine, D.; Whitenack, T.; Ozyurt, I. B.; Grethe, J. S.; Schachne, A.

    2016-12-01

    Integrating semantic information into legacy metadata catalogs is a challenging issue and so far has been mostly done on a limited scale. We present experience of CINERGI (Community Inventory of Earthcube Resources for Geoscience Interoperability), an NSF Earthcube Building Block project, in creating a large cross-disciplinary catalog of geoscience information resources to enable cross-domain discovery. The project developed a pipeline for automatically augmenting resource metadata, in particular generating keywords that describe metadata documents harvested from multiple geoscience information repositories or contributed by geoscientists through various channels including surveys and domain resource inventories. The pipeline examines available metadata descriptions using text parsing, vocabulary management and semantic annotation and graph navigation services of GeoSciGraph. GeoSciGraph, in turn, relies on a large cross-domain ontology of geoscience terms, which bridges several independently developed ontologies or taxonomies including SWEET, ENVO, YAGO, GeoSciML, GCMD, SWO, and CHEBI. The ontology content enables automatic extraction of keywords reflecting science domains, equipment used, geospatial features, measured properties, methods, processes, etc. We specifically focus on issues of cross-domain geoscience ontology creation, resolving several types of semantic conflicts among component ontologies or vocabularies, and constructing and managing facets for improved data discovery and navigation. The ontology and keyword generation rules are iteratively improved as pipeline results are presented to data managers for selective manual curation via a CINERGI Annotator user interface. We present lessons learned from applying CINERGI metadata augmentation pipeline to a number of federal agency and academic data registries, in the context of several use cases that require data discovery and integration across multiple earth science data catalogs of varying quality and completeness. The inventory is accessible at http://cinergi.sdsc.edu, and the CINERGI project web page is http://earthcube.org/group/cinergi

  16. A simple hyperbolic model for communication in parallel processing environments

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
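
    As a hedged illustration only: one two-parameter function with the asymptotic behavior described (latency-dominated for small messages, rate-dominated for large ones) is the hyperbola below; the paper's exact parameterization and reduction rules may differ, and the CB parameter values shown are hypothetical.

        import numpy as np

        def service_time(m, latency, rate):
            """One hyperbolic service-time form with the stated asymptotes:
            t -> latency for very small messages and t -> m / rate for very
            large ones (an illustrative choice, not necessarily the paper's)."""
            return np.sqrt(latency**2 + (m / rate) ** 2)

        sizes = np.logspace(0, 6, 7)                        # 1 B ... 1 MB
        print(service_time(sizes, latency=1e-3, rate=1e6))  # hypothetical CB parameters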

  17. Enhancing the Functional Content of Eukaryotic Protein Interaction Networks

    PubMed Central

    Pandey, Gaurav; Arora, Sonali; Manocha, Sahil; Whalen, Sean

    2014-01-01

    Protein interaction networks are a promising type of data for studying complex biological systems. However, despite the rich information embedded in these networks, they face important data-quality challenges of noise and incompleteness that adversely affect the results obtained from their analysis. Here, we apply a robust measure of local network structure called common neighborhood similarity (CNS) to address these challenges. Although several CNS measures have been proposed in the literature, an understanding of their relative efficacies for the analysis of interaction networks has been lacking. We follow the framework of graph transformation to convert the given interaction network into a transformed network corresponding to each of the CNS measures evaluated. The effectiveness of each measure is then estimated by comparing the quality of protein function predictions obtained from its corresponding transformed network with those from the original network. Using a large set of human and fly protein interactions, and a set of GO terms for both, we find that several of the transformed networks produce more accurate predictions than those obtained from the original network. In particular, continuous CNS measures perform well at this task, especially for large networks. Further investigation reveals that the two major factors contributing to this improvement are the abilities of CNS measures to prune out noisy edges and to enhance functional coherence in the transformed networks. PMID:25275489

  18. Graphing with "LogoWriter."

    ERIC Educational Resources Information Center

    Yoder, Sharon K.

    This book discusses four kinds of graphs that are taught in mathematics at the middle school level: pictographs, bar graphs, line graphs, and circle graphs. The chapters on each of these types of graphs contain information such as starting, scaling, drawing, labeling, and finishing the graphs using "LogoWriter." The final chapter of the…

  19. Dynamic airspace configuration algorithms for next generation air transportation system

    NASA Astrophysics Data System (ADS)

    Wei, Jian

    The National Airspace System (NAS) is under great pressure to safely and efficiently handle the record-high air traffic volume of today, and will face an even greater challenge in keeping pace with the steady increase of future air travel demand, which is projected to reach two to three times the current level by 2025. The inefficiency of traffic flow management initiatives causes severe airspace congestion and frequent flight delays, which cost billions in economic losses every year. To address the increasingly severe congestion and delays, the Next Generation Air Transportation System (NextGen) is proposed to transform the current static and rigid radar-based system into a dynamic and flexible satellite-based system. New operational concepts such as Dynamic Airspace Configuration (DAC) have been under development to allow the flexibility required to mitigate demand-capacity imbalances and increase the throughput of the entire NAS. In this dissertation, we address the DAC problem in the en route and terminal airspace under the framework of NextGen. We develop a series of algorithms to facilitate the implementation of innovative concepts relevant to DAC in both the en route and terminal airspace, and we develop a performance evaluation framework for comprehensive benefit analyses of different aspects of future sector design algorithms.

    First, we complete a graph-based sectorization algorithm for DAC in the en route airspace, which models the underlying air route network with a weighted graph, converts the sectorization problem into the graph partition problem, partitions the weighted graph with an iterative spectral bipartition method, and constructs the sectors from the partitioned graph. The algorithm uses a graph model to accurately capture the complex traffic patterns of real flights, and generates sectors with high efficiency while evenly distributing the workload among them. We further improve the robustness and efficiency of the graph-based DAC algorithm by incorporating the Multilevel Graph Partitioning (MGP) method into the graph model, and develop an MGP-based sectorization algorithm for DAC in the en route airspace. In a comprehensive benefit analysis, the performance of the proposed algorithms is tested in numerical simulations with Enhanced Traffic Management System (ETMS) data. Simulation results demonstrate that the algorithmically generated sectorizations outperform the current sectorizations in different sectors for different time periods.

    Secondly, based on our experience with DAC in the en route airspace, we further study the sectorization problem for DAC in the terminal airspace. The differences between the en route and terminal airspace are identified, and their influence on terminal sectorization is analyzed. After adjusting the graph model to better capture the unique characteristics of the terminal airspace and the requirements of terminal sectorization, we develop a graph-based geometric sectorization algorithm for DAC in the terminal airspace. Moreover, the graph-based model is combined with the region-based sector design method to better handle the complicated geometric and operational constraints of the terminal sectorization problem. In the benefit analysis, we identify the contributing factors to terminal controller workload, define evaluation metrics, and develop a benefit analysis framework for terminal sectorization evaluation. With the evaluation framework developed, we demonstrate the improvements over the current sectorizations with real traffic data collected from several major international airports in the U.S., and conduct a detailed analysis of the potential benefits of dynamic reconfiguration in the terminal airspace.

    Finally, in addition to the research on the macroscopic behavior of a large number of aircraft, we also study the dynamical behavior of individual aircraft from the perspective of traffic flow management. We formulate the mode-confusion problem as a hybrid estimation problem, and develop a state estimation algorithm for linear hybrid systems with continuous-state-dependent transitions based on sparse observations. We also develop an estimated-time-of-arrival prediction algorithm based on the state-dependent-transition hybrid estimation algorithm, whose performance is demonstrated with simulations of a landing procedure following the Continuous Descent Approach (CDA) profile.
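
    A minimal sketch of one spectral bipartition step of the kind the dissertation builds on: the route network becomes a weighted graph, and the Fiedler vector of its Laplacian splits it into two balanced sectors (the iterative algorithm would recurse on each half). The toy network and the median-split balancing rule are illustrative assumptions.

        import numpy as np
        import networkx as nx

        def spectral_bipartition(G, weight="weight"):
            """Split a weighted traffic graph in two using the Fiedler vector
            (eigenvector of the second-smallest Laplacian eigenvalue)."""
            nodes = list(G.nodes())
            L = nx.laplacian_matrix(G, nodelist=nodes, weight=weight).toarray()
            vals, vecs = np.linalg.eigh(L)
            fiedler = vecs[:, 1]                    # second-smallest eigenvalue
            side = fiedler >= np.median(fiedler)    # median split balances the halves
            return ([n for n, s in zip(nodes, side) if s],
                    [n for n, s in zip(nodes, side) if not s])

        # Hypothetical route network with traffic-count edge weights.
        G = nx.Graph()
        G.add_weighted_edges_from([("a", "b", 9), ("b", "c", 8), ("c", "a", 7),
                                   ("d", "e", 9), ("e", "f", 8), ("f", "d", 7),
                                   ("c", "d", 1)])   # weak link between two clusters
        print(spectral_bipartition(G))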

  20. Optimal operating rules definition in complex water resource systems combining fuzzy logic, expert criteria and stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2016-04-01

    This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. The two sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria via a joint expert-technician framework consisting of a series of meetings, workshops, and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once that is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows.

    The approach has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen for that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records were used to develop alternative operating rules. A fuzzy linear regression procedure was employed to foresee future inflows from the present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme, giving the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information.

    The results show that the DSS created using the optimal FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm³ per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase the water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It has also received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
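
    A toy sketch of the fuzzy rule-based machinery described above, with two hypothetical Mamdani-style rules and weighted-average defuzzification; the membership functions, rule set, and release values are invented for illustration and are not the Jucar FRB systems.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

        def allocation(storage, inflow):
            """Two hypothetical rules (inputs normalized to [0, 1]):
            IF storage LOW  AND inflow LOW  THEN release small
            IF storage HIGH OR  inflow HIGH THEN release large"""
            low_s, high_s = tri(storage, 0, 0.2, 0.6), tri(storage, 0.4, 0.8, 1.2)
            low_q, high_q = tri(inflow, 0, 0.2, 0.6), tri(inflow, 0.4, 0.8, 1.2)
            w1 = min(low_s, low_q)          # fuzzy AND -> min
            w2 = max(high_s, high_q)        # fuzzy OR  -> max
            small, large = 20.0, 50.0       # consequent releases, Mm3 (placeholders)
            return (w1 * small + w2 * large) / (w1 + w2 + 1e-9)

        print(allocation(storage=0.7, inflow=0.3))  # suggested seasonal release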
